WorldWideScience

Sample records for big 1-98 randomised

  1. Adjuvant letrozole versus tamoxifen according to centrally-assessed ERBB2 status for postmenopausal women with endocrine-responsive early breast cancer: supplementary results from the BIG 1-98 randomised trial

    DEFF Research Database (Denmark)

    Regan, M.M.; Lykkesfeldt, A.E.; Dell'Orto, P.

    2008-01-01

    Background The Breast International Group (BIG) 1-98 trial (a randomised double-blind phase III trial) has shown that letrozole significantly improves disease-free survival (DFS) compared with tamoxifen in postmenopausal women with endocrine-responsive early breast cancer. Our aim was to establish...... whether the benefit of letrozole versus tamoxifen differs according to the ERBB2 status of tumours. Methods The BIG 1-98 trial consists of four treatment groups that compare 5 years of monotherapy with letrozole or tamoxifen, and sequential administration of one drug for 2 years followed by the other drug...... for 3 years. Our study includes data from the 4922 patients randomly assigned to the two monotherapy treatment groups (letrozole or tamoxifen for 5 years; 51 months median follow-up [range

  2. Update of the BIG 1-98 Trial: where do we stand?

    Science.gov (United States)

    Joerger, Markus; Thürlimann, Beat

    2009-10-01

    There is accumulating data on the clinical benefit of aromatase inhibitors in the adjuvant treatment of early-stage breast cancer in postmenopausal women. The Breast International Group (BIG) 1-98 study is a randomized, phase 3, double-blind trial comparing four adjuvant endocrine treatments of 5 years duration in postmenopausal women with hormone-receptor-positive breast cancer: letrozole or tamoxifen monotherapy, sequential treatment with tamoxifen followed by letrozole, or vice versa. This article summarizes data presented at the 2009 St. Gallen early breast cancer conference: an update on the monotherapy arms of the BIG 1-98 study, and results from the sequential treatment arms. Implications for daily practice from BIG 1-98 and from other adjuvant trials will be discussed. Despite cross-over from tamoxifen to letrozole by 25% of the patients after unblinding of the tamoxifen monotherapy arm, the improvement of disease-free survival (HR 0.88, 0.78-0.99, p = 0.03) and time to distant recurrence (HR 0.85, 0.72-1.00, p = 0.05) for letrozole monotherapy as compared to tamoxifen monotherapy remained significant in the intention-to-treat (ITT) analysis. A trend for an overall survival advantage for letrozole was seen in the ITT analysis (HR 0.87, 0.75-1.02, p = 0.08). No statistically significant differences were found for the sequential treatment arms versus letrozole monotherapy, with respect to disease-free survival, time to distant recurrence or overall survival. Cumulative incidence analysis of breast cancer recurrence favors the initiation of adjuvant endocrine treatment with letrozole instead of tamoxifen, especially in patients at higher risk for early recurrence. Similarly, data suggest that patients commenced on letrozole can be switched to tamoxifen after 2 years, if required. The BIG 1-98 study update with median follow up of 76 months confirms a significant reduction in the risk of breast cancer recurrence and a trend towards improved overall survival
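The hazard ratios quoted above can be sanity-checked by hand: for a hazard ratio reported with a 95% confidence interval, the standard error on the log scale follows from the CI width, and a Wald z statistic and p-value follow from that. A minimal sketch of this standard back-calculation (illustrative only, not the trial's actual analysis; numbers taken from the abstract above):

```python
# Back-calculate the Wald z statistic and two-sided p-value implied by a
# published hazard ratio and its 95% CI. Standard epidemiology arithmetic,
# not the BIG 1-98 trial's own code.
from math import log, sqrt, erf

def wald_p_from_hr(hr, lo, hi):
    """Return (z, p) implied by HR with 95% CI (lo, hi)."""
    se = (log(hi) - log(lo)) / (2 * 1.96)  # CI width on the log scale
    z = log(hr) / se
    # two-sided p-value from the standard normal CDF via the error function
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# DFS in the monotherapy comparison: HR 0.88 (0.78-0.99), reported p = 0.03
z, p = wald_p_from_hr(0.88, 0.78, 0.99)
```

The computed p of about 0.036 is consistent with the reported p = 0.03 after rounding, which is the usual check that a quoted CI and p-value cohere.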

  3. Assessment of letrozole and tamoxifen alone and in sequence for postmenopausal women with steroid hormone receptor-positive breast cancer: the BIG 1-98 randomised clinical trial at 8·1 years median follow-up.

    Science.gov (United States)

    Regan, Meredith M; Neven, Patrick; Giobbie-Hurder, Anita; Goldhirsch, Aron; Ejlertsen, Bent; Mauriac, Louis; Forbes, John F; Smith, Ian; Láng, István; Wardley, Andrew; Rabaglio, Manuela; Price, Karen N; Gelber, Richard D; Coates, Alan S; Thürlimann, Beat

    2011-11-01

    Postmenopausal women with hormone receptor-positive early breast cancer have persistent, long-term risk of breast-cancer recurrence and death. Therefore, trials assessing endocrine therapies for this patient population need extended follow-up. We present an update of efficacy outcomes in the Breast International Group (BIG) 1-98 study at 8·1 years median follow-up. BIG 1-98 is a randomised, phase 3, double-blind trial of postmenopausal women with hormone receptor-positive early breast cancer that compares 5 years of tamoxifen or letrozole monotherapy, or sequential treatment with 2 years of one of these drugs followed by 3 years of the other. Randomisation was done with permuted blocks, and stratified according to the two-arm or four-arm randomisation option, participating institution, and chemotherapy use. Patients, investigators, data managers, and medical reviewers were masked. The primary efficacy endpoint was disease-free survival (events were invasive breast cancer relapse, second primaries [contralateral breast and non-breast], or death without previous cancer event). Secondary endpoints were overall survival, distant recurrence-free interval (DRFI), and breast cancer-free interval (BCFI). The monotherapy comparison included patients randomly assigned to tamoxifen or letrozole for 5 years. In 2005, after a significant disease-free survival benefit was reported for letrozole as compared with tamoxifen, a protocol amendment facilitated the crossover to letrozole of patients who were still receiving tamoxifen alone; Cox models and Kaplan-Meier estimates with inverse probability of censoring weighting (IPCW) are used to account for selective crossover to letrozole of patients (n=619) in the tamoxifen arm. Comparison of sequential treatments to letrozole monotherapy included patients enrolled and randomly assigned to letrozole for 5 years, letrozole for 2 years followed by tamoxifen for 3 years, or tamoxifen for 2 years followed by letrozole for 3 years
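The efficacy endpoints above rest on Kaplan-Meier estimation alongside Cox models with inverse probability of censoring weighting. As a minimal illustration of the product-limit (Kaplan-Meier) estimator alone, on made-up toy data (the trial's IPCW-weighted analysis is considerably more involved):

```python
# Minimal product-limit (Kaplan-Meier) estimator in pure Python.
# Toy data only -- illustrative of the estimator, not of BIG 1-98 results.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each distinct event time.

    times  -- follow-up time for each subject
    events -- 1 if the event (e.g. recurrence) occurred, 0 if censored
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)   # events at time t
        c = sum(1 for tt, _ in data if tt == t)   # subjects leaving at t
        if d > 0:
            surv *= 1.0 - d / n_at_risk           # product-limit step
            curve.append((t, surv))
        n_at_risk -= c
        i += c
    return curve

# Six toy subjects: event indicator 1 = recurrence, 0 = censored.
curve = kaplan_meier([2, 3, 3, 5, 8, 9], [1, 1, 0, 1, 0, 1])
```

Censored subjects drop out of the risk set without moving the curve, which is why the step sizes grow as the risk set shrinks.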

  4. Outcomes of special histotypes of breast cancer after adjuvant endocrine therapy with letrozole or tamoxifen in the monotherapy cohort of the BIG 1-98 trial

    DEFF Research Database (Denmark)

    Munzone, E; Giobbie-Hurder, A; Gusterson, B A

    2015-01-01

    BACKGROUND: We investigated the outcomes of postmenopausal women with hormone receptor-positive, early breast cancer with special histotypes (mucinous, tubular, or cribriform) enrolled in the monotherapy cohort of the BIG 1-98 trial. PATIENTS AND METHODS: The intention-to-treat BIG 1-98 monothera...

  5. Cholesterol, Cholesterol-Lowering Medication Use, and Breast Cancer Outcome in the BIG 1-98 Study

    DEFF Research Database (Denmark)

    Borgquist, Signe; Giobbie-Hurder, Anita; Ahern, Thomas P

    2017-01-01

    on cholesterol levels and hypercholesterolemia per se may counteract the intended effect of aromatase inhibitors. Patients and Methods The Breast International Group (BIG) conducted a randomized, phase III, double-blind trial, BIG 1-98, which enrolled 8,010 postmenopausal women with early-stage, hormone receptor......-positive invasive breast cancer from 1998 to 2003. Systemic levels of total cholesterol and use of CLM were measured at study entry and every 6 months up to 5.5 years. Cumulative incidence functions were used to describe the initiation of CLM in the presence of competing risks. Marginal structural Cox proportional...

  6. Letrozole compared with tamoxifen for elderly patients with endocrine-responsive early breast cancer: the BIG 1-98 trial

    DEFF Research Database (Denmark)

    Crivellari, D.; Sun, Z.; Coates, A.S.

    2008-01-01

PURPOSE: To explore potential differences in efficacy, treatment completion, and adverse events (AEs) in elderly women receiving adjuvant tamoxifen or letrozole for five years in the Breast International Group (BIG) 1-98 trial. METHODS: This report includes the 4,922 patients allocated to 5 years...... of letrozole or tamoxifen in the BIG 1-98 trial. The median follow-up was 40.4 months. Subpopulation Treatment Effect Pattern Plot (STEPP) analysis was used to examine the patterns of differences in disease-free survival and incidences of AEs according to age. In addition, three categoric age groups were...... had superior efficacy (DFS) compared with tamoxifen in all age groups. On the basis of a small number of patients older than 75 years (6%), age per se should not unduly affect the choice of adjuvant endocrine therapy. Publication date: 2008/4/20

  7. Molecular risk assessment of BIG 1-98 participants by expression profiling using RNA from archival tissue

    International Nuclear Information System (INIS)

    Antonov, Janine; Altermatt, Hans Jörg; Aebi, Stefan; Jaggi, Rolf; Popovici, Vlad; Delorenzi, Mauro; Wirapati, Pratyaksha; Baltzer, Anna; Oberli, Andrea; Thürlimann, Beat; Giobbie-Hurder, Anita; Viale, Giuseppe

    2010-01-01

    The purpose of the work reported here is to test reliable molecular profiles using routinely processed formalin-fixed paraffin-embedded (FFPE) tissues from participants of the clinical trial BIG 1-98 with a median follow-up of 60 months. RNA from fresh frozen (FF) and FFPE tumor samples of 82 patients were used for quality control, and independent FFPE tissues of 342 postmenopausal participants of BIG 1-98 with ER-positive cancer were analyzed by measuring prospectively selected genes and computing scores representing the functions of the estrogen receptor (eight genes, ER-8), the progesterone receptor (five genes, PGR-5), Her2 (two genes, HER2-2), and proliferation (ten genes, PRO-10) by quantitative reverse transcription PCR (qRT-PCR) on TaqMan Low Density Arrays. Molecular scores were computed for each category and ER-8, PGR-5, HER2-2, and PRO-10 scores were combined into a RISK-25 score. Pearson correlation coefficients between FF- and FFPE-derived scores were at least 0.94 and high concordance was observed between molecular scores and immunohistochemical data. The HER2-2, PGR-5, PRO-10 and RISK-25 scores were significant predictors of disease free-survival (DFS) in univariate Cox proportional hazard regression. PRO-10 and RISK-25 scores predicted DFS in patients with histological grade II breast cancer and in lymph node positive disease. The PRO-10 and PGR-5 scores were independent predictors of DFS in multivariate Cox regression models incorporating clinical risk indicators; PRO-10 outperformed Ki-67 labeling index in multivariate Cox proportional hazard analyses. Scores representing the endocrine responsiveness and proliferation status of breast cancers were developed from gene expression analyses based on RNA derived from FFPE tissues. The validation of the molecular scores with tumor samples of participants of the BIG 1-98 trial demonstrates that such scores can serve as independent prognostic factors to estimate disease free survival (DFS) in

  8. Relative Effectiveness of Letrozole Compared With Tamoxifen for Patients With Lobular Carcinoma in the BIG 1-98 Trial

    DEFF Research Database (Denmark)

    Metzger Filho, Otto; Giobbie-Hurder, Anita; Mallon, Elizabeth

    2015-01-01

PURPOSE: To evaluate the relative effectiveness of letrozole compared with tamoxifen for patients with invasive ductal or lobular carcinoma. PATIENTS AND METHODS: Patients diagnosed with early-stage invasive ductal carcinoma (IDC) or classic invasive lobular carcinoma (ILC) who were randomly assigned onto the Breast International Group (BIG) 1-98 trial and who had centrally reviewed pathology data were included (N = 2,923). HER2-negative IDC and ILC were additionally classified as hormone receptor-positive with high (luminal B [LB]-like) or low (luminal A [LA]-like) proliferative activity......

  9. Bone fractures among postmenopausal patients with endocrine-responsive early breast cancer treated with 5 years of letrozole or tamoxifen in the BIG 1-98 trial

    DEFF Research Database (Denmark)

    Rabaglio, M; Sun, Z; Price, K N

    2009-01-01

BACKGROUND: To compare the incidence and timing of bone fractures in postmenopausal women treated with 5 years of adjuvant tamoxifen or letrozole for endocrine-responsive early breast cancer in the Breast International Group (BIG) 1-98 trial. METHODS: We evaluated 4895 patients allocated to 5 years of letrozole or tamoxifen in the BIG 1-98 trial who received at least some study medication (median follow-up 60.3 months). Bone fracture information (grade, cause, site) was collected every 6 months during trial treatment. RESULTS: The incidence of bone fractures was higher among patients treated with letrozole [228 of 2448 women (9.3%)] versus tamoxifen [160 of 2447 women (6.5%)]. The wrist was the most common site of fracture in both treatment groups. Statistically significant risk factors for bone fractures during treatment included age, smoking history, osteoporosis at baseline, previous bone...

  10. Symptoms of endocrine treatment and outcome in the BIG 1-98 study.

    Science.gov (United States)

    Huober, J; Cole, B F; Rabaglio, M; Giobbie-Hurder, A; Wu, J; Ejlertsen, B; Bonnefoi, H; Forbes, J F; Neven, P; Láng, I; Smith, I; Wardley, A; Price, K N; Goldhirsch, A; Coates, A S; Colleoni, M; Gelber, R D; Thürlimann, B

    2014-01-01

There may be a relationship between the incidence of vasomotor and arthralgia/myalgia symptoms and treatment outcomes for postmenopausal breast cancer patients with endocrine-responsive disease who received adjuvant letrozole or tamoxifen. Data on patients randomized into the monotherapy arms of the BIG 1-98 clinical trial who did not have either vasomotor or arthralgia/myalgia/carpal tunnel (AMC) symptoms reported at baseline, started protocol treatment, and were alive and disease-free at the 3-month landmark (n = 4,798) and at the 12-month landmark (n = 4,682) were used for this report. Cohorts of patients with vasomotor symptoms, AMC symptoms, neither, or both were defined at both 3 and 12 months from randomization. Landmark analyses were performed for disease-free survival (DFS) and for breast cancer-free interval (BCFI), using regression analysis to estimate hazard ratios (HR) and 95% confidence intervals (CI). Median follow-up was 7.0 years. Reporting of AMC symptoms was associated with better outcome for both the 3- and 12-month landmark analyses [e.g., 12-month landmark, HR (95% CI) for DFS = 0.65 (0.49-0.87), and for BCFI = 0.70 (0.49-0.99)]. By contrast, reporting of vasomotor symptoms was less clearly associated with DFS [12-month DFS HR (95% CI) = 0.82 (0.70-0.96)] and BCFI [12-month BCFI HR (95% CI) = 0.97 (0.80-1.18)]. Interaction tests indicated no effect of treatment group on associations between symptoms and outcomes. While reporting of AMC symptoms was clearly associated with better DFS and BCFI, the association between vasomotor symptoms and outcome was less clear, especially with respect to breast cancer-related events.
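A landmark analysis of the kind described above conditions on being alive and event-free at a fixed time, then measures follow-up from that point, which avoids immortal-time bias when grouping patients by symptoms reported before the landmark. A hypothetical sketch of the cohort-selection step (field names are assumptions for illustration, not the BIG 1-98 data dictionary):

```python
# Sketch of landmark-analysis cohort selection. Field names ('time',
# 'event', 'symptom') are hypothetical; data below are made up.

LANDMARK = 12.0  # months from randomisation

def landmark_cohort(patients, landmark=LANDMARK):
    """Keep patients event-free past the landmark and re-zero their clock.

    Each patient dict is assumed to carry:
      'time'    -- months from randomisation to event or censoring
      'event'   -- 1 if a DFS event occurred, else 0
      'symptom' -- True if AMC/vasomotor symptoms reported by the landmark
    """
    cohort = []
    for p in patients:
        # drop anyone whose event or censoring precedes the landmark
        if p['time'] <= landmark:
            continue
        q = dict(p)
        q['time'] = p['time'] - landmark  # follow-up measured from landmark
        cohort.append(q)
    return cohort

patients = [
    {'time': 6.0,  'event': 1, 'symptom': False},  # event before landmark
    {'time': 30.0, 'event': 0, 'symptom': True},
    {'time': 18.0, 'event': 1, 'symptom': True},
]
cohort = landmark_cohort(patients)
```

Hazard ratios are then estimated within the re-zeroed cohort, so early events cannot masquerade as an effect of symptom status.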

  11. The advantage of letrozole over tamoxifen in the BIG 1-98 trial is consistent in younger postmenopausal women and in those with chemotherapy-induced menopause

    DEFF Research Database (Denmark)

    Chirgwin, Jacquie; Sun, Zhuoxin; Smith, Ian

    2012-01-01

    subclinical ovarian estrogen production), and those with chemotherapy-induced menopause who may experience return of ovarian function. In these situations tamoxifen may be preferable to an aromatase inhibitor. Among 4,922 patients allocated to the monotherapy arms (5 years of letrozole or tamoxifen......) in the BIG 1-98 trial we identified two relevant subpopulations: patients with potential residual ovarian function, defined as having natural menopause, treated without adjuvant or neoadjuvant chemotherapy and age ≤ 55 years (n = 641); and those with chemotherapy-induced menopause (n = 105). Neither...... of the subpopulations examined showed treatment effects differing from the trial population as a whole (interaction P values are 0.23 and 0.62, respectively). Indeed, both among the 641 patients aged ≤ 55 years with natural menopause and no chemotherapy (HR 0.77 [0.51, 1.16]) and among the 105 patients...

  12. p-STAT3 in luminal breast cancer: Integrated RNA-protein pooled analysis and results from the BIG 2-98 phase III trial.

    Science.gov (United States)

    Sonnenblick, Amir; Salgado, Roberto; Brohée, Sylvain; Zahavi, Tamar; Peretz, Tamar; Van den Eynden, Gert; Rouas, Ghizlane; Salmon, Asher; Francis, Prudence A; Di Leo, Angelo; Crown, John P A; Viale, Giuseppe; Daly, Laura; Javdan, Bahar; Fujisawa, Sho; De Azambuja, Evandro; Lieveke, Ameye; Piccart, Martine J; Bromberg, Jacqueline F; Sotiriou, Christos

    2018-02-01

In the present study, in order to investigate the role of signal transducer and activator of transcription 3 (STAT3) in estrogen receptor (ER)-positive breast cancer prognosis, we evaluated the phosphorylated STAT3 (p-STAT3) status and investigated its effect on the outcome in a pooled analysis and in a large prospective adjuvant trial. By using the TCGA repository, we developed gene signatures that reflected the level of p-STAT3. Using pooled analysis of the expression data from luminal breast cancer patients, we assessed the effects of the p-STAT3 expression signature on prognosis. We further validated the p-STAT3 prognostic effect using immunohistochemistry (IHC) and immunofluorescence staining of p-STAT3 tissue microarrays from a large randomised prospective trial. Our analysis demonstrated that p-STAT3 expression was elevated in luminal A-type breast cancer (Kruskal-Wallis test, P......). The prognostic effect was then examined in the BIG 2-98 randomised trial. With a median follow-up of 10.1 years, p-STAT3 was associated with a reduced risk of recurrence in ER-positive/HER2-negative breast cancer (Cox univariate HR, 0.66; 95% CI, 0.44-0.98; P=0.04). On the whole, our data indicate that p-STAT3 is associated with an improved outcome in ER-positive breast cancer.

  13. Is risk of central nervous system (CNS) relapse related to adjuvant taxane treatment in node-positive breast cancer? Results of the CNS substudy in the intergroup Phase III BIG 02-98 Trial

    DEFF Research Database (Denmark)

    Pestalozzi, B.C.; Francis, P.; Quinaux, E.

    2008-01-01

    BACKGROUND: Breast cancer central nervous system (CNS) metastases are an increasingly important problem because of high CNS relapse rates in patients treated with trastuzumab and/or taxanes. PATIENTS AND METHODS: We evaluated data from 2887 node-positive breast cancer patients randomised in the BIG...

  14. Interventions for treating osteoarthritis of the big toe joint.

    Science.gov (United States)

    Zammit, Gerard V; Menz, Hylton B; Munteanu, Shannon E; Landorf, Karl B; Gilheany, Mark F

    2010-09-08

Osteoarthritis affecting the big toe joint of the foot (hallux limitus or rigidus) is a common and painful condition. Although several treatments have been proposed, few have been adequately evaluated. To identify controlled trials evaluating interventions for osteoarthritis of the big toe joint and to determine the optimum intervention(s). Literature searches were conducted across the following electronic databases: CENTRAL; MEDLINE; EMBASE; CINAHL; and PEDro (to 14th January 2010). No language restrictions were applied. Randomised controlled trials, quasi-randomised trials, or controlled clinical trials that assessed treatment outcomes for osteoarthritis of the big toe joint were eligible. Participants of any age or gender with osteoarthritis of the big toe joint (defined either radiographically or clinically) were included. Two authors examined the list of titles and abstracts identified by the literature searches. One content area expert and one methodologist independently applied the pre-determined inclusion and exclusion criteria to the full text of identified trials. To minimise error and reduce potential bias, data were extracted independently by two content experts. Only one trial satisfactorily fulfilled the inclusion criteria and was included in this review. This trial evaluated the effectiveness of two physical therapy programs in 20 individuals with osteoarthritis of the big toe joint. Assessment outcomes included pain levels, big toe joint range of motion and plantar flexion strength of the hallux. Mean differences at four weeks' follow-up were 3.80 points (95% CI 2.74 to 4.86) for self-reported pain, 28.30 degrees (95% CI 21.37 to 35.23) for big toe joint range of motion, and 2.80 kg (95% CI 2.13 to 3.47) for muscle strength. Although differences in outcomes between treatment and control groups were reported, the risk of bias was high. The trial failed to employ appropriate randomisation or adequate allocation concealment, used a relatively small sample and

  15. CYP19A1 polymorphisms and clinical outcomes in postmenopausal women with hormone receptor-positive breast cancer in the BIG 1-98 trial

    DEFF Research Database (Denmark)

    Leyland-Jones, Brian; Gray, Kathryn P; Abramovitz, Mark

    2015-01-01

    To determine whether CYP19A1 polymorphisms are associated with abnormal activity of aromatase and with musculoskeletal and bone side effects of aromatase inhibitors. DNA was isolated from tumor specimens of 4861 postmenopausal women with hormone receptor-positive breast cancer enrolled in the BIG 1...

  16. 49 CFR 98.1 - Purpose.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 1 2010-10-01 2010-10-01 false Purpose. 98.1 Section 98.1 Transportation Office of the Secretary of Transportation ENFORCEMENT OF RESTRICTIONS ON POST-EMPLOYMENT ACTIVITIES... administrative enforcement procedures that the Department of Transportation will follow when there is an...

  17. Postmastectomy Radiation Therapy in Women with T1-T2 Tumors and 1 to 3 Positive Lymph Nodes: Analysis of the Breast International Group 02-98 Trial.

    Science.gov (United States)

    Zeidan, Youssef H; Habib, Joyce G; Ameye, Lieveke; Paesmans, Marianne; de Azambuja, Evandro; Gelber, Richard D; Campbell, Ian; Nordenskjöld, Bo; Gutiérez, Jorge; Anderson, Michael; Lluch, Ana; Gnant, Michael; Goldhirsch, Aron; Di Leo, Angelo; Joseph, David J; Crown, John; Piccart-Gebhart, Martine; Francis, Prudence A

    2018-06-01

    To analyze the impact of postmastectomy radiation therapy (PMRT) for patients with T1-T2 tumors and 1 to 3 positive lymph nodes enrolled on the Breast International Group (BIG) 02-98 trial. The BIG 02-98 trial randomized patients to receive adjuvant anthracycline with or without taxane chemotherapy. Delivery of PMRT was nonrandomized and performed according to institutional preferences. The present analysis was performed on participants with T1-T2 breast cancer and 1 to 3 positive lymph nodes who had undergone mastectomy and axillary nodal dissection. The primary objective of the present study was to examine the effect of PMRT on risk of locoregional recurrence (LRR), breast cancer-specific survival, and overall survival. We identified 684 patients who met the inclusion criteria and were included in the analysis, of whom 337 (49%) had received PMRT. At 10 years, LRR risk was 2.5% in the PMRT group and 6.5% in the no-PMRT group (hazard ratio 0.29, 95% confidence interval 0.12-0.73; P = .005). Lower LRR after PMRT was noted for patients randomized to receive adjuvant chemotherapy with no taxane (10-year LRR: 3.4% vs 9.1%; P = .02). No significant differences in breast cancer-specific survival (84.3% vs 83.9%) or overall survival (81.7% vs 78.3%) were observed according to receipt of PMRT. Our analysis of the BIG 02-98 trial shows excellent outcomes in women with T1-T2 tumors and 1 to 3 positive lymph nodes found in axillary dissection. Although PMRT improved LRR in this cohort, the number of events remained low at 10 years. In all groups, 10-year rates of LRR were relatively low compared with historical studies. As such, the use of PMRT in women with 1 to 3 positive nodes should be tailored to individual patient risks. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Function of Nup98 subtypes and their fusion proteins, Nup98-TopIIβ and Nup98-SETBP1 in nuclear-cytoplasmic transport.

    Science.gov (United States)

    Saito, Shoko; Yokokawa, Takafumi; Iizuka, Gemmei; Cigdem, Sadik; Okuwaki, Mitsuru; Nagata, Kyosuke

    2017-05-20

    Nup98 is a component of the nuclear pore complex. The nup98-fusion genes derived by chromosome translocations are involved in hematopoietic malignancies. Here, we investigated the functions of Nup98 isoforms and two unexamined Nup98-fusion proteins, Nup98-TopIIβ and Nup98-SETBP1. We first demonstrated that two Nup98 isoforms are expressed in various mouse tissues and similarly localized in the nucleus and the nuclear envelope. We also showed that Nup98-TopIIβ and Nup98-SETBP1 are localized in the nucleus and partially co-localized with full-length Nup98 and a nuclear export receptor XPO1. We demonstrated that Nup98-TopIIβ and Nup98-SETBP1 negatively regulate the XPO1-mediated protein export. Our results will contribute to the understanding of the molecular mechanism by which the Nup98-fusion proteins induce tumorigenesis. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Obesity and risk of recurrence or death after adjuvant endocrine therapy with letrozole or tamoxifen in the breast international group 1-98 trial

    DEFF Research Database (Denmark)

    Ewertz, Marianne; Gray, Kathryn P; Regan, Meredith M

    2012-01-01

    To examine the association of baseline body mass index (BMI) with the risk of recurrence or death in postmenopausal women with early-stage breast cancer receiving adjuvant tamoxifen or letrozole in the Breast International Group (BIG) 1-98 trial at 8.7 years of median follow-up....

  20. Treatment Adherence and Its Impact on Disease-Free Survival in the Breast International Group 1-98 Trial of Tamoxifen and Letrozole, Alone and in Sequence

    DEFF Research Database (Denmark)

    Chirgwin, Jacquie H; Giobbie-Hurder, Anita; Coates, Alan S

    2016-01-01

    PURPOSE: To investigate adherence to endocrine treatment and its relationship with disease-free survival (DFS) in the Breast International Group (BIG) 1-98 clinical trial. METHODS: The BIG 1-98 trial is a double-blind trial that randomly assigned 6,193 postmenopausal women with hormone receptor......-positive early breast cancer in the four-arm option to 5 years of tamoxifen (Tam), letrozole (Let), or the agents in sequence (Let-Tam, Tam-Let). This analysis included 6,144 women who received at least one dose of study treatment. Conditional landmark analyses and marginal structural Cox proportional hazards......). Sequential treatments were associated with higher rates of nonpersistence (Tam-Let, 20.8%; Let-Tam, 20.3%; Tam 16.9%; Let 17.6%). Adverse events were the reason for most trial treatment early discontinuations (82.7%). Apart from sequential treatment assignment, reduced adherence was associated with older age...

  1. Big Data hvor N=1 (Big Data where N=1)

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind

    2017-01-01

    Research on the use of 'big data' in healthcare has only just begun, and may in time become a great help in organising a more personal and holistic healthcare effort for patients with multimorbidity. Personal health technology, briefly presented in this chapter, holds...... great potential for carrying out 'big data' analyses for the individual person, that is, where N=1. There are major technological challenges in creating technologies and methods for collecting and handling personal data that can be shared across settings in a standardised, responsible, robust, secure and non-...

  2. 7 CFR 98.1 - General.

    Science.gov (United States)

    2010-01-01

    ..., READY-TO-EAT (MRE's), MEATS, AND MEAT PRODUCTS MRE's, Meats, and Related Meat Food Products § 98.1 General. Analytical services of meat and meat food products are performed for fat, moisture, salt, protein...

  3. Big Data en surveillance, deel 1 : Definities en discussies omtrent Big Data (Big Data and surveillance, part 1: Definitions and discussions concerning Big Data)

    NARCIS (Netherlands)

    Timan, Tjerk

    2016-01-01

    Following a (fairly short) lecture on surveillance and Big Data, I was asked to go somewhat deeper into the theme, the definitions and the various questions related to big data. In this first part I will try to set out some of this with regard to Big Data theory and

  4. Cophenylcaine spray vs. placebo in flexible nasendoscopy: a prospective double-blind randomised controlled trial

    NARCIS (Netherlands)

    Georgalas, C.; Sandhu, G.; Frosh, A.; Xenellis, J.

    2005-01-01

    Practices vary across the UK on the use of topical preparation prior to flexible fibreoptic nasendoscopy. In this double-blind study, we randomised 98 patients to receive cophenylcaine or placebo nasal spray before flexible nasendoscopy. A visual analogue scale (1-100) was used to record pain,

  5. Structural Insights into Arl1-Mediated Targeting of the Arf-GEF BIG1 to the trans-Golgi

    Directory of Open Access Journals (Sweden)

    Antonio Galindo

    2016-07-01

    The GTPase Arf1 is the major regulator of vesicle traffic at both the cis- and trans-Golgi. Arf1 is activated at the cis-Golgi by the guanine nucleotide exchange factor (GEF) GBF1 and at the trans-Golgi by the related GEF BIG1 or its paralog, BIG2. The trans-Golgi-specific targeting of BIG1 and BIG2 depends on the Arf-like GTPase Arl1. We find that Arl1 binds to the dimerization and cyclophilin binding (DCB) domain in BIG1 and report a crystal structure of human Arl1 bound to this domain. Residues in the DCB domain that bind Arl1 are required for BIG1 to locate to the Golgi in vivo. DCB domain-binding residues in Arl1 have a distinct conformation from those in known Arl1-effector complexes, and this plasticity allows Arl1 to interact with different effectors of unrelated structure. The findings provide structural insight into how Arf1 GEFs, and hence active Arf1, achieve their correct subcellular distribution.

  6. 37 CFR 1.98 - Content of information disclosure statement.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Content of information disclosure statement. 1.98 Section 1.98 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES National Processing...

  7. Big Bang nucleosynthesis in crisis?

    International Nuclear Information System (INIS)

    Hata, N.; Scherrer, R.J.; Steigman, G.; Thomas, D.; Walker, T.P.; Bludman, S.; Langacker, P.

    1995-01-01

    A new evaluation of the constraint on the number of light neutrino species (N_ν) from big bang nucleosynthesis suggests a discrepancy between the predicted light element abundances and those inferred from observations, unless the inferred primordial ⁴He abundance has been underestimated by 0.014±0.004 (1σ) or less than 10% (95% C.L.) of ³He survives stellar processing. With the quoted systematic errors in the observed abundances and a conservative chemical evolution parametrization, the best fit to the combined data is N_ν = 2.1±0.3 (1σ), and the upper limit excludes the standard model (N_ν = 3) at the 98.6% C.L. Copyright 1995 The American Physical Society

  8. ISO-PC Version 1.98: User's guide

    Energy Technology Data Exchange (ETDEWEB)

    Rittmann, P.D.

    1995-05-02

    This document describes how to use Version 1.98 of the shielding program named ISO-PC. Version 1.98 corrects all known errors in ISOSHLD-II. In addition, a few numeric problems have been eliminated. There are three new namelist variables, 25 additional shielding materials, and 5 more energy groups. The two major differences from the original ISOSHLD-II are the removal of the RIBD (radioisotope buildup and decay) source generator, and the removal of the non-uniform source distribution parameter, SSV1. This version of ISO-PC works with photon energies from 10 keV to 10 MeV using 30 energy groups.

  9. Influences of the Big Five personality traits on the treatment response and longitudinal course of depression in patients with acute coronary syndrome: A randomised controlled trial.

    Science.gov (United States)

    Kim, Seon-Young; Stewart, Robert; Bae, Kyung-Yeol; Kim, Sung-Wan; Shin, Il-Seon; Hong, Young Joon; Ahn, Youngkeun; Jeong, Myung Ho; Yoon, Jin-Sang; Kim, Jae-Min

    2016-10-01

    This naturalistic observational study initially recruited 1152 ACS patients; 685 completed personality assessments at baseline, of whom 630 were followed up one year later. Of the 294 patients with depression, 207 participated in a 24-week double-blind trial of escitalopram or placebo. The remaining 87 patients, who received medical treatment only, and the 391 patients without depression were also followed in a one-year naturalistic observational study. The Big Five personality traits were assessed using the Big Five Inventory. The influence of personality on changes in Hamilton Depression Rating Scale scores was analysed using mixed-model repeated-measures analysis of covariance. A cluster analysis identified two personality types: resilient and vulnerable. The vulnerable type was characterised by lower extraversion, agreeableness, and conscientiousness, but higher neuroticism, than the resilient type. The vulnerable personality type was independently associated with a poorer outcome of depression in ACS patients during the 24-week treatment period and the one-year follow-up period, irrespective of treatment allocation. Recruitment from a single institution may limit generalisability. Personality traits were assessed 12 weeks after ACS; thus, responses may have been influenced by prior receipt of escitalopram. Personality type influences the treatment response and longitudinal course of depression in ACS patients independent of antidepressant treatment. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Prognostic and predictive role of ESR1 status for postmenopausal patients with endocrine-responsive early breast cancer in the Danish cohort of the BIG 1-98 trial

    DEFF Research Database (Denmark)

    Ejlertsen, B; Aldridge, J; Nielsen, K V

    2012-01-01

    postmenopausal Danish women with early breast cancer randomly assigned to receive 5 years of letrozole, tamoxifen or a sequence of these agents in the Breast International Group 1-98 trial and who had ER ≥1% after central review. RESULTS: By FISH, 13.6% of patients had an ESR1-to-Centromere-6 (CEN-6) ratio ≥2...... (amplified), and 4.2% had ESR1-to-CEN-6 ratio...

  11. Maternal note-taking and infant care: a pilot randomised controlled trial.

    Science.gov (United States)

    Kistin, Caroline J; Barrero-Castillero, Alejandra; Lewis, Sheilajane; Hoch, Rachel; Philipp, Barbara L; Bauchner, Howard; Wang, C Jason

    2012-10-01

    A pilot randomised controlled trial was conducted with postpartum mothers to assess the feasibility and impact of note-taking during newborn teaching. Controls received standard teaching; the intervention group additionally received pen and paper to take notes. Subjects were called 2 days post-discharge to assess infant sleep position, breastfeeding, car seat use, satisfaction and information recall. 126 mothers were randomised. There was a consistent trend for intervention subjects to be more likely to report infant supine sleep position (88% vs 78%, relative risk (RR) 1.13; 95% CI 0.95 to 1.34), breastfeeding (96% vs 86%, RR 1.11; 95% CI 0.99 to 1.25) and correct car seat use (98% vs 87%, RR 1.12; 95% CI 1.00 to 1.25). Satisfaction and information recall did not differ. Among first-time mothers, intervention subjects were significantly more likely to report infant supine sleep position (95% vs 65%, RR 1.46; 95% CI 1.06 to 2.00). Maternal note-taking is feasible and potentially efficacious in promoting desirable infant care.
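
The relative risks above can be reproduced from raw counts. A minimal sketch, with hypothetical per-arm counts (the trial randomised 126 mothers in total; the abstract does not give exact counts per arm, so 63 per arm is an assumption for illustration):

```python
import math

def relative_risk(events_a, n_a, events_b, n_b, z=1.96):
    """Relative risk of group A vs group B with a Wald 95% CI on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR): sqrt(1/a - 1/n_a + 1/b - 1/n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 55/63 intervention vs 49/63 control mothers
# reporting supine sleep position (roughly 88% vs 78%, as in the abstract).
rr, lo, hi = relative_risk(55, 63, 49, 63)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # RR 1.12 (95% CI 0.95 to 1.32)
```

With these assumed counts the result lands close to the reported RR 1.13 (0.95 to 1.34); the small differences come purely from the guessed denominators.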

  12. Post-licence driver education for the prevention of road traffic crashes: a systematic review of randomised controlled trials.

    Science.gov (United States)

    Ker, Katharine; Roberts, Ian; Collier, Timothy; Beyer, Fiona; Bunn, Frances; Frost, Chris

    2005-03-01

    The effectiveness of post-licence driver education for preventing road traffic crashes was quantified using a systematic review and meta-analyses of randomised controlled trials. Searches of appropriate electronic databases, the Internet and reference lists of relevant papers were conducted. The searches were not restricted by language or publication status. Data were pooled from 21 randomised controlled trials, including over 300,000 full licence-holding drivers of all ages. Nineteen trials reported subsequent traffic offences, with a pooled relative risk of 0.96 (95% confidence interval 0.94, 0.98). Fifteen trials reported traffic crashes with a pooled relative risk of 0.98 (0.96, 1.01). Four trials reported injury crashes with a pooled relative risk of 1.12 (0.88, 1.41). The results provide no evidence that post-licence driver education is effective in preventing road injuries or crashes. Although the results are compatible with a small reduction in the occurrence of traffic crashes, this may be due to selection biases or bias in the included trials.
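
Pooled relative risks like those quoted above come from inverse-variance weighting of the per-trial estimates on the log scale. A minimal fixed-effect sketch (the trial inputs below are hypothetical, not the review's actual data):

```python
import math

def pool_rr(trials, z=1.96):
    """Fixed-effect (inverse-variance) pooling of relative risks.

    trials: list of (rr, ci_low, ci_high) tuples with 95% CIs.
    Returns the pooled (rr, ci_low, ci_high).
    """
    num = den = 0.0
    for rr, lo, hi in trials:
        se = (math.log(hi) - math.log(lo)) / (2 * z)  # back out SE from the CI
        w = 1.0 / se**2
        num += w * math.log(rr)
        den += w
    pooled = num / den
    half = z / math.sqrt(den)
    return math.exp(pooled), math.exp(pooled - half), math.exp(pooled + half)

# Two hypothetical trials with identical estimates: the pooled RR is
# unchanged, but the confidence interval tightens.
rr, lo, hi = pool_rr([(0.96, 0.94, 0.98), (0.96, 0.94, 0.98)])
```

This is why 19 trials of modest size can yield the narrow pooled interval 0.96 (0.94, 0.98): each trial's weight is the reciprocal of its variance, and the pooled variance shrinks as weights accumulate.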

  13. Cryptography for Big Data Security

    Science.gov (United States)

    2015-07-13

    Cryptography for Big Data Security. Book chapter for Big Data: Storage, Sharing, and Security (3S). Distribution A: Public Release. Contact: arkady@ll.mit.edu

  14. Vitamin D and risk of pregnancy related hypertensive disorders: mendelian randomisation study.

    Science.gov (United States)

    Magnus, Maria C; Miliku, Kozeta; Bauer, Anna; Engel, Stephanie M; Felix, Janine F; Jaddoe, Vincent W V; Lawlor, Debbie A; London, Stephanie J; Magnus, Per; McGinnis, Ralph; Nystad, Wenche; Page, Christian M; Rivadeneira, Fernando; Stene, Lars C; Tapia, German; Williams, Nicholas; Bonilla, Carolina; Fraser, Abigail

    2018-06-20

    To use mendelian randomisation to investigate whether 25-hydroxyvitamin D concentration has a causal effect on gestational hypertension or pre-eclampsia. One and two sample mendelian randomisation analyses. Two European pregnancy cohorts (Avon Longitudinal Study of Parents and Children, and Generation R Study), and two case-control studies (subgroup nested within the Norwegian Mother and Child Cohort Study, and the UK Genetics of Pre-eclampsia Study). 7389 women in a one sample mendelian randomisation analysis (751 with gestational hypertension and 135 with pre-eclampsia), and 3388 pre-eclampsia cases and 6059 controls in a two sample mendelian randomisation analysis. Single nucleotide polymorphisms in genes associated with vitamin D synthesis (rs10741657 and rs12785878) and metabolism (rs6013897 and rs2282679) were used as instrumental variables. Gestational hypertension and pre-eclampsia were defined according to the International Society for the Study of Hypertension in Pregnancy. In the conventional multivariable analysis, the relative risk for pre-eclampsia was 1.03 (95% confidence interval 1.00 to 1.07) per 10% decrease in 25-hydroxyvitamin D level, and 2.04 (1.02 to 4.07) for low 25-hydroxyvitamin D levels. The one sample mendelian randomisation analysis found no strong evidence of an effect of 25-hydroxyvitamin D on the risk of gestational hypertension or pre-eclampsia: odds ratio 0.90 (95% confidence interval 0.78 to 1.03) and 1.19 (0.92 to 1.52) per 10% decrease, respectively. The two sample mendelian randomisation estimate gave an odds ratio for pre-eclampsia of 0.98 (0.89 to 1.07) per 10% decrease in 25-hydroxyvitamin D level, and an odds ratio of 0.96 (0.80 to 1.15) per unit increase in the log(odds) of 25-hydroxyvitamin D level. Taken together, these analyses provide no strong evidence for a causal effect of vitamin D status on gestational hypertension or pre-eclampsia. Future mendelian randomisation studies with a larger number of women with pre-eclampsia, or more genetic instruments that would increase the proportion of 25-hydroxyvitamin D levels explained by the instrument, are needed. Published by the BMJ.
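
The core of a two sample mendelian randomisation estimate is the Wald ratio per SNP (outcome effect divided by exposure effect), combined across SNPs by inverse-variance weighting. A minimal sketch; the per-SNP effect sizes below are purely illustrative, not estimates for the rs numbers named in the abstract:

```python
import math

def ivw_estimate(snps):
    """Inverse-variance weighted (IVW) mendelian randomisation estimate.

    snps: list of (beta_exposure, beta_outcome, se_outcome) per SNP.
    Each Wald ratio beta_outcome / beta_exposure is weighted by
    (beta_exposure / se_outcome)**2, the inverse of its approximate variance.
    Returns the pooled estimate and the 95% CI half-width.
    """
    num = den = 0.0
    for bx, by, se_y in snps:
        wald = by / bx                 # per-SNP causal estimate
        w = (bx / se_y) ** 2           # first-order inverse variance
        num += w * wald
        den += w
    return num / den, 1.96 / math.sqrt(den)

# Two hypothetical SNPs, both implying the same causal effect of
# 0.5 log-odds per unit of the exposure.
est, half = ivw_estimate([(0.20, 0.10, 0.05), (0.10, 0.05, 0.05)])
```

Weak instruments make `den` small and the interval wide, which is why the abstract calls for more genetic instruments explaining a larger share of 25-hydroxyvitamin D variation.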

  15. Tank 241-AP-106, Grab samples, 6AP-98-1, 6AP-98-2 and 6AP-98-3 Analytical results for the final report

    Energy Technology Data Exchange (ETDEWEB)

    FULLER, R.K.

    1999-02-23

    This document is the final report for tank 241-AP-106 grab samples. Three grab samples, 6AP-98-1, 6AP-98-2 and 6AP-98-3, were taken from riser 1 of tank 241-AP-106 on May 28, 1998 and received by the 222-S Laboratory on May 28, 1998. Analyses were performed in accordance with the ''Compatibility Grab Sampling and Analysis Plan'' (TSAP) (Sasaki, 1998) and the ''Data Quality Objectives for Tank Farms Waste Compatibility Program'' (DQO). The analytical results are presented in the data summary report. No notification limits were exceeded. The request for sample analysis received for AP-106 indicated that the samples were polychlorinated biphenyl (PCB) suspects. The results of this analysis indicated that no PCBs were present at the Toxic Substance Control Act (TSCA) regulated limit of 50 ppm. The results and raw data for the PCB analysis are included in this document.

  16. Tank 241-AP-106, Grab samples, 6AP-98-1, 6AP-98-2 and 6AP-98-3 Analytical results for the final report

    International Nuclear Information System (INIS)

    FULLER, R.K.

    1999-01-01

    This document is the final report for tank 241-AP-106 grab samples. Three grab samples, 6AP-98-1, 6AP-98-2 and 6AP-98-3, were taken from riser 1 of tank 241-AP-106 on May 28, 1998 and received by the 222-S Laboratory on May 28, 1998. Analyses were performed in accordance with the ''Compatibility Grab Sampling and Analysis Plan'' (TSAP) (Sasaki, 1998) and the ''Data Quality Objectives for Tank Farms Waste Compatibility Program'' (DQO). The analytical results are presented in the data summary report. No notification limits were exceeded. The request for sample analysis received for AP-106 indicated that the samples were polychlorinated biphenyl (PCB) suspects. The results of this analysis indicated that no PCBs were present at the Toxic Substance Control Act (TSCA) regulated limit of 50 ppm. The results and raw data for the PCB analysis are included in this document.

  17. Dominant Suppression of β1 Integrin by Ectopic CD98-ICD Inhibits Hepatocellular Carcinoma Progression

    Directory of Open Access Journals (Sweden)

    Bo Wu

    2016-11-01

    Hepatocellular carcinoma (HCC) is currently the third most common cause of cancer-related death in the Asia-Pacific region. Our previous work showed that knockdown of CD98 significantly inhibits malignant HCC cell phenotypes in vitro and in vivo. The level of CD98 in the membrane is tightly regulated to mediate complex processes associated with cell–cell communication and intracellular signaling. In addition, the intracellular domain of CD98 (CD98-ICD) seems to be of vital importance for recycling CD98 to the membrane after it is endocytosed. The intracellular and transmembrane domains of CD98 associate with β-integrins (primarily β1 but also β3), and this association is essential for CD98 mediation of integrin-like signaling and complements dominant suppression of β1-integrin. We speculated that isolated CD98-ICD would similarly suppress β1-integrin activation and inhibit the malignant behaviors of cancer cells. In particular, the exact role of CD98-ICD has not been studied independently in HCC. In this study, we found that ectopic expression of CD98-ICD inhibited the malignant phenotypes of HCC cells, and the mechanism possibly involves β1-integrin suppression. Moreover, the expression levels of CD98, β1-integrin-A (the activated form of β1-integrin) and Ki-67 were significantly increased in HCC tissues relative to those of normal liver tissues. Therefore, our preliminary study indicates that ectopic CD98-ICD has an inhibitory role in the malignant development of HCC, and shows that CD98-ICD acts as a dominant negative mutant of CD98 that attenuates β1-integrin activation. CD98-ICD may emerge as a promising candidate for antitumor treatment.

  18. Efficacy of a bivalent L1 virus-like particle vaccine in prevention of infection with human papillomavirus types 16 and 18 in young women: a randomised controlled trial.

    Science.gov (United States)

    Harper, Diane M; Franco, Eduardo L; Wheeler, Cosette; Ferris, Daron G; Jenkins, David; Schuind, Anne; Zahaf, Toufik; Innis, Bruce; Naud, Paulo; De Carvalho, Newton S; Roteli-Martins, Cecilia M; Teixeira, Julio; Blatter, Mark M; Korn, Abner P; Quint, Wim; Dubin, Gary

    Vaccination against the most common oncogenic human papillomavirus (HPV) types, HPV-16 and HPV-18, could prevent development of up to 70% of cervical cancers worldwide. We did a randomised, double-blind, controlled trial to assess the efficacy, safety, and immunogenicity of a bivalent HPV-16/18 L1 virus-like particle vaccine for the prevention of incident and persistent infection with these two virus types, associated cervical cytological abnormalities, and precancerous lesions. We randomised 1113 women between 15-25 years of age to receive three doses of either the vaccine formulated with AS04 adjuvant or placebo on a 0 month, 1 month, and 6 month schedule in North America and Brazil. Women were assessed for HPV infection by cervical cytology and self-obtained cervicovaginal samples for up to 27 months, and for vaccine safety and immunogenicity. In the according-to-protocol analyses, vaccine efficacy was 91.6% (95% CI 64.5-98.0) against incident infection and 100% against persistent infection (47.0-100) with HPV-16/18. In the intention-to-treat analyses, vaccine efficacy was 95.1% (63.5-99.3) against persistent cervical infection with HPV-16/18 and 92.9% (70.0-98.3) against cytological abnormalities associated with HPV-16/18 infection. The vaccine was generally safe, well tolerated, and highly immunogenic. The bivalent HPV vaccine was efficacious in prevention of incident and persistent cervical infections with HPV-16 and HPV-18, and associated cytological abnormalities and lesions. Vaccination against such infections could substantially reduce incidence of cervical cancer.
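
Vaccine efficacy figures like the 91.6% above are 1 minus the relative risk of infection in the vaccine arm versus placebo. A minimal sketch with hypothetical case counts chosen to land near the reported point estimate (these are not the trial's actual counts):

```python
def vaccine_efficacy(cases_vax, n_vax, cases_placebo, n_placebo):
    """Point estimate of vaccine efficacy: VE = 1 - RR (vaccine vs placebo)."""
    rr = (cases_vax / n_vax) / (cases_placebo / n_placebo)
    return 1.0 - rr

# Hypothetical: 2 incident infections among 500 vaccinees vs 23 among
# 480 placebo recipients.
ve = vaccine_efficacy(2, 500, 23, 480)
print(f"VE = {ve:.1%}")  # VE = 91.7%
```

Note that with so few events in the vaccine arm the confidence interval is wide, which matches the broad 64.5-98.0% interval reported for the according-to-protocol analysis.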

  19. Big data analytics turning big data into big money

    CERN Document Server

    Ohlhorst, Frank J

    2012-01-01

    Unique insights to implement big data analytics and reap big returns to your bottom line. Focusing on the business and financial value of big data analytics, respected technology journalist Frank J. Ohlhorst shares his insights on the newly emerging field of big data analytics in Big Data Analytics. This breakthrough book demonstrates the importance of analytics, defines the processes, highlights the tangible and intangible values and discusses how you can turn a business liability into actionable material that can be used to redefine markets, improve profits and identify new business opportunities.

  20. Inhibition of CRM1-mediated nuclear export of transcription factors by leukemogenic NUP98 fusion proteins.

    Science.gov (United States)

    Takeda, Akiko; Sarma, Nayan J; Abdul-Nabi, Anmaar M; Yaseen, Nabeel R

    2010-05-21

    NUP98 is a nucleoporin that plays complex roles in the nucleocytoplasmic trafficking of macromolecules. Rearrangements of the NUP98 gene in human leukemia result in the expression of numerous fusion oncoproteins whose effect on nucleocytoplasmic trafficking is poorly understood. The present study was undertaken to determine the effects of leukemogenic NUP98 fusion proteins on CRM1-mediated nuclear export. NUP98-HOXA9, a prototypic NUP98 fusion, inhibited the nuclear export of two known CRM1 substrates: mutated cytoplasmic nucleophosmin and HIV-1 Rev. In vitro binding assays revealed that NUP98-HOXA9 binds CRM1 through the FG repeat motif in a Ran-GTP-dependent manner similar to but stronger than the interaction between CRM1 and its export substrates. Two NUP98 fusions, NUP98-HOXA9 and NUP98-DDX10, whose fusion partners are structurally and functionally unrelated, interacted with endogenous CRM1 in myeloid cells as shown by co-immunoprecipitation. These leukemogenic NUP98 fusion proteins interacted with CRM1, Ran, and the nucleoporin NUP214 in a manner fundamentally different from that of wild-type NUP98. NUP98-HOXA9 and NUP98-DDX10 formed characteristic aggregates within the nuclei of a myeloid cell line and primary human CD34+ cells and caused aberrant localization of CRM1 to these aggregates. These NUP98 fusions caused nuclear accumulation of two transcription factors, NFAT and NFkappaB, that are regulated by CRM1-mediated export. The nuclear entrapment of NFAT and NFkappaB correlated with enhanced transcription from promoters responsive to these transcription factors. Taken together, the results suggest a new mechanism by which NUP98 fusions dysregulate transcription and cause leukemia, namely, inhibition of CRM1-mediated nuclear export with aberrant nuclear retention of transcriptional regulators.

  1. Global fluctuation spectra in big-crunch-big-bang string vacua

    International Nuclear Information System (INIS)

    Craps, Ben; Ovrut, Burt A.

    2004-01-01

    We study big-crunch-big-bang cosmologies that correspond to exact world-sheet superconformal field theories of type II strings. The string theory spacetime contains a big crunch and a big bang cosmology, as well as additional 'whisker' asymptotic and intermediate regions. Within the context of free string theory, we compute, unambiguously, the scalar fluctuation spectrum in all regions of spacetime. Generically, the big crunch fluctuation spectrum is altered while passing through the bounce singularity. The change in the spectrum is characterized by a function Δ, which is momentum and time dependent. We compute Δ explicitly and demonstrate that it arises from the whisker regions. The whiskers are also shown to lead to 'entanglement' entropy in the big bang region. Finally, in the Milne orbifold limit of our superconformal vacua, we show that Δ→1 and, hence, the fluctuation spectrum is unaltered by the big-crunch-big-bang singularity. We comment on, but do not attempt to resolve, subtleties related to gravitational back reaction and light winding modes when interactions are taken into account

  2. Elevated levels of plasma Big endothelin-1 and its relation to hypertension and skin lesions in individuals exposed to arsenic

    International Nuclear Information System (INIS)

    Hossain, Ekhtear; Islam, Khairul; Yeasmin, Fouzia; Karim, Md. Rezaul; Rahman, Mashiur; Agarwal, Smita; Hossain, Shakhawoat; Aziz, Abdul; Al Mamun, Abdullah; Sheikh, Afzal; Haque, Abedul; Hossain, M. Tofazzal; Hossain, Mostaque; Haris, Parvez I.; Ikemura, Noriaki; Inoue, Kiyoshi; Miyataka, Hideki; Himeno, Seiichiro; Hossain, Khaled

    2012-01-01

    Chronic arsenic (As) exposure affects the endothelial system causing several diseases. Big endothelin-1 (Big ET-1), the biological precursor of endothelin-1 (ET-1) is a more accurate indicator of the degree of activation of the endothelial system. Effect of As exposure on the plasma Big ET-1 levels and its physiological implications have not yet been documented. We evaluated plasma Big ET-1 levels and their relation to hypertension and skin lesions in As exposed individuals in Bangladesh. A total of 304 study subjects from the As-endemic and non-endemic areas in Bangladesh were recruited for this study. As concentrations in water, hair and nails were measured by Inductively Coupled Plasma Mass Spectroscopy (ICP-MS). The plasma Big ET-1 levels were measured using a one-step sandwich enzyme immunoassay kit. Significant increase in Big ET-1 levels were observed with the increasing concentrations of As in drinking water, hair and nails. Further, before and after adjusting with different covariates, plasma Big ET-1 levels were found to be significantly associated with the water, hair and nail As concentrations of the study subjects. Big ET-1 levels were also higher in the higher exposure groups compared to the lowest (reference) group. Interestingly, we observed that Big ET-1 levels were significantly higher in the hypertensive and skin lesion groups compared to the normotensive and without skin lesion counterpart, respectively of the study subjects in As-endemic areas. Thus, this study demonstrated a novel dose–response relationship between As exposure and plasma Big ET-1 levels indicating the possible involvement of plasma Big ET-1 levels in As-induced hypertension and skin lesions. -- Highlights: ► Plasma Big ET-1 is an indicator of endothelial damage. ► Plasma Big ET-1 level increases dose-dependently in arsenic exposed individuals. ► Study subjects in arsenic-endemic areas with hypertension have elevated Big ET-1 levels. ► Study subjects with arsenic

  3. Elevated levels of plasma Big endothelin-1 and its relation to hypertension and skin lesions in individuals exposed to arsenic

    Energy Technology Data Exchange (ETDEWEB)

    Hossain, Ekhtear; Islam, Khairul; Yeasmin, Fouzia [Department of Biochemistry and Molecular Biology, Rajshahi University, Rajshahi-6205 (Bangladesh); Karim, Md. Rezaul [Department of Applied Nutrition and Food Technology, Islamic University, Kushtia-7003 (Bangladesh); Rahman, Mashiur; Agarwal, Smita; Hossain, Shakhawoat; Aziz, Abdul; Al Mamun, Abdullah; Sheikh, Afzal; Haque, Abedul; Hossain, M. Tofazzal [Department of Biochemistry and Molecular Biology, Rajshahi University, Rajshahi-6205 (Bangladesh); Hossain, Mostaque [Department of Medicine, Bangladesh Institute of Research and Rehabilitation in Diabetes, Endocrine and Metabolic Disorders (BIRDEM), Dhaka (Bangladesh); Haris, Parvez I. [Faculty of Health and Life Sciences, De Montfort University, Leicester, LE1 9BH (United Kingdom); Ikemura, Noriaki; Inoue, Kiyoshi; Miyataka, Hideki; Himeno, Seiichiro [Laboratory of Molecular Nutrition and Toxicology, Faculty of Pharmaceutical Sciences, Tokushima Bunri University, Tokushima 770–8514 (Japan); Hossain, Khaled, E-mail: khossain69@yahoo.com [Department of Biochemistry and Molecular Biology, Rajshahi University, Rajshahi-6205 (Bangladesh)

    2012-03-01

    Chronic arsenic (As) exposure affects the endothelial system causing several diseases. Big endothelin-1 (Big ET-1), the biological precursor of endothelin-1 (ET-1) is a more accurate indicator of the degree of activation of the endothelial system. Effect of As exposure on the plasma Big ET-1 levels and its physiological implications have not yet been documented. We evaluated plasma Big ET-1 levels and their relation to hypertension and skin lesions in As exposed individuals in Bangladesh. A total of 304 study subjects from the As-endemic and non-endemic areas in Bangladesh were recruited for this study. As concentrations in water, hair and nails were measured by Inductively Coupled Plasma Mass Spectroscopy (ICP-MS). The plasma Big ET-1 levels were measured using a one-step sandwich enzyme immunoassay kit. Significant increase in Big ET-1 levels were observed with the increasing concentrations of As in drinking water, hair and nails. Further, before and after adjusting with different covariates, plasma Big ET-1 levels were found to be significantly associated with the water, hair and nail As concentrations of the study subjects. Big ET-1 levels were also higher in the higher exposure groups compared to the lowest (reference) group. Interestingly, we observed that Big ET-1 levels were significantly higher in the hypertensive and skin lesion groups compared to the normotensive and without skin lesion counterpart, respectively of the study subjects in As-endemic areas. Thus, this study demonstrated a novel dose–response relationship between As exposure and plasma Big ET-1 levels indicating the possible involvement of plasma Big ET-1 levels in As-induced hypertension and skin lesions. -- Highlights: ► Plasma Big ET-1 is an indicator of endothelial damage. ► Plasma Big ET-1 level increases dose-dependently in arsenic exposed individuals. ► Study subjects in arsenic-endemic areas with hypertension have elevated Big ET-1 levels. ► Study subjects with arsenic

  4. From big bang to big crunch and beyond

    International Nuclear Information System (INIS)

    Elitzur, Shmuel; Rabinovici, Eliezer; Giveon, Amit; Kutasov, David

    2002-01-01

    We study a quotient Conformal Field Theory, which describes a 3+1 dimensional cosmological spacetime. Part of this spacetime is the Nappi-Witten (NW) universe, which starts at a 'big bang' singularity, expands and then contracts to a 'big crunch' singularity at a finite time. The gauged WZW model contains a number of copies of the NW spacetime, with each copy connected to the preceding one and to the next one at the respective big bang/big crunch singularities. The sequence of NW spacetimes is further connected at the singularities to a series of non-compact static regions with closed timelike curves. These regions contain boundaries, on which the observables of the theory live. This suggests a holographic interpretation of the physics. (author)

  5. Big Opportunities and Big Concerns of Big Data in Education

    Science.gov (United States)

    Wang, Yinying

    2016-01-01

    Against the backdrop of the ever-increasing influx of big data, this article examines the opportunities and concerns over big data in education. Specifically, this article first introduces big data, followed by delineating the potential opportunities of using big data in education in two areas: learning analytics and educational policy. Then, the…

  6. The Role of Phosphoramidon on the Biological Activity of Big Endothelin-1 in the Rat Mesenteric Microcirculation in Vivo

    International Nuclear Information System (INIS)

    Abdelhalim, Mohamed A K

    2008-01-01

    The goal of the present study was to clarify the role of the metalloprotease inhibitor phosphoramidon in the effects induced by big endothelin-1 (big ET-1) in the rat mesenteric microcirculation in vivo, by investigating the systemic blood pressure and the diameter and blood flow velocity of arterioles and venules of the rat mesentery. For this purpose, the rat mesentery was arranged for in situ intravital microscopic observation under transillumination, and separate cumulative injections of big ET-1 and phosphoramidon were infused into the right jugular vein. Twenty-five rats (Charles River, 130-140 g) were used, and the experiments were divided into two groups. In the first group, cumulative injections of big ET-1 (1000-8000 pmol/kg) were infused through a catheter inserted into the right jugular vein, with each dose infused 25 min before the following dose. Infusion of big ET-1 (1000-8000 pmol/kg) elicited a long-lasting pressor effect. Low doses of big ET-1 (1000-2000 pmol/kg) elicited a significant (p < 0.05) dose-dependent increase in microvascular blood flow velocity in both arterioles (20-30 μm) and venules (30-50 μm), while the diameters of arterioles and venules showed a slight, non-significant vasodilator effect. High doses of big ET-1 (4000-8000 pmol/kg) elicited a significant dose-dependent decrease in the blood flow velocity of arterioles and venules, and diameters returned to control values. This may be attributed to the gradual conversion of big ET-1 to ET-1, a potent vasoconstrictor. In the second group, cumulative injections of phosphoramidon (30 mg/kg/10 min) were administered 10 min before the infusion of big ET-1.
These findings suggest that phosphoramidon significantly suppressed the long-lasting pressor effect, the dose-dependent increase and decrease in blood flow velocity, and the slow vasodilator effect produced by big ET-1.

  7. BIG1, a brefeldin A-inhibited guanine nucleotide-exchange protein regulates neurite development via PI3K-AKT and ERK signaling pathways.

    Science.gov (United States)

    Zhou, C; Li, C; Li, D; Wang, Y; Shao, W; You, Y; Peng, J; Zhang, X; Lu, L; Shen, X

    2013-12-19

    The elongation of neuron is highly dependent on membrane trafficking. Brefeldin A (BFA)-inhibited guanine nucleotide-exchange protein 1 (BIG1) functions in the membrane trafficking between the Golgi apparatus and the plasma membrane. BFA, an uncompetitive inhibitor of BIG1 can inhibit neurite outgrowth and polarity development. In this study, we aimed to define the possible role of BIG1 in neurite development and to further investigate the potential mechanism. By immunostaining, we found that BIG1 was extensively colocalized with synaptophysin, a marker for synaptic vesicles in soma and partly in neurites. The amount of both protein and mRNA of BIG1 were up-regulated during rat brain development. BIG1 depletion significantly decreased the neurite length and inhibited the phosphorylation of phosphatidylinositide 3-kinase (PI3K) and protein kinase B (AKT). Inhibition of BIG1 guanine nucleotide-exchange factor (GEF) activity by BFA or overexpression of the dominant-negative BIG1 reduced PI3K and AKT phosphorylation, indicating regulatory effects of BIG1 on PI3K-AKT signaling pathway is dependent on its GEF activity. BIG1 siRNA or BFA treatment also significantly reduced extracellular signal-regulated kinase (ERK) phosphorylation. Overexpression of wild-type BIG1 significantly increased ERK phosphorylation, but the dominant-negative BIG1 had no effect on ERK phosphorylation, indicating the involvement of BIG1 in ERK signaling regulation may not be dependent on its GEF activity. Our result identified a novel function of BIG1 in neurite development. The newly recognized function integrates the function of BIG1 in membrane trafficking with the activation of PI3K-AKT and ERK signaling pathways which are critical in neurite development. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  8. HER2 status predicts for upfront AI benefit: A TRANS-AIOG meta-analysis of 12,129 patients from ATAC, BIG 1-98 and TEAM with centrally determined HER2.

    Science.gov (United States)

    Bartlett, John M S; Ahmed, Ikhlaaq; Regan, Meredith M; Sestak, Ivana; Mallon, Elizabeth A; Dell'Orto, Patrizia; Thürlimann, Beat; Seynaeve, Caroline; Putter, Hein; Van de Velde, Cornelis J H; Brookes, Cassandra L; Forbes, John F; Viale, Giuseppe; Cuzick, Jack; Dowsett, Mitchell; Rea, Daniel W

    2017-07-01

    A meta-analysis of the effects of HER2 status, specifically within the first 2-3 years of adjuvant endocrine therapy, has the potential to inform patient selection for upfront aromatase inhibitor (AI) therapy or a switching strategy of tamoxifen followed by an AI. The pre-existing standardisation of methodology for HER2 (immunohistochemistry/fluorescence in situ hybridization) facilitates analysis of existing data for this key marker. Following a prospectively designed statistical analysis plan, patient data from 3 phase III trials [Arimidex, Tamoxifen, Alone or in Combination Trial (ATAC), Breast International Group (BIG) 1-98 and Tamoxifen Exemestane Adjuvant Multicentre Trial (TEAM)] comparing an AI to tamoxifen during the first 2-3 years of adjuvant endocrine treatment were collected, and a treatment-by-marker analysis of distant recurrence-free interval (censored at 2-3 years of treatment) for HER2 status × AI versus tamoxifen treatment was performed to address the clinical question of the efficacy of 'upfront' versus 'switch' strategies for AIs. This prospectively planned, patient-level data meta-analysis demonstrated a significant treatment (AI versus tamoxifen) by marker (HER2) interaction in a multivariate analysis (interaction hazard ratio [HR] = 1.61, 95% CI 1.01-2.57) in the first 2-3 years of adjuvant endocrine therapy. Patients with HER2-negative cancers experienced improved outcomes (distant relapse) when treated with upfront AI rather than tamoxifen, whilst patients with HER2-positive cancers fared no better, or slightly worse, in the first 2-3 years. However, the small number of HER2-positive cancers/events may explain a large degree of heterogeneity in the HER2-positive groups across all 3 trials. Other causes, perhaps related to subtle differences between AIs, cannot be excluded and warrant further exploration. Copyright © 2017 Elsevier Ltd
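
A treatment-by-marker interaction of the kind reported above is, in essence, a ratio of the subgroup hazard ratios, tested on the log scale. A minimal sketch; the subgroup HRs and CIs below are hypothetical, not the meta-analysis data:

```python
import math

def interaction_hr(hr1, ci1, hr2, ci2, z=1.96):
    """Ratio of two subgroup hazard ratios with a 95% CI.

    hr1, hr2: subgroup HRs; ci1, ci2: their (low, high) 95% CIs.
    The standard errors are backed out of the CIs on the log scale.
    """
    se1 = (math.log(ci1[1]) - math.log(ci1[0])) / (2 * z)
    se2 = (math.log(ci2[1]) - math.log(ci2[0])) / (2 * z)
    d = math.log(hr1) - math.log(hr2)
    se = math.sqrt(se1**2 + se2**2)
    return math.exp(d), math.exp(d - z * se), math.exp(d + z * se)

# Hypothetical subgroups: AI-vs-tamoxifen HR 0.80 in HER2-negative disease
# and 1.60 in HER2-positive disease gives an interaction HR of 2.0.
ratio, lo, hi = interaction_hr(1.60, (1.10, 2.33), 0.80, (0.65, 0.98))
```

Because the interaction variance is the sum of the subgroup variances, a small HER2-positive subgroup (few events, wide CI) dominates the uncertainty, consistent with the heterogeneity caveat in the abstract.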

  9. A randomised comparison of radical radiotherapy with or without chemotherapy for patients with non-small cell lung cancer: Results from the Big Lung Trial

    International Nuclear Information System (INIS)

    Fairlamb, David; Milroy, Robert; Gower, Nicole; Parmar, Mahesh; Peake, Michael; Rudd, Robin; Souhami, Robert; Spiro, Stephen; Stephens, Richard; Waller, David

    2005-01-01

    Background: A meta-analysis of trials comparing primary treatment with or without chemotherapy for patients with non-small cell lung cancer published in 1995 suggested a survival benefit for cisplatin-based chemotherapy in each of the primary treatment settings studied, but it included many small trials, and trials with differing eligibility criteria and chemotherapy regimens. Methods: The Big Lung Trial was a large pragmatic trial designed to confirm the survival benefits seen in the meta-analysis, and this paper reports the findings in the radical radiotherapy setting. The trial closed before the required sample size was achieved due to slow accrual, with a total of 288 patients randomised to receive radical radiotherapy alone (146 patients) or sequential radical radiotherapy and cisplatin-based chemotherapy (142 patients). Results: There was no evidence that patients allocated sequential chemotherapy and radical radiotherapy had a better survival than those allocated radical radiotherapy alone, HR 1.07 (95% CI 0.84-1.38, P=0.57), median survival 13.0 months for the sequential group and 13.2 for the radical radiotherapy alone group. In addition, exploratory analyses could not identify any subgroup that might benefit more or less from chemotherapy. Conclusions: Despite not suggesting a survival benefit for the sequential addition of chemotherapy to radical radiotherapy, possibly because of the relatively small sample size and consequently wide confidence intervals, the results can still be regarded as consistent with the meta-analysis, and other similarly designed recently published large trials. Combining all these results suggests there may be a small median survival benefit with chemotherapy of between 2 and 8 weeks

  10. Different pressor and bronchoconstrictor properties of human big-endothelin-1, 2 (1-38) and 3 in ketamine/xylazine-anaesthetized guinea-pigs.

    OpenAIRE

    Gratton, J P; Rae, G A; Claing, A; Télémaque, S; D'Orléans-Juste, P

    1995-01-01

    1. In the present study, the precursors of endothelin-1, endothelin-2 and endothelin-3 were tested for their pressor and bronchoconstrictor properties in the anaesthetized guinea-pig. In addition, the effects of big-endothelin-1 and endothelin-1 were assessed under urethane or ketamine/xylazine anaesthesia. 2. When compared to ketamine/xylazine, urethane markedly depressed the pressor and bronchoconstrictor properties of endothelin-1 and big-endothelin-1. 3. Under ketamine/xylazine anaesthesi...

  11. How Big Are "Martin's Big Words"? Thinking Big about the Future.

    Science.gov (United States)

    Gardner, Traci

    "Martin's Big Words: The Life of Dr. Martin Luther King, Jr." tells of King's childhood determination to use "big words" through biographical information and quotations. In this lesson, students in grades 3 to 5 explore information on Dr. King to think about his "big" words, then they write about their own…

  12. Seasonal shifts in the diet of the big brown bat (Eptesicus fuscus), Fort Collins, Colorado

    Science.gov (United States)

    Valdez, Ernest W.; O'Shea, Thomas J.

    2014-01-01

    Recent analyses suggest that the big brown bat (Eptesicus fuscus) may be less of a beetle specialist (Coleoptera) in the western United States than previously thought, and that its diet might also vary with temperature. We tested the hypothesis that big brown bats might opportunistically prey on moths by analyzing insect fragments in guano pellets from 30 individual bats (27 females and 3 males) captured while foraging in Fort Collins, Colorado, during May, late July–early August, and late September 2002. We found that bats sampled 17–20 May (n = 12 bats) had a high (81–83%) percentage of volume of lepidopterans in guano, with the remainder (17–19% volume) dipterans and no coleopterans. From 28 May–9 August (n = 17 bats) coleopterans dominated (74–98% volume). On 20 September (n = 1 bat) lepidopterans were 99% of volume in guano. Migratory miller moths (Euxoa auxiliaris) were unusually abundant in Fort Collins in spring and autumn of 2002 and are known agricultural pests as larvae (army cutworms), suggesting that seasonal dietary flexibility in big brown bats has economic benefits.

  13. Domain analyses of Usher syndrome causing Clarin-1 and GPR98 protein models.

    Science.gov (United States)

    Khan, Sehrish Haider; Javed, Muhammad Rizwan; Qasim, Muhammad; Shahzadi, Samar; Jalil, Asma; Rehman, Shahid Ur

    2014-01-01

    Usher syndrome is an autosomal recessive disorder that causes hearing loss, Retinitis Pigmentosa (RP) and vestibular dysfunction. It is a clinically and genetically heterogeneous disorder which is clinically divided into three types, i.e. type I, type II and type III. To date, there are about twelve loci and ten identified genes associated with Usher syndrome. A mutation in any of these genes, e.g. CDH23, CLRN1, GPR98, MYO7A, PCDH15, USH1C, USH1G, USH2A and DFNB31, can result in Usher syndrome or non-syndromic deafness. These genes provide instructions for making proteins that play important roles in normal hearing, balance and vision. Studies have shown that the protein structures of only seven of these genes have been determined experimentally; the structures of the remaining three, Clarin-1, GPR98 and Usherin, are unavailable. In the absence of an experimentally determined structure, homology modeling and threading often provide a useful 3D model of a protein. Therefore, in the current study the Clarin-1 and GPR98 proteins were analyzed for signal peptides, domains and motifs. Clarin-1 was found to lack a signal peptide and consists of a prokar lipoprotein domain. Clarin-1 is classified within the claudin 2 superfamily and contains twelve motifs. GPR98, in contrast, has a 29 amino acid long signal peptide and is classified within GPCR family 2, belonging to the Concanavalin A-like lectin/glucanase superfamily; it was found to consist of GPS and G protein receptor F2 domains and twenty-nine motifs. Their 3D structures were predicted using the I-TASSER server. The model of Clarin-1 showed only α-helices and no β-sheets, while the model of GPR98 showed both α-helices and β-sheets. The predicted structures were then evaluated and validated by MolProbity and Ramachandran plots. The evaluation of the predicted structures showed 78.9% of Clarin-1 residues and 78.9% of GPR98 residues within favored regions. The findings of the present study have resulted in the

  14. Nup153 and Nup98 bind the HIV-1 core and contribute to the early steps of HIV-1 replication

    Energy Technology Data Exchange (ETDEWEB)

    Di Nunzio, Francesca, E-mail: francesca.di-nunzio@pasteur.fr [Molecular Virology and Vaccinology unit, CNRS URA 3015, Department of Virology, Institut Pasteur, 25-28 rue du Dr. Roux, 75015 Paris (France); Fricke, Thomas [Department of Microbiology and Immunology, Albert Einstein College of Medicine Bronx, NY 10461 (United States); Miccio, Annarita [University of Modena e Reggio Emilia, Centro di Medicina Rigenerativa, Modena (Italy); Valle-Casuso, Jose Carlos; Perez, Patricio [Department of Microbiology and Immunology, Albert Einstein College of Medicine Bronx, NY 10461 (United States); Souque, Philippe [Molecular Virology and Vaccinology unit, CNRS URA 3015, Department of Virology, Institut Pasteur, 25-28 rue du Dr. Roux, 75015 Paris (France); Rizzi, Ermanno; Severgnini, Marco [Institute of Biomedical Technologies, CNR, Milano (Italy); Mavilio, Fulvio [University of Modena e Reggio Emilia, Centro di Medicina Rigenerativa, Modena (Italy); Genethon, Evry (France); Charneau, Pierre [Molecular Virology and Vaccinology unit, CNRS URA 3015, Department of Virology, Institut Pasteur, 25-28 rue du Dr. Roux, 75015 Paris (France); Diaz-Griffero, Felipe, E-mail: felipe.diaz-griffero@einstein.yu.edu [Department of Microbiology and Immunology, Albert Einstein College of Medicine Bronx, NY 10461 (United States)

    2013-05-25

    The early steps of HIV-1 replication involve the entry of HIV-1 into the nucleus, which is characterized by viral interactions with nuclear pore components. HIV-1 developed an evolutionary strategy to usurp the nuclear pore machinery and chromatin in order to integrate and efficiently express viral genes. In the current work, we studied the role of nucleoporins 153 and 98 (Nup153 and Nup98) in infection of human Jurkat lymphocytes by HIV-1. We showed that Nup153-depleted cells exhibited a defect in nuclear import, while depletion of Nup98 caused a slight defect in HIV integration. To explore the biochemical viral determinants for the requirement of Nup153 and Nup98 during HIV-1 infection, we tested the ability of these nucleoporins to interact with HIV-1 cores. Our findings showed that both nucleoporins bind HIV-1 cores, suggesting that this interaction is important for HIV-1 nuclear import and/or integration. Distribution analysis of integration sites in Nup153-depleted cells revealed a reduced tendency of HIV-1 to integrate in intragenic sites, which in part could account for the large infectivity defect observed in Nup153-depleted cells. Our work strongly supports a role for Nup153 in HIV-1 nuclear import and integration. - Highlights: ► We studied the role of Nup98 and Nup153 in HIV-1 infection. ► Nup98 binds the HIV-1 core and is involved in HIV-1 integration. ► Nup153 binds the HIV-1 core and is involved in HIV-1 nuclear import. ► Depletion of Nup153 decreased the integration of HIV-1 in transcriptionally active sites.

  15. Nup153 and Nup98 bind the HIV-1 core and contribute to the early steps of HIV-1 replication

    International Nuclear Information System (INIS)

    Di Nunzio, Francesca; Fricke, Thomas; Miccio, Annarita; Valle-Casuso, Jose Carlos; Perez, Patricio; Souque, Philippe; Rizzi, Ermanno; Severgnini, Marco; Mavilio, Fulvio; Charneau, Pierre; Diaz-Griffero, Felipe

    2013-01-01

    The early steps of HIV-1 replication involve the entry of HIV-1 into the nucleus, which is characterized by viral interactions with nuclear pore components. HIV-1 developed an evolutionary strategy to usurp the nuclear pore machinery and chromatin in order to integrate and efficiently express viral genes. In the current work, we studied the role of nucleoporins 153 and 98 (Nup153 and Nup98) in infection of human Jurkat lymphocytes by HIV-1. We showed that Nup153-depleted cells exhibited a defect in nuclear import, while depletion of Nup98 caused a slight defect in HIV integration. To explore the biochemical viral determinants for the requirement of Nup153 and Nup98 during HIV-1 infection, we tested the ability of these nucleoporins to interact with HIV-1 cores. Our findings showed that both nucleoporins bind HIV-1 cores, suggesting that this interaction is important for HIV-1 nuclear import and/or integration. Distribution analysis of integration sites in Nup153-depleted cells revealed a reduced tendency of HIV-1 to integrate in intragenic sites, which in part could account for the large infectivity defect observed in Nup153-depleted cells. Our work strongly supports a role for Nup153 in HIV-1 nuclear import and integration. - Highlights: ► We studied the role of Nup98 and Nup153 in HIV-1 infection. ► Nup98 binds the HIV-1 core and is involved in HIV-1 integration. ► Nup153 binds the HIV-1 core and is involved in HIV-1 nuclear import. ► Depletion of Nup153 decreased the integration of HIV-1 in transcriptionally active sites

  16. Big endothelin-1 and nitric oxide in hypertensive elderly patients with and without obstructive sleep apnea-hypopnea syndrome.

    Science.gov (United States)

    Anunciato, Iara Felicio; Lobo, Rômulo Rebouças; Coelho, Eduardo Barbosa; Verri, Waldiceu Aparecido; Eckeli, Alan Luiz; Evora, Paulo Roberto Barbosa; Nobre, Fernando; Moriguti, Júlio César; Ferriolli, Eduardo; Lima, Nereida Kilza da Costa

    2013-10-01

    The role of oxidative stress in hypertensive elderly patients with obstructive sleep apnea-hypopnea syndrome (OSAHS) is unknown. The purpose was to evaluate the levels of big endothelin-1 (Big ET-1) and nitric oxide (NO) in elderly hypertensive patients with and without moderate to severe OSAHS. Volunteers were hospitalized for 24 h. We obtained the following data: body mass index (BMI); 24-h ambulatory blood pressure monitoring; and current medication. Arterial blood was collected at 7 pm and 7 am for determining plasma NO and Big ET-1 levels. Pulse oximetry was performed during sleep. Pearson's or Spearman's correlation and univariate analysis of variance were used for statistical analysis. We studied 25 subjects with OSAHS (group 1) and 12 without OSAHS (group 2), aged 67.0 ± 6.5 years and 67.8 ± 6.8 years, respectively. No significant differences were observed between the groups in BMI, number of hours of sleep, 24-h systolic and diastolic BPs, awake BP, sleep BP, or medications to control BP. No differences were detected in plasma Big ET-1 and NO levels at 19:00 h, but plasma Big ET-1 levels at 7:00 h were higher in group 1 (p = 0.03). In group 1, a negative correlation was also observed between the mean arterial oxyhemoglobin saturation level and both 24-h systolic BP (p = 0.03, r = -0.44) and Big ET-1 (p = 0.04, r = -0.41). On comparing elderly hypertensive patients with and without OSAHS having similar BP and BMI, we observed higher Big ET-1 levels after sleep in the OSAHS group. NO levels did not differ between the hypertensive patients with or without OSAHS.
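    The correlations reported above (e.g. r = -0.44) are Pearson product-moment coefficients. As a minimal sketch of the computation, using made-up paired measurements rather than the patient data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up example: oxyhemoglobin saturation (%) vs a Big ET-1 level
sat = [97, 95, 93, 91, 89, 87]
big_et = [1.1, 1.0, 1.3, 1.4, 1.6, 1.5]
r = pearson_r(sat, big_et)
print(f"r = {r:.2f}")  # negative: lower saturation, higher Big ET-1
```

    A negative r, as in the study, indicates that lower oxyhemoglobin saturation tends to accompany higher Big ET-1 levels; significance testing (the reported p-values) would additionally require the sample size.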

  17. Big Data, Big Problems: A Healthcare Perspective.

    Science.gov (United States)

    Househ, Mowafa S; Aldosari, Bakheet; Alanazi, Abdullah; Kushniruk, Andre W; Borycki, Elizabeth M

    2017-01-01

    Much has been written on the benefits of big data for healthcare, such as improving patient outcomes, public health surveillance, and healthcare policy decisions. Over the past five years, Big Data, and the data sciences field in general, has been hyped as the "Holy Grail" for the healthcare industry, promising a more efficient healthcare system with improved healthcare outcomes. More recently, however, healthcare researchers have been exposing the potentially harmful effects Big Data can have on patient care, associating it with increased medical costs, patient mortality, and misguided decision making by clinicians and healthcare policy makers. In this paper, we review current Big Data trends with a specific focus on the inadvertent negative impacts that Big Data could have on healthcare in general and on patient and clinical care in particular. Our study results show that although Big Data is built up to be the "Holy Grail" for healthcare, small data techniques using traditional statistical methods are, in many cases, more accurate and can lead to better healthcare outcomes than Big Data methods. In sum, Big Data may cause more problems for the healthcare industry than it solves, and in short, when it comes to the use of data in healthcare, "size isn't everything."

  18. Big Surveys, Big Data Centres

    Science.gov (United States)

    Schade, D.

    2016-06-01

    Well-designed astronomical surveys are powerful and have consistently been keystones of scientific progress. The Byurakan Surveys, using a Schmidt telescope with an objective prism, produced a list of about 3000 UV-excess Markarian galaxies, and these objects have stimulated an enormous amount of further study and appear in over 16,000 publications. The CFHT Legacy Surveys used a wide-field imager to cover thousands of square degrees, and those surveys are mentioned in over 1100 publications since 2002. Both ground and space-based astronomy have been increasing their investments in survey work. Survey instrumentation strives toward fair samples and large sky coverage and therefore strives to produce massive datasets. Thus we are faced with the "big data" problem in astronomy. Survey datasets require specialized approaches to data management, and big data places additional challenging requirements on it. If the term "big data" is defined as data collections that are too large to move, then there are profound implications for the infrastructure that supports big data science. The current model of data centres is obsolete. In the era of big data the central problem is how to create architectures that effectively manage the relationship between data collections, networks, processing capabilities, and software, given the science requirements of the projects that need to be executed. A stand-alone data silo cannot support big data science. I'll describe the current efforts of the Canadian community to deal with this situation and our successes and failures. I'll talk about how we are planning in the next decade to try to create a workable and adaptable solution to support big data science.

  19. Recht voor big data, big data voor recht

    NARCIS (Netherlands)

    Lafarre, Anne

    Big data is a phenomenon that can no longer be ignored in our society. It is past the hype cycle, and the first implementations of big data techniques are being carried out. But what exactly is big data? What do the five V's that are often mentioned in relation to big data stand for? By way of introduction to

  20. Randomised controlled trial of labouring in water compared with standard of augmentation for management of dystocia in first stage of labour

    Science.gov (United States)

    Cluett, Elizabeth R; Pickering, Ruth M; Getliffe, Kathryn; Saunders, Nigel James St George

    2004-01-01

    Objectives To evaluate the impact of labouring in water during the first stage of labour on rates of epidural analgesia and operative delivery in nulliparous women with dystocia. Design Randomised controlled trial. Setting University teaching hospital in southern England. Participants 99 nulliparous women with dystocia (slow rate of cervical dilation in active labour). Interventions Immersion in water versus standard augmentation for dystocia (amniotomy and intravenous oxytocin). Main outcome measures Primary: epidural analgesia and operative delivery rates. Secondary: augmentation rates with amniotomy and oxytocin, length of labour, maternal and neonatal morbidity including infections, maternal pain score, and maternal satisfaction with care. Results Women randomised to immersion in water had a lower rate of epidural analgesia than women allocated to augmentation (47% v 66%, relative risk 0.71 (95% confidence interval 0.49 to 1.01), number needed to treat for benefit (NNT) 5). They showed no difference in rates of operative delivery (49% v 50%, 0.98 (0.65 to 1.47), NNT 98), but significantly fewer received augmentation (71% v 96%, 0.74 (0.59 to 0.88), NNT 4) or any form of obstetric intervention (amniotomy, oxytocin, epidural, or operative delivery) (80% v 98%, 0.81 (0.67 to 0.92), NNT 5). More neonates of women in the water group were admitted to the neonatal unit (6 v 0, P = 0.013), but there was no difference in Apgar score, infection rates, or umbilical cord pH. Conclusions Labouring in water under midwifery care may be an option for slow progress in labour, reducing the need for obstetric intervention, and offering an alternative pain management strategy. PMID:14744822
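    The relative risk and number-needed-to-treat figures in the results follow directly from the event rates in each arm. A minimal sketch using the rounded percentages reported in the abstract (47% v 66% for epidural analgesia), not the trial's own analysis code:

```python
def rr_and_nnt(p_treat, p_control):
    """Relative risk and number needed to treat from two event proportions."""
    rr = p_treat / p_control
    arr = abs(p_control - p_treat)  # absolute risk reduction
    nnt = 1 / arr
    return rr, nnt

# Epidural analgesia: 47% with water immersion vs 66% with augmentation
rr, nnt = rr_and_nnt(0.47, 0.66)
print(f"RR {rr:.2f}, NNT {nnt:.1f}")  # RR 0.71, NNT 5.3 (reported as 5)
```

    The same arithmetic applied to 49% v 50% for operative delivery gives the abstract's strikingly large NNT of about 98, since the absolute risk reduction there is only one percentage point.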

  1. 29 CFR 98.1005 - State.

    Science.gov (United States)

    2010-07-01

    Title 29 (Labor), Vol. 1, 2010-07-01 edition. § 98.1005 State. Office of the Secretary of Labor, GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT), Definitions. (a) State means— (1) Any of the states of the United States; (2) The District of Columbia; (3) The...

  2. The Collaborative Randomised Amnioinfusion for Meconium Project (CRAMP): 1. South Africa.

    Science.gov (United States)

    Hofmeyr, G J; Gülmezoğlu, A M; Buchmann, E; Howarth, G R; Shaw, A; Nikodem, V C; Cronje, H; de Jager, M; Mahomed, K

    1998-03-01

    To evaluate transcervical amnioinfusion for meconium-stained amniotic fluid during labour. Multicentre randomised controlled trial. Four urban academic hospitals in South Africa. Obstetric surveillance included the use of electronic fetal heart rate monitoring in most cases. Women in labour at term with moderate or thick meconium staining of the amniotic fluid. Transcervical amnioinfusion of 800 mL saline at 15 mL per minute, followed by a maintenance infusion at 3 mL per minute. The control group received routine care. Blinding of the intervention was not possible. Caesarean section, meconium aspiration syndrome and perinatal mortality. Caesarean section rates were similar (amnioinfusion group 70/167 vs control group 68/159; RR 0.98, 95% CI 0.76-1.26). The incidence of meconium aspiration syndrome was lower than expected on the basis of previous studies (4/162 vs 6/163; RR 0.67, 95% CI 0.19-2.33). There were no perinatal deaths. There were no significant differences in any of the subsidiary outcomes. This study concurred with three previous trials which found no effect of amnioinfusion for meconium-stained amniotic fluid on caesarean section rate, though the pooled data from all identified trials to date show a significant reduction. The findings with respect to meconium aspiration syndrome were inconclusive in this study alone because of the small number of babies affected, but the point estimate of the relative risk was consistent with the finding of a significant reduction in previous studies and with the Zimbabwe arm (CRAMP 2) of this study. Pooled data clearly support the use of amnioinfusion for meconium-stained amniotic fluid to reduce the incidence of meconium aspiration syndrome.
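    The caesarean section comparison above (70/167 vs 68/159; RR 0.98, 95% CI 0.76-1.26) can be reproduced with the standard log-scale Wald interval for a relative risk. A minimal sketch, not the trial's own statistical code:

```python
import math

def relative_risk_ci(a, n1, b, n2, z=1.96):
    """Relative risk of an event (a/n1 vs b/n2) with a Wald 95% CI on the log scale."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Caesarean section counts from the abstract: 70/167 vs 68/159
rr, lo, hi = relative_risk_ci(70, 167, 68, 159)
print(f"RR {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # RR 0.98, 95% CI 0.76-1.26
```

    Plugging in the meconium aspiration syndrome counts (4/162 vs 6/163) with the same function yields the abstract's wide interval, illustrating why that comparison was inconclusive on its own.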

  3. The trashing of Big Green

    International Nuclear Information System (INIS)

    Felten, E.

    1990-01-01

    The Big Green initiative on California's ballot lost by a margin of 2-to-1. Green measures lost in five other states, shocking ecology-minded groups. According to the postmortem by environmentalists, Big Green was a victim of poor timing and big spending by the opposition. Now its supporters plan to break up the bill and try to pass some provisions in the Legislature

  4. BigOP: Generating Comprehensive Big Data Workloads as a Benchmarking Framework

    OpenAIRE

    Zhu, Yuqing; Zhan, Jianfeng; Weng, Chuliang; Nambiar, Raghunath; Zhang, Jinchao; Chen, Xingzhen; Wang, Lei

    2014-01-01

    Big Data is considered proprietary asset of companies, organizations, and even nations. Turning big data into real treasure requires the support of big data systems. A variety of commercial and open source products have been unleashed for big data storage and processing. While big data users are facing the choice of which system best suits their needs, big data system developers are facing the question of how to evaluate their systems with regard to general big data processing needs. System b...

  5. Mentoring in Schools: An Impact Study of Big Brothers Big Sisters School-Based Mentoring

    Science.gov (United States)

    Herrera, Carla; Grossman, Jean Baldwin; Kauh, Tina J.; McMaken, Jennifer

    2011-01-01

    This random assignment impact study of Big Brothers Big Sisters School-Based Mentoring involved 1,139 9- to 16-year-old students in 10 cities nationwide. Youth were randomly assigned to either a treatment group (receiving mentoring) or a control group (receiving no mentoring) and were followed for 1.5 school years. At the end of the first school…

  6. Insecticide-treated nets for the prevention of malaria in pregnancy: a systematic review of randomised controlled trials.

    Directory of Open Access Journals (Sweden)

    Carol Gamble

    2007-03-01

    BACKGROUND: Protection from malaria with insecticide-treated bednets (ITNs) during pregnancy is widely advocated, but evidence of benefit has been inconsistent. We undertook a systematic review of randomised trials. METHODS AND FINDINGS: Three cluster-randomised and two individually randomised trials met the inclusion criteria; four from Africa (n = 6,418) and one from Thailand (n = 223). In Africa, ITNs compared to no nets increased mean birth weight by 55 g (95% confidence interval [CI] 21-88), reduced low birth weight by 23% (relative risk [RR] 0.77, 95% CI 0.61-0.98), and reduced miscarriages/stillbirths by 33% (RR 0.67, 0.47-0.97) in the first few pregnancies. Placental parasitaemia was reduced by 23% in all gravidae (RR 0.77, 0.66-0.90). The effects were apparent in the cluster-randomised trials and the one individually randomised trial in Africa. The trial in Thailand, which randomised individuals to ITNs or untreated nets, showed reductions in anaemia and fetal loss in all gravidae, but not reductions in clinical malaria or low birth weight. CONCLUSIONS: ITNs used throughout pregnancy or from mid-pregnancy onwards have a beneficial impact on pregnancy outcome in malaria-endemic Africa in the first few pregnancies. The potential impact of ITNs in pregnant women and their newborns in malaria regions outside Africa requires further research.
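    Pooled estimates like the ones in this review are typically obtained by inverse-variance weighting of the per-trial log relative risks, with each trial's standard error recovered from its reported 95% CI. A minimal fixed-effect sketch using hypothetical per-trial figures (not the review's data):

```python
import math

def pool_fixed_effect(rrs_and_cis, z=1.96):
    """Inverse-variance fixed-effect pooling of relative risks.

    Each entry is (RR, ci_low, ci_high); the SE of log(RR) is recovered
    from the CI width on the log scale.
    """
    num = den = 0.0
    for rr, lo, hi in rrs_and_cis:
        se = (math.log(hi) - math.log(lo)) / (2 * z)
        w = 1 / se ** 2                    # inverse-variance weight
        num += w * math.log(rr)
        den += w
    pooled = math.exp(num / den)
    se_pooled = math.sqrt(1 / den)
    return (pooled,
            math.exp(math.log(pooled) - z * se_pooled),
            math.exp(math.log(pooled) + z * se_pooled))

# Hypothetical per-trial RRs for low birth weight (illustrative only)
pooled, lo, hi = pool_fixed_effect([(0.80, 0.60, 1.05),
                                    (0.75, 0.55, 1.02),
                                    (0.78, 0.58, 1.04)])
print(f"pooled RR {pooled:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

    Note how three individually non-significant trials (each CI crossing 1) can pool to a significant overall effect, which is exactly the pattern meta-analyses of this kind are designed to detect.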

  7. How Big Is Too Big?

    Science.gov (United States)

    Cibes, Margaret; Greenwood, James

    2016-01-01

    Media Clips appears in every issue of Mathematics Teacher, offering readers contemporary, authentic applications of quantitative reasoning based on print or electronic media. This issue features "How Big is Too Big?" (Margaret Cibes and James Greenwood) in which students are asked to analyze the data and tables provided and answer a…

  8. Lasing characteristics of 1.5 μm GaInAsP-InP SCH-BIG-DR lasers

    International Nuclear Information System (INIS)

    Shim, J.I.; Komori, K.; Arai, S.; Arima, I.; Suematsu, Y.; Somchai, R.

    1991-01-01

    This paper reports that higher differential quantum efficiency η_df, smaller chirp Δλ, narrower linewidth Δν, and high coupling efficiency C_out between the active and passive regions of 1.5 μm GaInAsP/InP SCH-BIG-DR lasers with a thin active layer and asymmetric gratings are experimentally demonstrated by comparison with the same properties of DFB lasers from the same wafer. A submode suppression ratio (SMSR) of more than 39 dB at 1.7 times the threshold current and a C_out of almost 100% in the SCH-BIG-DR lasers indicated that a separate-confinement heterostructure (SCH) with a thin active layer and a bundle-integrated-guide (BIG) structure are quite adequate to realize high-performance distributed reflector (DR-type) lasers. A two times larger η_df and a 16 times larger front-to-rear output ratio (FROR) of the SCH-BIG-DR lasers than those of the DFB lasers showed the superiority of DR lasers for high-efficiency operation. Furthermore, superior spectral characteristics of the SCH-BIG-DR laser in comparison to those of conventional DFB lasers, i.e., only about half the chirp under rapid direct modulation, as well as 1/5 of the linewidth-power product, were also obtained, which originated from the smaller effective linewidth enhancement factor α_eff of the DR laser.

  9. Big Data as Governmentality

    DEFF Research Database (Denmark)

    Flyverbom, Mikkel; Madsen, Anders Koed; Rasche, Andreas

    This paper conceptualizes how large-scale data and algorithms condition and reshape knowledge production when addressing international development challenges. The concept of governmentality and four dimensions of an analytics of government are proposed as a theoretical framework to examine how big data is constituted as an aspiration to improve the data and knowledge underpinning development efforts. Based on this framework, we argue that big data's impact on how relevant problems are governed is enabled by (1) new techniques of visualizing development issues, (2) linking aspects... The paper shows that big data problematizes selected aspects of traditional ways to collect and analyze data for development (e.g. via household surveys). We also demonstrate that using big data analyses to address development challenges raises a number of questions that can deteriorate its impact.

  10. Fusion of NUP98 and the SET binding protein 1 (SETBP1) gene in a paediatric acute T cell lymphoblastic leukaemia with t(11;18)(p15;q12)

    DEFF Research Database (Denmark)

    Panagopoulos, Ioannis; Kerndrup, Gitte; Carlsen, Niels

    2007-01-01

    Three NUP98 chimaeras have previously been reported in T cell acute lymphoblastic leukaemia (T-ALL): NUP98/ADD3, NUP98/CCDC28A, and NUP98/RAP1GDS1. We report a T-ALL with t(11;18)(p15;q12) resulting in a novel NUP98 fusion. Fluorescent in situ hybridisation showed NUP98 and SET binding protein 1 (SETBP1)... in leukaemias; however, it encodes a protein that specifically interacts with SET, fused to NUP214 in a case of acute undifferentiated leukaemia.

  11. 45 CFR 98.90 - Monitoring.

    Science.gov (United States)

    2010-10-01

    Title 45 (Public Welfare), Vol. 1, 2010-10-01 edition. § 98.90 Monitoring. Department of Health and Human Services, GENERAL ADMINISTRATION, CHILD CARE AND DEVELOPMENT FUND, Monitoring, Non-compliance and Complaints. (a) The Secretary will monitor programs funded under...

  12. 29 CFR 98.1000 - Respondent.

    Science.gov (United States)

    2010-07-01

    Title 29 (Labor), Vol. 1, 2010-07-01 edition. § 98.1000 Respondent. Office of the Secretary of Labor, GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT), Definitions. Respondent means a person against whom an agency has initiated a debarment or suspension action.

  13. 34 CFR 98.6 - Reports.

    Science.gov (United States)

    2010-07-01

    Title 34 (Education), Vol. 1, 2010-07-01 edition. § 98.6 Reports. Office of the Secretary, Department of Education, STUDENT RIGHTS IN RESEARCH, EXPERIMENTAL PROGRAMS, AND TESTING. The Secretary may require the recipient to submit reports containing information necessary to...

  14. Nursing Needs Big Data and Big Data Needs Nursing.

    Science.gov (United States)

    Brennan, Patricia Flatley; Bakken, Suzanne

    2015-09-01

    Contemporary big data initiatives in health care will benefit from greater integration with nursing science and nursing practice; in turn, nursing science and nursing practice has much to gain from the data science initiatives. Big data arises secondary to scholarly inquiry (e.g., -omics) and everyday observations like cardiac flow sensors or Twitter feeds. Data science methods that are emerging ensure that these data be leveraged to improve patient care. Big data encompasses data that exceed human comprehension, that exist at a volume unmanageable by standard computer systems, that arrive at a velocity not under the control of the investigator and possess a level of imprecision not found in traditional inquiry. Data science methods are emerging to manage and gain insights from big data. The primary methods included investigation of emerging federal big data initiatives, and exploration of exemplars from nursing informatics research to benchmark where nursing is already poised to participate in the big data revolution. We provide observations and reflections on experiences in the emerging big data initiatives. Existing approaches to large data set analysis provide a necessary but not sufficient foundation for nursing to participate in the big data revolution. Nursing's Social Policy Statement guides a principled, ethical perspective on big data and data science. There are implications for basic and advanced practice clinical nurses in practice, for the nurse scientist who collaborates with data scientists, and for the nurse data scientist. Big data and data science has the potential to provide greater richness in understanding patient phenomena and in tailoring interventional strategies that are personalized to the patient. © 2015 Sigma Theta Tau International.

  15. BIG Data - BIG Gains? Understanding the Link Between Big Data Analytics and Innovation

    OpenAIRE

    Niebel, Thomas; Rasel, Fabienne; Viete, Steffen

    2017-01-01

    This paper analyzes the relationship between firms’ use of big data analytics and their innovative performance for product innovations. Since big data technologies provide new data information practices, they create new decision-making possibilities, which firms can use to realize innovations. Applying German firm-level data we find suggestive evidence that big data analytics matters for the likelihood of becoming a product innovator as well as the market success of the firms’ product innovat...

  16. Networking for big data

    CERN Document Server

    Yu, Shui; Misic, Jelena; Shen, Xuemin (Sherman)

    2015-01-01

    Networking for Big Data supplies an unprecedented look at cutting-edge research on the networking and communication aspects of Big Data. Starting with a comprehensive introduction to Big Data and its networking issues, it offers deep technical coverage of both theory and applications.The book is divided into four sections: introduction to Big Data, networking theory and design for Big Data, networking security for Big Data, and platforms and systems for Big Data applications. Focusing on key networking issues in Big Data, the book explains network design and implementation for Big Data. It exa

  17. MicroRNA-98 rescues proliferation and alleviates ox-LDL-induced apoptosis in HUVECs by targeting LOX-1

    Science.gov (United States)

    Chen, Zhibo; Wang, Mian; He, Qiong; Li, Zilun; Zhao, Yang; Wang, Wenjian; Ma, Jieyi; Li, Yongxin; Chang, Guangqi

    2017-01-01

    Oxidized low-density lipoprotein (ox-LDL) is a major and critical mediator of atherosclerosis, and the underlying mechanism is thought to involve the ox-LDL-induced dysfunction of endothelial cells (ECs). MicroRNAs (miRNAs), which are a group of small non-coding RNA molecules that post-transcriptionally regulate the expression of target genes, have been associated with diverse cellular functions and the pathogenesis of various diseases, including atherosclerosis. miRNA-98 (miR-98) has been demonstrated to be involved in the regulation of cellular apoptosis; however, the role of miR-98 in ox-LDL-induced dysfunction of ECs and atherosclerosis has yet to be elucidated. Therefore, the present study aimed to investigate the role of miR-98 in ox-LDL-induced dysfunction of ECs and the underlying mechanism. It was demonstrated that miR-98 expression was markedly downregulated in ox-LDL-treated human umbilical vein ECs (HUVECs) and that miR-98 promoted the proliferation and alleviated apoptosis of HUVECs exposed to ox-LDL. In addition, the results demonstrated that lectin-like oxidized low-density lipoprotein receptor 1 (LOX-1) was a direct target of miR-98 in HUVECs, as indicated by a luciferase assay. The results of the present study suggested that miR-98 may inhibit the uptake of toxic ox-LDL, maintain HUVEC proliferation and protect HUVECs against apoptosis via the suppression of LOX-1. PMID:28565756

  18. 9 CFR 98.30 - Definitions.

    Science.gov (United States)

    2010-01-01

    ... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Definitions. 98.30 Section 98.30... EMBRYOS AND ANIMAL SEMEN Certain Animal Semen § 98.30 Definitions. Whenever in this subpart of the... (England, Scotland, Wales, the Isle of Man, and Northern Ireland). Cattle. Animals of the bovine species...

  19. D-branes in a big bang/big crunch universe: Misner space

    International Nuclear Information System (INIS)

    Hikida, Yasuaki; Nayak, Rashmi R.; Panigrahi, Kamal L.

    2005-01-01

We study D-branes in a two-dimensional Lorentzian orbifold R^{1,1}/Γ with a discrete boost Γ. This space is known as Misner or Milne space, and includes a big crunch/big bang singularity. In this space, there are D0-branes in spiral orbits and D1-branes with or without flux on them. In particular, we observe imaginary parts of partition functions, and interpret them as the rates of open string pair creation for D0-branes and of winding closed string emission for D1-branes. These phenomena occur due to the time-dependence of the background. The open string 2→2 scattering amplitude on a D1-brane is also computed and found to be less singular than in the closed string case

  20. D-branes in a big bang/big crunch universe: Misner space

    Energy Technology Data Exchange (ETDEWEB)

    Hikida, Yasuaki [Theory Group, High Energy Accelerator Research Organization (KEK), Tukuba, Ibaraki 305-0801 (Japan); Nayak, Rashmi R. [Dipartimento di Fisica and INFN, Sezione di Roma 2, ' Tor Vergata' , Rome 00133 (Italy); Panigrahi, Kamal L. [Dipartimento di Fisica and INFN, Sezione di Roma 2, ' Tor Vergata' , Rome 00133 (Italy)

    2005-09-01

We study D-branes in a two-dimensional Lorentzian orbifold R^{1,1}/Γ with a discrete boost Γ. This space is known as Misner or Milne space, and includes a big crunch/big bang singularity. In this space, there are D0-branes in spiral orbits and D1-branes with or without flux on them. In particular, we observe imaginary parts of partition functions, and interpret them as the rates of open string pair creation for D0-branes and of winding closed string emission for D1-branes. These phenomena occur due to the time-dependence of the background. The open string 2→2 scattering amplitude on a D1-brane is also computed and found to be less singular than in the closed string case.

  1. HoYbBIG epitaxial thick films used for Faraday rotator in the 1.55μm band

    International Nuclear Information System (INIS)

    Zhong, Z.W.; Xu, X.W.; Chong, T.C.; Yuan, S.N.; Li, M.H.; Zhang, G.Y.; Freeman, B.

    2005-01-01

Ho(3-x-y)Yb(y)Bi(x)Fe5O12 (HoYbBIG) garnet thick films with a Bi content of x = 0.9-1.5 were prepared by the liquid phase epitaxy (LPE) method. Optical properties and magneto-optical properties were characterized. The LPE-grown HoYbBIG thick films exhibited large Faraday rotation coefficients, up to 1540°/cm at 1.55 μm, and good wavelength and temperature stability
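As a quick sense check on the quoted figure, the film thickness needed for a Faraday rotator follows directly from the specific rotation. The 45° target below is an illustrative assumption (the rotation typically required for an optical isolator), not a number from the abstract:

```python
# Thickness of a 45-degree Faraday rotator made from a HoYbBIG film with the
# specific Faraday rotation quoted in the abstract (1540 deg/cm at 1.55 um).
# The 45-degree target is a typical optical-isolator requirement, assumed here.

theta_specific = 1540.0            # Faraday rotation coefficient, deg/cm
target_rotation = 45.0             # desired rotation, deg (assumed)

thickness_cm = target_rotation / theta_specific
thickness_um = thickness_cm * 1e4  # convert cm to micrometres

print(round(thickness_um))         # ~292 um of film
```

A rotator a few hundred micrometres thick is consistent with the abstract's emphasis on thick LPE-grown films rather than thin sputtered layers.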

  2. Big Argumentation?

    Directory of Open Access Journals (Sweden)

    Daniel Faltesek

    2013-08-01

Full Text Available Big Data is nothing new. Public concern regarding the mass diffusion of data has appeared repeatedly with computing innovations; before the term Big Data, the phenomenon was most recently referred to as the "information explosion". In this essay, I argue that the appeal of Big Data is not a function of computational power, but of a synergistic relationship between aesthetic order and a politics evacuated of meaningful public deliberation. Understanding, and challenging, Big Data requires attention to the aesthetics of data visualization and the ways in which those aesthetics would seem to depoliticize information. The conclusion proposes an alternative argumentative aesthetic as the appropriate response to the depoliticization posed by the popular imaginary of Big Data.

  3. 45 CFR 98.14 - Plan process.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Plan process. 98.14 Section 98.14 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND General Application Procedures § 98.14 Plan process. In the development of each Plan, as required pursuant to § 98.17...

  4. Cohort Randomised Controlled Trial of a Multifaceted Podiatry Intervention for the Prevention of Falls in Older People (The REFORM Trial).

    Science.gov (United States)

    Cockayne, Sarah; Adamson, Joy; Clarke, Arabella; Corbacho, Belen; Fairhurst, Caroline; Green, Lorraine; Hewitt, Catherine E; Hicks, Kate; Kenan, Anne-Maree; Lamb, Sarah E; McIntosh, Caroline; Menz, Hylton B; Redmond, Anthony C; Richardson, Zoe; Rodgers, Sara; Vernon, Wesley; Watson, Judith; Torgerson, David J

    2017-01-01

Falls are a major cause of morbidity among older people. A multifaceted podiatry intervention may reduce the risk of falling. This study evaluated such an intervention. Pragmatic cohort randomised controlled trial in England and Ireland. 1010 participants were randomised (493 to the Intervention group and 517 to Usual Care) to either: a podiatry intervention, including foot and ankle exercises, foot orthoses and, if required, new footwear, and a falls prevention leaflet; or usual podiatry treatment plus a falls prevention leaflet. The primary outcome was the incidence rate of self-reported falls per participant in the 12 months following randomisation. Secondary outcomes included: proportion of fallers and of those reporting multiple falls, time to first fall, fear of falling, Frenchay Activities Index, Geriatric Depression Scale, foot pain, health-related quality of life, and cost-effectiveness. The primary analysis included 484 (98.2%) intervention and 507 (98.1%) control participants. There was a small, statistically non-significant reduction in the incidence rate of falls in the intervention group (adjusted incidence rate ratio 0.88, 95% CI 0.73 to 1.05, p = 0.16). The proportion of participants experiencing a fall was lower (49.7% vs 54.9%, adjusted odds ratio 0.78, 95% CI 0.60 to 1.00, p = 0.05), as was the proportion experiencing two or more falls (27.6% vs 34.6%, adjusted odds ratio 0.69, 95% CI 0.52 to 0.90, p = 0.01). There was an increase (p = 0.02) in foot pain for the intervention group. There were no statistically significant differences in other outcomes. The intervention was more costly but marginally more beneficial in terms of health-related quality of life (mean quality-adjusted life year (QALY) difference 0.0129, 95% CI -0.0050 to 0.0314) and had a 65% probability of being cost-effective at a threshold of £30,000 per QALY gained. There was a small reduction in falls. The intervention may be cost-effective. ISRCTN ISRCTN68240461.
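Two of the reported numbers can be sanity-checked with a few lines of arithmetic. Note that the odds ratio below is crude (computed from the quoted proportions alone), whereas the trial reports covariate-adjusted values, so the figures differ slightly:

```python
# Crude odds ratio for "experienced a fall" (49.7% intervention vs 54.9% control),
# and the maximum extra cost per participant consistent with cost-effectiveness
# at the quoted threshold, given the mean QALY gain of 0.0129.

def odds_ratio(p_treatment: float, p_control: float) -> float:
    """Odds ratio computed directly from two event proportions."""
    return (p_treatment / (1 - p_treatment)) / (p_control / (1 - p_control))

crude_or = odds_ratio(0.497, 0.549)
print(round(crude_or, 2))          # ~0.81, vs the adjusted 0.78 in the abstract

max_extra_cost = 0.0129 * 30_000   # GBP, at GBP 30,000 per QALY
print(round(max_extra_cost))       # ~387
```

The second figure shows why the trial can be "more costly but marginally more beneficial" and still plausibly cost-effective: the QALY gain buys roughly £387 of headroom per participant at the stated threshold.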

  5. Cohort Randomised Controlled Trial of a Multifaceted Podiatry Intervention for the Prevention of Falls in Older People (The REFORM Trial).

    Directory of Open Access Journals (Sweden)

    Sarah Cockayne

Full Text Available Falls are a major cause of morbidity among older people. A multifaceted podiatry intervention may reduce the risk of falling. This study evaluated such an intervention. Pragmatic cohort randomised controlled trial in England and Ireland. 1010 participants were randomised (493 to the Intervention group and 517 to Usual Care) to either: a podiatry intervention, including foot and ankle exercises, foot orthoses and, if required, new footwear, and a falls prevention leaflet; or usual podiatry treatment plus a falls prevention leaflet. The primary outcome was the incidence rate of self-reported falls per participant in the 12 months following randomisation. Secondary outcomes included: proportion of fallers and of those reporting multiple falls, time to first fall, fear of falling, Frenchay Activities Index, Geriatric Depression Scale, foot pain, health-related quality of life, and cost-effectiveness. The primary analysis included 484 (98.2%) intervention and 507 (98.1%) control participants. There was a small, statistically non-significant reduction in the incidence rate of falls in the intervention group (adjusted incidence rate ratio 0.88, 95% CI 0.73 to 1.05, p = 0.16). The proportion of participants experiencing a fall was lower (49.7% vs 54.9%, adjusted odds ratio 0.78, 95% CI 0.60 to 1.00, p = 0.05), as was the proportion experiencing two or more falls (27.6% vs 34.6%, adjusted odds ratio 0.69, 95% CI 0.52 to 0.90, p = 0.01). There was an increase (p = 0.02) in foot pain for the intervention group. There were no statistically significant differences in other outcomes. The intervention was more costly but marginally more beneficial in terms of health-related quality of life (mean quality-adjusted life year (QALY) difference 0.0129, 95% CI -0.0050 to 0.0314) and had a 65% probability of being cost-effective at a threshold of £30,000 per QALY gained. There was a small reduction in falls. The intervention may be cost-effective. ISRCTN ISRCTN68240461.

  6. The BigBoss Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schelgel, D.; Abdalla, F.; Abraham, T.; Ahn, C.; Allende Prieto, C.; Annis, J.; Aubourg, E.; Azzaro, M.; Bailey, S.; Baltay, C.; Baugh, C.; Bebek, C.; Becerril, S.; Blanton, M.; Bolton, A.; Bromley, B.; Cahn, R.; Carton, P.-H.; Cervanted-Cota, J.L.; Chu, Y.; Cortes, M.; /APC, Paris /Brookhaven /IRFU, Saclay /Marseille, CPPM /Marseille, CPT /Durham U. / /IEU, Seoul /Fermilab /IAA, Granada /IAC, La Laguna / /IAC, Mexico / / /Madrid, IFT /Marseille, Lab. Astrophys. / / /New York U. /Valencia U.

    2012-06-07

BigBOSS is a Stage IV ground-based dark energy experiment to study baryon acoustic oscillations (BAO) and the growth of structure with a wide-area galaxy and quasar redshift survey over 14,000 square degrees. It has been conditionally accepted by NOAO in response to a call for major new instrumentation and a high-impact science program for the 4-m Mayall telescope at Kitt Peak. The BigBOSS instrument is a robotically-actuated, fiber-fed spectrograph capable of taking 5000 simultaneous spectra over a wavelength range from 340 nm to 1060 nm, with a resolution R = λ/Δλ = 3000-4800. Using data from imaging surveys that are already underway, spectroscopic targets are selected that trace the underlying dark matter distribution. In particular, targets include luminous red galaxies (LRGs) up to z = 1.0, extending the BOSS LRG survey in both redshift and survey area. To probe the universe out to even higher redshift, BigBOSS will target bright [OII] emission line galaxies (ELGs) up to z = 1.7. In total, 20 million galaxy redshifts are obtained to measure the BAO feature, trace the matter power spectrum at smaller scales, and detect redshift space distortions. BigBOSS will provide additional constraints on early dark energy and on the curvature of the universe by measuring the Ly-alpha forest in the spectra of over 600,000 2.2 < z < 3.5 quasars. BigBOSS galaxy BAO measurements combined with an analysis of the broadband power, including the Ly-alpha forest in BigBOSS quasar spectra, achieves a FOM of 395 with Planck plus Stage III priors. This FOM is based on conservative assumptions for the analysis of broad band power (k_max = 0.15), and could grow to over 600 if current work allows us to push the analysis to higher wave numbers (k_max = 0.3). BigBOSS will also place constraints on theories of modified gravity and inflation, and will measure the sum of neutrino masses to 0.024 eV accuracy.
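The quoted resolution R = λ/Δλ translates directly into the smallest resolvable wavelength interval. The pairing of R values with the band edges below is an illustrative assumption (the abstract gives only the ranges, not which R applies where):

```python
# Resolvable wavelength interval implied by R = lambda / delta_lambda for the
# BigBOSS spectrograph range (340-1060 nm, R = 3000-4800). Which R applies at
# which wavelength is assumed here purely for illustration.

def delta_lambda_nm(wavelength_nm: float, resolution: float) -> float:
    """Wavelength interval Delta(lambda) = lambda / R, in nm."""
    return wavelength_nm / resolution

print(round(delta_lambda_nm(340, 3000), 3))   # ~0.113 nm at the blue end
print(round(delta_lambda_nm(1060, 4800), 3))  # ~0.221 nm at the red end
```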

  7. BIG1 is required for the survival of deep layer neurons, neuronal polarity, and the formation of axonal tracts between the thalamus and neocortex in developing brain.

    Directory of Open Access Journals (Sweden)

    Jia-Jie Teoh

Full Text Available BIG1, an activator protein of the small GTPase Arf, encoded by the Arfgef1 gene, is one of the candidate genes for epileptic encephalopathy. To examine the involvement of BIG1 in epileptic encephalopathy, we analyzed BIG1-deficient mice and found that BIG1 regulates neurite outgrowth and brain development in vitro and in vivo. The loss of BIG1 decreased the size of the neocortex and hippocampus. In BIG1-deficient mice, the neuronal progenitor cells (NPCs) and the interneurons were unaffected. However, Tbr1+ and Ctip2+ deep layer (DL) neurons showed spatial-temporal-dependent apoptosis. This apoptosis gradually progressed from the piriform cortex (PIR), peaked in the neocortex, and then progressed into the hippocampus from embryonic day 13.5 (E13.5) to E17.5. The upper layer (UL) and DL order in the neocortex was maintained in BIG1-deficient mice, but the excitatory neurons tended to accumulate before reaching their destination layers. A further pulse-chase migration assay showed that the migration defect was non-cell-autonomous and secondary to the progression of apoptosis into the BIG1-deficient neocortex after E15.5. In BIG1-deficient mice, we observed an ectopic projection of corticothalamic axons from the primary somatosensory cortex (S1) into the dorsal lateral geniculate nucleus (dLGN). The thalamocortical axons were unable to cross the diencephalon-telencephalon boundary (DTB). In vitro, BIG1-deficient neurons showed a delay in neuronal polarization. BIG1-deficient neurons were also hypersensitive to low-dose glutamate (5 μM), and died via apoptosis. This study showed the role of BIG1 in the survival of DL neurons in the developing embryonic brain and in the generation of neuronal polarity.

  8. Big data

    DEFF Research Database (Denmark)

    Madsen, Anders Koed; Flyverbom, Mikkel; Hilbert, Martin

    2016-01-01

The claim that big data can revolutionize strategy and governance in the context of international relations is increasingly hard to ignore. Scholars of international political sociology have mainly discussed this development through the themes of security and surveillance. The aim of this paper is to outline a research agenda that can be used to raise a broader set of sociological and practice-oriented questions about the increasing datafication of international relations and politics. First, it proposes a way of conceptualizing big data that is broad enough to open fruitful investigations into the emerging use of big data in these contexts. This conceptualization includes the identification of three moments contained in any big data practice. Second, it suggests a research agenda built around a set of subthemes that each deserve dedicated scrutiny when studying the interplay between big data...

  9. Dextrose gel for neonatal hypoglycaemia (the Sugar Babies Study): a randomised, double-blind, placebo-controlled trial.

    Science.gov (United States)

    Harris, Deborah L; Weston, Philip J; Signal, Matthew; Chase, J Geoffrey; Harding, Jane E

    2013-12-21

Neonatal hypoglycaemia is common, and a preventable cause of brain damage. Dextrose gel is used to reverse hypoglycaemia in individuals with diabetes; however, little evidence exists for its use in babies. We aimed to assess whether treatment with dextrose gel was more effective than feeding alone for reversal of neonatal hypoglycaemia in at-risk babies. We undertook a randomised, double-blind, placebo-controlled trial at a tertiary centre in New Zealand between Dec 1, 2008, and Nov 30, 2010. Babies aged 35-42 weeks' gestation, younger than 48 h old, and at risk of hypoglycaemia were randomly assigned (1:1), via computer-generated blocked randomisation, to 40% dextrose gel 200 mg/kg or placebo gel. Randomisation was stratified by maternal diabetes and birthweight. Group allocation was concealed from clinicians, families, and all study investigators. The primary outcome was treatment failure, defined as a blood glucose concentration of less than 2·6 mmol/L after two treatment attempts. Analysis was by intention to treat. The trial is registered with the Australian New Zealand Clinical Trials Registry, number ACTRN12608000623392. Of 514 enrolled babies, 242 (47%) became hypoglycaemic and were randomised. Five babies were randomised in error, leaving 237 for analysis: 118 (50%) in the dextrose group and 119 (50%) in the placebo group. Dextrose gel reduced the frequency of treatment failure compared with placebo (16 [14%] vs 29 [24%]; relative risk 0·57, 95% CI 0·33-0·98; p=0·04). We noted no serious adverse events. Three (3%) babies in the placebo group each had one blood glucose concentration of 0·9 mmol/L. No other adverse events took place. Treatment with dextrose gel is inexpensive and simple to administer. Dextrose gel should be considered for first-line treatment to manage hypoglycaemia in late preterm and term babies in the first 48 h after birth. Waikato Medical Research Foundation, the Auckland Medical Research Foundation, the Maurice and Phyllis Paykel
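The headline relative risk can be reproduced from the raw counts in the abstract. The crude value below differs marginally from the published 0·57, which is presumably computed with adjustment for the randomisation strata:

```python
# Crude relative risk of treatment failure from the counts in the abstract:
# 16/118 (dextrose) vs 29/119 (placebo).

def relative_risk(events_a: int, n_a: int, events_b: int, n_b: int) -> float:
    """Risk ratio of two event proportions."""
    return (events_a / n_a) / (events_b / n_b)

rr = relative_risk(16, 118, 29, 119)
print(round(rr, 2))  # 0.56, in line with the reported 0.57
```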

  10. Big data computing

    CERN Document Server

    Akerkar, Rajendra

    2013-01-01

    Due to market forces and technological evolution, Big Data computing is developing at an increasing rate. A wide variety of novel approaches and tools have emerged to tackle the challenges of Big Data, creating both more opportunities and more challenges for students and professionals in the field of data computation and analysis. Presenting a mix of industry cases and theory, Big Data Computing discusses the technical and practical issues related to Big Data in intelligent information management. Emphasizing the adoption and diffusion of Big Data tools and technologies in industry, the book i

  11. Role of endothelin-converting enzyme, chymase and neutral endopeptidase in the processing of big ET-1, ET-1(1-21) and ET-1(1-31) in the trachea of allergic mice.

    Science.gov (United States)

    De Campo, Benjamin A; Goldie, Roy G; Jeng, Arco Y; Henry, Peter J

    2002-08-01

The present study examined the roles of endothelin-converting enzyme (ECE), neutral endopeptidase (NEP) and mast cell chymase as processors of the endothelin (ET) analogues ET-1(1-21), ET-1(1-31) and big ET-1 in the trachea of allergic mice. Male CBA/CaH mice were sensitized with ovalbumin (10 microg) delivered intraperitoneally on days 1 and 14, and exposed to aerosolized ovalbumin on days 14, 25, 26 and 27 (OVA mice). Mice were killed and the trachea excised for histological analysis and contraction studies on day 28. Tracheae from OVA mice had 40% more mast cells than vehicle-sensitized mice (sham mice). Ovalbumin (10 microg/ml) induced transient contractions (15+/-3% of the C(max)) in tracheae from OVA mice. The ECE inhibitor CGS35066 (10 microM) inhibited contractions induced by big ET-1 (4.8-fold rightward shift of the dose-response curve; P<0.05), whereas chymase inhibition had no detectable effect on contractions induced by any of the ET analogues used. The NEP inhibitor CGS24592 (10 microM) inhibited contractions induced by ET-1(1-31) (6.2-fold rightward shift; P<0.05), but not those induced by big ET-1. These data suggest that big ET-1 is processed predominantly by a CGS35066-sensitive ECE within allergic airways rather than by mast cell-derived proteases such as chymase. If endogenous ET-1(1-31) is formed within allergic airways, it is likely to undergo further conversion by NEP to more active products.
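For readers unfamiliar with the pharmacology shorthand, a "fold rightward shift" of a dose-response curve is a multiplicative increase in the EC50. The baseline EC50 below is a purely hypothetical number chosen for illustration, not a value from the paper:

```python
# Meaning of a "4.8-fold rightward shift": the EC50 (agonist concentration
# producing half-maximal contraction) increases by that factor in the presence
# of the inhibitor. The baseline EC50 here is hypothetical, not from the paper.

ec50_control_nM = 10.0        # hypothetical EC50 for big ET-1, in nM
fold_shift = 4.8              # shift reported for CGS35066

ec50_inhibited_nM = ec50_control_nM * fold_shift
print(ec50_inhibited_nM)      # 48.0
```

In other words, with the ECE inhibitor present, roughly 4.8 times more big ET-1 is needed to produce the same contraction, which is the evidence that ECE is the dominant processing route.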

  12. BIG data - BIG gains? Empirical evidence on the link between big data analytics and innovation

    OpenAIRE

    Niebel, Thomas; Rasel, Fabienne; Viete, Steffen

    2017-01-01

    This paper analyzes the relationship between firms’ use of big data analytics and their innovative performance in terms of product innovations. Since big data technologies provide new data information practices, they create novel decision-making possibilities, which are widely believed to support firms’ innovation process. Applying German firm-level data within a knowledge production function framework we find suggestive evidence that big data analytics is a relevant determinant for the likel...

  13. 45 CFR 98.33 - Consumer education.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Consumer education. 98.33 Section 98.33 Public... Program Operations (Child Care Services)-Parental Rights and Responsibilities § 98.33 Consumer education... public consumer education information that will promote informed child care choices including, at a...

  14. 45 CFR 98.32 - Parental complaints.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Parental complaints. 98.32 Section 98.32 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Program Operations (Child Care Services)-Parental Rights and Responsibilities § 98.32 Parental complaints...

  15. 22 CFR 9.8 - Classification challenges.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Classification challenges. 9.8 Section 9.8 Foreign Relations DEPARTMENT OF STATE GENERAL SECURITY INFORMATION REGULATIONS § 9.8 Classification... classification status is improper are expected and encouraged to challenge the classification status of the...

  16. Minimal access surgery compared with medical management for gastro-oesophageal reflux disease: five year follow-up of a randomised controlled trial (REFLUX)

    Science.gov (United States)

    Cotton, S C; Boachie, C; Ramsay, C R; Krukowski, Z H; Heading, R C; Campbell, M K

    2013-01-01

Objectives To determine the long term clinical effectiveness of laparoscopic fundoplication as an alternative to drug treatment for chronic gastro-oesophageal reflux disease (GORD). Design Five year follow-up of multicentre, pragmatic randomised trial (with parallel non-randomised preference groups). Setting Initial recruitment in 21 UK hospitals. Participants Responders to annual questionnaires among 810 original participants. At entry, all had had GORD for >12 months. Intervention The surgeon chose the type of fundoplication. Medical therapy was reviewed and optimised by a specialist. Subsequent management was at the discretion of the clinician responsible for care, usually in primary care. Main outcome measures Primary outcome measure was self reported quality of life score on the disease-specific REFLUX questionnaire. Other measures were health status (with SF-36 and EuroQol EQ-5D questionnaires), use of antireflux medication, and complications. Results By five years, 63% (112/178) of patients randomised to surgery and 13% (24/179) of those randomised to medical management had received a fundoplication (plus 85% (222/261) and 3% (6/192) of those who expressed a preference for surgery and for medical management). Among responders at 5 years, 44% (56/127) of those randomised to surgery were taking antireflux medication versus 82% (98/119) of those randomised to medical management. Differences in the REFLUX score significantly favoured the randomised surgery group (mean difference 8.5 (95% CI 3.9 to 13.1), P<0.01). A small proportion of participants required further reflux-related operations—most often revision of the wrap. Long term rates of dysphagia, flatulence, and inability to vomit were similar in the two randomised groups. Conclusions After five years, laparoscopic fundoplication continued to provide better relief of GORD symptoms than medical management. Adverse effects of surgery were uncommon and generally observed soon after surgery. A small proportion had re-operations. There was no evidence of long term adverse

  17. 45 CFR 98.31 - Parental access.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Parental access. 98.31 Section 98.31 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Program Operations (Child Care Services)-Parental Rights and Responsibilities § 98.31 Parental access. The...

  18. 29 CFR 98.920 - Civil judgment.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true Civil judgment. 98.920 Section 98.920 Labor Office of the Secretary of Labor GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT) Definitions § 98.920 Civil judgment. Civil judgment means the disposition of a civil action by any court of competent jurisdiction...

  19. Benchmarking Big Data Systems and the BigData Top100 List.

    Science.gov (United States)

    Baru, Chaitanya; Bhandarkar, Milind; Nambiar, Raghunath; Poess, Meikel; Rabl, Tilmann

    2013-03-01

"Big data" has become a major force of innovation across enterprises of all sizes. New platforms with increasingly more features for managing big datasets are being announced almost on a weekly basis. Yet, there is currently a lack of any means of comparability among such platforms. While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TPC), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems. In this article, we describe a community-based effort for defining a big data benchmark. Over the past year, a Big Data Benchmarking Community has become established in order to fill this void. The effort focuses on defining an end-to-end application-layer benchmark for measuring the performance of big data applications, with the ability to easily adapt the benchmark specification to evolving challenges in the big data space. This article describes the efforts that have been undertaken thus far toward the definition of a BigData Top100 List. While highlighting the major technical as well as organizational challenges, through this article, we also solicit community input into this process.

  20. 45 CFR 98.30 - Parental choice.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Parental choice. 98.30 Section 98.30 Public... Program Operations (Child Care Services)-Parental Rights and Responsibilities § 98.30 Parental choice. (a... category of care; or (2) Having the effect of limiting parental access to or choice from among such...

  1. 27 CFR 6.98 - Advertising service.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Advertising service. 6.98 Section 6.98 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY LIQUORS âTIED-HOUSEâ Exceptions § 6.98 Advertising service. The listing of the names...

  2. Advanced Modeling in Excel: from Water Jets to Big Bang

    Science.gov (United States)

    Ignatova, Olga; Chyzhyk, D.; Willis, C.; Kazachkov, A.

    2006-12-01

An international student project is presented, focused on the application of OpenOffice and Excel spreadsheets to the modeling of projectile-motion-type dynamical systems. Variation of the parameters of plotted and animated families of jets flowing at different angles out of holes in the wall of a water-filled reservoir [1,2] revealed unexpected peculiarities of the envelopes, vertices, intersections and landing points of the virtual trajectories. Comparisons with real-life systems and rigorous calculations were performed to prove the predictions of the computer experiments. By the same technique, the kinematics of fireworks was analyzed. On this basis a two-dimensional 'firework' computer model of the Big Bang was designed and studied, and its relevance and limitations checked. 1. R. Ehrlich, Turning the World Inside Out (Princeton University Press, Princeton, NJ, 1990), pp. 98-100. 2. A. Kazachkov, Yu. Bogdan, N. Makarovsky, N. Nedbailo, A Bucketful of Physics, in R. Pinto, S. Surinach (eds), International Conference Physics Teacher Education Beyond 2000. Selected Contributions (Elsevier Editions, Paris, 2001), pp. 563-564. Sponsored by Courtney Willis.

  3. Big data, big knowledge: big data for personalized healthcare.

    Science.gov (United States)

    Viceconti, Marco; Hunter, Peter; Hose, Rod

    2015-07-01

The idea that the purely phenomenological knowledge that we can extract by analyzing large amounts of data can be useful in healthcare seems to contradict the desire of VPH researchers to build detailed mechanistic models for individual patients. But in practice no model is ever entirely phenomenological or entirely mechanistic. We propose in this position paper that big data analytics can be successfully combined with VPH technologies to produce robust and effective in silico medicine solutions. In order to do this, big data technologies must be further developed to cope with some specific requirements that emerge from this application. Such requirements are: working with sensitive data; analytics of complex and heterogeneous data spaces, including nontextual information; distributed data management under security and performance constraints; specialized analytics to integrate bioinformatics and systems biology information with clinical observations at tissue, organ and organism scales; and specialized analytics to define the "physiological envelope" during the daily life of each patient. These domain-specific requirements suggest a need for targeted funding, in which big data technologies for in silico medicine become the research priority.

  4. BigDataBench: a Big Data Benchmark Suite from Internet Services

    OpenAIRE

    Wang, Lei; Zhan, Jianfeng; Luo, Chunjie; Zhu, Yuqing; Yang, Qiang; He, Yongqiang; Gao, Wanling; Jia, Zhen; Shi, Yingjie; Zhang, Shujie; Zheng, Chen; Lu, Gang; Zhan, Kent; Li, Xiaona; Qiu, Bizhu

    2014-01-01

    As architecture, systems, and data management communities pay greater attention to innovative big data systems and architectures, the pressure of benchmarking and evaluating these systems rises. Considering the broad use of big data systems, big data benchmarks must include diversity of data and workloads. Most of the state-of-the-art big data benchmarking efforts target evaluating specific types of applications or system software stacks, and hence they are not qualified for serving the purpo...

  5. 49 CFR 98.2 - Definitions.

    Science.gov (United States)

    2010-10-01

    ... Administration. (5) The National Highway Traffic Safety Administration. (6) The Urban Mass Transportation... 49 Transportation 1 2010-10-01 false Definitions. 98.2 Section 98.2 Transportation, Office of the Secretary of Transportation, ENFORCEMENT OF RESTRICTIONS ON POST-EMPLOYMENT ACTIVITIES...

  6. Conociendo Big Data

    Directory of Open Access Journals (Sweden)

    Juan José Camargo-Vega

    2014-12-01

    Full Text Available Given the importance the term Big Data has acquired, this research sought to study and exhaustively analyze the state of the art of Big Data; as a second objective, it analyzed the characteristics, tools, technologies, models, and standards related to Big Data; and finally it sought to identify the most relevant characteristics in the management of Big Data, in order to cover everything concerning the central topic of the research. The methodology used included reviewing the state of the art of Big Data and presenting its current situation; examining Big Data technologies; presenting some of the NoSQL databases, which are those that allow processing of data in unstructured formats; and showing the data models and the technologies for analyzing them, closing with some benefits of Big Data. The methodological design used for the research was non-experimental, since no variables were manipulated, and exploratory, because this research begins to explore the field of Big Data.

  7. Classical propagation of strings across a big crunch/big bang singularity

    International Nuclear Information System (INIS)

    Niz, Gustavo; Turok, Neil

    2007-01-01

    One of the simplest time-dependent solutions of M theory consists of nine-dimensional Euclidean space times 1+1-dimensional compactified Milne space-time. With a further modding out by Z2, the space-time represents two orbifold planes which collide and re-emerge, a process proposed as an explanation of the hot big bang [J. Khoury, B. A. Ovrut, P. J. Steinhardt, and N. Turok, Phys. Rev. D 64, 123522 (2001).][P. J. Steinhardt and N. Turok, Science 296, 1436 (2002).][N. Turok, M. Perry, and P. J. Steinhardt, Phys. Rev. D 70, 106004 (2004).]. When the two planes are near, the light states of the theory consist of winding M2-branes, describing fundamental strings in a particular ten-dimensional background. They suffer no blue-shift as the M theory dimension collapses, and their equations of motion are regular across the transition from big crunch to big bang. In this paper, we study the classical evolution of fundamental strings across the singularity in some detail. We also develop a simple semiclassical approximation to the quantum evolution which allows one to compute the quantum production of excitations on the string and implement it in a simplified example

  8. BigDansing

    KAUST Repository

    Khayyat, Zuhair

    2015-06-02

    Data cleansing approaches have usually focused on detecting and fixing errors with little attention to scaling to big datasets. This presents a serious impediment, since data cleansing often involves costly computations such as enumerating pairs of tuples, handling inequality joins, and dealing with user-defined functions. In this paper, we present BigDansing, a Big Data Cleansing system that tackles efficiency, scalability, and ease-of-use issues in data cleansing. The system can run on top of most common general-purpose data processing platforms, ranging from DBMSs to MapReduce-like frameworks. A user-friendly programming interface allows users to express data quality rules both declaratively and procedurally, with no requirement of being aware of the underlying distributed platform. BigDansing translates these rules into a series of transformations that enable distributed computations and several optimizations, such as shared scans and specialized join operators. Experimental results on both synthetic and real datasets show that BigDansing outperforms existing baseline systems by up to more than two orders of magnitude without sacrificing the quality provided by the repair algorithms.
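As an illustration of the pair-enumeration cost mentioned above, a functional dependency check compares every pair of tuples. This is a toy single-machine sketch; the rule, data, and function names are ours, not BigDansing's API:

```python
from itertools import combinations

def fd_violations(rows, lhs, rhs):
    """Pairs of tuples violating the functional dependency lhs -> rhs:
    equal on lhs but different on rhs (a quadratic pair scan)."""
    return [(r, s) for r, s in combinations(rows, 2)
            if r[lhs] == s[lhs] and r[rhs] != s[rhs]]

rows = [
    {"zip": "10001", "city": "New York"},
    {"zip": "10001", "city": "NYC"},      # violates zip -> city
    {"zip": "60601", "city": "Chicago"},
]
print(len(fd_violations(rows, "zip", "city")))  # 1 violating pair
```

This quadratic scan is exactly the kind of computation a system like BigDansing must distribute and optimize (e.g., via shared scans and specialized join operators) to reach big datasets.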

  9. Characterizing Big Data Management

    Directory of Open Access Journals (Sweden)

    Rogério Rossi

    2015-06-01

    Full Text Available Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis, and visualization. However, technological resources, people, and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can be supported by three dimensions: technology, people, and processes. Hence, this article discusses these dimensions: the technological dimension, related to storage, analytics, and visualization of big data; the human aspects of big data; and the process management dimension, which addresses big data management from both a technological and a business perspective.

  10. Big science

    CERN Multimedia

    Nadis, S

    2003-01-01

    " "Big science" is moving into astronomy, bringing large experimental teams, multi-year research projects, and big budgets. If this is the wave of the future, why are some astronomers bucking the trend?" (2 pages).

  11. Study of the potentiometric properties of spinel-type manganese oxide doped with gallium and anions Ga0.02Mn1.98O3.98X0.02 (X = S2− and F−) as selective sensor for lithium ion

    International Nuclear Information System (INIS)

    David-Parra, Diego N.; Bocchi, Nerilso; Teixeira, Marcos F.S.

    2015-01-01

    Highlights: • Investigated the influence of doping agents on the potentiometric response. • Reduction of the unit cell size directly affected the potentiometric performance of the electrode. • Sensor performance increased in the order: Ga0.02Mn1.98O4 > Ga0.02Mn1.98O3.98S0.02 > Ga0.02Mn1.98O3.98F0.02. - Abstract: This paper describes the development of a selective lithium ion sensor based on spinel-type manganese oxide doped with gallium and anions (Ga0.02Mn1.98O3.98X0.02, where X = S2− and F−). The influence of cationic and/or anionic doping agents on the potentiometric response of the sensor was investigated. Experimental parameters evaluated included the effect of the lithium concentration on activation of the sensor by cyclic voltammetry, the pH of the electrolyte solution, and the selectivity towards Li+ compared to other alkali and alkaline-earth metal ions. The unit cell size of the material had an important influence on the linear range, detection limit, and selectivity of the sensor. Reduction in the size of the tunnel for insertion of lithium in the porous structure of the oxide directly affected the potentiometric performance of the electrode. Sensor performance increased in the order: Ga0.02Mn1.98O4 > Ga0.02Mn1.98O3.98S0.02 > Ga0.02Mn1.98O3.98F0.02. The observed super-Nernstian response could be explained by a mixed potential arising from two equilibria (redox and ion exchange) in the spinel-type manganese oxide. The sensitivity and the influence of pH on the electrode response were directly related to the doping agents present in the oxide structure
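For reference, the benchmark implied by the term "super-Nernstian" is the ideal electrode response (a textbook relation, not taken from the paper):

```latex
E = E^{0} + \frac{2.303\,RT}{zF}\log_{10} a_{\mathrm{Li^{+}}}
```

At 25 °C with z = 1, the slope 2.303RT/F is about 59.2 mV per decade of Li⁺ activity; a reproducibly steeper slope is called super-Nernstian, which the authors attribute to a mixed redox/ion-exchange potential.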

  12. [Big data in official statistics].

    Science.gov (United States)

    Zwick, Markus

    2015-08-01

    The concept of "big data" stands to change the face of official statistics over the coming years, having an impact on almost all aspects of data production. The tasks of future statisticians will not necessarily be to produce new data, but rather to identify and make use of existing data to adequately describe social and economic phenomena. Until big data can be used correctly in official statistics, a lot of questions need to be answered and problems solved: the quality of data, data protection, privacy, and the sustainable availability are some of the more pressing issues to be addressed. The essential skills of official statisticians will undoubtedly change, and this implies a number of challenges to be faced by statistical education systems, in universities, and inside the statistical offices. The national statistical offices of the European Union have concluded a concrete strategy for exploring the possibilities of big data for official statistics, by means of the Big Data Roadmap and Action Plan 1.0. This is an important first step and will have a significant influence on implementing the concept of big data inside the statistical offices of Germany.

  13. Big bang and big crunch in matrix string theory

    OpenAIRE

    Bedford, J; Papageorgakis, C; Rodríguez-Gómez, D; Ward, J

    2007-01-01

    Following the holographic description of linear dilaton null Cosmologies with a Big Bang in terms of Matrix String Theory put forward by Craps, Sethi and Verlinde, we propose an extended background describing a Universe including both Big Bang and Big Crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using Matrix String Theory. We provide a simple theory capable of...

  14. Bliver big data til big business?

    DEFF Research Database (Denmark)

    Ritter, Thomas

    2015-01-01

    Denmark has a digital infrastructure, a culture of systematic registration, and IT-competent employees and customers that make a leading position possible, but only if companies ready themselves for the next big data wave.

  15. Big data uncertainties.

    Science.gov (United States)

    Maugis, Pierre-André G

    2018-07-01

    Big data, the idea that an ever-larger volume of information is being constantly recorded, suggests that new problems can now be subjected to scientific scrutiny. However, can classical statistical methods be used directly on big data? We analyze the problem by looking at two known pitfalls of big datasets. First, that they are biased, in the sense that they do not offer a complete view of the populations under consideration. Second, that they present a weak but pervasive level of dependence between all their components. In both cases we observe that the uncertainty of the conclusions obtained by statistical methods is increased when they are used on big data, either because of a systematic error (bias) or because of a larger degree of randomness (increased variance). We argue that the key challenge raised by big data is not only how to use big data to tackle new problems, but how to develop tools and methods able to rigorously articulate the new risks therein. Copyright © 2016. Published by Elsevier Ltd.
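The bias pitfall described above can be made concrete with a small simulation (an illustrative sketch we add here; the sampling scheme is invented): when the probability of a record being captured depends on the outcome itself, the dataset keeps a systematic error no matter how big it grows.

```python
import random

random.seed(0)

# The population we would like to describe.
population = [random.gauss(0.0, 1.0) for _ in range(100_000)]
true_mean = sum(population) / len(population)

# A big but biased dataset: records with positive values are more
# likely to be captured (inclusion probability depends on the outcome).
biased = [x for x in population if random.random() < (0.8 if x > 0 else 0.5)]
biased_mean = sum(biased) / len(biased)

# biased_mean systematically overestimates true_mean; collecting more
# data the same way shrinks the variance, but not this bias.
```

This is the paper's first pitfall in miniature: the estimate is precise (tens of thousands of records) yet wrong, because the data do not offer a complete view of the population.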

  16. Epidemiology in wonderland: Big Data and precision medicine.

    Science.gov (United States)

    Saracci, Rodolfo

    2018-03-01

    Big Data and precision medicine, two major contemporary challenges for epidemiology, are critically examined from two different angles. In Part 1, Big Data collected for research purposes (Big research Data) and Big Data used for research although collected for other primary purposes (Big secondary Data) are discussed in the light of the fundamental common requirement of data validity, which prevails over "bigness". Precision medicine is treated by developing the key point that high relative risks are as a rule required to make a variable or combination of variables suitable for prediction of disease occurrence, outcome or response to treatment; the commercial proliferation of allegedly predictive tests of unknown or poor validity is commented upon. Part 2 proposes a "wise epidemiology" approach to: (a) choosing, in a context imprinted by Big Data and precision medicine, epidemiological research projects actually relevant to population health; (b) training epidemiologists; (c) investigating the impact of the influx of Big Data and computerized medicine on clinical practices and the doctor-patient relationship; and (d) clarifying whether today "health" may be redefined, as some maintain, in purely technological terms.

  17. Making big sense from big data in toxicology by read-across.

    Science.gov (United States)

    Hartung, Thomas

    2016-01-01

    Modern information technologies have made big data available in safety sciences, i.e., extremely large data sets that may be analyzed only computationally to reveal patterns, trends and associations. This happens by (1) compilation of large sets of existing data, e.g., as a result of the European REACH regulation, (2) the use of omics technologies and (3) systematic robotized testing in a high-throughput manner. All three approaches and some other high-content technologies leave us with big data; the challenge is now to make big sense of these data. Read-across, i.e., the local similarity-based intrapolation of properties, is gaining momentum with increasing data availability and consensus on how to process and report it. It is predominantly applied to in vivo test data as a gap-filling approach, but can similarly complement other incomplete datasets. Big data are first of all repositories for finding similar substances and ensure that the available data are fully exploited. High-content and high-throughput approaches similarly require focusing on clusters, in this case formed by underlying mechanisms such as pathways of toxicity. The closely connected properties, i.e., structural and biological similarity, create the confidence needed for predictions of toxic properties. Here, among other approaches, a new web-based tool under development called REACH-across, which aims to support and automate structure-based read-across, is presented.

  18. Randomised controlled trial of Alexander technique lessons, exercise, and massage (ATEAM) for chronic and recurrent back pain.

    Science.gov (United States)

    Little, Paul; Lewith, George; Webley, Fran; Evans, Maggie; Beattie, Angela; Middleton, Karen; Barnett, Jane; Ballard, Kathleen; Oxford, Frances; Smith, Peter; Yardley, Lucy; Hollinghurst, Sandra; Sharp, Debbie

    2008-08-19

    To determine the effectiveness of lessons in the Alexander technique, massage therapy, and advice from a doctor to take exercise (exercise prescription) along with nurse delivered behavioural counselling for patients with chronic or recurrent back pain. Factorial randomised trial. 64 general practices in England. 579 patients with chronic or recurrent low back pain; 144 were randomised to normal care, 147 to massage, 144 to six Alexander technique lessons, and 144 to 24 Alexander technique lessons; half of each of these groups were randomised to exercise prescription. Normal care (control), six sessions of massage, six or 24 lessons on the Alexander technique, and prescription for exercise from a doctor with nurse delivered behavioural counselling. Roland Morris disability score (number of activities impaired by pain) and number of days in pain. Exercise and lessons in the Alexander technique, but not massage, remained effective at one year (compared with control Roland disability score 8.1: massage -0.58, 95% confidence interval -1.94 to 0.77, six lessons -1.40, -2.77 to -0.03, 24 lessons -3.4, -4.76 to -2.03, and exercise -1.29, -2.25 to -0.34). Exercise after six lessons achieved 72% of the effect of 24 lessons alone (Roland disability score -2.98 and -4.14, respectively). Number of days with back pain in the past four weeks was lower after lessons (compared with control median 21 days: 24 lessons -18, six lessons -10, massage -7) and quality of life improved significantly. No significant harms were reported. One to one lessons in the Alexander technique from registered teachers have long term benefits for patients with chronic back pain. Six lessons followed by exercise prescription were nearly as effective as 24 lessons. National Research Register N0028108728.
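The 72% figure quoted above is simply the ratio of the two reported Roland score reductions, which is easy to verify (values taken from the abstract):

```python
effect_six_plus_exercise = -2.98  # Roland score change: 6 lessons + exercise
effect_24_lessons = -4.14         # Roland score change: 24 lessons alone

ratio = effect_six_plus_exercise / effect_24_lessons
print(round(ratio * 100))  # 72 (percent of the 24-lesson effect)
```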

  19. HARNESSING BIG DATA VOLUMES

    Directory of Open Access Journals (Sweden)

    Bogdan DINU

    2014-04-01

    Full Text Available Big Data can revolutionize humanity. Hidden within the huge amounts and variety of the data we are creating, we may find information, facts, social insights, and benchmarks that were once virtually impossible to find or simply nonexistent. Large volumes of data allow organizations to tap the full potential of all the internal or external information they possess in real time. Big data calls for quick decisions and innovative ways to assist customers and society as a whole. Big data platforms and product portfolios will help customers harness the full value of big data volumes. This paper deals with technical and technological issues related to handling big data volumes in the Big Data environment.

  20. Surgical excision versus imiquimod 5% cream for nodular and superficial basal-cell carcinoma (SINS): a multicentre, non-inferiority, randomised controlled trial.

    Science.gov (United States)

    Bath-Hextall, Fiona; Ozolins, Mara; Armstrong, Sarah J; Colver, Graham B; Perkins, William; Miller, Paul S J; Williams, Hywel C

    2014-01-01

    Basal-cell carcinoma is the most common form of skin cancer and its incidence is increasing worldwide. We aimed to assess the effectiveness of imiquimod cream versus surgical excision in patients with low-risk basal-cell carcinoma. We did a multicentre, parallel-group, pragmatic, non-inferiority, randomised controlled trial at 12 centres in the UK, in which patients were recruited between June 19, 2003, and Feb 22, 2007, with 3 year follow-up from June 26, 2006, to May 26, 2010. Participants of any age were eligible if they had histologically confirmed primary nodular or superficial basal-cell carcinoma at low-risk sites. We excluded patients with morphoeic or recurrent basal-cell carcinoma and those with Gorlin syndrome. Participants were randomly assigned (1:1) via computer-generated blocked randomisation, stratified by centre and tumour type, to receive either imiquimod 5% cream once daily for 6 weeks (superficial) or 12 weeks (nodular), or surgical excision with a 4 mm margin. The randomisation sequence was concealed from study investigators. Because of the nature of the interventions, masking of participants was not possible and masking of outcome assessors was only partly possible. The trial statistician was masked to allocation until all analyses had been done. The primary outcome was the proportion of participants with clinical success, defined as absence of initial treatment failure or signs of recurrence at 3 years from start of treatment. We used a prespecified non-inferiority margin of a relative risk (RR) of 0.87. Analysis was by a modified intention-to-treat population and per protocol. This study is registered as an International Standard Randomised Controlled Trial (ISRCTN48755084), and with ClinicalTrials.gov, number NCT00066872. 501 participants were randomly assigned to the imiquimod group (n=254) or the surgical excision group (n=247). At year 3, 401 (80%) patients were included in the modified intention-to-treat group. At 3 years, 178 (84%) of

  1. Big bang and big crunch in matrix string theory

    International Nuclear Information System (INIS)

    Bedford, J.; Ward, J.; Papageorgakis, C.; Rodriguez-Gomez, D.

    2007-01-01

    Following the holographic description of linear dilaton null cosmologies with a big bang in terms of matrix string theory put forward by Craps, Sethi, and Verlinde, we propose an extended background describing a universe including both big bang and big crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using matrix string theory. We provide a simple theory capable of describing the complete evolution of this closed universe

  2. Big data a primer

    CERN Document Server

    Bhuyan, Prachet; Chenthati, Deepak

    2015-01-01

    This book is a collection of chapters written by experts on various aspects of big data. The book aims to explain what big data is and how it is stored and used. The book starts from the fundamentals and builds up from there. It is intended to serve as a review of the state-of-the-practice in the field of big data handling. The traditional framework of relational databases can no longer provide appropriate solutions for handling big data and making it available and useful to users scattered around the globe. The study of big data covers a wide range of issues including management of heterogeneous data, big data frameworks, change management, finding patterns in data usage and evolution, data as a service, service-generated data, service management, privacy and security. All of these aspects are touched upon in this book. It also discusses big data applications in different domains. The book will prove useful to students, researchers, and practicing database and networking engineers.

  3. Big endothelin changes the cellular miRNA environment in TMOb osteoblasts and increases mineralization.

    Science.gov (United States)

    Johnson, Michael G; Kristianto, Jasmin; Yuan, Baozhi; Konicke, Kathryn; Blank, Robert

    2014-08-01

    Endothelin (ET1) promotes the growth of osteoblastic breast and prostate cancer metastases. Conversion of big ET1 to mature ET1, catalyzed primarily by endothelin converting enzyme 1 (ECE1), is necessary for ET1's biological activity. We previously identified the Ece1 locus as a positional candidate gene for a pleiotropic quantitative trait locus affecting femoral size, shape, mineralization, and biomechanical performance. We exposed TMOb osteoblasts continuously to 25 ng/ml big ET1. Cells were grown for 6 days in growth medium and then switched to mineralization medium for an additional 15 days with or without big ET1, by which time the TMOb cells form mineralized nodules. We quantified mineralization by alizarin red staining and analyzed levels of miRNAs known to affect osteogenesis. MicroRNA miR-126-3p was identified in a search as a potential regulator of sclerostin (SOST) translation. TMOb cells exposed to big ET1 showed greater mineralization than control cells. Big ET1 repressed miRNAs targeting transcripts of osteogenic proteins and increased expression of miRNAs that target transcripts of proteins that inhibit osteogenesis. Big ET1 increased expression of miR-126-3p 121-fold versus control. To begin to assess the effect of big ET1 on SOST production, we analyzed both SOST transcription and protein production with and without big ET1, demonstrating that transcription and translation were uncoupled. Our data show that big ET1 signaling promotes mineralization. Moreover, the results suggest that big ET1's osteogenic effects are potentially mediated through changes in miRNA expression, a previously unrecognized big ET1 osteogenic mechanism.

  4. Microsoft big data solutions

    CERN Document Server

    Jorgensen, Adam; Welch, John; Clark, Dan; Price, Christopher; Mitchell, Brian

    2014-01-01

    Tap the power of Big Data with Microsoft technologies Big Data is here, and Microsoft's new Big Data platform is a valuable tool to help your company get the very most out of it. This timely book shows you how to use HDInsight along with HortonWorks Data Platform for Windows to store, manage, analyze, and share Big Data throughout the enterprise. Focusing primarily on Microsoft and HortonWorks technologies but also covering open source tools, Microsoft Big Data Solutions explains best practices, covers on-premises and cloud-based solutions, and features valuable case studies. Best of all,

  5. Summary big data

    CERN Document Server

    2014-01-01

    This work offers a summary of the book "Big Data: A Revolution That Will Transform How We Live, Work, and Think" by Viktor Mayer-Schönberger and Kenneth Cukier. The summary explains that big data is where we use huge quantities of data to make better predictions based on identifying patterns in the data, rather than trying to understand the underlying causes in more detail. It highlights that big data will be a source of new economic value and innovation in the future. Moreover, it shows that it will

  6. Cost-effectiveness of treatment for alcohol problems: findings of the randomised UK Alcohol Treatment Trial (UKATT).

    OpenAIRE

    Heather, Nick; Copello, Alex; Godfrey, Christine; Hodgson, Ray; UKATT Research Team

    2005-01-01

    Objective: To compare the cost effectiveness of social behaviour and network therapy, a new treatment for alcohol problems, with that of the proved motivational enhancement therapy. Design: Cost effectiveness analysis alongside a pragmatic randomised trial. Setting: Seven treatment sites around Birmingham, Cardiff, and Leeds. Participants: 742 clients with alcohol problems; 617 (83.2%) were interviewed at 12 months and full economic data were obtained on 608 (98.5% of 617). Main e...

  7. Crystallization and preliminary crystallographic analysis of the fourth FAS1 domain of human BigH3

    International Nuclear Information System (INIS)

    Yoo, Ji-Ho; Kim, EungKweon; Kim, Jongsun; Cho, Hyun-Soo

    2007-01-01

    The crystallization and X-ray diffraction analysis of the fourth FAS1 domain of human BigH3 are reported. The protein BigH3 is a cell-adhesion molecule induced by transforming growth factor-β (TGF-β). It consists of four homologous repeat domains known as FAS1 domains; mutations in these domains have been linked to corneal dystrophy. The fourth FAS1 domain was expressed in Escherichia coli B834 (DE3) (a methionine auxotroph) and purified by DEAE anion-exchange and gel-filtration chromatography. The FAS1 domain was crystallized using the vapour-diffusion method. A SAD diffraction data set was collected to a resolution of 2.5 Å at 100 K. The crystal belonged to space group P6₁ or P6₅ and had two molecules per asymmetric unit, with unit-cell parameters a = b = 62.93, c = 143.27 Å, α = β = 90.0, γ = 120.0°

  8. Big domains are novel Ca²+-binding modules: evidences from big domains of Leptospira immunoglobulin-like (Lig) proteins.

    Science.gov (United States)

    Raman, Rajeev; Rajanikanth, V; Palaniappan, Raghavan U M; Lin, Yi-Pin; He, Hongxuan; McDonough, Sean P; Sharma, Yogendra; Chang, Yung-Fu

    2010-12-29

    Many bacterial surface-exposed proteins mediate the host-pathogen interaction more effectively in the presence of Ca²+. Leptospiral immunoglobulin-like (Lig) proteins, LigA and LigB, are surface-exposed proteins containing bacterial immunoglobulin-like (Big) domains. The function of proteins containing the Big fold is not known. Based on the possible similarities of the immunoglobulin and βγ-crystallin folds, we here explore the important question of whether Ca²+ binds to Big domains, which would provide a novel functional role for proteins containing the Big fold. We selected six individual Big domains for this study (three from the conserved part of LigA and LigB, denoted Lig A3, Lig A4, and LigBCon5; two from the variable region of LigA, i.e., the 9th (Lig A9) and 10th (Lig A10) repeats; and one from the variable region of LigB, i.e., LigBCen2). We have also studied the conserved region covering the three and six repeats (LigBCon1-3 and LigCon). All these proteins bind the calcium-mimic dye Stains-all. All the four selected domains bind Ca²+ with dissociation constants of 2-4 µM. The Lig A9 and Lig A10 domains fold well with moderate thermal stability, have β-sheet conformation, and form homodimers. Fluorescence spectra of the Big domains show a specific doublet (at 317 and 330 nm), probably due to Trp interaction with a Phe residue. Equilibrium unfolding of the selected Big domains is similar and follows a two-state model, suggesting the similarity of their fold. We demonstrate that the Lig proteins are Ca²+-binding proteins, with the Big domains harbouring the binding motif. We conclude that despite differences in sequence, a Big motif binds Ca²+. This work thus sets up a strong possibility for classifying proteins containing Big domains as a novel family of Ca²+-binding proteins. Since the Big domain is a part of many proteins in the bacterial kingdom, we suggest a possible function for these proteins via Ca²+ binding.
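For the reported dissociation constants of 2-4 µM, a single-site equilibrium binding model (the standard relation, not code from the paper) shows how site occupancy varies with Ca²+ concentration:

```python
def occupancy(conc_uM, kd_uM):
    """Fraction of sites occupied for single-site binding:
    f = [Ca2+] / (Kd + [Ca2+]), concentrations in the same units."""
    return conc_uM / (kd_uM + conc_uM)

# With Kd ~ 2-4 uM, as reported for the Big domains:
f_half = occupancy(3.0, 3.0)   # half-occupied at [Ca2+] = Kd
f_high = occupancy(30.0, 3.0)  # ~0.91 at ten times Kd
```

At [Ca²+] = Kd half the sites are occupied, and an order of magnitude above Kd occupancy approaches saturation, consistent with µM-range Ca²+ sensing.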

  9. 33 CFR 117.677 - Big Sunflower River.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 false Big Sunflower River. 117.677 Section 117.677 Navigation and Navigable Waters, COAST GUARD, DEPARTMENT OF HOMELAND SECURITY, BRIDGES, DRAWBRIDGE OPERATION REGULATIONS, Specific Requirements, Mississippi § 117.677 Big Sunflower River. The draw of...

  10. Data warehousing in the age of big data

    CERN Document Server

    Krishnan, Krish

    2013-01-01

    Data Warehousing in the Age of Big Data will help you and your organization make the most of unstructured data with your existing data warehouse. As Big Data continues to revolutionize how we use data, it doesn't have to create more confusion. Expert author Krish Krishnan helps you make sense of how Big Data fits into the world of data warehousing in clear and concise detail. The book is presented in three distinct parts. Part 1 discusses Big Data, its technologies, and use cases from early adopters. Part 2 addresses data warehousing, its shortcomings, and new architecture

  11. Characterizing Big Data Management

    OpenAIRE

    Rogério Rossi; Kechi Hirama

    2015-01-01

    Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis and visualization. However, technological resources, people and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can be supported by these three dimensions: t...

  12. Intracameral bevacizumab as an adjunct to trabeculectomy: a 1-year prospective, randomised study.

    Science.gov (United States)

    Vandewalle, Evelien; Abegão Pinto, Luís; Van Bergen, Tine; Spielberg, Leigh; Fieuws, Steffen; Moons, Lieve; Spileers, Werner; Zeyen, Thierry; Stalmans, Ingeborg

    2014-01-01

    To investigate the efficacy and safety of a single intracameral bevacizumab injection to improve the outcome of trabeculectomy. A 12-month, prospective, randomised, double-masked, placebo-controlled trial. Patients with medically uncontrolled open-angle glaucoma scheduled for a primary trabeculectomy were recruited and randomised to receive 50 µL of either bevacizumab (1.25 mg) or placebo (balanced salt solution) peroperatively. Absolute success was defined as intraocular pressure (IOP) ≤18 mm Hg and >5 mm Hg with at least 30% reduction from baseline and no loss of light perception. Success through the use of additional medical and/or surgical IOP-lowering treatments was defined as qualified success. 138 patients completed a 12-month follow-up, 69 of whom were in the bevacizumab treated group. IOP at 1 year postoperatively was significantly lower than baseline (placebo: 25.6±9.9 mm Hg vs 11.5±3.9 mm Hg, p<0.01; bevacizumab: 24.8±8.1 mm Hg vs 11.9±3.8 mm Hg, p<0.01), with no difference between treatment groups (p=0.69). However, absolute success was higher in the bevacizumab group (71% vs 51%, p=0.02), with the need for IOP-lowering interventions (needlings) being lower in this group (12% vs 33%, p=0.003). Complication rates were low and comparable between groups. Peroperative administration of intracameral bevacizumab significantly reduces the need for additional interventions during the follow-up of patients undergoing trabeculectomy.

  13. Big Data in der Cloud

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2014-01-01

    Technology assessment of big data, in particular cloud based big data services, for the Office for Technology Assessment at the German federal parliament (Bundestag).

  14. An analysis of cross-sectional differences in big and non-big public accounting firms' audit programs

    NARCIS (Netherlands)

    Blokdijk, J.H. (Hans); Drieenhuizen, F.; Stein, M.T.; Simunic, D.A.

    2006-01-01

    A significant body of prior research has shown that audits by the Big 5 (now Big 4) public accounting firms are quality differentiated relative to non-Big 5 audits. This result can be derived analytically by assuming that Big 5 and non-Big 5 firms face different loss functions for "audit failures"

  15. Big Data is invading big places as CERN

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Big Data technologies are becoming more popular with the constant growth of data generation in different fields such as social networks, the internet of things and laboratories like CERN. How is CERN making use of such technologies? How is machine learning applied at CERN with Big Data technologies? How much data do we move and how is it analyzed? All these questions will be answered during the talk.

  16. The big bang

    International Nuclear Information System (INIS)

    Chown, Marcus.

    1987-01-01

    The paper concerns the 'Big Bang' theory of the creation of the Universe 15 thousand million years ago, and traces events which physicists predict occurred soon after the creation. Unified theory of the moment of creation, evidence of an expanding Universe, the X-boson - the particle produced very soon after the big bang and which vanished from the Universe one-hundredth of a second after the big bang - and the fate of the Universe are all discussed. (U.K.)

  17. Small Big Data Congress 2017

    NARCIS (Netherlands)

    Doorn, J.

    2017-01-01

    TNO, in collaboration with the Big Data Value Center, presents the fourth Small Big Data Congress! Our congress aims at providing an overview of practical and innovative applications based on big data. Do you want to know what is happening in applied research with big data? And what can already be

  18. Big data opportunities and challenges

    CERN Document Server

    2014-01-01

    This ebook aims to give practical guidance for all those who want to understand big data better and learn how to make the most of it. Topics range from big data analysis, mobile big data and managing unstructured data to technologies, governance and intellectual property and security issues surrounding big data.

  19. High dose melphalan in the treatment of advanced neuroblastoma: results of a randomised trial (ENSG-1) by the European Neuroblastoma Study Group.

    Science.gov (United States)

    Pritchard, Jon; Cotterill, Simon J; Germond, Shirley M; Imeson, John; de Kraker, Jan; Jones, David R

    2005-04-01

    High dose myeloablative chemotherapy ("megatherapy"), with haematopoietic stem cell support, is now widely used to consolidate response to induction chemotherapy in patients with advanced neuroblastoma. In this study (European Neuroblastoma Study Group, ENSG1), the value of melphalan myeloablative "megatherapy" was evaluated in a randomised, multi-centre trial. Between 1982 and 1985, 167 children with stage IV and III neuroblastoma (123 stage IV > 1 year old at diagnosis and 44 stage III and stage IV from 6 to 12 months old at diagnosis) were treated with Oncovin, cisplatin, epipodophyllotoxin, and cyclophosphamide (OPEC) induction chemotherapy every 3 weeks. After surgical excision of the primary tumour, the 90 patients (69% of the total) who achieved complete response (CR) or good partial response (GPR) were eligible for randomisation either to high dose melphalan (180 mg per square meter) with autologous bone marrow support or to no further treatment. Sixty-five (72%) of eligible children were actually randomised, and 21 of these patients were surviving at the time of this analysis, with median follow-up from randomisation of 14.3 years. Five-year event-free survival (EFS) was 38% (95% confidence interval (CI) 21-54%) in the melphalan-treated group and 27% (95% CI 12-42%) in the "no-melphalan" group. This difference was not statistically significant (P = 0.08, log rank test), but for the 48 randomised stage IV patients aged >1 year at diagnosis outcome was significantly better in the melphalan-treated group: 5-year EFS 33% versus 17% (P = 0.01, log rank test). In this trial, high dose melphalan improved the length of EFS and overall survival of children with stage IV neuroblastoma >1 year of age who achieved CR or GPR after OPEC induction therapy and surgery. Multi-agent myeloablative regimens are now widely used as consolidation therapy for children with stage IV disease, and in those with other disease stages when the MYCN gene copy number in tumour cells is amplified.

  20. Big Data and Neuroimaging.

    Science.gov (United States)

    Webb-Vargas, Yenny; Chen, Shaojie; Fisher, Aaron; Mejia, Amanda; Xu, Yuting; Crainiceanu, Ciprian; Caffo, Brian; Lindquist, Martin A

    2017-12-01

    Big Data are of increasing importance in a variety of areas, especially in the biosciences. There is an emerging critical need for Big Data tools and methods, because of the potential impact of advancements in these areas. Importantly, statisticians and statistical thinking have a major role to play in creating meaningful progress in this arena. We would like to emphasize this point in this special issue, as it highlights both the dramatic need for statistical input for Big Data analysis and for a greater number of statisticians working on Big Data problems. We use the field of statistical neuroimaging to demonstrate these points. As such, this paper covers several applications and novel methodological developments of Big Data tools applied to neuroimaging data.

  1. Big Data; A Management Revolution : The emerging role of big data in businesses

    OpenAIRE

    Blasiak, Kevin

    2014-01-01

    Big data is a term that was coined in 2012 and has since emerged as one of the top trends in business and technology. Big data is an agglomeration of different technologies resulting in data processing capabilities that were previously unreachable. Big data is generally characterized by factors such as volume, velocity and variety, which distinguish it from traditional data use. The possibilities to utilize this technology are vast. Big data technology has touch points in differ...

  2. 36 CFR 9.8 - Use of water.

    Science.gov (United States)

    2010-07-01

    ... MANAGEMENT Mining and Mining Claims § 9.8 Use of water. (a) No operator may use for operations any water from... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Use of water. 9.8 Section 9.8... Regional Director. The Regional Director shall not approve a plan of operations requiring the use of water...

  3. Randomised clinical trial: efficacy and safety of vonoprazan vs. lansoprazole in patients with gastric or duodenal ulcers - results from two phase 3, non-inferiority randomised controlled trials.

    Science.gov (United States)

    Miwa, H; Uedo, N; Watari, J; Mori, Y; Sakurai, Y; Takanami, Y; Nishimura, A; Tatsumi, T; Sakaki, N

    2017-01-01

    Vonoprazan is a new potassium-competitive acid blocker for treatment of acid-related diseases. To conduct two randomised-controlled trials, to evaluate the non-inferiority of vonoprazan vs. lansoprazole, a proton pump inhibitor, for treatment of gastric ulcer (GU) or duodenal ulcer (DU). Patients aged ≥20 years with ≥1 endoscopically-confirmed GU or DU (≥5 mm white coating) were randomised 1:1 using double-dummy blinding to receive lansoprazole (30 mg) or vonoprazan (20 mg) for 8 (GU study) or 6 (DU study) weeks. The primary endpoint was the proportion of patients with endoscopically confirmed healed GU or DU. For GU, 93.5% (216/231) of vonoprazan-treated patients and 93.8% (211/225) of lansoprazole-treated patients achieved healed GU; non-inferiority of vonoprazan to lansoprazole was confirmed [difference = -0.3% (95% CI -4.750, 4.208); P = 0.0011]. For DU, 95.5% (170/178) of vonoprazan-treated patients and 98.3% (177/180) of lansoprazole-treated patients achieved healed DU; non-inferiority to lansoprazole was not confirmed [difference = -2.8% (95% CI -6.400, 0.745); P = 0.0654]. The incidences of treatment-emergent adverse events were slightly lower for GU and slightly higher for DU with vonoprazan than with lansoprazole. There was one death (subarachnoid haemorrhage) in the vonoprazan group (DU). The possibility of a relationship between this unexpected patient death and the study drug could not be ruled out. In both studies, increases in serum gastrin levels were greater in vonoprazan-treated vs. lansoprazole-treated patients; levels returned to baseline after treatment in both groups. Vonoprazan 20 mg has a similar tolerability profile to lansoprazole 30 mg and is non-inferior with respect to GU healing and has similar efficacy for DU healing. © 2016 Takeda Pharmaceutical Company, Ltd. Alimentary Pharmacology & Therapeutics published by John Wiley & Sons Ltd.
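    The GU non-inferiority figures above can be reproduced with a simple Wald 95% confidence interval for the difference of two proportions. This is a sketch, as the trial's exact variance method is not given in the abstract:

    ```python
    import math

    def diff_ci(x1, n1, x2, n2, z=1.96):
        """Wald 95% CI for the difference between two independent proportions."""
        p1, p2 = x1 / n1, x2 / n2
        d = p1 - p2
        se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
        return d, d - z * se, d + z * se

    # GU study: vonoprazan healed 216/231 vs lansoprazole 211/225.
    d, lo, hi = diff_ci(216, 231, 211, 225)
    ```

    This reproduces the abstract's difference of -0.3% with 95% CI (-4.750, 4.208); because the lower bound stays above the prespecified non-inferiority margin, non-inferiority of vonoprazan was confirmed for GU healing.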

  4. Social big data mining

    CERN Document Server

    Ishikawa, Hiroshi

    2015-01-01

    Social Media. Big Data and Social Data. Hypotheses in the Era of Big Data. Social Big Data Applications. Basic Concepts in Data Mining. Association Rule Mining. Clustering. Classification. Prediction. Web Structure Mining. Web Content Mining. Web Access Log Mining, Information Extraction and Deep Web Mining. Media Mining. Scalability and Outlier Detection.

  5. Big domains are novel Ca²+-binding modules: evidences from big domains of Leptospira immunoglobulin-like (Lig) proteins.

    Directory of Open Access Journals (Sweden)

    Rajeev Raman

    Full Text Available BACKGROUND: Many bacterial surface-exposed proteins mediate the host-pathogen interaction more effectively in the presence of Ca²+. Leptospiral immunoglobulin-like (Lig) proteins, LigA and LigB, are surface-exposed proteins containing Bacterial immunoglobulin-like (Big) domains. The function of proteins which contain the Big fold is not known. Based on the possible similarities of immunoglobulin and βγ-crystallin folds, we here explore the important question whether Ca²+ binds to a Big domain, which would provide a novel functional role for the proteins containing the Big fold. PRINCIPAL FINDINGS: We selected six individual Big domains for this study (three from the conserved part of LigA and LigB, denoted as Lig A3, Lig A4, and LigBCon5; two from the variable region of LigA, i.e., the 9th (Lig A9) and 10th (Lig A10) repeats; and one from the variable region of LigB, i.e., LigBCen2). We have also studied the conserved region covering the three and six repeats (LigBCon1-3 and LigCon). All these proteins bind the calcium-mimic dye Stains-all. All the selected domains bind Ca²+ with dissociation constants of 2-4 µM. Lig A9 and Lig A10 domains fold well with moderate thermal stability, have β-sheet conformation and form homodimers. Fluorescence spectra of Big domains show a specific doublet (at 317 and 330 nm), probably due to Trp interaction with a Phe residue. Equilibrium unfolding of selected Big domains is similar and follows a two-state model, suggesting the similarity in their fold. CONCLUSIONS: We demonstrate that the Lig proteins are Ca²+-binding proteins, with Big domains harbouring the binding motif. We conclude that despite differences in sequence, a Big motif binds Ca²+. This work thus sets up a strong possibility for classifying the proteins containing Big domains as a novel family of Ca²+-binding proteins. Since the Big domain is a part of many proteins in the bacterial kingdom, we suggest a possible function of these proteins via Ca²+ binding.

  6. Data: Big and Small.

    Science.gov (United States)

    Jones-Schenk, Jan

    2017-02-01

    Big data is a big topic in all leadership circles. Leaders in professional development must develop an understanding of what data are available across the organization that can inform effective planning for forecasting. Collaborating with others to integrate data sets can increase the power of prediction. Big data alone is insufficient to make big decisions. Leaders must find ways to access small data and triangulate multiple types of data to ensure the best decision making. J Contin Educ Nurs. 2017;48(2):60-61. Copyright 2017, SLACK Incorporated.

  7. Big Data Revisited

    DEFF Research Database (Denmark)

    Kallinikos, Jannis; Constantiou, Ioanna

    2015-01-01

    We elaborate on key issues of our paper New games, new rules: big data and the changing context of strategy as a means of addressing some of the concerns raised by the paper's commentators. We initially deal with the issue of social data and the role it plays in the current data revolution and the technological recording of facts. We further discuss the significance of the very mechanisms by which big data is produced, as distinct from the attributes of big data often discussed in the literature. In the final section of the paper, we qualify the alleged importance of algorithms and claim that the structures of data capture and the architectures in which data generation is embedded are fundamental to the phenomenon of big data.

  8. Confirmatory Factor Analysis of the Delirium Rating Scale Revised-98 (DRS-R98).

    Science.gov (United States)

    Thurber, Steven; Kishi, Yasuhiro; Trzepacz, Paula T; Franco, Jose G; Meagher, David J; Lee, Yanghyun; Kim, Jeong-Lan; Furlanetto, Leticia M; Negreiros, Daniel; Huang, Ming-Chyi; Chen, Chun-Hsin; Kean, Jacob; Leonard, Maeve

    2015-01-01

    Principal components analysis applied to the Delirium Rating Scale-Revised-98 contributes to understanding the delirium construct. Using a multisite pooled international delirium database, the authors applied confirmatory factor analysis to Delirium Rating Scale-Revised-98 scores from 859 adult patients evaluated by delirium experts (delirium, N=516; nondelirium, N=343). Confirmatory factor analysis found all diagnostic features and core symptoms (cognitive, language, thought process, sleep-wake cycle, motor retardation), except motor agitation, loaded onto factor 1. Motor agitation loaded onto factor 2 with noncore symptoms (delusions, affective lability, and perceptual disturbances). Factor 1 loading supports delirium as a single construct, but when accompanied by psychosis, motor agitation's role may not be solely as a circadian activity indicator.

  9. Big Data in industry

    Science.gov (United States)

    Latinović, T. S.; Preradović, D. M.; Barz, C. R.; Latinović, M. T.; Petrica, P. P.; Pop-Vadean, A.

    2016-08-01

    The amount of data at the global level has grown exponentially. Along with this phenomenon, we need new units of measure such as the exabyte, zettabyte and yottabyte, the last being the largest current unit for amounts of data. The growth of data creates a situation where the classic systems for the collection, storage, processing and visualization of data are losing the battle with the large volume, speed and variety of data that is generated continuously. Much of this data is created by the Internet of Things (IoT: cameras, satellites, cars, GPS navigation, etc.). It is our challenge to come up with new technologies and tools for the management and exploitation of these large amounts of data. Big Data has been a hot topic in IT circles in recent years. However, Big Data is also recognized in the business world, and increasingly in public administration. This paper proposes an ontology of big data analytics and examines how to enhance business intelligence through big data analytics as a service by presenting a big data analytics service-oriented architecture. This paper also discusses the interrelationship between business intelligence and big data analytics. The proposed approach might facilitate the research and development of business analytics, big data analytics, and business intelligence as well as intelligent agents.
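    For scale, the decimal (SI) byte units named in the abstract can be related with a few lines; the `convert` helper is an illustrative sketch, not from the paper:

    ```python
    # Decimal (SI) byte units referenced above; binary prefixes
    # (e.g. exbibyte = 2**60 bytes) differ and are not shown.
    UNITS = {
        "terabyte": 10**12,
        "petabyte": 10**15,
        "exabyte": 10**18,
        "zettabyte": 10**21,
        "yottabyte": 10**24,
    }

    def convert(value, src, dst):
        """Convert an amount of data between named byte units."""
        return value * UNITS[src] / UNITS[dst]
    ```

    For example, `convert(1, "zettabyte", "exabyte")` gives 1000.0, i.e. each step up the list is a factor of a thousand.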

  10. Big Data Analytics An Overview

    Directory of Open Access Journals (Sweden)

    Jayshree Dwivedi

    2015-08-01

    Full Text Available Big data is data that exceeds the storage capacity and processing power of conventional systems. The term is used for data sets so large or complex that traditional data-processing tools cannot handle them. Big data size is a constantly moving target, ranging year by year from a few dozen terabytes to many petabytes; for example, the amount of data produced by people on social networking sites grows rapidly every year. Big data is not only data; it has become a complete subject encompassing various tools, techniques and frameworks. It reflects the explosive growth and evolution of data, both structured and unstructured. Big data is a set of techniques and technologies that require new forms of integration to uncover large hidden values from datasets that are diverse, complex and of massive scale. Such data is difficult to work with using most relational database management systems and desktop statistics and visualization packages, requiring instead massively parallel software running on tens, hundreds or even thousands of servers. A big data environment is used to acquire, organize and analyse these various types of data. In this paper we describe applications, problems and tools of big data and give an overview of big data.

  11. The SIKS/BiGGrid Big Data Tutorial

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Lammerts, Evert; de Vries, A.P.

    2011-01-01

    The School for Information and Knowledge Systems SIKS and the Dutch e-science grid BiG Grid organized a new two-day tutorial on Big Data at the University of Twente on 30 November and 1 December 2011, just preceding the Dutch-Belgian Database Day. The tutorial is on top of some exciting new

  12. Overall survival benefit for sequential doxorubicin-docetaxel compared with concurrent doxorubicin and docetaxel in node-positive breast cancer--8-year results of the Breast International Group 02-98 phase III trial

    DEFF Research Database (Denmark)

    Oakman, C; Francis, P A; Crown, J

    2013-01-01

    Background In women with node-positive breast cancer, the Breast International Group (BIG) 02-98 tested the incorporation of docetaxel (Taxotere) into doxorubicin (Adriamycin)-based chemotherapy, and compared sequential and concurrent docetaxel. At 5 years, there was a trend for improved disease...

  13. Urbanising Big

    DEFF Research Database (Denmark)

    Ljungwall, Christer

    2013-01-01

    Development in China raises the question of how big a city can become, and at the same time be sustainable, writes Christer Ljungwall of the Swedish Agency for Growth Policy Analysis.

  14. Big bang nucleosynthesis

    International Nuclear Information System (INIS)

    Boyd, Richard N.

    2001-01-01

    The precision of measurements in modern cosmology has made huge strides in recent years, with measurements of the cosmic microwave background and the determination of the Hubble constant now rivaling the level of precision of the predictions of big bang nucleosynthesis. However, these results are not necessarily consistent with the predictions of the Standard Model of big bang nucleosynthesis. Reconciling these discrepancies may require extensions of the basic tenets of the model, and possibly of the reaction rates that determine the big bang abundances

  15. 7 CFR 98.4 - Analytical methods.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...

  16. WE-H-BRB-00: Big Data in Radiation Oncology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2016-06-15

    Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop, (2) Where do we stand in the applications of big data in radiation oncology?, and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success. The overriding goal of this trio panel of presentations is to improve awareness of the wide-ranging opportunities for big data impact on patient quality care and enhancing potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: (1) to discuss current and future sources of big data for use in radiation oncology research; (2) to optimize our current data collection by adopting new strategies from outside radiation oncology; (3) to determine what new knowledge big data can provide for clinical decision support for personalized medicine. (L. Xing: NIH/NCI, Google Inc.)

  17. WE-H-BRB-00: Big Data in Radiation Oncology

    International Nuclear Information System (INIS)

    2016-01-01

    Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop, (2) Where do we stand in the applications of big data in radiation oncology?, and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success. The overriding goal of this trio panel of presentations is to improve awareness of the wide-ranging opportunities for big data impact on patient quality care and enhancing potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: (1) to discuss current and future sources of big data for use in radiation oncology research; (2) to optimize our current data collection by adopting new strategies from outside radiation oncology; (3) to determine what new knowledge big data can provide for clinical decision support for personalized medicine. (L. Xing: NIH/NCI, Google Inc.)

  18. Excitation functions of the 98Mo+d reactions

    International Nuclear Information System (INIS)

    Zarubin, P.P.; Padalko, V.Yu.; Khrisanfov, Yu.V.; Lebedev, P.P.; Podkopaev, Yu.N.

    The excitation functions of the 98Mo+d reactions were studied. The energy dependence of the (d,p), (d,n) and (d,α) reactions was investigated by activation analysis. Deuteron energies in the range 6-12 MeV were set by means of aluminium filters. 98Mo foils with surface densities of 1.02, 0.23 and 0.14 mg·cm-2, enriched to 94.1% in 98Mo, were used as targets. The gamma spectra were measured by a Ge(Li) detector. The 98Mo(d,p)99Mo reaction excitation function was determined via detection of the 739 and 181 keV γ-radiation of 99Mo (T1/2 = 66.47 h); the 140 keV γ-radiation of 99mTc (T1/2 = 6 h) was detected to determine the 98Mo(d,n)99mTc reaction excitation function, and the 460, 568, 1091, 1200 and 1492 keV γ-quanta of 96Nb (T1/2 = 23.35 h) for the 98Mo(d,α)96Nb reaction. In the excitation function a wide extremum was observed at Ed ≈ 10 MeV. The ratio of cross sections σm(d,n)/σ(d,p) on the 98Mo target was determined and found to be a decreasing function of the deuteron energy. The relative cross sections were determined with an accuracy of ±5%, while for the absolute values of cross sections the accuracy was ±15%.

  19. 45 CFR 98.34 - Parental rights and responsibilities.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Parental rights and responsibilities. 98.34 Section 98.34 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Program Operations (Child Care Services)-Parental Rights and Responsibilities § 98.34...

  20. 45 CFR 98.65 - Audits and financial reporting.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Audits and financial reporting. 98.65 Section 98... DEVELOPMENT FUND Financial Management § 98.65 Audits and financial reporting. (a) Each Lead Agency shall have... independent standards. (g) The Secretary shall require financial reports as necessary. ...

  1. Keeping up with Big Data--Designing an Introductory Data Analytics Class

    Science.gov (United States)

    Hijazi, Sam

    2016-01-01

    Universities need to keep up with the demand of the business world when it comes to Big Data. The exponential increase in data has put additional demands on academia to meet the big gap in education. Business demand for Big Data has surpassed 1.9 million positions in 2015. Big Data, Business Intelligence, Data Analytics, and Data Mining are the…

  2. The ethics of big data in big agriculture

    OpenAIRE

    Carbonell (Isabelle M.)

    2016-01-01

    This paper examines the ethics of big data in agriculture, focusing on the power asymmetry between farmers and large agribusinesses like Monsanto. Following the recent purchase of Climate Corp., Monsanto is currently the most prominent biotech agribusiness to buy into big data. With wireless sensors on tractors monitoring or dictating every decision a farmer makes, Monsanto can now aggregate large quantities of previously proprietary farming data, enabling a privileged position with unique in...

  3. Competing with big business: a randomised experiment testing the effects of messages to promote alcohol and sugary drink control policy.

    Science.gov (United States)

    Scully, Maree; Brennan, Emily; Durkin, Sarah; Dixon, Helen; Wakefield, Melanie; Barry, Colleen L; Niederdeppe, Jeff

    2017-12-28

    Evidence-based policies encouraging healthy behaviours are often strongly opposed by well-funded industry groups. As public support is crucial for policy change, public health advocates need to be equipped with strategies to offset the impact of anti-policy messages. In this study, we aimed to investigate the effectiveness of theory-based public health advocacy messages in generating public support for sugary drink/alcohol policies (increased taxes; sport sponsorship bans) and improving resistance to subsequent anti-policy messages typical of the sugary drink/alcohol industry. We conducted a two-wave randomised online experiment assigning Australian adults to one of four health policies (sugary drink tax; sugary drink industry sports sponsorship ban; alcohol tax; alcohol industry sports sponsorship ban). Within each health policy, we randomised participants to one of five message conditions: (i) non-advocacy based message about the size and seriousness of the relevant health issue (control); (ii) standard pro-policy arguments alone; (iii) standard pro-policy arguments combined with an inoculation message (forewarning and directly refuting anti-policy arguments from the opposition); (iv) standard pro-policy arguments combined with a narrative message (a short, personal story about an individual's experience of the health issue); or (v) standard pro-policy arguments combined with a composite inoculation and narrative message. At time 1, we exposed participants (n = 6000) to their randomly assigned message. Around two weeks later, we re-contacted participants (n = 3285) and exposed them to an anti-policy message described as being from a representative of the sugary drink/alcohol industry. Generalised linear models tested for differences between conditions in policy support and anti-industry beliefs at both time points. Only the standard argument plus narrative message increased policy support relative to control at time 1. The standard argument plus narrative

  4. The big data-big model (BDBM) challenges in ecological research

    Science.gov (United States)

    Luo, Y.

    2015-12-01

    The field of ecology has become a big-data science in the past decades due to the development of new sensors used in numerous studies across the ecological community. Many sensor networks have been established to collect data. For example, satellites such as Terra and OCO-2, among others, have collected data relevant to the global carbon cycle. Thousands of field manipulative experiments have been conducted to examine feedbacks of the terrestrial carbon cycle to global changes. Networks of observations, such as FLUXNET, have measured land processes. In particular, the implementation of the National Ecological Observatory Network (NEON), which is designed to network different kinds of sensors at many locations over the nation, will generate large volumes of ecological data every day. The raw data from the sensors in those networks offer an unprecedented opportunity for accelerating advances in our knowledge of ecological processes, educating teachers and students, supporting decision-making, testing ecological theory, and forecasting changes in ecosystem services. Currently, ecologists do not have the infrastructure in place to synthesize massive yet heterogeneous data into resources for decision support. It is urgent to develop an ecological forecasting system that can make the best use of multiple sources of data to assess long-term biosphere change and anticipate future states of ecosystem services at regional and continental scales. Forecasting relies on big models that describe the major processes underlying complex system dynamics. Ecological system models, despite great simplification of the real systems, are still complex in order to address real-world problems. For example, the Community Land Model (CLM) incorporates thousands of processes related to energy balance, hydrology, and biogeochemistry. Integration of massive data from multiple big data sources with complex models has to tackle Big Data-Big Model (BDBM) challenges. Those challenges include interoperability of multiple

  5. Applying the intention-to-treat principle in practice: Guidance on handling randomisation errors.

    Science.gov (United States)

    Yelland, Lisa N; Sullivan, Thomas R; Voysey, Merryn; Lee, Katherine J; Cook, Jonathan A; Forbes, Andrew B

    2015-08-01

    The intention-to-treat principle states that all randomised participants should be analysed in their randomised group. The implications of this principle are widely discussed in relation to the analysis, but have received limited attention in the context of handling errors that occur during the randomisation process. The aims of this article are to (1) demonstrate the potential pitfalls of attempting to correct randomisation errors and (2) provide guidance on handling common randomisation errors when they are discovered that maintains the goals of the intention-to-treat principle. The potential pitfalls of attempting to correct randomisation errors are demonstrated and guidance on handling common errors is provided, using examples from our own experiences. We illustrate the problems that can occur when attempts are made to correct randomisation errors and argue that documenting, rather than correcting these errors, is most consistent with the intention-to-treat principle. When a participant is randomised using incorrect baseline information, we recommend accepting the randomisation but recording the correct baseline data. If ineligible participants are inadvertently randomised, we advocate keeping them in the trial and collecting all relevant data but seeking clinical input to determine their appropriate course of management, unless they can be excluded in an objective and unbiased manner. When multiple randomisations are performed in error for the same participant, we suggest retaining the initial randomisation and either disregarding the second randomisation if only one set of data will be obtained for the participant, or retaining the second randomisation otherwise. When participants are issued the incorrect treatment at the time of randomisation, we propose documenting the treatment received and seeking clinical input regarding the ongoing treatment of the participant. Randomisation errors are almost inevitable and should be reported in trial publications. 
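    The handling rules above can be condensed into a lookup that documents, rather than corrects, each error. The sketch below is purely illustrative (identifiers and data structure are ours, not the article's), assuming a simple audit log kept alongside the trial database:

    ```python
    # Hypothetical sketch of the recommended handling of common randomisation
    # errors; wording condensed from the article, identifiers our own.

    RECOMMENDED_ACTIONS = {
        "incorrect_baseline_data": (
            "Accept the randomisation as performed; record the correct "
            "baseline data alongside the values used at randomisation."),
        "ineligible_participant": (
            "Keep the participant in the trial and collect all relevant data; "
            "seek clinical input on ongoing management. Exclude only if this "
            "can be done in an objective and unbiased manner."),
        "multiple_randomisations": (
            "Retain the initial randomisation; disregard the second if only "
            "one set of data will be obtained, otherwise retain it too."),
        "incorrect_treatment_issued": (
            "Document the treatment actually received; seek clinical input "
            "on ongoing treatment. Analyse in the randomised group."),
    }

    def document_error(participant_id, error_type, audit_log):
        """Document (rather than correct) a randomisation error, consistent
        with the intention-to-treat principle, and return the action taken."""
        action = RECOMMENDED_ACTIONS[error_type]
        audit_log.append({"participant": participant_id,
                          "error": error_type,
                          "action": action})
        return action
    ```

    The point of the sketch is that every branch records the error and keeps the participant analysable as randomised; none of the branches rewrites the allocation itself.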

  6. A Big Video Manifesto

    DEFF Research Database (Denmark)

    Mcilvenny, Paul Bruce; Davidsen, Jacob

    2017-01-01

    For the last few years, we have witnessed a hype about the potential results and insights that quantitative big data can bring to the social sciences. The wonder of big data has moved into education, traffic planning, and disease control with a promise of making things better with big numbers and beautiful visualisations. However, we also need to ask what the tools of big data can do both for the Humanities and for more interpretative approaches and methods. Thus, we prefer to explore how the power of computation, new sensor technologies and massive storage can also help with video-based qualitative research.

  7. Identifying Dwarfs Workloads in Big Data Analytics

    OpenAIRE

    Gao, Wanling; Luo, Chunjie; Zhan, Jianfeng; Ye, Hainan; He, Xiwen; Wang, Lei; Zhu, Yuqing; Tian, Xinhui

    2015-01-01

    Big data benchmarking is particularly important and provides applicable yardsticks for evaluating booming big data systems. However, wide coverage and great complexity of big data computing impose big challenges on big data benchmarking. How can we construct a benchmark suite using a minimum set of units of computation to represent diversity of big data analytics workloads? Big data dwarfs are abstractions of extracting frequently appearing operations in big data computing. One dwarf represen...

  8. Burnable absorber-integrated Guide Thimble (BigT) - 1. Design concepts and neutronic characterization on the fuel assembly benchmarks

    International Nuclear Information System (INIS)

    Yahya, Mohd-Syukri; Yu, Hwanyeal; Kim, Yonghee

    2016-01-01

    This paper presents the conceptual designs of a new burnable absorber (BA) for the pressurized water reactor (PWR), which is named 'Burnable absorber-integrated Guide Thimble' (BigT). The BigT integrates BA materials into standard guide thimble in a PWR fuel assembly. Neutronic sensitivities and practical design considerations of the BigT concept are points of highlight in the first half of the paper. Specifically, the BigT concepts are characterized in view of its BA material and spatial self-shielding variations. In addition, the BigT replaceability requirement, bottom-end design specifications and thermal-hydraulic considerations are also deliberated. Meanwhile, much of the second half of the paper is devoted to demonstrate practical viability of the BigT absorbers via comparative evaluations against the conventional BA technologies in representative 17x17 and 16x16 fuel assembly lattices. For the 17x17 lattice evaluations, all three BigT variants are benchmarked against Westinghouse's existing BA technologies, while in the 16x16 assembly analyses, the BigT designs are compared against traditional integral gadolinia-urania rod design. All analyses clearly show that the BigT absorbers perform as well as the commercial BA technologies in terms of reactivity and power peaking management. In addition, it has been shown that sufficiently high control rod worth can be obtained with the BigT absorbers in place. All neutronic simulations were completed using the Monte Carlo Serpent code with ENDF/B-VII.0 library. (author)

  9. Applications of Big Data in Education

    OpenAIRE

    Faisal Kalota

    2015-01-01

    Big Data and analytics have gained a huge momentum in recent years. Big Data feeds into the field of Learning Analytics (LA) that may allow academic institutions to better understand the learners' needs and proactively address them. Hence, it is important to have an understanding of Big Data and its applications. The purpose of this descriptive paper is to provide an overview of Big Data, the technologies used in Big Data, and some of the applications of Big Data in educa...

  10. Big Data Semantics

    NARCIS (Netherlands)

    Ceravolo, Paolo; Azzini, Antonia; Angelini, Marco; Catarci, Tiziana; Cudré-Mauroux, Philippe; Damiani, Ernesto; Mazak, Alexandra; van Keulen, Maurice; Jarrar, Mustafa; Santucci, Giuseppe; Sattler, Kai-Uwe; Scannapieco, Monica; Wimmer, Manuel; Wrembel, Robert; Zaraket, Fadi

    2018-01-01

    Big Data technology has discarded traditional data modeling approaches as no longer applicable to distributed data processing. It is, however, largely recognized that Big Data impose novel challenges in data and infrastructure management. Indeed, multiple components and procedures must be

  11. Big data has big potential for applications to climate change adaptation

    NARCIS (Netherlands)

    Ford, James D.; Tilleard, Simon E.; Berrang-Ford, Lea; Araos, Malcolm; Biesbroek, Robbert; Lesnikowski, Alexandra C.; MacDonald, Graham K.; Hsu, Angel; Chen, Chen; Bizikova, Livia

    2016-01-01

    The capacity to collect and analyze massive amounts of data is transforming research in the natural and social sciences (1). And yet, the climate change adaptation community has largely overlooked these developments. Here, we examine how “big data” can inform adaptation research.

  12. Comparative validity of brief to medium-length Big Five and Big Six personality questionnaires

    NARCIS (Netherlands)

    Thalmayer, A.G.; Saucier, G.; Eigenhuis, A.

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five

  13. Big data need big theory too.

    Science.gov (United States)

    Coveney, Peter V; Dougherty, Edward R; Highfield, Roger R

    2016-11-13

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their 'depth' and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote 'blind' big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2015 The Authors.

  14. Big Data and medicine: a big deal?

    Science.gov (United States)

    Mayer-Schönberger, V; Ingelsson, E

    2018-05-01

    Big Data promises huge benefits for medical research. Looking beyond superficial increases in the amount of data collected, we identify three key areas where Big Data differs from conventional analyses of data samples: (i) data are captured more comprehensively relative to the phenomenon under study; this reduces some bias but surfaces important trade-offs, such as between data quantity and data quality; (ii) data are often analysed using machine learning tools, such as neural networks rather than conventional statistical methods resulting in systems that over time capture insights implicit in data, but remain black boxes, rarely revealing causal connections; and (iii) the purpose of the analyses of data is no longer simply answering existing questions, but hinting at novel ones and generating promising new hypotheses. As a consequence, when performed right, Big Data analyses can accelerate research. Because Big Data approaches differ so fundamentally from small data ones, research structures, processes and mindsets need to adjust. The latent value of data is being reaped through repeated reuse of data, which runs counter to existing practices not only regarding data privacy, but data management more generally. Consequently, we suggest a number of adjustments such as boards reviewing responsible data use, and incentives to facilitate comprehensive data sharing. As data's role changes to a resource of insight, we also need to acknowledge the importance of collecting and making data available as a crucial part of our research endeavours, and reassess our formal processes from career advancement to treatment approval. © 2017 The Association for the Publication of the Journal of Internal Medicine.

  15. Assessing Big Data

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2015-01-01

    In recent years, big data has been one of the most controversially discussed technologies in terms of its possible positive and negative impact. Therefore, the need for technology assessments is obvious. This paper first provides, based on the results of a technology assessment study, an overview of the potential and challenges associated with big data, and then describes the problems experienced during the study as well as methods found helpful to address them. The paper concludes with reflections on how the insights from the technology assessment study may have an impact on the future governance of big data.

  16. Big data, big responsibilities

    Directory of Open Access Journals (Sweden)

    Primavera De Filippi

    2014-01-01

    Big data refers to the collection and aggregation of large quantities of data produced by and about people, things or the interactions between them. With the advent of cloud computing, specialised data centres with powerful computational hardware and software resources can be used for processing and analysing a humongous amount of aggregated data coming from a variety of different sources. The analysis of such data is all the more valuable to the extent that it allows for specific patterns to be found and new correlations to be made between different datasets, so as to eventually deduce or infer new information, as well as to potentially predict behaviours or assess the likelihood for a certain event to occur. This article will focus specifically on the legal and moral obligations of online operators collecting and processing large amounts of data, to investigate the potential implications of big data analysis on the privacy of individual users and on society as a whole.

  17. Toward a manifesto for the 'public understanding of big data'.

    Science.gov (United States)

    Michael, Mike; Lupton, Deborah

    2016-01-01

    In this article, we sketch a 'manifesto' for the 'public understanding of big data'. On the one hand, this entails such public understanding of science and public engagement with science and technology-tinged questions as follows: How, when and where are people exposed to, or do they engage with, big data? Who are regarded as big data's trustworthy sources, or credible commentators and critics? What are the mechanisms by which big data systems are opened to public scrutiny? On the other hand, big data generate many challenges for public understanding of science and public engagement with science and technology: How do we address publics that are simultaneously the informant, the informed and the information of big data? What counts as understanding of, or engagement with, big data, when big data themselves are multiplying, fluid and recursive? As part of our manifesto, we propose a range of empirical, conceptual and methodological exhortations. We also provide Appendix 1 that outlines three novel methods for addressing some of the issues raised in the article. © The Author(s) 2015.

  18. Comparative validity of brief to medium-length Big Five and Big Six Personality Questionnaires.

    Science.gov (United States)

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-12-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are faced with a variety of options as to inventory length. Furthermore, a 6-factor model has been proposed to extend and update the Big Five model, in part by adding a dimension of Honesty/Humility or Honesty/Propriety. In this study, 3 popular brief to medium-length Big Five measures (NEO Five Factor Inventory, Big Five Inventory [BFI], and International Personality Item Pool), and 3 six-factor measures (HEXACO Personality Inventory, Questionnaire Big Six Scales, and a 6-factor version of the BFI) were placed in competition to best predict important student life outcomes. The effect of test length was investigated by comparing brief versions of most measures (subsets of items) with original versions. Personality questionnaires were administered to undergraduate students (N = 227). Participants' college transcripts and student conduct records were obtained 6-9 months after data was collected. Six-factor inventories demonstrated better predictive ability for life outcomes than did some Big Five inventories. Additional behavioral observations made on participants, including their Facebook profiles and cell-phone text usage, were predicted similarly by Big Five and 6-factor measures. A brief version of the BFI performed surprisingly well; across inventory platforms, increasing test length had little effect on predictive validity. Comparative validity of the models and measures in terms of outcome prediction and parsimony is discussed.

  19. Big Machines and Big Science: 80 Years of Accelerators at Stanford

    Energy Technology Data Exchange (ETDEWEB)

    Loew, Gregory

    2008-12-16

    Longtime SLAC physicist Greg Loew will present a trip through SLAC's origins, highlighting its scientific achievements, and provide a glimpse of the lab's future in 'Big Machines and Big Science: 80 Years of Accelerators at Stanford.'

  20. Dual of big bang and big crunch

    International Nuclear Information System (INIS)

    Bak, Dongsu

    2007-01-01

    Starting from the Janus solution and its gauge theory dual, we obtain the dual gauge theory description of the cosmological solution by the procedure of double analytic continuation. The coupling is driven either to zero or to infinity at the big-bang and big-crunch singularities, which are shown to be related by the S-duality symmetry. In the dual Yang-Mills theory description, these are nonsingular as the coupling goes to zero in the N=4 super Yang-Mills theory. The cosmological singularities simply signal the failure of the supergravity description of the full type IIB superstring theory

  1. Comparative Validity of Brief to Medium-Length Big Five and Big Six Personality Questionnaires

    Science.gov (United States)

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are…

  2. Big data for health.

    Science.gov (United States)

    Andreu-Perez, Javier; Poon, Carmen C Y; Merrifield, Robert D; Wong, Stephen T C; Yang, Guang-Zhong

    2015-07-01

    This paper provides an overview of recent developments in big data in the context of biomedical and health informatics. It outlines the key characteristics of big data and how medical and health informatics, translational bioinformatics, sensor informatics, and imaging informatics will benefit from an integrated approach of piecing together different aspects of personalized information from a diverse range of data sources, both structured and unstructured, covering genomics, proteomics, metabolomics, as well as imaging, clinical diagnosis, and long-term continuous physiological sensing of an individual. It is expected that recent advances in big data will expand our knowledge for testing new hypotheses about disease management from diagnosis to prevention to personalized treatment. The rise of big data, however, also raises challenges in terms of privacy, security, data ownership, data stewardship, and governance. This paper discusses some of the existing activities and future opportunities related to big data for health, outlining some of the key underlying issues that need to be tackled.

  3. 45 CFR 98.3 - Effect on State law.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Effect on State law. 98.3 Section 98.3 Public... Goals, Purposes and Definitions § 98.3 Effect on State law. (a) Nothing in the Act or this part shall be construed to supersede or modify any provision of a State constitution or State law that prohibits the...

  4. Novel glucose-sensing technology and hypoglycaemia in type 1 diabetes: a multicentre, non-masked, randomised controlled trial.

    Science.gov (United States)

    Bolinder, Jan; Antuna, Ramiro; Geelhoed-Duijvestijn, Petronella; Kröger, Jens; Weitgasser, Raimund

    2016-11-05

    Tight control of blood glucose in type 1 diabetes delays onset of macrovascular and microvascular diabetic complications; however, glucose levels need to be closely monitored to prevent hypoglycaemia. We aimed to assess whether a factory-calibrated, sensor-based, flash glucose-monitoring system, compared with self-monitored glucose testing, reduced exposure to hypoglycaemia in patients with type 1 diabetes. In this multicentre, prospective, non-masked, randomised controlled trial, we enrolled adult patients with well controlled type 1 diabetes (HbA1c ≤58 mmol/mol [7·5%]) from 23 European diabetes centres. After 2 weeks of all participants wearing the blinded sensor, those with readings for at least 50% of the period were randomly assigned (1:1) to flash sensor-based glucose monitoring (intervention group) or to self-monitoring of blood glucose with capillary strips (control group). Randomisation was done centrally using the biased-coin minimisation method, dependent on study centre and type of insulin administration. Participants, investigators, and study staff were not masked to group allocation. The primary outcome was change in time spent in hypoglycaemia. Future studies are needed to assess the effectiveness of this technology in patients with less well controlled diabetes and in younger age groups. Abbott Diabetes Care. Copyright © 2016 Elsevier Ltd. All rights reserved.
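    Biased-coin minimisation of the kind described above (balancing on study centre and type of insulin administration) can be sketched generically. The code below is a Pocock–Simon-style illustration with made-up identifiers and parameters, not the trial's actual allocation software:

    ```python
    import random

    def minimise(participants, new_factors, arms=("intervention", "control"),
                 p_favoured=0.8, rng=random):
        """Biased-coin minimisation sketch. `participants` is a list of
        (factors_dict, assigned_arm) pairs for everyone already randomised;
        `new_factors` are the new participant's factor levels, e.g.
        {"centre": "A", "insulin": "MDI"}."""
        imbalance = {}
        for arm in arms:
            total = 0
            for factor, level in new_factors.items():
                # Per-arm counts of participants sharing this factor level,
                # as if the new participant were assigned to `arm`.
                counts = {a: sum(1 for f, assigned in participants
                                 if f.get(factor) == level and assigned == a)
                          for a in arms}
                counts[arm] += 1
                total += max(counts.values()) - min(counts.values())
            imbalance[arm] = total
        if imbalance[arms[0]] == imbalance[arms[1]]:
            return rng.choice(arms)  # tie: fall back to simple randomisation
        best = min(arms, key=lambda a: imbalance[a])
        # Biased coin: favour the arm that minimises imbalance with prob p.
        other = [a for a in arms if a != best][0]
        return best if rng.random() < p_favoured else other
    ```

    With `p_favoured=1.0` this reduces to deterministic minimisation; values below 1 keep the allocation unpredictable while still steering towards balance.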

  5. Big Data: Implications for Health System Pharmacy.

    Science.gov (United States)

    Stokes, Laura B; Rogers, Joseph W; Hertig, John B; Weber, Robert J

    2016-07-01

    Big Data refers to datasets that are so large and complex that traditional methods and hardware for collecting, sharing, and analyzing them are not possible. Big Data that is accurate leads to more confident decision making, improved operational efficiency, and reduced costs. The rapid growth of health care information results in Big Data around health services, treatments, and outcomes, and Big Data can be used to analyze the benefit of health system pharmacy services. The goal of this article is to provide a perspective on how Big Data can be applied to health system pharmacy. It will define Big Data, describe the impact of Big Data on population health, review specific implications of Big Data in health system pharmacy, and describe an approach for pharmacy leaders to effectively use Big Data. A few strategies involved in managing Big Data in health system pharmacy include identifying potential opportunities for Big Data, prioritizing those opportunities, protecting privacy concerns, promoting data transparency, and communicating outcomes. As health care information expands in its content and becomes more integrated, Big Data can enhance the development of patient-centered pharmacy services.

  6. Generalized formal model of Big Data

    OpenAIRE

    Shakhovska, N.; Veres, O.; Hirnyak, M.

    2016-01-01

    This article dwells on the basic characteristic features of Big Data technologies. It analyzes the existing definitions of the term “big data”. The article proposes and describes the elements of a generalized formal model of big data, and analyzes the peculiarities of applying the proposed model's components. It describes the fundamental differences between Big Data technology and business analytics. Big Data is supported by the distributed file system Google File System ...

  7. BigWig and BigBed: enabling browsing of large distributed datasets.

    Science.gov (United States)

    Kent, W J; Zweig, A S; Barber, G; Hinrichs, A S; Karolchik, D

    2010-09-01

    BigWig and BigBed files are compressed binary indexed files containing data at several resolutions that allow the high-performance display of next-generation sequencing experiment results in the UCSC Genome Browser. The visualization is implemented using a multi-layered software approach that takes advantage of specific capabilities of web-based protocols and Linux and UNIX operating systems files, R trees and various indexing and compression tricks. As a result, only the data needed to support the current browser view is transmitted rather than the entire file, enabling fast remote access to large distributed data sets. Binaries for the BigWig and BigBed creation and parsing utilities may be downloaded at http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/. Source code for the creation and visualization software is freely available for non-commercial use at http://hgdownload.cse.ucsc.edu/admin/jksrc.zip, implemented in C and supported on Linux. The UCSC Genome Browser is available at http://genome.ucsc.edu.
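    The core idea above, consulting an index so that only the blocks overlapping the current browser view are transmitted, can be illustrated with a much-simplified 1-D interval index. This toy sketch stands in for the actual R-tree-based BigWig/BigBed machinery and is not the real file format:

    ```python
    import bisect

    class IntervalIndex:
        """Toy index over sorted, non-overlapping (start, end, value)
        intervals; a conceptual stand-in for the R-tree index that lets
        BigWig/BigBed readers fetch only the data in the current view."""

        def __init__(self, intervals):
            self.intervals = sorted(intervals)
            self.starts = [s for s, _, _ in self.intervals]

        def query(self, view_start, view_end):
            """Return only the intervals overlapping [view_start, view_end),
            without scanning the whole dataset from the beginning."""
            i = bisect.bisect_right(self.starts, view_start) - 1
            if i < 0:
                i = 0
            out = []
            while i < len(self.intervals):
                s, e, v = self.intervals[i]
                if s >= view_end:
                    break  # everything further right is outside the view
                if e > view_start:
                    out.append((s, e, v))
                i += 1
            return out
    ```

    In the real formats the same principle operates over compressed on-disk blocks addressed by byte offset, which is what makes fast remote access over HTTP possible.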

  8. [Big Data and Public Health - Results of the Working Group 1 of the Forum Future Public Health, Berlin 2016].

    Science.gov (United States)

    Moebus, Susanne; Kuhn, Joseph; Hoffmann, Wolfgang

    2017-11-01

    Big Data is a diffuse term, which can be described as an approach to linking gigantic and often unstructured data sets. Big Data is used in many corporate areas. For Public Health (PH), however, Big Data is not a well-developed topic. In this article, Big Data is explained according to the intention of use, information efficiency, prediction and clustering. Using the example of application in science, patient care, equal opportunities and smart cities, typical challenges and open questions of Big Data for PH are outlined. In addition to the inevitable use of Big Data, networking is necessary, especially with knowledge-carriers and decision-makers from politics and health care practice. © Georg Thieme Verlag KG Stuttgart · New York.

  9. Self-managed loaded exercise versus usual physiotherapy treatment for rotator cuff tendinopathy: a pilot randomised controlled trial.

    Science.gov (United States)

    Littlewood, Chris; Malliaras, Peter; Mawson, Sue; May, Stephen; Walters, Stephen J

    2014-03-01

    Rotator cuff tendinopathy is a common source of shoulder pain characterised by persistent and/or recurrent problems for a proportion of sufferers. The aim of this study was to pilot the methods proposed to conduct a substantive study to evaluate the effectiveness of a self-managed loaded exercise programme versus usual physiotherapy treatment for rotator cuff tendinopathy. A single-centre pragmatic unblinded parallel group pilot randomised controlled trial. One private physiotherapy clinic, northern England. Twenty-four participants with rotator cuff tendinopathy. The intervention was a programme of self-managed loaded exercise. The control group received usual physiotherapy treatment. Baseline assessment comprised the Shoulder Pain and Disability Index (SPADI) and the Short-Form 36, repeated three months post randomisation. The recruitment target was met and the majority of participants (98%) were willing to be randomised. 100% retention was attained with all participants completing the SPADI at three months. Exercise adherence rates were excellent (90%). The mean change in SPADI score was -23.7 (95% CI -14.4 to -33.3) points for the self-managed exercise group and -19.0 (95% CI -6.0 to -31.9) points for the usual physiotherapy treatment group. The difference in three month SPADI scores was 0.1 (95% CI -16.6 to 16.9) points in favour of the usual physiotherapy treatment group. In keeping with previous research which indicates the need for further evaluation of self-managed loaded exercise for rotator cuff tendinopathy, these methods and the preliminary evaluation of outcome offer a foundation and stimulus to conduct a substantive study. Copyright © 2013 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.

  10. Big data-driven business how to use big data to win customers, beat competitors, and boost profits

    CERN Document Server

    Glass, Russell

    2014-01-01

    Get the expert perspective and practical advice on big data The Big Data-Driven Business: How to Use Big Data to Win Customers, Beat Competitors, and Boost Profits makes the case that big data is for real, and more than just big hype. The book uses real-life examples-from Nate Silver to Copernicus, and Apple to Blackberry-to demonstrate how the winners of the future will use big data to seek the truth. Written by a marketing journalist and the CEO of a multi-million-dollar B2B marketing platform that reaches more than 90% of the U.S. business population, this book is a comprehens

  11. Prediction of difficult mask ventilation using a systematic assessment of risk factors vs. existing practice - a cluster randomised clinical trial in 94,006 patients

    DEFF Research Database (Denmark)

    Nørskov, A K; Wetterslev, J; Rosenstock, C V

    2017-01-01

    We compared implementation of systematic airway assessment with existing practice of airway assessment on prediction of difficult mask ventilation. Twenty-six departments were cluster-randomised to assess eleven risk factors for difficult airway management (intervention) or to continue with their existing airway assessment (control). In both groups, patients predicted as a difficult mask ventilation and/or difficult intubation were registered in the Danish Anaesthesia Database, with a notational summary of airway management. The trial's primary outcome was the respective incidence of unpredicted difficult and easy mask ventilation in the two groups. Among 94,006 patients undergoing mask ventilation, the incidence of unpredicted difficult mask ventilation in the intervention group was 0.91% and 0.88% in the control group; odds ratio (OR) 0.98 (95% CI 0.66-1.44), p = 0.90. The incidence of patients predicted...
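    An odds ratio with a 95% confidence interval, such as the OR 0.98 (95% CI 0.66-1.44) reported above, is derived from the 2x2 table of event counts. A minimal sketch using the standard Woolf (log) interval, with hypothetical counts rather than the trial's data:

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio with a Woolf (log-scale) confidence interval for a
        2x2 table: a/b = events/non-events in group 1, c/d = in group 2."""
        or_ = (a * d) / (b * c)
        # Standard error of log(OR): sqrt of summed reciprocal counts.
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lower = math.exp(math.log(or_) - z * se)
        upper = math.exp(math.log(or_) + z * se)
        return or_, lower, upper
    ```

    A CI that spans 1.0, as in the trial above, corresponds to a non-significant difference between the groups.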

  12. Big Game Reporting Stations

    Data.gov (United States)

    Vermont Center for Geographic Information — Point locations of big game reporting stations. Big game reporting stations are places where hunters can legally report harvested deer, bear, or turkey. These are...

  13. Exploring the BIG Five of e-leadership by developing digital strategies with mobile, cloud, big data, social media, and the Internet of things

    NARCIS (Netherlands)

    Spil, Ton; Pris, Miran; Kijl, Björn

    2017-01-01

    Recent developments in digital technologies -- 1) mobile technology, 2) cloud computing, 3) big data, 4) social media, and 5) the Internet of Things -- show that these so-called BIG Five technologies are increasingly transforming organizations by enabling business innovation. This paper aims to

  14. Stalin's Big Fleet Program

    National Research Council Canada - National Science Library

    Mauner, Milan

    2002-01-01

    Although Dr. Milan Hauner's study 'Stalin's Big Fleet program' has focused primarily on the formation of Big Fleets during the Tsarist and Soviet periods of Russia's naval history, there are important lessons...

  15. Physical properties of superbulky lanthanide metallocenes: synthesis and extraordinary luminescence of [Eu(II)(Cp(BIG))2] (Cp(BIG) = (4-nBu-C6H4)5-cyclopentadienyl).

    Science.gov (United States)

    Harder, Sjoerd; Naglav, Dominik; Ruspic, Christian; Wickleder, Claudia; Adlung, Matthias; Hermes, Wilfried; Eul, Matthias; Pöttgen, Rainer; Rego, Daniel B; Poineau, Frederic; Czerwinski, Kenneth R; Herber, Rolfe H; Nowik, Israel

    2013-09-09

    strong orange photoluminescence (quantum yield: 45 %): excitation at 412 nm (24,270 cm(-1)) gives a symmetrical single band in the emission spectrum at 606 nm (νmax =16495 cm(-1), FWHM: 2090 cm(-1), Stokes-shift: 2140 cm(-1)), which is assigned to a 4f(6)5d(1) → 4f(7) transition of Eu(II). These remarkable values compare well to those for Eu(II)-doped ionic host lattices and are likely caused by the rigidity of the [Eu(Cp(BIG))2] complex. Sharp emission signals, typical for Eu(III), are not visible. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Five Big, Big Five Issues : Rationale, Content, Structure, Status, and Crosscultural Assessment

    NARCIS (Netherlands)

    De Raad, Boele

    1998-01-01

    This article discusses the rationale, content, structure, status, and crosscultural assessment of the Big Five trait factors, focusing on topics of dispute and misunderstanding. Taxonomic restrictions of the original Big Five forerunner, the "Norman Five," are discussed, and criticisms regarding the

  17. 9 CFR 98.36 - Animal semen from Canada.

    Science.gov (United States)

    2010-01-01

    ... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Animal semen from Canada. 98.36... CERTAIN ANIMAL EMBRYOS AND ANIMAL SEMEN Certain Animal Semen § 98.36 Animal semen from Canada. (a) General importation requirements for animal semen from Canada. If the product is . . . Then . . . (1) Equine semen...

  18. Big data challenges

    DEFF Research Database (Denmark)

    Bachlechner, Daniel; Leimbach, Timo

    2016-01-01

    Although reports on big data success stories have been accumulating in the media, most organizations dealing with high-volume, high-velocity and high-variety information assets still face challenges. Only a thorough understanding of these challenges puts organizations into a position in which they can make an informed decision for or against big data, and, if the decision is positive, overcome the challenges smoothly. The combination of a series of interviews with leading experts from enterprises, associations and research institutions, and focused literature reviews allowed not only... framework are also relevant. For large enterprises and startups specialized in big data, it is typically easier to overcome the challenges than it is for other enterprises and public administration bodies.

  19. Big Data and HPC collocation: Using HPC idle resources for Big Data Analytics

    OpenAIRE

    MERCIER , Michael; Glesser , David; Georgiou , Yiannis; Richard , Olivier

    2017-01-01

    International audience; Executing Big Data workloads upon High Performance Computing (HPC) infrastractures has become an attractive way to improve their performances. However, the collocation of HPC and Big Data workloads is not an easy task, mainly because of their core concepts' differences. This paper focuses on the challenges related to the scheduling of both Big Data and HPC workloads on the same computing platform. In classic HPC workloads, the rigidity of jobs tends to create holes in ...

  20. Plasma urate concentration and risk of coronary heart disease: a Mendelian randomisation analysis

    Science.gov (United States)

    White, Jon; Sofat, Reecha; Hemani, Gibran; Shah, Tina; Engmann, Jorgen; Dale, Caroline; Shah, Sonia; Kruger, Felix A; Giambartolomei, Claudia; Swerdlow, Daniel I; Palmer, Tom; McLachlan, Stela; Langenberg, Claudia; Zabaneh, Delilah; Lovering, Ruth; Cavadino, Alana; Jefferis, Barbara; Finan, Chris; Wong, Andrew; Amuzu, Antoinette; Ong, Ken; Gaunt, Tom R; Warren, Helen; Davies, Teri-Louise; Drenos, Fotios; Cooper, Jackie; Ebrahim, Shah; Lawlor, Debbie A; Talmud, Philippa J; Humphries, Steve E; Power, Christine; Hypponen, Elina; Richards, Marcus; Hardy, Rebecca; Kuh, Diana; Wareham, Nicholas; Ben-Shlomo, Yoav; Day, Ian N; Whincup, Peter; Morris, Richard; Strachan, Mark W J; Price, Jacqueline; Kumari, Meena; Kivimaki, Mika; Plagnol, Vincent; Whittaker, John C; Smith, George Davey; Dudbridge, Frank; Casas, Juan P; Holmes, Michael V; Hingorani, Aroon D

    2016-01-01

    Summary Background Increased circulating plasma urate concentration is associated with an increased risk of coronary heart disease, but the extent of any causative effect of urate on risk of coronary heart disease is still unclear. In this study, we aimed to clarify any causal role of urate on coronary heart disease risk using Mendelian randomisation analysis. Methods We first did a fixed-effects meta-analysis of the observational association of plasma urate and risk of coronary heart disease. We then used a conventional Mendelian randomisation approach to investigate the causal relevance using a genetic instrument based on 31 urate-associated single nucleotide polymorphisms (SNPs). To account for potential pleiotropic associations of certain SNPs with risk factors other than urate, we additionally did both a multivariable Mendelian randomisation analysis, in which the genetic associations of SNPs with systolic and diastolic blood pressure, HDL cholesterol, and triglycerides were included as covariates, and an Egger Mendelian randomisation (MR-Egger) analysis to estimate a causal effect accounting for unmeasured pleiotropy. Findings In the meta-analysis of 17 prospective observational studies (166 486 individuals; 9784 coronary heart disease events) a 1 SD higher urate concentration was associated with an odds ratio (OR) for coronary heart disease of 1·07 (95% CI 1·04–1·10). The corresponding OR estimates from the conventional, multivariable adjusted, and Egger Mendelian randomisation analysis (58 studies; 198 598 individuals; 65 877 events) were 1·18 (95% CI 1·08–1·29), 1·10 (1·00–1·22), and 1·05 (0·92–1·20), respectively, per 1 SD increment in plasma urate. Interpretation Conventional and multivariate Mendelian randomisation analysis implicates a causal role for urate in the development of coronary heart disease, but these estimates might be inflated by hidden pleiotropy. Egger Mendelian randomisation analysis, which accounts for
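
The "conventional Mendelian randomisation approach" described above is commonly implemented as an inverse-variance-weighted (IVW) combination of per-SNP Wald ratios. A minimal sketch of that estimator, using made-up SNP summary statistics rather than the study's data:

```python
# Illustrative sketch: inverse-variance-weighted (IVW) Mendelian
# randomisation estimate from per-SNP summary statistics.
# The SNP effect sizes below are fabricated for demonstration only.

def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    """Causal effect of exposure on outcome, combining per-SNP Wald
    ratios (beta_outcome / beta_exposure) with inverse-variance weights."""
    num = sum(bx * by / se ** 2
              for bx, by, se in zip(beta_exposure, beta_outcome, se_outcome))
    den = sum(bx ** 2 / se ** 2
              for bx, se in zip(beta_exposure, se_outcome))
    return num / den

# three hypothetical SNPs: effect on urate (SD units), effect on CHD
# (log-odds), and standard error of the outcome association
bx = [0.30, 0.25, 0.10]
by = [0.050, 0.040, 0.018]
se = [0.010, 0.012, 0.015]
print(round(ivw_estimate(bx, by, se), 3))  # log-odds of CHD per SD urate
```

Exponentiating the resulting log-odds estimate gives an odds ratio per 1 SD increment in the exposure, which is the scale the abstract reports (e.g. its OR of 1.18 per 1 SD urate); MR-Egger differs from IVW by also fitting an intercept that absorbs directional pleiotropy.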

  1. Boarding to Big data

    Directory of Open Access Journals (Sweden)

    Oana Claudia BRATOSIN

    2016-05-01

    Today Big data is an emerging topic, as the quantity of information grows exponentially, laying the foundation for its main challenge: the value of the information. The information value is defined not only by extracting value from huge data sets, as fast and optimally as possible, but also by extracting value from uncertain and inaccurate data, in an innovative manner, using Big data analytics. At this point, the main challenge for businesses that use Big data tools is to clearly define the scope and the necessary output of the business so that real value can be gained. This article aims to explain the Big data concept, its various classification criteria and architecture, as well as its impact on processes worldwide.

  2. D-branes in a big bang/big crunch universe: Nappi-Witten gauged WZW model

    Energy Technology Data Exchange (ETDEWEB)

    Hikida, Yasuaki [School of Physics and BK-21 Physics Division, Seoul National University, Seoul 151-747 (Korea, Republic of); Nayak, Rashmi R. [Dipartimento di Fisica and INFN, Sezione di Roma 2, 'Tor Vergata', Rome 00133 (Italy)]; Panigrahi, Kamal L. [Dipartimento di Fisica and INFN, Sezione di Roma 2, 'Tor Vergata', Rome 00133 (Italy)]

    2005-05-01

    We study D-branes in the Nappi-Witten model, which is a gauged WZW model based on (SL(2,R) x SU(2))/(U(1) x U(1)). The model describes a four dimensional space-time consisting of cosmological regions with big bang/big crunch singularities and static regions with closed time-like curves. The aim of this paper is to investigate by D-brane probes whether there are pathologies associated with the cosmological singularities and the closed time-like curves. We first classify D-branes in a group theoretical way, and then examine DBI actions for effective theories on the D-branes. In particular, we show that D-brane metric from the DBI action does not include singularities, and wave functions on the D-branes are well behaved even in the presence of closed time-like curves.

  3. Big data - a 21st century science Maginot Line? No-boundary thinking: shifting from the big data paradigm.

    Science.gov (United States)

    Huang, Xiuzhen; Jennings, Steven F; Bruce, Barry; Buchan, Alison; Cai, Liming; Chen, Pengyin; Cramer, Carole L; Guan, Weihua; Hilgert, Uwe Kk; Jiang, Hongmei; Li, Zenglu; McClure, Gail; McMullen, Donald F; Nanduri, Bindu; Perkins, Andy; Rekepalli, Bhanu; Salem, Saeed; Specker, Jennifer; Walker, Karl; Wunsch, Donald; Xiong, Donghai; Zhang, Shuzhong; Zhang, Yu; Zhao, Zhongming; Moore, Jason H

    2015-01-01

    Whether your interests lie in scientific arenas, the corporate world, or in government, you have certainly heard the praises of big data: Big data will give you new insights, allow you to become more efficient, and/or will solve your problems. While big data has had some outstanding successes, many are now beginning to see that it is not the Silver Bullet that it has been touted to be. Here our main concern is the overall impact of big data; the current manifestation of big data is constructing a Maginot Line in science in the 21st century. Big data is no longer simply "lots of data"; the big data paradigm is putting the spirit of the Maginot Line into lots of data. Big data overall is disconnecting researchers from science challenges. We propose No-Boundary Thinking (NBT), applying no-boundary thinking to problem definition in order to address science challenges.

  4. Big Egos in Big Science

    DEFF Research Database (Denmark)

    Andersen, Kristina Vaarst; Jeppesen, Jacob

    In this paper we investigate the micro-mechanisms governing the structural evolution and performance of scientific collaboration. Scientific discovery tends not to be led by so-called lone 'stars', or big egos, but instead by collaboration among groups of researchers, from a multitude of institutions...

  5. Big Data and Big Science

    OpenAIRE

    Di Meglio, Alberto

    2014-01-01

    Brief introduction to the challenges of big data in scientific research based on the work done by the HEP community at CERN and how the CERN openlab promotes collaboration among research institutes and industrial IT companies. Presented at the FutureGov 2014 conference in Singapore.

  6. Gabapentin for the Management of Chronic Pelvic Pain in Women (GaPP1): A Pilot Randomised Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Steff C Lewis

    Chronic pelvic pain (CPP) affects 2.1-24% of women. Frequently, no underlying pathology is identified, and the pain is difficult to manage. Gabapentin is prescribed for CPP despite no robust evidence of efficacy. We performed a pilot trial in two UK centres to inform the planning of a future multicentre RCT to evaluate gabapentin in CPP management. Our primary objective was to determine levels of participant recruitment and retention. Secondary objectives included estimating potential effectiveness, acceptability to participants of trial methodology, and cost-effectiveness of gabapentin. Women with CPP and no obvious pelvic pathology were assigned to an increasing regimen of gabapentin (300-2700 mg daily) or placebo. We calculated the proportion of eligible women randomised, and of randomised participants who were followed up to six months. The analyses by treatment group were by intention-to-treat. Interviews were conducted to evaluate women's experiences of the trial. A probabilistic decision-analytical model was used to estimate cost-effectiveness. Between September 2012 and 2013, 47 women (34% of those eligible) were randomised (22 to gabapentin, 25 to placebo), and 25 (53%) completed six-month follow-up. Participants on gabapentin had less pain (BPI difference 1.72 points, 95% CI: 0.07-3.36) and an improvement in mood (HADS difference 4.35 points, 95% CI: 1.97-6.73) at six months compared with those allocated placebo. The majority of participants described their trial experience favorably. At the UK threshold for willingness-to-pay, the probabilities of gabapentin or no treatment being cost-effective are similar. A pilot trial assessing gabapentin for CPP was feasible, but uncertainty remains, highlighting the need for a large definitive trial.

  7. Challenges of Big Data Analysis.

    Science.gov (United States)

    Fan, Jianqing; Han, Fang; Liu, Han

    2014-06-01

    Big Data bring new opportunities to modern society and challenges to data scientists. On one hand, Big Data hold great promise for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require a new computational and statistical paradigm. This article gives an overview of the salient features of Big Data and how these features drive paradigm change in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in a high-confidence set and point out that exogenous assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions.
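
The spurious-correlation challenge named in the abstract can be demonstrated directly: when there are far more variables than samples, some pure-noise feature correlates strongly with the response by chance alone. A minimal sketch with synthetic data (the sample sizes and feature counts are illustrative, not from the article):

```python
# Illustrative sketch: spurious correlation in high dimensions.
# With few samples and many noise features, the best observed
# correlation looks "strong" even though every feature is pure noise.
import random

def max_abs_correlation(n_samples, n_features, seed=0):
    rng = random.Random(seed)
    y = [rng.gauss(0, 1) for _ in range(n_samples)]

    def corr(x, y):
        # plain Pearson correlation
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / (sxx * syy) ** 0.5

    best = 0.0
    for _ in range(n_features):
        x = [rng.gauss(0, 1) for _ in range(n_samples)]
        best = max(best, abs(corr(x, y)))
    return best

# few samples, many noise features: a deceptively large correlation appears
print(max_abs_correlation(n_samples=20, n_features=2000))
# many samples, few features: the maximum correlation stays near zero
print(max_abs_correlation(n_samples=2000, n_features=5))
```

With 20 samples and 2000 noise features the maximum absolute correlation typically exceeds 0.5, while with 2000 samples and 5 features it stays close to zero; this is exactly the regime in which naive variable screening selects noise.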

  8. Film Presentation: Big Bang, mes ancêtres et moi

    CERN Multimedia

    2010-01-01

    Big Bang, mes ancêtres et moi, by Franco-German TV producer ARTE (2009)   What do we know about the origins of the world today? This documentary presents a voyage into the mystery of these origins, accompanied by passionate scientists such as paleoanthropologist Pascal Picq, astrophysicist Hubert Reeves, physicist Etienne Klein and quantum gravity theorist Abhay Ashtekar. Organized around three key moments – the birth of the Universe, the appearance of life and the origins of mankind – this investigation takes us to various research areas around the world, including the large underground particle accelerator at CERN. The German version of this film, Big Bang im Labor, will be presented on 1st October. Big Bang, mes ancêtres et moi will be shown on Friday, 24 September from 13:00 to 14:00 in room 222-R-001. Language: French. Big Bang im Labor will be shown on Friday, 1 October from 13:00 to 14:00 in the Main Auditorium. Language: German

  9. Nucleoporin Nup98 mediates galectin-3 nuclear-cytoplasmic trafficking

    Energy Technology Data Exchange (ETDEWEB)

    Funasaka, Tatsuyoshi, E-mail: funasaka@staff.kanazawa-u.ac.jp [Laboratory of Molecular and Cellular Biology, Department of Biology, Faculty of Natural Systems, Institute of Science and Engineering, Kanazawa University, Ishikawa (Japan); Balan, Vitaly; Raz, Avraham [Department of Oncology and Pathology, Karmanos Cancer Institute, Wayne State University, School of Medicine, Detroit, MI (United States); Wong, Richard W., E-mail: rwong@staff.kanazawa-u.ac.jp [Laboratory of Molecular and Cellular Biology, Department of Biology, Faculty of Natural Systems, Institute of Science and Engineering, Kanazawa University, Ishikawa (Japan); Bio-AFM Frontier Research Center, Kanazawa Kanazawa University, Ishikawa (Japan)

    2013-04-26

    Highlights: •Nuclear pore protein Nup98 is a novel binding partner of galectin-3. •Nup98 transports galectin-3 into cytoplasm. •Nup98 depletion leads to galectin-3 nuclear transport and induces growth retardation. •Nup98 may involve in ß-catenin pathway through interaction with galectin-3. -- Abstract: Nucleoporin Nup98 is a component of the nuclear pore complex, and is important in transport across the nuclear pore. Many studies implicate nucleoporin in cancer progression, but no direct mechanistic studies of its effect in cancer have been reported. We show here that Nup98 specifically regulates nucleus–cytoplasm transport of galectin-3, which is a ß-galactoside-binding protein that affects adhesion, migration, and cancer progression, and controls cell growth through the ß-catenin signaling pathway in cancer cells. Nup98 interacted with galectin-3 on the nuclear membrane, and promoted galectin-3 cytoplasmic translocation whereas other nucleoporins did not show these functions. Inversely, silencing of Nup98 expression by siRNA technique localized galectin-3 to the nucleus and retarded cell growth, which was rescued by Nup98 transfection. In addition, Nup98 RNA interference significantly suppressed downstream mRNA expression in the ß-catenin pathway, such as cyclin D1 and FRA-1, while nuclear galectin-3 binds to ß-catenin to inhibit transcriptional activity. Reduced expression of ß-catenin target genes is consistent with the Nup98 reduction and the galectin-3–nucleus translocation rate. Overall, the results show Nup98’s involvement in nuclear–cytoplasm translocation of galectin-3 and ß-catenin signaling pathway in regulating cell proliferation, and the results depicted here suggest a novel therapeutic target/modality for cancers.

  10. Big data is not a monolith

    CERN Document Server

    Ekbia, Hamid R; Mattioli, Michael

    2016-01-01

    Big data is ubiquitous but heterogeneous. Big data can be used to tally clicks and traffic on web pages, find patterns in stock trades, track consumer preferences, identify linguistic correlations in large corpuses of texts. This book examines big data not as an undifferentiated whole but contextually, investigating the varied challenges posed by big data for health, science, law, commerce, and politics. Taken together, the chapters reveal a complex set of problems, practices, and policies. The advent of big data methodologies has challenged the theory-driven approach to scientific knowledge in favor of a data-driven one. Social media platforms and self-tracking tools change the way we see ourselves and others. The collection of data by corporations and government threatens privacy while promoting transparency. Meanwhile, politicians, policy makers, and ethicists are ill-prepared to deal with big data's ramifications. The contributors look at big data's effect on individuals as it exerts social control throu...

  11. Big universe, big data

    DEFF Research Database (Denmark)

    Kremer, Jan; Stensbo-Smidt, Kristoffer; Gieseke, Fabian Cristian

    2017-01-01

    , modern astronomy requires big data know-how, in particular it demands highly efficient machine learning and image analysis algorithms. But scalability is not the only challenge: Astronomy applications touch several current machine learning research questions, such as learning from biased data and dealing......, and highlight some recent methodological advancements in machine learning and image analysis triggered by astronomical applications....

  12. Poker Player Behavior After Big Wins and Big Losses

    OpenAIRE

    Gary Smith; Michael Levere; Robert Kurtzman

    2009-01-01

    We find that experienced poker players typically change their style of play after winning or losing a big pot--most notably, playing less cautiously after a big loss, evidently hoping for lucky cards that will erase their loss. This finding is consistent with Kahneman and Tversky's (Kahneman, D., A. Tversky. 1979. Prospect theory: An analysis of decision under risk. Econometrica 47(2) 263-292) break-even hypothesis and suggests that when investors incur a large loss, it might be time to take ...

  13. Storage and Database Management for Big Data

    Science.gov (United States)

    2015-07-27

    ... cloud models that satisfy different problem ... Enterprise Big Data: interactive, on-demand, virtualization, Java ... replication. Data loss can only occur if three drives fail prior to any one of the failures being corrected. Hadoop is written in Java and is installed in a ... visible view into a dataset. There are many popular database management systems such as MySQL, PostgreSQL, and Oracle. Most commonly ...
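
The replication point in the fragments above can be made concrete. A minimal sketch, assuming each drive fails independently with a fixed probability before any repair completes (an illustrative simplification, not a model from the report):

```python
# Illustrative sketch: durability of n-way replication.
# A block is lost only if every drive holding one of its replicas
# fails before any of the failures is repaired.

def block_loss_probability(p_drive_failure, replicas=3):
    """Probability that all `replicas` independent copies are lost."""
    return p_drive_failure ** replicas

# With a 1% per-repair-window drive failure probability (made-up number),
# three-way replication drives block loss down to about one in a million.
print(block_loss_probability(0.01))  # ≈ 1e-06
```

This is why triple replication (as used by HDFS-style storage) tolerates two concurrent drive failures per block: loss requires all three replicas to fail within one repair window.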

  14. Big Data and Chemical Education

    Science.gov (United States)

    Pence, Harry E.; Williams, Antony J.

    2016-01-01

    The amount of computerized information that organizations collect and process is growing so large that the term Big Data is commonly being used to describe the situation. Accordingly, Big Data is defined by a combination of the Volume, Variety, Velocity, and Veracity of the data being processed. Big Data tools are already having an impact in…

  15. The effects of BIG-3 on osteoblast differentiation are not dependent upon endogenously produced BMPs

    International Nuclear Information System (INIS)

    Gori, Francesca; Demay, Marie B.

    2005-01-01

    BMPs play an important role in both intramembranous and endochondral ossification. BIG-3, BMP-2-induced gene 3 kb, encodes a WD-40 repeat protein that accelerates the program of osteoblastic differentiation in vitro. To examine the potential interactions between BIG-3 and the BMP-2 pathway during osteoblastic differentiation, MC3T3-E1 cells stably transfected with BIG-3 (MC3T3E1-BIG-3), or with the empty vector (MC3T3E1-EV), were treated with noggin. Noggin treatment of pooled MC3T3E1-EV clones inhibited the differentiation-dependent increase in AP activity observed in the untreated MC3T3E1-EV clones but did not affect the increase in AP activity in the MC3T3E1-BIG-3 clones. Noggin treatment decreased the expression of Runx2 and type I collagen mRNAs and impaired mineralized matrix formation in MC3T3E1-EV clones but not in MC3T3E1-BIG-3 clones. To determine whether the actions of BIG-3 on osteoblast differentiation converged upon the BMP pathway or involved an alternate signaling pathway, Smad1 phosphorylation was examined. Basal phosphorylation of Smad1 was not altered in the MC3T3E1-BIG-3 clones. However, these clones did not exhibit the noggin-dependent decrease in phosphoSmad1 observed in the MC3T3E1-EV clones, nor did noggin decrease nuclear localization of phosphoSmad1. These observations suggest that BIG-3 accelerates osteoblast differentiation in MC3T3-E1 cells by inducing phosphorylation and nuclear translocation of Smad1 independently of endogenously produced BMPs

  16. Big data in Finnish financial services

    OpenAIRE

    Laurila, M. (Mikko)

    2017-01-01

    Abstract This thesis aims to explore the concept of big data, and create understanding of big data maturity in the Finnish financial services industry. The research questions of this thesis are “What kind of big data solutions are being implemented in the Finnish financial services sector?” and “Which factors impede faster implementation of big data solutions in the Finnish financial services sector?”. ...

  17. Big data in fashion industry

    Science.gov (United States)

    Jain, S.; Bruniaux, J.; Zeng, X.; Bruniaux, P.

    2017-10-01

    Significant work has been done in the field of big data in the last decade. The concept of big data involves analysing voluminous data to extract valuable information. In the fashion world, big data is increasingly playing a part in trend forecasting and in analysing consumer behaviour, preferences and emotions. The purpose of this paper is to introduce the term fashion data and explain why it can be considered big data. It also gives a broad classification of the types of fashion data and briefly defines them. The methodology and working of a system that will use this data are also briefly described.

  18. Conditional deletion of CD98hc inhibits osteoclast development

    Directory of Open Access Journals (Sweden)

    Hideki Tsumura

    2016-03-01

    The CD98 heavy chain (CD98hc) regulates virus-induced cell fusion and monocyte fusion, and is involved in amino acid transport. Here, we examined the role that CD98hc plays in the formation of osteoclasts using CD98hcflox/flox LysM-cre peritoneal macrophages (CD98hc-deficient macrophages). Peritoneal macrophages were co-cultured with osteoblasts in the presence of 1,25(OH)2 vitamin D3, and thereafter stained with tartrate-resistant acid phosphatase staining solution. Multinucleated osteoclast formation was severely impaired in the peritoneal macrophages isolated from the CD98hc-deficient mice compared with those from wild-type mice. CD98hc mediates integrin signaling and amino acid transport through the CD98 light chain (CD98lc). In integrin signaling, suppression of the M-CSF/RANKL-induced phosphorylation of ERK, Akt, JNK and p130Cas was observed at the triggering phase in the CD98hc-deficient peritoneal macrophages. Moreover, we showed that the general control non-derepressible (GCN) pathway, which is activated by amino acid starvation, was induced in the CD98hc-deficient peritoneal macrophages stimulated with RANKL. These results indicate that CD98 plays two important roles in osteoclast formation, through integrin signaling and amino acid transport.

  19. Presentation of Diagnostic Information to Doctors May Change Their Interpretation and Clinical Management: A Web-Based Randomised Controlled Trial.

    Science.gov (United States)

    Ben-Shlomo, Yoav; Collin, Simon M; Quekett, James; Sterne, Jonathan A C; Whiting, Penny

    2015-01-01

    There is little evidence on how best to present diagnostic information to doctors and whether this makes any difference to clinical management. We undertook a randomised controlled trial to see if different data presentations altered clinicians' decision to further investigate or treat a patient with a fictitious disorder ("Green syndrome") and their ability to determine post-test probability. We recruited doctors registered with the United Kingdom's largest online network for medical doctors between 10 July and 6 November 2012. Participants were randomised to one of four arms: (a) text summary of sensitivity and specificity, (b) Fagan's nomogram, (c) probability-modifying plot (PMP), (d) natural frequency tree (NFT). The main outcome measure was the decision whether to treat, not treat or undertake a brain biopsy on the hypothetical patient, and the correct post-test probability. Secondary outcome measures included knowledge of diagnostic tests. 917 participants attempted the survey and complete data were available from 874 (95.3%). Doctors randomised to the PMP and NFT arms were more likely to treat the patient than those randomised to the text-only arm (ORs 1.49, 95% CI 1.02-2.16, and 1.43, 95% CI 0.98-2.08, respectively). More participants randomised to the PMP (87/218, 39.9%) and NFT (73/207, 35.3%) arms than the nomogram (50/194, 25.8%) or text-only (30/255, 11.8%) arms reported the correct post-test probability.
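
The post-test probability the trial asked doctors to determine is what a Fagan nomogram computes graphically: convert the pre-test probability to odds, multiply by the test's likelihood ratio, and convert back. A minimal sketch with illustrative numbers (these are not the trial's "Green syndrome" values):

```python
# Illustrative sketch: the calculation behind Fagan's nomogram.
# Pre-test probability, sensitivity and specificity below are made up.

def post_test_probability(pre_test_prob, sensitivity, specificity, positive=True):
    """Post-test probability of disease after a positive or negative test."""
    if positive:
        lr = sensitivity / (1 - specificity)        # LR+ for a positive result
    else:
        lr = (1 - sensitivity) / specificity        # LR- for a negative result
    pre_odds = pre_test_prob / (1 - pre_test_prob)  # probability -> odds
    post_odds = pre_odds * lr                       # Bayes' rule in odds form
    return post_odds / (1 + post_odds)              # odds -> probability

# e.g. pre-test probability 0.25, sensitivity 0.90, specificity 0.80:
print(round(post_test_probability(0.25, 0.90, 0.80), 3))                  # positive test
print(round(post_test_probability(0.25, 0.90, 0.80, positive=False), 3))  # negative test
```

With these assumed numbers, a positive result raises the probability from 0.25 to 0.6, and a negative result lowers it to about 0.04, which is the update the text summary, nomogram, PMP and NFT arms each presented in a different form.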

  20. Big data bioinformatics.

    Science.gov (United States)

    Greene, Casey S; Tan, Jie; Ung, Matthew; Moore, Jason H; Cheng, Chao

    2014-12-01

    Recent technological advances allow for high throughput profiling of biological systems in a cost-efficient manner. The low cost of data generation is leading us to the "big data" era. The availability of big data provides unprecedented opportunities but also raises new challenges for data mining and analysis. In this review, we introduce key concepts in the analysis of big data, including both "machine learning" algorithms as well as "unsupervised" and "supervised" examples of each. We note packages for the R programming language that are available to perform machine learning analyses. In addition to programming based solutions, we review webservers that allow users with limited or no programming background to perform these analyses on large data compendia. © 2014 Wiley Periodicals, Inc.
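
The supervised/unsupervised distinction the review introduces can be shown with a toy example. The sketch below uses a nearest-centroid classifier, one of the simplest supervised methods; the two-feature "expression profiles" and class labels are entirely made up for illustration:

```python
# Illustrative sketch: a toy supervised classifier of the kind surveyed
# in big-data bioinformatics reviews. All data below are fabricated.

def centroid(points):
    """Mean point of a list of equal-length feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def fit_nearest_centroid(samples, labels):
    """Supervised step: learn one centroid per class from labelled samples."""
    classes = {}
    for x, y in zip(samples, labels):
        classes.setdefault(y, []).append(x)
    return {y: centroid(pts) for y, pts in classes.items()}

def predict(model, x):
    """Assign x to the class with the nearest centroid."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda y: dist2(model[y], x))

# toy two-gene expression profiles for two hypothetical sample groups
model = fit_nearest_centroid(
    [(0.1, 0.2), (0.2, 0.1), (0.9, 1.0), (1.0, 0.8)],
    ["healthy", "healthy", "tumour", "tumour"],
)
print(predict(model, (0.85, 0.9)))  # falls nearest the "tumour" centroid
```

An unsupervised method such as k-means would use the same samples but no labels, discovering the two groups rather than being told them; that contrast is the "supervised" versus "unsupervised" split the abstract refers to.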

  1. Changing the personality of a face: Perceived Big Two and Big Five personality factors modeled in real photographs.

    Science.gov (United States)

    Walker, Mirella; Vetter, Thomas

    2016-04-01

    General, spontaneous evaluations of strangers based on their faces have been shown to reflect judgments of these persons' intention and ability to harm. These evaluations can be mapped onto a 2D space defined by the dimensions trustworthiness (intention) and dominance (ability). Here we go beyond general evaluations and focus on more specific personality judgments derived from the Big Two and Big Five personality concepts. In particular, we investigate whether Big Two/Big Five personality judgments can be mapped onto the 2D space defined by the dimensions trustworthiness and dominance. Results indicate that judgments of the Big Two personality dimensions almost perfectly map onto the 2D space. In contrast, at least 3 of the Big Five dimensions (i.e., neuroticism, extraversion, and conscientiousness) go beyond the 2D space, indicating that additional dimensions are necessary to describe more specific face-based personality judgments accurately. Building on this evidence, we model the Big Two/Big Five personality dimensions in real facial photographs. Results from 2 validation studies show that the Big Two/Big Five are perceived reliably across different samples of faces and participants. Moreover, results reveal that participants differentiate reliably between the different Big Two/Big Five dimensions. Importantly, this high level of agreement and differentiation in personality judgments from faces likely creates a subjective reality which may have serious consequences for those being perceived; notably, these consequences ensue because the subjective reality is socially shared, irrespective of the judgments' validity. The methodological approach introduced here might prove useful in various psychological disciplines. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  2. The BigBOSS Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schlegel, D.; Abdalla, F.; Abraham, T.; Ahn, C.; Allende Prieto, C.; Annis, J.; Aubourg, E.; Azzaro, M.; Bailey, S.; Baltay, C.; Baugh, C.; /APC, Paris /Brookhaven /IRFU, Saclay /Marseille, CPPM /Marseille, CPT /Durham U. / /IEU, Seoul /Fermilab /IAA, Granada /IAC, La Laguna

    2011-01-01

    BigBOSS will obtain observational constraints that will bear on three of the four 'science frontier' questions identified by the Astro2010 Cosmology and Fundamental Physics Panel of the Decadal Survey: Why is the universe accelerating? What is dark matter? What are the properties of neutrinos? Indeed, the BigBOSS project was recommended for substantial immediate R and D support by the PASAG report. The second highest ground-based priority from the Astro2010 Decadal Survey was the creation of a funding line within the NSF to support a 'Mid-Scale Innovations' program, and it used BigBOSS as a 'compelling' example for support. This choice was the result of the Decadal Survey's Program Prioritization panels reviewing 29 mid-scale projects and recommending BigBOSS 'very highly'.

  3. Big game hunting practices, meanings, motivations and constraints: a survey of Oregon big game hunters

    Science.gov (United States)

    Suresh K. Shrestha; Robert C. Burns

    2012-01-01

    We conducted a self-administered mail survey in September 2009 with randomly selected Oregon hunters who had purchased big game hunting licenses/tags for the 2008 hunting season. Survey questions explored hunting practices, the meanings of and motivations for big game hunting, the constraints to big game hunting participation, and the effects of age, years of hunting...

  4. Big endothelin-1 and nitric oxide in elderly hypertensive patients with and without obstructive sleep apnea-hypopnea syndrome

    Directory of Open Access Journals (Sweden)

    Iara Felicio Anunciato

    2013-10-01

    BACKGROUND: The role of oxidative stress in elderly hypertensive patients with obstructive sleep apnea-hypopnea syndrome (OSAHS) is unknown. OBJECTIVE: The objective was to evaluate the levels of Big Endothelin-1 (Big ET-1) and Nitric Oxide (NO) in elderly hypertensive patients with and without moderate-to-severe OSAHS. METHODS: The volunteers were hospitalized for 24 hours. We obtained the following data: body mass index (BMI), 24-hour ambulatory blood pressure monitoring (ABPM), and current medication. Arterial blood was collected at 7:00 h and 19:00 h to determine plasma levels of NO and Big ET-1. Pulse oximetry was performed during sleep. Pearson and Spearman correlations and univariate analysis of variance were used for the statistical analysis. RESULTS: We studied 25 subjects with OSAHS (group 1) and 12 without OSAHS (group 2), aged 67.0 ± 6.5 years and 67.8 ± 6.8 years, respectively. No significant differences were observed between the groups in BMI; number of hours of sleep; 24-hour systolic and diastolic BP; waking BP; sleeping BP; or medications used to control BP. No differences were detected in plasma NO and Big ET-1 levels at 19:00 h, but at 7:00 h Big ET-1 levels were higher (p = 0.03). In group 1, a negative correlation was also observed between mean arterial oxyhemoglobin saturation and 24-hour systolic BP (p = 0.03, r = -0.44), and with Big ET-1 (p = 0.04, r = 0.41). CONCLUSIONS: Comparing elderly hypertensive patients with and without OSAHS with similar BP and BMI, higher levels of Big ET-1 were observed after sleep in the OSAHS group. NO levels did not differ between hypertensive patients with or without OSAHS.

  5. Google BigQuery analytics

    CERN Document Server

    Tigani, Jordan

    2014-01-01

    How to effectively use BigQuery, avoid common mistakes, and execute sophisticated queries against large datasets Google BigQuery Analytics is the perfect guide for business and data analysts who want the latest tips on running complex queries and writing code to communicate with the BigQuery API. The book uses real-world examples to demonstrate current best practices and techniques, and also explains and demonstrates streaming ingestion, transformation via Hadoop in Google Compute engine, AppEngine datastore integration, and using GViz with Tableau to generate charts of query results. In addit

  6. Big data for dummies

    CERN Document Server

    Hurwitz, Judith; Halper, Fern; Kaufman, Marcia

    2013-01-01

    Find the right big data solution for your business or organization Big data management is one of the major challenges facing business, industry, and not-for-profit organizations. Data sets such as customer transactions for a mega-retailer, weather patterns monitored by meteorologists, or social network activity can quickly outpace the capacity of traditional data management tools. If you need to develop or manage big data solutions, you'll appreciate how these four experts define, explain, and guide you through this new and often confusing concept. You'll learn what it is, why it m…

  7. Bygningers varmebehov 98, Bv98

    DEFF Research Database (Denmark)

    …Bygningers energibehov (Energy Demand of Buildings) and in SBI-anvisning 189, Småhuse (Small Houses). The program can be used on a PC under Microsoft Windows 95, Windows 98 and Windows NT 4.0 with Service Pack 3 or later. The PC program is a revised and modernised version of the PC program Bygningers Varmebehov, BV95, which SBI published in 1995.

  8. Big Data as Governmentality

    DEFF Research Database (Denmark)

    Flyverbom, Mikkel; Klinkby Madsen, Anders; Rasche, Andreas

    This paper conceptualizes how large-scale data and algorithms condition and reshape knowledge production when addressing international development challenges. The concept of governmentality and four dimensions of an analytics of government are proposed as a theoretical framework to examine how big data is constituted as an aspiration to improve the data and knowledge underpinning development efforts. Based on this framework, we argue that big data’s impact on how relevant problems are governed is enabled by (1) new techniques of visualizing development issues, (2) linking aspects of international development agendas to algorithms that synthesize large-scale data, (3) novel ways of rationalizing knowledge claims that underlie development efforts, and (4) shifts in professional and organizational identities of those concerned with producing and processing data for development. Our discussion…

  9. Autogenous bone versus deproteinised bovine bone matrix in 1-stage lateral sinus floor elevation in the severely atrophied maxilla: a randomised controlled trial.

    Science.gov (United States)

    Merli, Mauro; Moscatelli, Marco; Mariotti, Giorgia; Rotundo, Roberto; Nieri, Michele

    2013-01-01

    To compare 100% deproteinised bovine bone matrix (DBBM) grafts (test group) with 100% autogenous bone (AB) grafts (control group) for lateral maxillary sinus floor elevation in a parallel group, superiority, randomised controlled trial. Patients with 1 to 3 mm of residual bone height below the maxillary sinus were randomised for sinus floor elevation with DBBM and AB grafts and simultaneous implant placement. Randomisation was computer generated with allocation concealment by sealed envelopes and the radiographic examiner was blinded to group assignment. The abutment connection was performed 8 months after surgery and insertion of the provisional prostheses was performed 9 months after surgery. Outcome variables were implant failures, prosthetic failures, complications, chair time, postoperative pain and radiographic bone level 6 months after loading. Forty patients were randomised: 20 (32 implants) to the DBBM group and 20 (27 implants) to the AB group. One patient from the AB group dropped out. Two implant failures occurred in the DBBM group and no implant failure occurred in the AB group (P = 0.4872). All of the planned prostheses could be delivered. One complication occurred in the DBBM group and 2 in the AB group (P = 0.6050). Chair time was shorter for the DBBM group, with a difference of 27.3 minutes (P = 0.0428). Pain difference measured with a visual analogue scale for 6 days post-surgery was 0.2 in favour of the DBBM group (P = 0.6838). The difference in vertical bone height was 0.0 mm (95% CI -1.1, 1.1; P = 0.9703) and the difference in marginal bone level was 0.3 in favour of AB (95% CI -0.3, 0.9; P = 0.3220). No differences apart from chair time were observed when comparing DBBM and AB grafts with simultaneous implant placement in sinus elevation.

  10. Beyond the big five: the Dark Triad and the supernumerary personality inventory.

    Science.gov (United States)

    Veselka, Livia; Schermer, Julie Aitken; Vernon, Philip A

    2011-04-01

    The Dark Triad of personality, comprising Machiavellianism, narcissism, and psychopathy, was investigated in relation to the Supernumerary Personality Inventory (SPI) traits, because both sets of variables are predominantly distinct from the Big Five model of personality. Correlational and principal factor analyses were conducted to assess the relations between the Dark Triad and SPI traits. Multivariate behavioral genetic model-fitting analyses were also conducted to determine the correlated genetic and/or environmental underpinnings of the observed phenotypic correlations. Participants were 358 monozygotic and 98 same-sex dizygotic adult twin pairs from North America. As predicted, results revealed significant correlations between the Dark Triad and most SPI traits, and these correlations were primarily attributable to common genetic and non-shared environmental factors, except in the case of Machiavellianism, where shared environmental effects emerged. Three correlated factors were extracted during joint factor analysis of the Dark Triad and SPI traits, as well as a heritable general factor of personality - results that clarified the structure of the Dark Triad construct. It is concluded that the Dark Triad represents an exploitative and antisocial construct that extends beyond the Big Five model and shares a theoretical space with the SPI traits.

  11. Exploring complex and big data

    Directory of Open Access Journals (Sweden)

    Stefanowski Jerzy

    2017-12-01

    Full Text Available This paper shows how big data analysis opens a range of research and technological problems and calls for new approaches. We start with defining the essential properties of big data and discussing the main types of data involved. We then survey the dedicated solutions for storing and processing big data, including a data lake, virtual integration, and a polystore architecture. Difficulties in managing data quality and provenance are also highlighted. The characteristics of big data imply also specific requirements and challenges for data mining algorithms, which we address as well. The links with related areas, including data streams and deep learning, are discussed. The common theme that naturally emerges from this characterization is complexity. All in all, we consider it to be the truly defining feature of big data (posing particular research and technological challenges, which ultimately seems to be of greater importance than the sheer data volume.

  12. Optimising the changing role of the community pharmacist: a randomised trial of the impact of audit and feedback

    Science.gov (United States)

    Winslade, Nancy; Eguale, Tewodros; Tamblyn, Robyn

    2016-01-01

    Objective To evaluate the impact of comparative performance feedback to community pharmacists on provision of professional services and the quality of patients’ medication use. Design Randomised, controlled, single-blind trial. Setting All 1833 community pharmacies in the Quebec province, Canada. Participants 1814 pharmacies not opting out and with more than 5 dispensings of the target medications during the 6-month baseline were randomised by a 2×2 factorial design to feedback first for hypertension adherence (907 control, 907 intervention) followed by randomisation for asthma adherence (791 control, 807 intervention). 1422 of 1814 pharmacies had complete information available during the follow-up for hypertension intervention (706 intervention, 716 control), and 1301 of 1598 had the follow-up information for asthma (657 intervention, 644 control). Intervention Using provincial billing data to measure performance, mailed comparative feedback reported the pharmacy-level percentage of dispensings to patients non-adherent to antihypertensive medications or overusing asthma rescue inhalers. Primary and secondary outcome measures The number of hypertension/asthma services billed per pharmacy and percentage of dispensings to non-adherent patients over the 12 months post intervention. Results Feedback on the asthma measure led to increased provision of asthma services (control 0.2, intervention 0.4, RR 1.58, 95% CI 1.02 to 2.46). However, this did not translate into reductions in patients’ overuse of rescue inhalers (control 45.5%, intervention 44.6%, RR 0.99, 95% CI 0.98 to 1.01). For non-adherence to antihypertensive medications, feedback resulted in no difference in either provision of hypertension services (control 0.7, intervention 0.8, RR 1.25, 95% CI 0.86 to 1.82) or antihypertensive treatment adherence (control 27.9%, intervention 28.0%, RR 1.0, 95% CI 0.99 to 1.00). Baseline performance did not influence results, and there was no evidence of a cumulative
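    The rate ratios with 95% confidence intervals reported above (e.g. RR 1.58, 95% CI 1.02 to 2.46) are conventionally computed on the log scale with a normal approximation. A minimal sketch using made-up counts, not the trial's billing data:

    ```python
    from math import exp, log, sqrt

    def rate_ratio_ci(a, n1, b, n2, z=1.96):
        """Risk/rate ratio for a events out of n1 vs b events out of n2,
        with a 95% CI from the normal approximation on the log scale."""
        rr = (a / n1) / (b / n2)
        se = sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(RR)
        lo, hi = exp(log(rr) - z * se), exp(log(rr) + z * se)
        return rr, lo, hi

    # Hypothetical counts: 40/100 events in one group vs 20/100 in the other
    rr, lo, hi = rate_ratio_ci(40, 100, 20, 100)
    ```

    With these counts the ratio is 2.0 and the interval excludes 1, so the difference would be called statistically significant at the 5% level.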

  13. Was there a big bang

    International Nuclear Information System (INIS)

    Narlikar, J.

    1981-01-01

    In discussing the viability of the big-bang model of the Universe, the relevant evidence is examined, including the discrepancies in the age of the big-bang Universe, the red shifts of quasars, the microwave background radiation, general theory of relativity aspects such as the change of the gravitational constant with time, and quantum theory considerations. It is felt that the arguments considered show that the big-bang picture is not as soundly established, either theoretically or observationally, as it is usually claimed to be, that the cosmological problem is still wide open, and that alternatives to the standard big-bang picture should be seriously investigated. (U.K.)

  14. BIG DATA-DRIVEN MARKETING: AN ABSTRACT

    OpenAIRE

    Suoniemi, Samppa; Meyer-Waarden, Lars; Munzel, Andreas

    2017-01-01

    Customer information plays a key role in managing successful relationships with valuable customers. Big data customer analytics use (BD use), i.e., the extent to which customer information derived from big data analytics guides marketing decisions, helps firms better meet customer needs for competitive advantage. This study addresses three research questions: What are the key antecedents of big data customer analytics use? How, and to what extent, does big data customer an...

  15. Infant wellbeing at 2 years of age in the Growth Restriction Intervention Trial (GRIT): multicentred randomised controlled trial.

    Science.gov (United States)

    Thornton, J G; Hornbuckle, J; Vail, A; Spiegelhalter, D J; Levene, M

    Although delivery is widely used for preterm babies failing to thrive in utero, the effect of altering delivery timing has never been assessed in a randomised controlled trial. We aimed to compare the effect of delivering early with delaying birth for as long as possible. 548 pregnant women were recruited by 69 hospitals in 13 European countries. Participants had fetal compromise between 24 and 36 weeks, an umbilical-artery doppler waveform recorded, and clinical uncertainty about whether immediate delivery was indicated. Before birth, 588 babies were randomly assigned to immediate delivery (n=296) or delayed delivery until the obstetrician was no longer uncertain (n=292). The main outcome was death or disability at or beyond 2 years of age. Disability was defined as a Griffiths developmental quotient of 70 or less or the presence of motor or perceptual severe disability. Analysis was by intention-to-treat. This trial has been assigned the International Standard Randomised Controlled Trial Number ISRCTN41358726. Primary outcomes were available on 290 (98%) immediate and 283 (97%) deferred deliveries. Overall rate of death or severe disability at 2 years was 55 (19%) of 290 immediate births, and 44 (16%) of 283 delayed births. With adjustment for gestational age and umbilical-artery doppler category, the odds ratio (95% CrI) was 1.1 (0.7-1.8). Most of the observed difference was in disability in babies younger than 31 weeks of gestation at randomisation: 14 (13%) immediate versus five (5%) delayed deliveries. No important differences in the median Griffiths developmental quotient in survivors was seen. The lack of difference in mortality suggests that obstetricians are delivering sick preterm babies at about the correct moment to minimise mortality. However, they could be delivering too early to minimise brain damage. These results do not lend support to the idea that obstetricians can deliver before terminal hypoxaemia to improve brain development.

  16. Big Data Analytics in Medicine and Healthcare.

    Science.gov (United States)

    Ristevski, Blagoj; Chen, Ming

    2018-05-10

    This paper surveys big data with highlighting the big data analytics in medicine and healthcare. Big data characteristics: value, volume, velocity, variety, veracity and variability are described. Big data analytics in medicine and healthcare covers integration and analysis of large amount of complex heterogeneous data such as various - omics data (genomics, epigenomics, transcriptomics, proteomics, metabolomics, interactomics, pharmacogenomics, diseasomics), biomedical data and electronic health records data. We underline the challenging issues about big data privacy and security. Regarding big data characteristics, some directions of using suitable and promising open-source distributed data processing software platform are given.

  17. A numerical simulation of pre-big bang cosmology

    CERN Document Server

    Maharana, J P; Veneziano, Gabriele

    1998-01-01

    We analyse numerically the onset of pre-big bang inflation in an inhomogeneous, spherically symmetric Universe. Adding a small dilatonic perturbation to a trivial (Milne) background, we find that suitable regions of space undergo dilaton-driven inflation and quickly become spatially flat ($\\Omega \\to 1$). Numerical calculations are pushed close enough to the big bang singularity to allow cross checks against previously proposed analytic asymptotic solutions.

  18. Surface urban heat island across 419 global big cities.

    Science.gov (United States)

    Peng, Shushi; Piao, Shilong; Ciais, Philippe; Friedlingstein, Pierre; Ottle, Catherine; Bréon, François-Marie; Nan, Huijuan; Zhou, Liming; Myneni, Ranga B

    2012-01-17

    Urban heat island is among the most evident aspects of human impacts on the earth system. Here we assess the diurnal and seasonal variation of surface urban heat island intensity (SUHII) defined as the surface temperature difference between urban area and suburban area measured from the MODIS. Differences in SUHII are analyzed across 419 global big cities, and we assess several potential biophysical and socio-economic driving factors. Across the big cities, we show that the average annual daytime SUHII (1.5 ± 1.2 °C) is higher than the annual nighttime SUHII (1.1 ± 0.5 °C) (P < 0.001). But no correlation is found between daytime and nighttime SUHII across big cities (P = 0.84), suggesting different driving mechanisms between day and night. The distribution of nighttime SUHII correlates positively with the difference in albedo and nighttime light between urban area and suburban area, while the distribution of daytime SUHII correlates negatively across cities with the difference of vegetation cover and activity between urban and suburban areas. Our results emphasize the key role of vegetation feedbacks in attenuating SUHII of big cities during the day, in particular during the growing season, further highlighting that increasing urban vegetation cover could be one effective way to mitigate the urban heat island effect.
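    The SUHII used above is simply the mean urban land-surface temperature minus the mean suburban one, computed separately for day and night. A sketch with illustrative temperature values, not actual MODIS retrievals:

    ```python
    from statistics import mean

    def suhii(urban_lst, suburban_lst):
        """Surface urban heat island intensity (SUHII): mean urban
        land-surface temperature minus mean suburban LST, in degC."""
        return mean(urban_lst) - mean(suburban_lst)

    # Illustrative daytime and nighttime LST samples for one city (degC)
    day = suhii([34.1, 35.0, 33.5], [32.2, 33.1, 32.5])    # daytime SUHII
    night = suhii([24.8, 25.2, 25.0], [24.0, 24.1, 23.9])  # nighttime SUHII
    ```

    Averaging such per-city values over 419 cities gives the population statistics quoted in the abstract (1.5 ± 1.2 °C by day, 1.1 ± 0.5 °C by night).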

  19. The Big Bang Singularity

    Science.gov (United States)

    Ling, Eric

    The big bang theory is a model of the universe which makes the striking prediction that the universe began a finite amount of time in the past at the so called "Big Bang singularity." We explore the physical and mathematical justification of this surprising result. After laying down the framework of the universe as a spacetime manifold, we combine physical observations with global symmetrical assumptions to deduce the FRW cosmological models which predict a big bang singularity. Next we prove a couple theorems due to Stephen Hawking which show that the big bang singularity exists even if one removes the global symmetrical assumptions. Lastly, we investigate the conditions one needs to impose on a spacetime if one wishes to avoid a singularity. The ideas and concepts used here to study spacetimes are similar to those used to study Riemannian manifolds, therefore we compare and contrast the two geometries throughout.
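    The FRW models mentioned above predict the singularity through the behaviour of the scale factor a(t). In standard textbook notation (not taken from the thesis itself), the metric and the Friedmann equation are:

    ```latex
    % FRW metric for a homogeneous, isotropic universe
    ds^2 = -dt^2 + a(t)^2 \left[ \frac{dr^2}{1 - kr^2} + r^2 \, d\Omega^2 \right]

    % Friedmann equation governing the scale factor a(t)
    \left( \frac{\dot{a}}{a} \right)^2 = \frac{8\pi G}{3}\,\rho - \frac{k}{a^2}
    ```

    For a matter-dominated universe, \(\rho \propto a^{-3}\) gives \(a(t) \propto t^{2/3}\), so \(a \to 0\) as \(t \to 0^{+}\): the solution reaches the big bang singularity a finite time in the past, which is the symmetric case that Hawking's theorems then extend beyond.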

  20. Reframing Open Big Data

    DEFF Research Database (Denmark)

    Marton, Attila; Avital, Michel; Jensen, Tina Blegind

    2013-01-01

    Recent developments in the techniques and technologies of collecting, sharing and analysing data are challenging the field of information systems (IS) research, let alone the boundaries of organizations and the established practices of decision-making. Coined ‘open data’ and ‘big data’, these developments introduce an unprecedented level of societal and organizational engagement with the potential of computational data to generate new insights and information. Based on the commonalities shared by open data and big data, we develop a research framework that we refer to as open big data (OBD) by employing the dimensions of ‘order’ and ‘relationality’. We argue that these dimensions offer a viable approach for IS research on open and big data because they address one of the core value propositions of IS; i.e. how to support organizing with computational data. We contrast these dimensions with two…

  1. Falls prevention and balance rehabilitation in multiple sclerosis: a bi-centre randomised controlled trial.

    Science.gov (United States)

    Cattaneo, Davide; Rasova, Kamila; Gervasoni, Elisa; Dobrovodská, Gabriela; Montesano, Angelo; Jonsdottir, Johanna

    2018-03-01

    People with Multiple Sclerosis (PwMS) have a high incidence of accidental falls that have a potentially detrimental effect on their daily life participation. The effect of balance-specific rehabilitation on clinical balance measures and frequency of falls in PwMS was studied. A bi-centre randomised rater-blinded controlled trial. Participants in both groups received 20 treatment sessions. Participants in the intervention group received treatment aimed at improving balance and mobility. Participants in the control group received treatments to reduce limitations at activity and body function level. Primary measures were frequency of fallers (>1 fall in two months) and responders (>3 points improvement) on the Berg Balance Scale (BBS). Data were analysed according to an intention-to-treat approach. One hundred and nineteen participants were randomised. Following treatment the frequency of fallers was 22% in the intervention group and 23% in the control group, odds ratio (OR) and (confidence limits): 1.05 (0.41 to 2.77). Responders on the BBS were 28% in the intervention group and 33% in the control group, OR = 0.75 (0.30 to 1.91). At follow-up, ORs for fallers and responders on the BBS were 0.98 (0.48 to 2.01) and 0.79 (0.26 to 2.42), respectively. Twenty sessions 2-3 times/week of balance-specific rehabilitation did not reduce fall frequency nor improve balance, suggesting the need for more frequent and challenging interventions. Implications for Rehabilitation: Programs for balance rehabilitation can improve balance but their effects on fall prevention are unclear. Twenty treatment sessions 2-3 times per week did not reduce the frequency of falls in MS. Comparison with similar studies suggests that a higher intensity of practice of highly challenging balance activities appears to be critical to maximising effectiveness.

  2. Effect of candesartan on prevention (DIRECT-Prevent 1) and progression (DIRECT-Protect 1) of retinopathy in type 1 diabetes: randomised, placebo-controlled trials

    DEFF Research Database (Denmark)

    Chaturvedi, N.; Porta, M.; Klein, R.

    2008-01-01

    BACKGROUND: Results of previous studies suggest that renin-angiotensin system blockers might reduce the burden of diabetic retinopathy. We therefore designed the DIabetic REtinopathy Candesartan Trials (DIRECT) Programme to assess whether candesartan could reduce the incidence and progression of retinopathy in type 1 diabetes. METHODS: Two randomised, double-blind, parallel-design, placebo-controlled trials were done in 309 centres worldwide. Participants with normotensive, normoalbuminuric type 1 diabetes without retinopathy were recruited to the DIRECT-Prevent 1 trial and those with existing retinopathy were recruited to DIRECT-Protect 1, and were assigned to candesartan 16 mg once a day or matching placebo. After 1 month, the dose was doubled to 32 mg. Investigators and participants were unaware of the treatment allocation status. The primary endpoints were incidence and progression…

  3. Incidence, prevalence, and mortality of insulin-dependent (type 1) diabetes mellitus in Lithuanian children during 1983-98

    DEFF Research Database (Denmark)

    Urbonaite, Brone; Zalinkevicius, Rimas; Green, Anders

    2002-01-01

    AIMS/HYPOTHESIS: Our purpose is to analyze interrelations of the incidence, prevalence and mortality of childhood-onset insulin-dependent diabetes mellitus (type 1) in Lithuania. METHODS: Incidence and prevalence rates were based on the national type 1 diabetes register during 1983-98. The cohort...

  4. Medical big data: promise and challenges.

    Science.gov (United States)

    Lee, Choong Ho; Yoon, Hyung-Jin

    2017-03-01

    The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes the aspects of data analysis, such as hypothesis-generating, rather than hypothesis-testing. Big data focuses on temporal stability of the association, rather than on causal relationship and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, curse of dimensionality, and bias control, and share the inherent limitations of observation study, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcome and reduce waste in areas including nephrology.
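    Of the data-mining methods listed above, clustering is the easiest to illustrate compactly. A toy one-dimensional k-means sketch, with made-up values and hand-chosen initial centres rather than any real medical pipeline:

    ```python
    def kmeans_1d(values, centers, max_iter=100):
        """Toy 1-D k-means: assign each point to the nearest centre,
        then move each centre to the mean of its assigned points."""
        clusters = [[] for _ in centers]
        for _ in range(max_iter):
            clusters = [[] for _ in centers]
            for v in values:
                # index of the nearest centre
                i = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
                clusters[i].append(v)
            new = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
            if new == centers:      # converged: assignments are stable
                break
            centers = new
        return centers, clusters

    # Made-up measurements with two obvious groups; initial centres by hand
    centers, clusters = kmeans_1d([1, 2, 3, 10, 11, 12], [1.0, 12.0])
    ```

    In practice a library implementation with multiple random restarts would be used, but the assign-then-update loop above is the whole algorithm.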

  5. Medical big data: promise and challenges

    Directory of Open Access Journals (Sweden)

    Choong Ho Lee

    2017-03-01

    Full Text Available The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes the aspects of data analysis, such as hypothesis-generating, rather than hypothesis-testing. Big data focuses on temporal stability of the association, rather than on causal relationship and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, curse of dimensionality, and bias control, and share the inherent limitations of observation study, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcome and reduce waste in areas including nephrology.

  6. Promoting public awareness of randomised clinical trials using the media: the 'Get Randomised' campaign.

    Science.gov (United States)

    Mackenzie, Isla S; Wei, Li; Rutherford, Daniel; Findlay, Evelyn A; Saywood, Wendy; Campbell, Marion K; Macdonald, Thomas M

    2010-02-01

    WHAT IS ALREADY KNOWN ABOUT THIS SUBJECT * Recruitment is key to the success of clinical trials. * Many clinical trials fail to achieve adequate recruitment. * Public understanding and engagement in clinical research could be improved. WHAT THIS STUDY ADDS * 'Get Randomised' is the first campaign of its kind in the UK. * It is possible to improve public awareness of clinical research using the media. * Further work is needed to determine whether improved public awareness leads to increased participation in clinical research in the future. AIM To increase public awareness and understanding of clinical research in Scotland. METHODS A generic media campaign to raise public awareness of clinical research was launched in 2008. The 'Get Randomised' campaign was a Scotland-wide initiative led by the University of Dundee in collaboration with other Scottish universities. Television, radio and newspaper advertising showed leading clinical researchers, general practitioners and patients informing the public about the importance of randomised clinical trials (RCTs). 'Get Randomised' was the central message and interested individuals were directed to the http://www.getrandomised.org website for more information. To assess the impact of the campaign, cross-sectional surveys were conducted in representative samples of 1040 adults in Scotland prior to campaign launch and again 6 months later. RESULTS There was an improvement in public awareness of clinical trials following the campaign; 56.7% [95% confidence interval (CI) 51.8, 61.6] of the sample recalled seeing or hearing advertising about RCTs following the campaign compared with 14.8% (10.8, 18.9) prior to the campaign launch (difference = 41.4%; 95% CI for difference 35.6, 48.3; P < 0.001). Of those who recalled the advertising, 49% felt that the main message was that people should take part more in medical research. However, on whether they would personally take part in a clinical trial if asked, there was little difference in response following the campaign…

  7. A randomised controlled trial of a cognitive behavioural intervention for women who have menopausal symptoms following breast cancer treatment (MENOS 1: Trial protocol

    Directory of Open Access Journals (Sweden)

    Hellier Jennifer

    2011-01-01

    Full Text Available Abstract Background This trial aims to evaluate the effectiveness of a group cognitive behavioural intervention to alleviate menopausal symptoms (hot flushes and night sweats in women who have had breast cancer treatment. Hot flushes and night sweats are highly prevalent but challenging to treat in this population. Cognitive behaviour therapy has been found to reduce these symptoms in well women and results of an exploratory trial suggest that it might be effective for breast cancer patients. Two hypotheses are tested: Compared to usual care, group cognitive behavioural therapy will: 1. Significantly reduce the problem rating and frequency of hot flushes and nights sweats after six weeks of treatment and at six months post-randomisation. 2. Improve mood and quality of life after six weeks of treatment and at six months post-randomisation. Methods/Design Ninety-six women who have completed their main treatment for breast cancer and who have been experiencing problematic hot flushes and night sweats for over two months are recruited into the trial from oncology and breast clinics in South East London. They are randomised to either six weekly group cognitive behavioural therapy (Group CBT sessions or to usual care. Group CBT includes information and discussion about hot flushes and night sweats in the context of breast cancer, monitoring and modifying precipitants, relaxation and paced respiration, stress management, cognitive therapy for unhelpful thoughts and beliefs, managing sleep and night sweats and maintaining changes. Prior to randomisation women attend a clinical interview, undergo 24-hour sternal skin conductance monitoring, and complete questionnaire measures of hot flushes and night sweats, mood, quality of life, hot flush beliefs and behaviours, optimism and somatic amplification. 
Post-treatment measures (sternal skin conductance and questionnaires) are collected six to eight weeks later and follow-up measures (questionnaires and a use…

  8. What is beyond the big five?

    Science.gov (United States)

    Saucier, G; Goldberg, L R

    1998-08-01

    Previous investigators have proposed that various kinds of person-descriptive content--such as differences in attitudes or values, in sheer evaluation, in attractiveness, or in height and girth--are not adequately captured by the Big Five Model. We report on a rather exhaustive search for reliable sources of Big Five-independent variation in data from person-descriptive adjectives. Fifty-three candidate clusters were developed in a college sample using diverse approaches and sources. In a nonstudent adult sample, clusters were evaluated with respect to a minimax criterion: minimum multiple correlation with factors from Big Five markers and maximum reliability. The most clearly Big Five-independent clusters referred to Height, Girth, Religiousness, Employment Status, Youthfulness and Negative Valence (or low-base-rate attributes). Clusters referring to Fashionableness, Sensuality/Seductiveness, Beauty, Masculinity, Frugality, Humor, Wealth, Prejudice, Folksiness, Cunning, and Luck appeared to be potentially beyond the Big Five, although each of these clusters demonstrated Big Five multiple correlations of .30 to .45, and at least one correlation of .20 and over with a Big Five factor. Of all these content areas, Religiousness, Negative Valence, and the various aspects of Attractiveness were found to be represented by a substantial number of distinct, common adjectives. Results suggest directions for supplementing the Big Five when one wishes to extend variable selection outside the domain of personality traits as conventionally defined.

  9. Big Data Analytics and Its Applications

    Directory of Open Access Journals (Sweden)

    Mashooque A. Memon

    2017-10-01

    Full Text Available The term Big Data was coined to refer to the extensive volumes of data that cannot be managed by traditional data-handling methods or techniques. The field of Big Data plays an indispensable role in many areas, such as agriculture, banking, data mining, education, chemistry, finance, cloud computing, marketing, health care and stocks. Big data analytics is the process of examining big data to reveal hidden patterns, unseen correlations and other useful information that can be exploited to make better decisions. Interest in big data has been perpetually expanding because of its fast growth and because it covers so many areas of application. The Apache Hadoop open-source technology, written in Java and running on the Linux operating system, was used. The primary contribution of this work is to present an effective and free solution for big data applications in a distributed environment, together with its advantages and an indication of its ease of use. Looking ahead, there is a need for an analytical review of new developments in big data technology. Healthcare is one of the biggest concerns of the world. Big data in healthcare refers to electronic health data sets related to patient health care and well-being. Data in the healthcare area are growing beyond the management capacity of healthcare organisations and are expected to increase significantly in the coming years.

  10. Phantom cosmology without Big Rip singularity

    Energy Technology Data Exchange (ETDEWEB)

    Astashenok, Artyom V. [Baltic Federal University of I. Kant, Department of Theoretical Physics, 236041, 14, Nevsky st., Kaliningrad (Russian Federation); Nojiri, Shin' ichi, E-mail: nojiri@phys.nagoya-u.ac.jp [Department of Physics, Nagoya University, Nagoya 464-8602 (Japan); Kobayashi-Maskawa Institute for the Origin of Particles and the Universe, Nagoya University, Nagoya 464-8602 (Japan); Odintsov, Sergei D. [Department of Physics, Nagoya University, Nagoya 464-8602 (Japan); Institucio Catalana de Recerca i Estudis Avancats - ICREA and Institut de Ciencies de l' Espai (IEEC-CSIC), Campus UAB, Facultat de Ciencies, Torre C5-Par-2a pl, E-08193 Bellaterra (Barcelona) (Spain); Tomsk State Pedagogical University, Tomsk (Russian Federation); Yurov, Artyom V. [Baltic Federal University of I. Kant, Department of Theoretical Physics, 236041, 14, Nevsky st., Kaliningrad (Russian Federation)

    2012-03-23

    We construct phantom energy models with the equation of state parameter w less than -1, w<-1, in which no finite-time future singularity occurs. Such models can be divided into two classes: (i) the energy density increases with time ('phantom energy' without a 'Big Rip' singularity) and (ii) the energy density tends to a constant value with time ('cosmological constant' with asymptotically de Sitter evolution). The disintegration of bound structures is confirmed in Little Rip cosmology. Surprisingly, we find that such disintegration (for example, of the Sun-Earth system) may occur even in an asymptotically de Sitter phantom universe consistent with observational data. We also demonstrate that non-singular phantom models admit wormhole solutions, as well as the possibility of a Big Trip via wormholes.
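
    The two non-singular classes can be illustrated with a toy integration of the continuity equation dρ/dN = -3(1+w)ρ in e-folds N = ln a. The specific w(ρ) forms below are illustrative choices of my own, not the paper's exact models: one keeps w < -1 while the density grows only linearly in N (no finite-time Big Rip), the other drives w → -1 as the density approaches a constant (asymptotically de Sitter).

```python
def evolve_density(w_of_rho, rho0, n_efolds=50.0, steps=5000):
    """Euler-integrate the continuity equation d(rho)/dN = -3*(1 + w(rho))*rho
    over N = ln(a). Toy integration for illustration only."""
    rho, dN = rho0, n_efolds / steps
    for _ in range(steps):
        rho += -3.0 * (1.0 + w_of_rho(rho)) * rho * dN
    return rho

# Class (i): w < -1 always, but rho grows only linearly in N,
# so no finite-time ("Big Rip") singularity -- Little Rip-like behaviour
rho_little_rip = evolve_density(lambda rho: -1.0 - 0.1 / rho, rho0=1.0)

# Class (ii): w -> -1 as rho approaches rho_inf -- asymptotically de Sitter
rho_inf = 2.0
rho_de_sitter = evolve_density(
    lambda rho: -1.0 - 0.05 * (rho_inf - rho) / rho, rho0=1.0)
```

After 50 e-folds the first density has grown steadily but remains finite, while the second has saturated just below rho_inf.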

  11. Phantom cosmology without Big Rip singularity

    International Nuclear Information System (INIS)

    Astashenok, Artyom V.; Nojiri, Shin'ichi; Odintsov, Sergei D.; Yurov, Artyom V.

    2012-01-01

    We construct phantom energy models with the equation of state parameter w less than -1, w<-1, in which no finite-time future singularity occurs. Such models can be divided into two classes: (i) the energy density increases with time (“phantom energy” without a “Big Rip” singularity) and (ii) the energy density tends to a constant value with time (“cosmological constant” with asymptotically de Sitter evolution). The disintegration of bound structures is confirmed in Little Rip cosmology. Surprisingly, we find that such disintegration (for example, of the Sun-Earth system) may occur even in an asymptotically de Sitter phantom universe consistent with observational data. We also demonstrate that non-singular phantom models admit wormhole solutions, as well as the possibility of a Big Trip via wormholes.

  12. Exploring the role of CT densitometry: a randomised study of augmentation therapy in alpha1-antitrypsin deficiency

    DEFF Research Database (Denmark)

    Dirksen, A; Piitulainen, E; Parr, D G

    2009-01-01

    for the assessment of the therapeutic effect of augmentation therapy in subjects with alpha(1)-antitrypsin (alpha(1)-AT) deficiency. In total, 77 subjects (protease inhibitor type Z) were randomised to weekly infusions of 60 mg x kg(-1) human alpha(1)-AT (Prolastin) or placebo for 2-2.5 yrs. The primary end...... was unaltered by treatment, but a reduction in exacerbation severity was observed. In patients with alpha(1)-AT deficiency, CT is a more sensitive outcome measure of emphysema-modifying therapy than physiology and health status, and demonstrates a trend of treatment benefit from alpha(1)-AT augmentation....

  13. Measuring the Promise of Big Data Syllabi

    Science.gov (United States)

    Friedman, Alon

    2018-01-01

    Growing interest in Big Data is leading industries, academics and governments to accelerate Big Data research. However, how teachers should teach Big Data has not been fully examined. This article suggests criteria for redesigning Big Data syllabi in public and private degree-awarding higher education establishments. The author conducted a survey…

  14. Significance of Supply Logistics in Big Cities

    Directory of Open Access Journals (Sweden)

    Mario Šafran

    2012-10-01

    Full Text Available The paper considers the concept and importance of supply logistics as an element in improving storage, supply and transport of goods in big cities. There is always room for improvements in this segment of economic activities, and therefore continuous optimisation of the cargo flows from the manufacturer to the end user is important. Due to complex requirements in the cargo supply and the "spoiled" end users, modern cities represent great difficulties and a big challenge for the supply organisers. The consumers' needs in big cities have developed over the recent years in such a way that they require supply of goods several times a day at precisely determined times (orders are received by e-mail, and the information transfer is therefore instantaneous). In order to successfully meet the consumers' needs in advanced economic systems, advanced methods of goods supply have been developed and improved, such as "just in time", "door-to-door", and "desk-to-desk". Regular operation of these systems requires supply logistics which includes the total throughput of materials, from receiving the raw materials or reproduction material to the delivery of final products to the end users.

  15. 77 FR 27245 - Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN

    Science.gov (United States)

    2012-05-09

    ... DEPARTMENT OF THE INTERIOR Fish and Wildlife Service [FWS-R3-R-2012-N069; FXRS1265030000S3-123-FF03R06000] Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN AGENCY: Fish and... plan (CCP) and environmental assessment (EA) for Big Stone National Wildlife Refuge (Refuge, NWR) for...

  16. Sandwich-type enzyme immunoassay for big endothelin-1 in plasma: concentrations in healthy human subjects unaffected by sex or posture.

    Science.gov (United States)

    Aubin, P; Le Brun, G; Moldovan, F; Villette, J M; Créminon, C; Dumas, J; Homyrda, L; Soliman, H; Azizi, M; Fiet, J

    1997-01-01

    A sandwich-type enzyme immunoassay has been developed for measuring human big endothelin-1 (big ET-1) in human plasma and supernatant fluids from human cell cultures. Big ET-1 is the precursor of endothelin 1 (ET-1), the most potent vasoconstrictor known. A rabbit antibody raised against the big ET-1 COOH-terminus fragment was used as an immobilized antibody (anti-P16). The Fab' fragment of a monoclonal antibody (1B3) raised against the ET-1 loop fragment was used as the enzyme-labeled antibody, after being coupled to acetylcholinesterase. The lowest detectable value in the assay was 1.2 pg/mL (0.12 pg/well). The assay was highly specific for big ET-1, demonstrating no cross-reactivity with ET-1, big endothelin-2 (big ET-2), and big endothelin-3 (big ET-3). We used this assay to evaluate the effect of two different postural positions (supine and standing) on plasma big ET-1 concentrations in 11 male and 11 female healthy subjects. Data analysis revealed that neither sex nor body position influenced plasma big ET-1 concentrations. This assay should thus permit the detection of possible variations in plasma concentrations of big ET-1 in certain pathologies and, in association with ET-1 assay, make possible in vitro study of endothelin-converting enzyme activity in cell models. Such studies could clarify the physiological and clinical roles of this family of peptides.
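
    The reported per-well and per-millilitre detection limits (0.12 pg/well vs 1.2 pg/mL) are consistent with a 100 µL sample per well, which the sketch below assumes; the conversion function is a hypothetical helper, not part of the published assay protocol.

```python
def well_limit_to_concentration(mass_pg_per_well, well_volume_ml=0.1):
    """Convert a per-well detection limit to a concentration (pg/mL).
    The 100 uL (0.1 mL) well volume is inferred from the reported figures."""
    return mass_pg_per_well / well_volume_ml

limit_pg_per_ml = well_limit_to_concentration(0.12)  # ~1.2 pg/mL, as reported
```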

  17. Big data and educational research

    OpenAIRE

    Beneito-Montagut, Roser

    2017-01-01

    Big data and data analytics offer the promise to enhance teaching and learning, improve educational research and advance education governance. This chapter aims to contribute to the conceptual and methodological understanding of big data and analytics within educational research. It describes the opportunities and challenges that big data and analytics bring to education, and critically explores the perils of applying a data-driven approach to education. Despite the claimed value of the...

  18. ENC 98

    International Nuclear Information System (INIS)

    1998-01-01

    This press dossier document reports on the main points presented by the Framatome group at the ENC'98 colloquium which took place in Nice (France) on September 25 1998. The summary comprises 4 main parts: the 1997-98 highlights (nuclear realizations in France, China and Turkey, nuclear services, nuclear fuels), the international activities in the nuclear domain (USA, China, Eastern Europe), the research and development activities (the French-German joint EPR project, the high temperature reactor (HTR) project), and the service and nuclear fuel activities (maintenance, improvements, control and expertise). (J.S.)

  19. Geohydrology of Big Bear Valley, California: phase 1--geologic framework, recharge, and preliminary assessment of the source and age of groundwater

    Science.gov (United States)

    Flint, Lorraine E.; Brandt, Justin; Christensen, Allen H.; Flint, Alan L.; Hevesi, Joseph A.; Jachens, Robert; Kulongoski, Justin T.; Martin, Peter; Sneed, Michelle

    2012-01-01

    The Big Bear Valley, located in the San Bernardino Mountains of southern California, has increased in population in recent years. Most of the water supply for the area is pumped from the alluvial deposits that form the Big Bear Valley groundwater basin. This study was conducted to better understand the thickness and structure of the groundwater basin in order to estimate the quantity and distribution of natural recharge to Big Bear Valley. A gravity survey was used to estimate the thickness of the alluvial deposits that form the Big Bear Valley groundwater basin. This determined that the alluvial deposits reach a maximum thickness of 1,500 to 2,000 feet beneath the center of Big Bear Lake and the area between Big Bear and Baldwin Lakes, and decrease to less than 500 feet thick beneath the eastern end of Big Bear Lake. Interferometric Synthetic Aperture Radar (InSAR) was used to measure pumping-induced land subsidence and to locate structures, such as faults, that could affect groundwater movement. The measurements indicated small amounts of land deformation (uplift and subsidence) in the area between Big Bear Lake and Baldwin Lake, the area near the city of Big Bear Lake, and the area near Sugarloaf, California. Both the gravity and InSAR measurements indicated the possible presence of subsurface faults in subbasins between Big Bear and Baldwin Lakes, but additional data are required for confirmation. The distribution and quantity of groundwater recharge in the area were evaluated by using a regional water-balance model (Basin Characterization Model, or BCM) and a daily rainfall-runoff model (INFILv3). The BCM calculated spatially distributed potential recharge in the study area of approximately 12,700 acre-feet per year (acre-ft/yr) of potential in-place recharge and 30,800 acre-ft/yr of potential runoff. 
Using the assumption that only 10 percent of the runoff becomes recharge, this approach indicated there is approximately 15,800 acre-ft/yr of total recharge in
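
    The total-recharge figure can be checked directly from the numbers above; the 10 percent runoff-to-recharge fraction is the report's stated assumption, and the small gap between the computed 15,780 and the reported ~15,800 acre-ft/yr is rounding.

```python
def total_recharge_af(in_place_af, runoff_af, runoff_fraction=0.10):
    """Total annual recharge (acre-ft/yr): in-place recharge plus the
    fraction of runoff assumed to infiltrate."""
    return in_place_af + runoff_fraction * runoff_af

total = total_recharge_af(12_700, 30_800)  # 15,780 acre-ft/yr (~15,800 reported)
```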

  20. Thick-Big Descriptions

    DEFF Research Database (Denmark)

    Lai, Signe Sophus

    The paper discusses the rewards and challenges of employing commercial audience measurements data – gathered by media industries for profitmaking purposes – in ethnographic research on the Internet in everyday life. It questions claims to the objectivity of big data (Anderson 2008), the assumption...... communication systems, language and behavior appear as texts, outputs, and discourses (data to be ‘found’) – big data then documents things that in earlier research required interviews and observations (data to be ‘made’) (Jensen 2014). However, web-measurement enterprises build audiences according...... to a commercial logic (boyd & Crawford 2011) and is as such directed by motives that call for specific types of sellable user data and specific segmentation strategies. In combining big data and ‘thick descriptions’ (Geertz 1973) scholars need to question how ethnographic fieldwork might map the ‘data not seen...

  1. MapFactory - Towards a mapping design pattern for big geospatial data

    Science.gov (United States)

    Rautenbach, Victoria; Coetzee, Serena

    2018-05-01

    With big geospatial data emerging, cartographers and geographic information scientists have to find new ways of dealing with the volume, variety, velocity, and veracity (4Vs) of the data. This requires the development of tools that allow processing, filtering, analysing, and visualising of big data through multidisciplinary collaboration. In this paper, we present the MapFactory design pattern that will be used for the creation of different maps according to the (input) design specification for big geospatial data. The design specification is based on elements from ISO19115-1:2014 Geographic information - Metadata - Part 1: Fundamentals that would guide the design and development of the map or set of maps to be produced. The results of the exploratory research suggest that the MapFactory design pattern will help with software reuse and communication. The MapFactory design pattern will aid software developers to build the tools that are required to automate map making with big geospatial data. The resulting maps would assist cartographers and others to make sense of big geospatial data.
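
    The paper does not publish code, but a factory pattern of the kind it describes might be sketched as follows. All class and key names here are hypothetical illustrations, with the design specification reduced to a plain dictionary (the paper derives the real specification from ISO 19115-1:2014 metadata elements).

```python
from abc import ABC, abstractmethod

class Map(ABC):
    """Abstract product: any map the factory can produce."""
    @abstractmethod
    def render(self, data):
        ...

class ChoroplethMap(Map):
    def render(self, data):
        return f"choropleth of {len(data)} features"

class HeatMap(Map):
    def render(self, data):
        return f"heat map of {len(data)} points"

class MapFactory:
    """Create a map from a design specification (here just a dict)."""
    _registry = {"choropleth": ChoroplethMap, "heat": HeatMap}

    @classmethod
    def create(cls, spec):
        try:
            return cls._registry[spec["map_type"]]()
        except KeyError:
            raise ValueError(f"no map type registered for spec {spec!r}")

rendered = MapFactory.create({"map_type": "heat"}).render([(0, 0), (1, 1)])
```

New map types are added by registering another product class, which is the software-reuse benefit the authors attribute to the pattern.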

  2. Big Data's Role in Precision Public Health.

    Science.gov (United States)

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts.

  3. Big Data, indispensable today

    Directory of Open Access Journals (Sweden)

    Radu-Ioan ENACHE

    2015-10-01

    Full Text Available Big data is and will be used more in the future as a tool for everything that happens both online and offline. Of course, Big Data is found especially in the online medium, offering many advantages and being a real help for all consumers. In this paper we discuss Big Data as a plus in developing new applications, by gathering useful information about users and their behaviour. We also present the key aspects of real-time monitoring and the architecture principles of this technology. The most important benefit discussed in this paper is presented in the cloud section.

  4. Concurrence of big data analytics and healthcare: A systematic review.

    Science.gov (United States)

    Mehta, Nishita; Pandit, Anil

    2018-06-01

    The application of Big Data analytics in healthcare has immense potential for improving the quality of care, reducing waste and error, and reducing the cost of care. This systematic review of literature aims to determine the scope of Big Data analytics in healthcare including its applications and challenges in its adoption in healthcare. It also intends to identify the strategies to overcome the challenges. A systematic search of the articles was carried out on five major scientific databases: ScienceDirect, PubMed, Emerald, IEEE Xplore and Taylor & Francis. The articles on Big Data analytics in healthcare published in English language literature from January 2013 to January 2018 were considered. Descriptive articles and usability studies of Big Data analytics in healthcare and medicine were selected. Two reviewers independently extracted information on definitions of Big Data analytics; sources and applications of Big Data analytics in healthcare; challenges and strategies to overcome the challenges in healthcare. A total of 58 articles were selected as per the inclusion criteria and analyzed. The analyses of these articles found that: (1) researchers lack consensus about the operational definition of Big Data in healthcare; (2) Big Data in healthcare comes from internal sources within hospitals or clinics as well as external sources including government, laboratories, pharma companies, data aggregators, medical journals, etc.; (3) natural language processing (NLP) is the most widely used Big Data analytical technique in healthcare, and most of the processing tools used for analytics are based on Hadoop; (4) Big Data analytics finds application in clinical decision support, optimization of clinical operations and reduction of the cost of care; (5) the major challenge in the adoption of Big Data analytics is the non-availability of evidence of its practical benefits in healthcare.
This review study unveils that there is a paucity of information on evidence of real-world use of

  5. Antigravity and the big crunch/big bang transition

    Science.gov (United States)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-08-01

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  6. Antigravity and the big crunch/big bang transition

    Energy Technology Data Exchange (ETDEWEB)

    Bars, Itzhak [Department of Physics and Astronomy, University of Southern California, Los Angeles, CA 90089-2535 (United States); Chen, Shih-Hung [Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5 (Canada); Department of Physics and School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287-1404 (United States); Steinhardt, Paul J., E-mail: steinh@princeton.edu [Department of Physics and Princeton Center for Theoretical Physics, Princeton University, Princeton, NJ 08544 (United States); Turok, Neil [Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5 (Canada)

    2012-08-29

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  7. Antigravity and the big crunch/big bang transition

    International Nuclear Information System (INIS)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-01-01

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  8. Big data: een zoektocht naar instituties

    NARCIS (Netherlands)

    van der Voort, H.G.; Crompvoets, J

    2016-01-01

    Big data is a well-known phenomenon, even a buzzword nowadays. It refers to an abundance of data and new possibilities to process and use them. Big data is subject of many publications. Some pay attention to the many possibilities of big data, others warn us for their consequences. This special

  9. Data, Data, Data : Big, Linked & Open

    NARCIS (Netherlands)

    Folmer, E.J.A.; Krukkert, D.; Eckartz, S.M.

    2013-01-01

    The entire business and IT world is currently talking about Big Data, a trend that overtook Cloud Computing in mid-2013 (based on Google Trends). Policymakers are also actively engaged with Big Data. Neelie Kroes, vice-president of the European Commission, speaks of the 'Big Data

  10. THE FASTEST OODA LOOP: THE IMPLICATIONS OF BIG DATA FOR AIR POWER

    Science.gov (United States)

    2016-06-01

    …need for a human interpreter. Until the rise of Big Data, automated translation only had a "small" library of several million words to pull from… — Air Command and Staff College, Air University; by Aaron J. Dove, Maj, USAF. Front-matter sections include "Previous Academic Study" and "Why Big Data".

  11. Adjusting for multiple prognostic factors in the analysis of randomised trials

    Science.gov (United States)

    2013-01-01

    Background When multiple prognostic factors are adjusted for in the analysis of a randomised trial, it is unclear (1) whether it is necessary to account for each of the strata, formed by all combinations of the prognostic factors (stratified analysis), when randomisation has been balanced within each stratum (stratified randomisation), or whether adjusting for the main effects alone will suffice, and (2) the best method of adjustment in terms of type I error rate and power, irrespective of the randomisation method. Methods We used simulation to (1) determine if a stratified analysis is necessary after stratified randomisation, and (2) to compare different methods of adjustment in terms of power and type I error rate. We considered the following methods of analysis: adjusting for covariates in a regression model, adjusting for each stratum using either fixed or random effects, and Mantel-Haenszel or a stratified Cox model depending on outcome. Results Stratified analysis is required after stratified randomisation to maintain correct type I error rates when (a) there are strong interactions between prognostic factors, and (b) there are approximately equal number of patients in each stratum. However, simulations based on real trial data found that type I error rates were unaffected by the method of analysis (stratified vs unstratified), indicating these conditions were not met in real datasets. Comparison of different analysis methods found that with small sample sizes and a binary or time-to-event outcome, most analysis methods lead to either inflated type I error rates or a reduction in power; the lone exception was a stratified analysis using random effects for strata, which gave nominal type I error rates and adequate power. Conclusions It is unlikely that a stratified analysis is necessary after stratified randomisation except in extreme scenarios. Therefore, the method of analysis (accounting for the strata, or adjusting only for the covariates) will not
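
    The design the simulations above build on, stratified randomisation, can be sketched as permuted-block assignment within each stratum; the stratum labels and block size below are hypothetical illustrations.

```python
import random

def stratified_randomisation(strata_sizes, block_size=4, seed=0):
    """Permuted-block assignment to arms A/B within each stratum, keeping the
    arms balanced stratum by stratum. strata_sizes maps a stratum label
    (a combination of prognostic factors) to its patient count."""
    rng = random.Random(seed)
    allocation = {}
    for stratum, n in strata_sizes.items():
        arms = []
        while len(arms) < n:
            block = ["A", "B"] * (block_size // 2)  # balanced block, then shuffled
            rng.shuffle(block)
            arms.extend(block)
        allocation[stratum] = arms[:n]
    return allocation

alloc = stratified_randomisation({"age<65/low-risk": 8, "age>=65/high-risk": 12})
```

Because each block contains equal numbers of A and B, the arms can differ by at most half a block within any stratum, which is the balance property the paper's stratified analyses then account for.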

  12. The complete mitochondrial genome of the cryptic "lineage B" big-fin reef squid, Sepioteuthis lessoniana (Cephalopoda: Loliginidae) in Indo-West Pacific.

    Science.gov (United States)

    Shen, Kang-Ning; Yen, Ta-Chi; Chen, Ching-Hung; Ye, Jeng-Jia; Hsiao, Chung-Der

    2016-05-01

    In this study, the complete mitogenome sequence of the cryptic "lineage B" big-fin reef squid, Sepioteuthis lessoniana (Cephalopoda: Loliginidae), was sequenced by a next-generation sequencing method. The assembled mitogenome, consisting of 16,694 bp, includes 13 protein-coding genes, 25 transfer RNA genes and 2 ribosomal RNA genes. The overall base composition of "lineage B" S. lessoniana is 36.7% A, 18.9% C, 34.5% T and 9.8% G, showing 90% identity to "lineage C" S. lessoniana. It also exhibits a high T + A content (71.2%) and two non-coding regions with TA tandem repeats. The complete mitogenome of the cryptic "lineage B" S. lessoniana provides essential DNA molecular data for further phylogeographic and evolutionary analyses of the big-fin reef squid species complex.
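
    Base-composition figures of the kind quoted above are simple counts; a minimal sketch of such a summary, applied here to a 10 bp toy sequence rather than the actual mitogenome:

```python
def base_composition(seq):
    """Percent composition of each DNA base, plus the combined A+T content."""
    seq = seq.upper()
    n = len(seq)
    comp = {base: 100.0 * seq.count(base) / n for base in "ACGT"}
    comp["A+T"] = comp["A"] + comp["T"]
    return comp

comp = base_composition("ATTATAGCAT")  # toy sequence: 4 A, 4 T, 1 G, 1 C
```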

  13. Financial disincentives? A three-armed randomised controlled trial of the effect of financial Incentives  in Diabetic Eye Assessment  by Screening (IDEAS) trial.

    Science.gov (United States)

    Judah, Gaby; Darzi, Ara; Vlaev, Ivo; Gunn, Laura; King, Derek; King, Dominic; Valabhji, Jonathan; Bicknell, Colin

    2018-05-23

    Conflicting evidence exists regarding the impact of financial incentives on encouraging attendance at medical screening appointments. The primary aim was to determine whether financial incentives increase attendance at diabetic eye screening in persistent non-attenders. A three-armed randomised controlled trial was conducted in London in 2015. 1051 participants aged over 16 years, who had not attended eye screening appointments for 2 years or more, were randomised (1.4:1:1 randomisation ratio) to receive the usual invitation letter (control), an offer of £10 cash for attending screening (fixed incentive) or a 1 in 100 chance of winning £1000 (lottery incentive) if they attend. The primary outcome was the proportion of invitees attending screening, and a comparative analysis was performed to assess group differences. Pairwise comparisons of attendance rates were performed, using a conservative Bonferroni correction for independent comparisons. 34/435 (7.8%) of control, 17/312 (5.5%) of fixed incentive and 10/304 (3.3%) of lottery incentive groups attended. Participants who received any incentive were significantly less likely to attend their appointment compared with controls (risk ratio (RR)=0.56; 95% CI 0.34 to 0.92). Those in the probabilistic incentive group (RR=0.42; 95% CI 0.18 to 0.98), but not the fixed incentive group (RR=1.66; 95% CI 0.65 to 4.21), were significantly less likely to attend than those in the control group. Financial incentives, particularly lottery-based incentives, attract fewer patients to diabetic eye screening than standard invites in this population. Financial incentives should not be used to promote screening unless tested in context, as they may negatively affect attendance rates.
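
    The headline risk ratio can be reproduced from the attendance counts above. The paper does not state its interval method, so the sketch assumes a standard Wald confidence interval on the log risk-ratio scale, which reproduces the reported figures.

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio (group A vs group B) with a Wald CI on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo, hi = (math.exp(math.log(rr) + s * z * se) for s in (-1, 1))
    return rr, lo, hi

# Any incentive (fixed 17/312 + lottery 10/304) vs control (34/435)
rr, lo, hi = risk_ratio_ci(17 + 10, 312 + 304, 34, 435)  # RR 0.56 (0.34 to 0.92)
```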

  14. Relationship between S. typhi R plasmid (pRST98) and macrophage apoptosis

    International Nuclear Information System (INIS)

    Song Guorong; Wu Shuyan; Li Yuanyuan; Lv Jie; Xu Yang; Huang Rui

    2008-01-01

    Objective: To study the relationship between the S. typhi R plasmid (pRST98) and macrophage apoptosis. Methods: pRST98 was transferred into a less virulent strain of S. typhimurium to create the transconjugant pRST98/RIA; the standard S. typhimurium virulent strain SR-11 was used as the positive control, and RIA as the negative one. The murine macrophage line J774A.1 was infected separately under the same conditions. J774A.1 apoptosis was detected by flow cytometry and TUNEL at 0, 2, 4, 6, 12 and 24 hours respectively. Mitochondrial membrane potential was detected by JC-1 staining. Viable bacteria were counted by serial dilution at the same time, and viable cells stained with Trypan blue were counted. Results: SR-11 induced higher apoptosis in J774A.1 than pRST98/RIA, and pRST98/RIA higher than RIA (P…). Viable cell counts ranked RIA>pRST98/RIA>SR-11 (P…). Conclusion: pRST98 could increase macrophage apoptosis. (authors)

  15. Methods and tools for big data visualization

    OpenAIRE

    Zubova, Jelena; Kurasova, Olga

    2015-01-01

    In this paper, methods and tools for big data visualization have been investigated. Challenges faced by the big data analysis and visualization have been identified. Technologies for big data analysis have been discussed. A review of methods and tools for big data visualization has been done. Functionalities of the tools have been demonstrated by examples in order to highlight their advantages and disadvantages.

  16. Big data analytics methods and applications

    CERN Document Server

    Rao, BLS; Rao, SB

    2016-01-01

    This book has a collection of articles written by Big Data experts to describe some of the cutting-edge methods and applications from their respective areas of interest, and provides the reader with a detailed overview of the field of Big Data Analytics as it is practiced today. The chapters cover technical aspects of key areas that generate and use Big Data such as management and finance; medicine and healthcare; genome, cytome and microbiome; graphs and networks; Internet of Things; Big Data standards; bench-marking of systems; and others. In addition to different applications, key algorithmic approaches such as graph partitioning, clustering and finite mixture modelling of high-dimensional data are also covered. The varied collection of themes in this volume introduces the reader to the richness of the emerging field of Big Data Analytics.

  17. The Big bang and the Quantum

    Science.gov (United States)

    Ashtekar, Abhay

    2010-06-01

    General relativity predicts that space-time comes to an end and physics comes to a halt at the big-bang. Recent developments in loop quantum cosmology have shown that these predictions cannot be trusted. Quantum geometry effects can resolve singularities, thereby opening new vistas. Examples are: The big bang is replaced by a quantum bounce; the `horizon problem' disappears; immediately after the big bounce, there is a super-inflationary phase with its own phenomenological ramifications; and, in presence of a standard inflation potential, initial conditions are naturally set for a long, slow roll inflation independently of what happens in the pre-big bang branch. As in my talk at the conference, I will first discuss the foundational issues and then the implications of the new Planck scale physics near the Big Bang.

  18. Big Bang baryosynthesis

    International Nuclear Information System (INIS)

    Turner, M.S.; Chicago Univ., IL

    1983-01-01

    In these lectures I briefly review Big Bang baryosynthesis. In the first lecture I discuss the evidence which exists for the BAU, the failure of non-GUT symmetrical cosmologies, the qualitative picture of baryosynthesis, and numerical results of detailed baryosynthesis calculations. In the second lecture I discuss the requisite CP violation in some detail, further the statistical mechanics of baryosynthesis, possible complications to the simplest scenario, and one cosmological implication of Big Bang baryosynthesis. (orig./HSI)

  19. Exploiting big data for critical care research.

    Science.gov (United States)

    Docherty, Annemarie B; Lone, Nazir I

    2015-10-01

    Over recent years the digitalization, collection and storage of vast quantities of data, in combination with advances in data science, have opened up a new era of big data. In this review, we define big data, identify examples of critical care research using big data, discuss the limitations and ethical concerns of using these large datasets and finally consider scope for future research. Big data refers to datasets whose size, complexity and dynamic nature are beyond the scope of traditional data collection and analysis methods. The potential benefits to critical care are significant, with faster progress in improving health and better value for money. Although not replacing clinical trials, big data can improve their design and advance the field of precision medicine. However, there are limitations to analysing big data using observational methods. In addition, there are ethical concerns regarding maintaining confidentiality of patients who contribute to these datasets. Big data have the potential to improve medical care and reduce costs, both by individualizing medicine, and bringing together multiple sources of data about individual patients. As big data become increasingly mainstream, it will be important to maintain public confidence by safeguarding data security, governance and confidentiality.

  20. Empathy and the Big Five

    OpenAIRE

    Paulus, Christoph

    2016-01-01

    More than 10 years ago, Del Barrio et al. (2004) attempted to establish a direct relationship between empathy and the Big Five. On average, the women in their sample had higher scores on empathy and on the Big Five factors, with the exception of neuroticism. They found associations between empathy and openness, agreeableness, conscientiousness and extraversion. In our data, women likewise show significantly higher values in both empathy and the Big Five...

  1. 40 CFR 98.418 - Definitions.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Definitions. 98.418 Section 98.418 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Industrial Greenhouse Gases § 98.418 Definitions. All terms used in...

  2. The Scandinavian Propaten(®) trial - 1-year patency of PTFE vascular prostheses with heparin-bonded luminal surfaces compared to ordinary pure PTFE vascular prostheses - a randomised clinical controlled multi-centre trial

    DEFF Research Database (Denmark)

    Lindholt, J S; Gottschalksen, B; Johannesen, N

    2011-01-01

    To compare 1-year patencies of heparin-bonded PTFE [(Hb-PTFE) (Propaten(®))] grafts with those of ordinary polytetrafluoroethylene (PTFE) grafts in a blinded, randomised, clinically controlled, multi-centre study.

  3. Semantic Web Technologies and Big Data Infrastructures: SPARQL Federated Querying of Heterogeneous Big Data Stores

    OpenAIRE

    Konstantopoulos, Stasinos; Charalambidis, Angelos; Mouchakis, Giannis; Troumpoukis, Antonis; Jakobitsch, Jürgen; Karkaletsis, Vangelis

    2016-01-01

    The ability to cross-link large-scale data with each other and with structured Semantic Web data, and the ability to uniformly process Semantic Web and other data, add value both to the Semantic Web and to the Big Data community. This paper presents work in progress towards integrating Big Data infrastructures with Semantic Web technologies, allowing for the cross-linking and uniform retrieval of data stored in both Big Data infrastructures and Semantic Web data. The technical challenges invo...

  4. Quantum fields in a big-crunch-big-bang spacetime

    International Nuclear Information System (INIS)

    Tolley, Andrew J.; Turok, Neil

    2002-01-01

    We consider quantum field theory on a spacetime representing the big-crunch-big-bang transition postulated in ekpyrotic or cyclic cosmologies. We show via several independent methods that an essentially unique matching rule holds connecting the incoming state, in which a single extra dimension shrinks to zero, to the outgoing state in which it reexpands at the same rate. For free fields in our construction there is no particle production from the incoming adiabatic vacuum. When interactions are included the particle production for fixed external momentum is finite at the tree level. We discuss a formal correspondence between our construction and quantum field theory on de Sitter spacetime

  5. Turning big bang into big bounce: II. Quantum dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Malkiewicz, Przemyslaw; Piechocki, Wlodzimierz, E-mail: pmalk@fuw.edu.p, E-mail: piech@fuw.edu.p [Theoretical Physics Department, Institute for Nuclear Studies, Hoza 69, 00-681 Warsaw (Poland)

    2010-11-21

    We analyze the big bounce transition of the quantum Friedmann-Robertson-Walker model in the setting of the nonstandard loop quantum cosmology (LQC). Elementary observables are used to quantize composite observables. The spectrum of the energy density operator is bounded and continuous. The spectrum of the volume operator is bounded from below and discrete. It has equally distant levels defining a quantum of the volume. The discreteness may imply a foamy structure of spacetime at a semiclassical level which may be detected in astro-cosmo observations. The nonstandard LQC method has a free parameter that should be fixed in some way to specify the big bounce transition.

  6. Scaling Big Data Cleansing

    KAUST Repository

    Khayyat, Zuhair

    2017-07-31

    Data cleansing approaches have usually focused on detecting and fixing errors with little attention to big data scaling. This presents a serious impediment since identifying and repairing dirty data often involves processing huge input datasets, handling sophisticated error discovery approaches and managing huge arbitrary errors. With large datasets, error detection becomes overly expensive and complicated especially when considering user-defined functions. Furthermore, a distinctive algorithm is desired to optimize inequality joins in sophisticated error discovery rather than naïvely parallelizing them. Also, when repairing large errors, their skewed distribution may obstruct effective error repairs. In this dissertation, I present solutions to overcome the above three problems in scaling data cleansing. First, I present BigDansing as a general system to tackle efficiency, scalability, and ease-of-use issues in data cleansing for Big Data. It automatically parallelizes the user’s code on top of general-purpose distributed platforms. Its programming interface allows users to express data quality rules independently from the requirements of parallel and distributed environments. Without sacrificing their quality, BigDansing also enables parallel execution of serial repair algorithms by exploiting the graph representation of discovered errors. The experimental results show that BigDansing outperforms existing baselines up to more than two orders of magnitude. Although BigDansing scales cleansing jobs, it still lacks the ability to handle sophisticated error discovery requiring inequality joins. Therefore, I developed IEJoin as an algorithm for fast inequality joins. It is based on sorted arrays and space-efficient bit-arrays to reduce the problem's search space. By comparing IEJoin against well-known optimizations, I show that it is more scalable, and several orders of magnitude faster. BigDansing depends on vertex-centric graph systems, i.e., Pregel
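The core idea behind sort-based inequality joins, ordering the inputs and consulting a compact sorted structure instead of naively comparing all pairs, can be illustrated with a simplified sketch. This is not the published IEJoin algorithm (which uses permutation and bit arrays over two sorted attributes); it is a minimal sort-and-sweep counter for one conjunctive inequality condition, and the tuples in the example are hypothetical:

```python
from bisect import bisect_left, insort

def count_inequality_join(R, S):
    """Count pairs (r, s), r in R, s in S, with r.a < s.a and r.b > s.b.

    Tuples are (a, b). Sweep both inputs in decreasing order of a,
    inserting the b values of S-tuples into a sorted array; each R-tuple
    then counts the already-inserted b values smaller than its own b,
    i.e. the S-tuples with strictly larger a and strictly smaller b.
    """
    # kind 0 = query (R) sorts before kind 1 = insert (S) at equal a,
    # so ties on a are excluded (strict inequality).
    events = sorted(
        [(-a, 0, b) for a, b in R] + [(-a, 1, b) for a, b in S]
    )
    seen_b = []   # sorted b values of S-tuples already swept (larger a)
    total = 0
    for _neg_a, kind, b in events:
        if kind == 1:
            insort(seen_b, b)
        else:
            total += bisect_left(seen_b, b)  # number of s.b < r.b
    return total
```

Enumerating the matching pairs instead of counting them follows the same sweep; the sorted array is what replaces the quadratic all-pairs comparison.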

  7. The ethics of big data in big agriculture

    Directory of Open Access Journals (Sweden)

    Isabelle M. Carbonell

    2016-03-01

    Full Text Available This paper examines the ethics of big data in agriculture, focusing on the power asymmetry between farmers and large agribusinesses like Monsanto. Following the recent purchase of Climate Corp., Monsanto is currently the most prominent biotech agribusiness to buy into big data. With wireless sensors on tractors monitoring or dictating every decision a farmer makes, Monsanto can now aggregate large quantities of previously proprietary farming data, enabling a privileged position with unique insights on a field-by-field basis into a third or more of the US farmland. This power asymmetry may be rebalanced through open-sourced data, and publicly-funded data analytic tools which rival Climate Corp. in complexity and innovation for use in the public domain.

  8. A nomogram to estimate the HbA1c response to different DPP-4 inhibitors in type 2 diabetes: a systematic review and meta-analysis of 98 trials with 24 163 patients

    Science.gov (United States)

    Esposito, Katherine; Chiodini, Paolo; Maiorino, Maria Ida; Capuano, Annalisa; Cozzolino, Domenico; Petrizzo, Michela; Bellastella, Giuseppe; Giugliano, Dario

    2015-01-01

    Objectives To develop a nomogram for estimating the glycated haemoglobin (HbA1c) response to different dipeptidyl peptidase-4 (DPP-4) inhibitors in type 2 diabetes. Design A systematic review and meta-analysis of randomised controlled trials (RCTs) of DPP-4 inhibitors (vildagliptin, sitagliptin, saxagliptin, linagliptin and alogliptin) on HbA1c were conducted. Electronic searches were carried out up to December 2013. Trials were included if they were carried out on participants with type 2 diabetes, lasted at least 12 weeks, included at least 30 participants and had a final assessment of HbA1c. A random-effects model was used to pool data. A nomogram was used to represent results of the metaregression model. Participants Adults with type 2 diabetes. Interventions Any DPP-4 inhibitor (vildagliptin, sitagliptin, saxagliptin, linagliptin or alogliptin). Outcome measures The HbA1c response to each DPP-4 inhibitor within 1 year of therapy. Results We screened 928 citations and reviewed 98 articles reporting 98 RCTs with 100 arms in 24 163 participants. There were 26 arms with vildagliptin, 37 with sitagliptin, 13 with saxagliptin, 13 with linagliptin and 11 with alogliptin. For all 100 arms, the mean baseline HbA1c value was 8.05% (64 mmol/mol); the decrease of HbA1c from baseline was −0.77% (95% CI −0.82 to −0.72%), with high heterogeneity (I2=96%). A multivariable metaregression model that included baseline HbA1c, type of DPP-4 inhibitor and fasting glucose explained 58% of the variance between studies, with no significant interaction between them. Other factors, including age, previous diabetes drugs and duration of treatment, added low predictive power. The nomogram estimates the HbA1c reduction from baseline using the type of DPP-4 inhibitor and the baseline values of HbA1c and fasting glucose. Conclusions Baseline HbA1c level and fasting glucose explain most of the variance in HbA1c change in response to DPP-4 inhibitors: each increase of 1.0% units HbA1c provides a 0.4–0.5% units greater
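The random-effects pooling used in meta-analyses like the one above is conventionally the DerSimonian-Laird estimator: a between-study variance tau² is estimated from Cochran's Q and folded into the study weights. A minimal illustrative sketch (not the authors' code; the effect sizes and variances below are hypothetical):

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of study effect sizes.

    effects:   per-study effect estimates (e.g. HbA1c change from baseline)
    variances: corresponding within-study variances
    Returns (pooled effect, 95% CI, estimated between-study variance tau^2).
    """
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2
```

When the studies agree exactly, tau² collapses to zero and the result coincides with the fixed-effect estimate; heterogeneity (high I², as reported above) widens the pooled confidence interval.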

  9. Homogeneous and isotropic big rips?

    CERN Document Server

    Giovannini, Massimo

    2005-01-01

    We investigate the way big rips are approached in a fully inhomogeneous description of the space-time geometry. If the pressure and energy densities are connected by a (supernegative) barotropic index, the spatial gradients and the anisotropic expansion decay as the big rip is approached. This behaviour is contrasted with the usual big-bang singularities. A similar analysis is performed in the case of sudden (quiescent) singularities and it is argued that the spatial gradients may well be non-negligible in the vicinity of pressure singularities.

  10. Rate Change Big Bang Theory

    Science.gov (United States)

    Strickland, Ken

    2013-04-01

    The Rate Change Big Bang Theory redefines the birth of the universe with a dramatic shift in energy direction and a new vision of the first moments. With rate change graph technology (RCGT) we can look back 13.7 billion years and experience every step of the big bang through geometrical intersection technology. The analysis of the Big Bang includes a visualization of the first objects, their properties, the astounding event that created space and time as well as a solution to the mystery of anti-matter.

  11. Intelligent Test Mechanism Design of Worn Big Gear

    Directory of Open Access Journals (Sweden)

    Hong-Yu LIU

    2014-10-01

    Full Text Available With the continuous development of the national economy, big gears are widely applied in the metallurgy and mining domains, where they play an important role. In practical production, big-gear abrasion and breakage often occur, affecting normal production and causing unnecessary economic loss. An intelligent test method for worn big gears was put forward, aimed mainly at the constraints of high production cost, long production cycle and high-intensity manual repair-welding work. The measurement equations of the involute spur gear were transformed from polar coordinates into rectangular coordinates. The measurement principle for big-gear abrasion was introduced, the detection principle diagram was given, and the realization of the detection route was described. An OADM12 laser sensor was selected, and detection of the big-gear abrasion area was realized by the detection mechanism. Measured data from unworn and worn gears were imported into a calculation program written in Visual Basic, from which the big-gear abrasion quantity can be obtained. This provides a feasible method for intelligent testing and intelligent repair welding of worn big gears.
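The polar-to-rectangular transformation of the involute tooth profile mentioned above follows from the standard parametric equations of the involute of a circle: in polar form the radius is r = rb·sqrt(1 + t²), and in rectangular form x = rb(cos t + t sin t), y = rb(sin t − t cos t). A minimal sketch, assuming base-circle radius rb and roll angle t (the function name is ours, not from the paper):

```python
import math

def involute_point(rb, t):
    """Rectangular coordinates of a point on the involute of a circle.

    rb: base-circle radius; t: roll angle in radians.
    Equivalent polar form: r = rb*sqrt(1 + t**2), phi = t - atan(t).
    """
    x = rb * (math.cos(t) + t * math.sin(t))
    y = rb * (math.sin(t) - t * math.cos(t))
    return x, y
```

Comparing the measured abrasion-area points from the laser sensor against points generated this way would give the deviation of the worn flank from the ideal involute profile.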

  12. [Big data in medicine and healthcare].

    Science.gov (United States)

    Rüping, Stefan

    2015-08-01

    Healthcare is one of the business fields with the highest Big Data potential. According to the prevailing definition, Big Data refers to the fact that data today is often too large and heterogeneous and changes too quickly to be stored, processed, and transformed into value by previous technologies. The technological trends drive Big Data: business processes are more and more executed electronically, consumers produce more and more data themselves - e.g. in social networks - and finally ever increasing digitalization. Currently, several new trends towards new data sources and innovative data analysis appear in medicine and healthcare. From the research perspective, omics-research is one clear Big Data topic. In practice, the electronic health records, free open data and the "quantified self" offer new perspectives for data analytics. Regarding analytics, significant advances have been made in the information extraction from text data, which unlocks a lot of data from clinical documentation for analytics purposes. At the same time, medicine and healthcare is lagging behind in the adoption of Big Data approaches. This can be traced to particular problems regarding data complexity and organizational, legal, and ethical challenges. The growing uptake of Big Data in general and first best-practice examples in medicine and healthcare in particular, indicate that innovative solutions will be coming. This paper gives an overview of the potentials of Big Data in medicine and healthcare.

  13. From Big Data to Big Business

    DEFF Research Database (Denmark)

    Lund Pedersen, Carsten

    2017-01-01

    Idea in Brief: Problem: There is an enormous profit potential for manufacturing firms in big data, but one of the key barriers to obtaining data-driven growth is the lack of knowledge about which capabilities are needed to extract value and profit from data. Solution: We (BDBB research group at C...

  14. Big Bang Titanic: New Dark Energy (Vacuum Gravity) Cosmic Model Emerges Upon Falsification of The Big Bang By Disproof of Its Central Assumptions

    Science.gov (United States)

    Gentry, Robert

    2011-04-01

    Physicists who identify the big bang with the early universe should have first noted from Hawking's A Brief History of Time, p. 42, that he ties Hubble's law to Doppler shifts from galaxy recession from a nearby center, not to bb's unvalidated and thus problematical expansion redshifts. Our PRL submission LJ12135 describes such a model, but in it Hubble's law is due to Doppler and vacuum gravity effects, the 2.73K CBR is vacuum gravity shifted blackbody cavity radiation from an outer galactic shell, and its (1 + z)-1 dilation and (M,z) relations closely fit high-z SNe Ia data; all this strongly implies our model's vacuum energy is the elusive dark energy. We also find GPS operation's GR effects falsify big bang's in-flight expansion redshift paradigm, and hence the big bang, by showing λ changes occur only at emission. Surprisingly we also discover big bang's CBR prediction is T 0, while galactic photons shrink dλ/dt < 0. Contrary to a PRL editor's claim, the above results show LJ12135 fits PRL guidelines for papers that replace established theories. For details see alphacosmos.net.

  15. Fisicos argentinos reproduciran el Big Bang

    CERN Multimedia

    De Ambrosio, Martin

    2008-01-01

    Two groups of Argentine physicists from the La Plata and Buenos Aires universities work on a series of experiments that will recreate the conditions of the big explosion at the origin of the universe. (1 page)

  16. Big-Leaf Mahogany on CITES Appendix II: Big Challenge, Big Opportunity

    Science.gov (United States)

    JAMES GROGAN; PAULO BARRETO

    2005-01-01

    On 15 November 2003, big-leaf mahogany (Swietenia macrophylla King, Meliaceae), the most valuable widely traded Neotropical timber tree, gained strengthened regulatory protection from its listing on Appendix II of the Convention on International Trade in Endangered Species ofWild Fauna and Flora (CITES). CITES is a United Nations-chartered agreement signed by 164...

  17. Big Data as Information Barrier

    Directory of Open Access Journals (Sweden)

    Victor Ya. Tsvetkov

    2014-07-01

    Full Text Available The article analyses 'Big Data', which has been discussed over the last 10 years. The reasons and factors behind the issue are revealed. It is shown that the factors creating the 'Big Data' issue have existed for quite a long time and have periodically caused informational barriers. Such barriers were successfully overcome through science and technology. The conducted analysis refers the 'Big Data' issue to a form of information barrier. This issue can be solved correctly and encourages the development of scientific and computational methods.

  18. Big Data in Space Science

    OpenAIRE

    Barmby, Pauline

    2018-01-01

    It seems like “big data” is everywhere these days. In planetary science and astronomy, we’ve been dealing with large datasets for a long time. So how “big” is our data? How does it compare to the big data that a bank or an airline might have? What new tools do we need to analyze big datasets, and how can we make better use of existing tools? What kinds of science problems can we address with these? I’ll address these questions with examples including ESA’s Gaia mission, ...

  19. Big Data in Medicine is Driving Big Changes

    Science.gov (United States)

    Verspoor, K.

    2014-01-01

    Summary Objectives To summarise current research that takes advantage of “Big Data” in health and biomedical informatics applications. Methods Survey of trends in this work, and exploration of literature describing how large-scale structured and unstructured data sources are being used to support applications from clinical decision making and health policy, to drug design and pharmacovigilance, and further to systems biology and genetics. Results The survey highlights ongoing development of powerful new methods for turning that large-scale, and often complex, data into information that provides new insights into human health, in a range of different areas. Consideration of this body of work identifies several important paradigm shifts that are facilitated by Big Data resources and methods: in clinical and translational research, from hypothesis-driven research to data-driven research, and in medicine, from evidence-based practice to practice-based evidence. Conclusions The increasing scale and availability of large quantities of health data require strategies for data management, data linkage, and data integration beyond the limits of many existing information systems, and substantial effort is underway to meet those needs. As our ability to make sense of that data improves, the value of the data will continue to increase. Health systems, genetics and genomics, population and public health; all areas of biomedicine stand to benefit from Big Data and the associated technologies. PMID:25123716

  20. Main Issues in Big Data Security

    Directory of Open Access Journals (Sweden)

    Julio Moreno

    2016-09-01

    Full Text Available Data is currently one of the most important assets for companies in every field. The continuous growth in the importance and volume of data has created a new problem: it cannot be handled by traditional analysis techniques. This problem was, therefore, solved through the creation of a new paradigm: Big Data. However, Big Data originated new issues related not only to the volume or the variety of the data, but also to data security and privacy. In order to obtain a full perspective of the problem, we decided to carry out an investigation with the objective of highlighting the main issues regarding Big Data security, and also the solutions proposed by the scientific community to solve them. In this paper, we explain the results obtained after applying a systematic mapping study to security in the Big Data ecosystem. It is almost impossible to carry out detailed research into the entire topic of security, and the outcome of this research is, therefore, a big picture of the main problems related to security in a Big Data system, along with the principal solutions to them proposed by the research community.

  1. Antigen-based therapy with glutamic acid decarboxylase (GAD) vaccine in patients with recent-onset type 1 diabetes: a randomised double-blind trial.

    Science.gov (United States)

    Wherrett, Diane K; Bundy, Brian; Becker, Dorothy J; DiMeglio, Linda A; Gitelman, Stephen E; Goland, Robin; Gottlieb, Peter A; Greenbaum, Carla J; Herold, Kevan C; Marks, Jennifer B; Monzavi, Roshanak; Moran, Antoinette; Orban, Tihamer; Palmer, Jerry P; Raskin, Philip; Rodriguez, Henry; Schatz, Desmond; Wilson, Darrell M; Krischer, Jeffrey P; Skyler, Jay S

    2011-07-23

    Glutamic acid decarboxylase (GAD) is a major target of the autoimmune response that occurs in type 1 diabetes mellitus. In animal models of autoimmunity, treatment with a target antigen can modulate aggressive autoimmunity. We aimed to assess whether immunisation with GAD formulated with aluminum hydroxide (GAD-alum) would preserve insulin production in recent-onset type 1 diabetes. Patients aged 3-45 years who had been diagnosed with type 1 diabetes for less than 100 days were enrolled from 15 sites in the USA and Canada, and randomly assigned to receive one of three treatments: three injections of 20 μg GAD-alum, two injections of 20 μg GAD-alum and one of alum, or 3 injections of alum. Injections were given subcutaneously at baseline, 4 weeks later, and 8 weeks after the second injection. The randomisation sequence was computer generated at the TrialNet coordinating centre. Patients and study personnel were masked to treatment assignment. The primary outcome was the baseline-adjusted geometric mean area under the curve (AUC) of serum C-peptide during the first 2 h of a 4-h mixed meal tolerance test at 1 year. Secondary outcomes included changes in glycated haemoglobin A(1c) (HbA(1c)) and insulin dose, and safety. Analysis included all randomised patients with known measurements. This trial is registered with ClinicalTrials.gov, number NCT00529399. 145 patients were enrolled and treated with GAD-alum (n=48), GAD-alum plus alum (n=49), or alum (n=48). At 1 year, the 2-h AUC of C-peptide, adjusted for age, sex, and baseline C-peptide value, was 0·412 nmol/L (95% CI 0·349-0·478) in the GAD-alum group, 0·382 nmol/L (0·322-0·446) in the GAD-alum plus alum group, and 0·413 nmol/L (0·351-0·477) in the alum group. The ratio of the population mean of the adjusted geometric mean 2-h AUC of C-peptide was 0·998 (95% CI 0·779-1·22; p=0·98) for GAD-alum versus alum, and 0·926 (0·720-1·13; p=0·50) for GAD-alum plus alum versus alum. HbA(1c), insulin use, and
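The primary outcome above, the geometric mean AUC of serum C-peptide over the first 2 h of the mixed meal tolerance test, combines two routine computations: a trapezoid-rule AUC per patient and a geometric mean across patients. A minimal illustrative sketch (the trial's actual analysis additionally adjusted for age, sex and baseline C-peptide; the sample values in the test are hypothetical):

```python
import math

def trapezoid_auc(times_h, conc):
    """Area under the concentration curve by the trapezoid rule.

    times_h: sampling times in hours (increasing); conc: concentrations.
    """
    return sum(
        (t2 - t1) * (c1 + c2) / 2.0
        for (t1, c1), (t2, c2) in zip(zip(times_h, conc),
                                      zip(times_h[1:], conc[1:]))
    )

def geometric_mean(values):
    """Geometric mean of positive values, e.g. per-patient AUCs."""
    return math.exp(sum(math.log(v) for v in values) / len(values))
```

Dividing each patient's AUC by the 2 h window gives a mean concentration in nmol/L, which is the scale on which results like 0.412 nmol/L are reported.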

  2. Harnessing the Power of Big Data to Improve Graduate Medical Education: Big Idea or Bust?

    Science.gov (United States)

    Arora, Vineet M

    2018-06-01

    With the advent of electronic medical records (EMRs) fueling the rise of big data, the use of predictive analytics, machine learning, and artificial intelligence are touted as transformational tools to improve clinical care. While major investments are being made in using big data to transform health care delivery, little effort has been directed toward exploiting big data to improve graduate medical education (GME). Because our current system relies on faculty observations of competence, it is not unreasonable to ask whether big data in the form of clinical EMRs and other novel data sources can answer questions of importance in GME such as when is a resident ready for independent practice.The timing is ripe for such a transformation. A recent National Academy of Medicine report called for reforms to how GME is delivered and financed. While many agree on the need to ensure that GME meets our nation's health needs, there is little consensus on how to measure the performance of GME in meeting this goal. During a recent workshop at the National Academy of Medicine on GME outcomes and metrics in October 2017, a key theme emerged: Big data holds great promise to inform GME performance at individual, institutional, and national levels. In this Invited Commentary, several examples are presented, such as using big data to inform clinical experience and provide clinically meaningful data to trainees, and using novel data sources, including ambient data, to better measure the quality of GME training.

  3. A SWOT Analysis of Big Data

    Science.gov (United States)

    Ahmadi, Mohammad; Dileepan, Parthasarati; Wheatley, Kathleen K.

    2016-01-01

    This is the decade of data analytics and big data, but not everyone agrees with the definition of big data. Some researchers see it as the future of data analysis, while others consider it as hype and foresee its demise in the near future. No matter how it is defined, big data for the time being is having its glory moment. The most important…

  4. A survey of big data research

    Science.gov (United States)

    Fang, Hua; Zhang, Zhaoyang; Wang, Chanpaul Jin; Daneshmand, Mahmoud; Wang, Chonggang; Wang, Honggang

    2015-01-01

    Big data create values for business and research, but pose significant challenges in terms of networking, storage, management, analytics and ethics. Multidisciplinary collaborations from engineers, computer scientists, statisticians and social scientists are needed to tackle, discover and understand big data. This survey presents an overview of big data initiatives, technologies and research in industries and academia, and discusses challenges and potential solutions. PMID:26504265

  5. Big Data in Action for Government : Big Data Innovation in Public Services, Policy, and Engagement

    OpenAIRE

    World Bank

    2017-01-01

    Governments have an opportunity to harness big data solutions to improve productivity, performance and innovation in service delivery and policymaking processes. In developing countries, governments have an opportunity to adopt big data solutions and leapfrog traditional administrative approaches

  6. 78 FR 3911 - Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN; Final Comprehensive...

    Science.gov (United States)

    2013-01-17

    ... DEPARTMENT OF THE INTERIOR Fish and Wildlife Service [FWS-R3-R-2012-N259; FXRS1265030000-134-FF03R06000] Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN; Final Comprehensive... significant impact (FONSI) for the environmental assessment (EA) for Big Stone National Wildlife Refuge...

  7. [Telephone support for breastfeeding by primary care: a randomised multicentre trial].

    Science.gov (United States)

    Balaguer Martínez, Josep Vicent; Valcarce Pérez, Inmaculada; Esquivel Ojeda, Jessica Noelia; Hernández Gil, Alicia; Martín Jiménez, María Del Pilar; Bernad Albareda, Mercè

    2018-03-22

    To evaluate a telephone support programme for mothers who breastfeed during the first 6 months. A randomised unmasked clinical trial was conducted in 5 urban Primary Care centres and included mothers of healthy newborns who were breastfeeding exclusively (EBF) or partially (PBF). The control group received the usual care. The intervention group also received telephone support for breastfeeding on a weekly basis for the first 2 months and then every 2 weeks until the sixth month. The type of breastfeeding was recorded at the usual check-up visits (1, 2, 4 and 6 months). The study included 193 patients in the intervention group and 187 in the control group. The greatest increase in the percentage of EBF was observed at 6 months: 21.4% in the control group compared to 30.1% in the intervention group. However, in the adjusted odds ratio analysis, the confidence intervals did not show statistical significance. The odds ratios at 1, 2, 4 and 6 months for EBF were 1.45 (0.91-2.31), 1.35 (0.87-2.08), 1.21 (0.80-1.81), and 1.58 (0.99-2.53), respectively. The odds ratios at the same time points for any type of breastfeeding (EBF + PBF) were 1.65 (0.39-7.00), 2.08 (0.94-4.61), 1.37 (0.79-2.38), and 1.60 (0.98-2.61), respectively. The telephone intervention was not effective enough to justify generalising it. Copyright © 2018. Published by Elsevier España, S.L.U.
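Odds ratios with 95% confidence intervals of the kind reported above are, in the unadjusted 2x2 case, computed from the event counts with a Wald interval on the log scale (the trial's adjusted estimates come from regression modelling instead). A minimal sketch; the counts in the test are hypothetical, not the trial's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table.

    a = intervention events, b = intervention non-events,
    c = control events,      d = control non-events.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi
```

An interval whose lower bound falls below 1 (as with 1.58, 0.99-2.53 above) is the sense in which the trial's results "did not show statistical significance".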

  8. Small data, data infrastructures and big data (Working Paper 1)

    OpenAIRE

    Kitchin, Rob; Lauriault, Tracey P.

    2014-01-01

    The production of academic knowledge has progressed for the past few centuries using small data studies characterized by sampled data generated to answer specific questions. It is a strategy that has been remarkably successful, enabling the sciences, social sciences and humanities to advance in leaps and bounds. This approach is presently being challenged by the development of big data. Small data studies will, however, continue to be important in the future because of their utility in answer...

  9. Experimental Observation of Anisotropic Adler-Bell-Jackiw Anomaly in Type-II Weyl Semimetal WTe1.98 Crystals at the Quasiclassical Regime

    Science.gov (United States)

    Lv, Yang-Yang; Li, Xiao; Zhang, Bin-Bin; Deng, W. Y.; Yao, Shu-Hua; Chen, Y. B.; Zhou, Jian; Zhang, Shan-Tao; Lu, Ming-Hui; Zhang, Lei; Tian, Mingliang; Sheng, L.; Chen, Yan-Feng

    2017-03-01

    The asymmetric electron dispersion in type-II Weyl semimetal theoretically hosts anisotropic transport properties. Here, we observe the significant anisotropic Adler-Bell-Jackiw (ABJ) anomaly in the Fermi-level delicately adjusted WTe1.98 crystals. Quantitatively, CW , a coefficient representing the intensity of the ABJ anomaly along the a and b axis of WTe1.98 are 0.030 and 0.051 T-2 at 2 K, respectively. We found that the temperature-sensitive ABJ anomaly is attributed to a topological phase transition from a type-II Weyl semimetal to a trivial semimetal, which is verified by a first-principles calculation using experimentally determined lattice parameters at different temperatures. Theoretical electrical transport study reveals that the observation of an anisotropic ABJ along both the a and b axes in WTe1.98 is attributed to electrical transport in the quasiclassical regime. Our work may suggest that electron-doped WTe2 is an ideal playground to explore the novel properties in type-II Weyl semimetals.

  10. Hip fracture risk in relation to vitamin D supplementation and serum 25-hydroxyvitamin D levels: a systematic review and meta-analysis of randomised controlled trials and observational studies

    Directory of Open Access Journals (Sweden)

    Roddam Andrew W

    2010-06-01

    Full Text Available Abstract Background Vitamin D supplementation for fracture prevention is widespread despite conflicting interpretations of the relevant randomised controlled trial (RCT) evidence. This study summarises quantitatively the current evidence from RCTs and observational studies regarding vitamin D, parathyroid hormone (PTH) and hip fracture risk. Methods We undertook separate meta-analyses of RCTs examining vitamin D supplementation and hip fracture, and of observational studies of serum vitamin D status (25-hydroxyvitamin D (25(OH)D) level), PTH and hip fracture. Results from RCTs were combined using the reported hazard ratios/relative risks (RR). Results from case-control studies were combined using the ratio of 25(OH)D and PTH measurements of hip fracture cases compared with controls. Original published studies of vitamin D, PTH and hip fracture were identified through PubMed and Web of Science databases, searches of reference lists and forward citations of key papers. Results The seven eligible RCTs identified showed no significant difference in hip fracture risk in those randomised to cholecalciferol or ergocalciferol supplementation versus placebo/control (RR = 1.13 [95% CI 0.98-1.29]; 801 cases), with no significant difference between lower- and higher-dose trials. Conclusions Neither higher nor lower dose vitamin D supplementation prevented hip fracture. Randomised and observational data on vitamin D and hip fracture appear to differ. The reason for this is unclear; one possible explanation is uncontrolled confounding in observational studies. Post-fracture PTH levels are unrelated to hip fracture risk.
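    The pooling step described in the Methods (combining reported hazard ratios/relative risks across trials) is conventionally done by inverse-variance weighting on the log scale. A hedged sketch of that standard fixed-effect calculation; the `rrs_with_ci` entries below are hypothetical, not the review's actual trial data:

    ```python
    import math

    def pooled_rr(rrs_with_ci, z=1.96):
        """Fixed-effect inverse-variance pooling of relative risks.
        Each entry is (RR, lower95, upper95); the SE of log(RR) is
        recovered from the CI width: se = (ln(hi) - ln(lo)) / (2 * z)."""
        num = den = 0.0
        for rr, lo, hi in rrs_with_ci:
            se = (math.log(hi) - math.log(lo)) / (2 * z)
            w = 1.0 / se**2            # weight = inverse variance
            num += w * math.log(rr)
            den += w
        log_pooled = num / den
        se_pooled = math.sqrt(1.0 / den)
        return (math.exp(log_pooled),
                math.exp(log_pooled - z * se_pooled),
                math.exp(log_pooled + z * se_pooled))

    # Hypothetical trial-level estimates, for illustration only:
    rr, lo, hi = pooled_rr([(1.10, 0.90, 1.34), (1.20, 0.95, 1.52)])
    ```

    Pooling shrinks the confidence interval relative to any single trial, which is why a meta-analysis of seven RCTs can give a fairly tight interval such as the 0.98-1.29 reported above.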

  11. New 'bigs' in cosmology

    International Nuclear Information System (INIS)

    Yurov, Artyom V.; Martin-Moruno, Prado; Gonzalez-Diaz, Pedro F.

    2006-01-01

    This paper contains a detailed discussion of new cosmic solutions describing the early and late evolution of a universe filled with a kind of dark energy that may or may not satisfy the energy conditions. The main distinctive property of the resulting space-times is that the single singular events predicted by the corresponding quintessential (phantom) models appear twice, in a manner that can be made symmetric with respect to the origin of cosmic time. Thus, the big bang and big rip singularities are shown to take place twice, once on the positive branch of time and once on the negative one. We have also considered dark energy and phantom energy accretion onto black holes and wormholes in the context of these new cosmic solutions. It is seen that the space-times of these holes would then undergo swelling processes, leading to big trip and big hole events taking place at distinct epochs along the evolution of the universe. In this way, the possibility is considered that the past and the future may be connected in a non-paradoxical manner in the universes described by the new symmetric solutions

  12. 2nd INNS Conference on Big Data

    CERN Document Server

    Manolopoulos, Yannis; Iliadis, Lazaros; Roy, Asim; Vellasco, Marley

    2017-01-01

    The book offers a timely snapshot of neural network technologies as a significant component of big data analytics platforms. It promotes new advances and research directions in efficient and innovative algorithmic approaches to analyzing big data (e.g. deep networks, nature-inspired and brain-inspired algorithms); implementations on different computing platforms (e.g. neuromorphic, graphics processing units (GPUs), clouds, clusters); and big data analytics applications to solve real-world problems (e.g. weather prediction, transportation, energy management). The book, which reports on the second edition of the INNS Conference on Big Data, held on October 23–25, 2016, in Thessaloniki, Greece, depicts an interesting collaborative adventure of neural networks with big data and other learning technologies.

  13. 49 CFR 98.10 - Appeal.

    Science.gov (United States)

    2010-10-01

    ... Administration of Enforcement Proceedings § 98.10 Appeal. (a) Within 30 working days after receipt of a decision issued under § 98.8 or § 98.9 of this part, either the Departmental counsel or the former employee may appeal the decision to the Secretary. (b) In making a decision on an appeal, the Secretary shall consider...

  14. Cosmic relics from the big bang

    International Nuclear Information System (INIS)

    Hall, L.J.

    1988-12-01

    A brief introduction to the big bang picture of the early universe is given. Dark matter is discussed; particularly its implications for elementary particle physics. A classification scheme for dark matter relics is given. 21 refs., 11 figs., 1 tab

  15. Cosmic relics from the big bang

    Energy Technology Data Exchange (ETDEWEB)

    Hall, L.J.

    1988-12-01

    A brief introduction to the big bang picture of the early universe is given. Dark matter is discussed; particularly its implications for elementary particle physics. A classification scheme for dark matter relics is given. 21 refs., 11 figs., 1 tab.

  16. The ethics of biomedical big data

    CERN Document Server

    Mittelstadt, Brent Daniel

    2016-01-01

    This book presents cutting edge research on the new ethical challenges posed by biomedical Big Data technologies and practices. ‘Biomedical Big Data’ refers to the analysis of aggregated, very large datasets to improve medical knowledge and clinical care. The book describes the ethical problems posed by aggregation of biomedical datasets and re-use/re-purposing of data, in areas such as privacy, consent, professionalism, power relationships, and ethical governance of Big Data platforms. Approaches and methods are discussed that can be used to address these problems to achieve the appropriate balance between the social goods of biomedical Big Data research and the safety and privacy of individuals. Seventeen original contributions analyse the ethical, social and related policy implications of the analysis and curation of biomedical Big Data, written by leading experts in the areas of biomedical research, medical and technology ethics, privacy, governance and data protection. The book advances our understan...

  17. Propeptide big-endothelin, N-terminal-pro brain natriuretic peptide and mortality. The Ludwigshafen risk and cardiovascular health (LURIC) study.

    Science.gov (United States)

    Gergei, Ingrid; Krämer, Bernhard K; Scharnagl, Hubert; Stojakovic, Tatjana; März, Winfried; Mondorf, Ulrich

    The endothelin system (Big-ET-1) is a key regulator in cardiovascular (CV) disease and congestive heart failure (CHF). We have examined the incremental value of Big-ET-1 in predicting total and CV mortality next to the well-established CV risk marker N-terminal pro-B-type natriuretic peptide (NT-proBNP). Big-ET-1 and NT-proBNP were determined in 2829 participants referred for coronary angiography (follow-up 9.9 years). Big-ET-1 is an independent predictor of total and CV mortality and of death due to CHF. The conjunct use of Big-ET-1 and NT-proBNP improves the risk stratification of patients with intermediate to high risk of CV death and CHF. Big-ET-1 improves risk stratification in patients referred for coronary angiography.

  18. BIG BANG NUCLEOSYNTHESIS WITH A NON-MAXWELLIAN DISTRIBUTION

    International Nuclear Information System (INIS)

    Bertulani, C. A.; Fuqua, J.; Hussein, M. S.

    2013-01-01

    The abundances of light elements based on the big bang nucleosynthesis model are calculated using the Tsallis non-extensive statistics. The impact of varying the non-extensive parameter q away from unity is compared to observations and to the abundance yields from the standard big bang model. We find large differences between the reaction rates and the abundances of light elements calculated with the extensive and the non-extensive statistics. The observations are consistent with a non-extensive parameter q = 1 (+0.05, -0.12), indicating that a large deviation from the Boltzmann-Gibbs statistics (q = 1) is highly unlikely.
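    The q-dependence discussed above can be illustrated with the q-exponential generalization of the Boltzmann factor, which recovers the standard exponential as q approaches 1. This uses one common convention for Tsallis statistics, not necessarily the exact parametrization of the paper:

    ```python
    import math

    def tsallis_weight(E, kT, q):
        """q-exponential Boltzmann factor (one common convention):
        [1 - (1 - q) E/kT]^(1/(1-q)), reducing to exp(-E/kT) as q -> 1."""
        if abs(q - 1.0) < 1e-9:
            return math.exp(-E / kT)
        base = 1.0 - (1.0 - q) * E / kT
        if base <= 0.0:
            return 0.0        # cutoff: zero weight outside the support (q < 1)
        return base ** (1.0 / (1.0 - q))
    ```

    For q slightly below 1 the high-energy tail is cut off, and for q slightly above 1 it is enhanced; since nucleosynthesis reaction rates integrate over this tail, even small deviations of q from unity shift the predicted abundances noticeably.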

  19. Scalable privacy-preserving big data aggregation mechanism

    Directory of Open Access Journals (Sweden)

    Dapeng Wu

    2016-08-01

    Full Text Available As the massive sensor data generated by large-scale Wireless Sensor Networks (WSNs recently become an indispensable part of ‘Big Data’, the collection, storage, transmission and analysis of the big sensor data attract considerable attention from researchers. Targeting the privacy requirements of large-scale WSNs and focusing on the energy-efficient collection of big sensor data, a Scalable Privacy-preserving Big Data Aggregation (Sca-PBDA method is proposed in this paper. Firstly, according to the pre-established gradient topology structure, sensor nodes in the network are divided into clusters. Secondly, sensor data is modified by each node according to the privacy-preserving configuration message received from the sink. Subsequently, intra- and inter-cluster data aggregation is employed during the big sensor data reporting phase to reduce energy consumption. Lastly, aggregated results are recovered by the sink to complete the privacy-preserving big data aggregation. Simulation results validate the efficacy and scalability of Sca-PBDA and show that the big sensor data generated by large-scale WSNs is efficiently aggregated to reduce network resource consumption and the sensor data privacy is effectively protected to meet the ever-growing application requirements.
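    The core idea above, that each node modifies its reading according to configuration from the sink yet the sink still recovers the exact aggregate, can be illustrated with a toy pairwise-masking scheme. This is a hypothetical sketch of the general principle only, not the Sca-PBDA protocol itself:

    ```python
    import random

    def masked_sum(readings, seed=42):
        """Toy privacy-preserving aggregation: each pair of nodes shares a
        cancelling mask (+m / -m), so individual reports are perturbed while
        the aggregate stays exact. Not the actual Sca-PBDA method."""
        rng = random.Random(seed)
        n = len(readings)
        masks = [0.0] * n
        for i in range(0, n - 1, 2):       # pair up nodes (i, i+1)
            m = rng.uniform(-100.0, 100.0)
            masks[i] += m
            masks[i + 1] -= m
        reports = [r + m for r, m in zip(readings, masks)]  # what nodes transmit
        return sum(reports)                # masks cancel pairwise at the sink
    ```

    Each transmitted report is useless on its own (the mask can be large relative to the reading), yet the sink's sum equals the true total, which mirrors the paper's goal of protecting individual sensor data while keeping the aggregate accurate.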

  20. [Ethical aspects of big data]

    NARCIS (Netherlands)

    N. (Niek) van Antwerpen; Klaas Jan Mollema

    2017-01-01

    Big data has not only given rise to challenging technical questions; it is also accompanied by all kinds of new ethical and moral issues. To handle big data responsibly, these issues must be considered as well, because poor use of data can have adverse consequences for

  1. Big Data and Analytics in Healthcare.

    Science.gov (United States)

    Tan, S S-L; Gao, G; Koch, S

    2015-01-01

    This editorial is part of the Focus Theme of Methods of Information in Medicine on "Big Data and Analytics in Healthcare". The amount of data being generated in the healthcare industry is growing at a rapid rate. This has generated immense interest in leveraging the availability of healthcare data (and "big data") to improve health outcomes and reduce costs. However, the nature of healthcare data, and especially big data, presents unique challenges in processing and analyzing big data in healthcare. This Focus Theme aims to disseminate some novel approaches to address these challenges. More specifically, approaches ranging from efficient methods of processing large clinical data to predictive models that could generate better predictions from healthcare data are presented.

  2. Big Data for Business Ecosystem Players

    Directory of Open Access Journals (Sweden)

    Perko Igor

    2016-06-01

    Full Text Available In this research, some of the most promising Big Data usage domains are connected with the distinguished player groups found in the business ecosystem. Literature analysis is used to identify the state of the art of Big Data related research in the major domains of its use, namely individual marketing, health treatment, work opportunities, financial services, and security enforcement. System theory was used to identify the business ecosystem's major player types disrupted by Big Data: individuals, small and mid-sized enterprises, large organizations, information providers, and regulators. Relationships between the domains and players are explained through new Big Data opportunities and threats and by the players' responsive strategies. System dynamics is used to visualize the relationships in the provided model.

  3. "Big data" in economic history.

    Science.gov (United States)

    Gutmann, Myron P; Merchant, Emily Klancher; Roberts, Evan

    2018-03-01

    Big data is an exciting prospect for the field of economic history, which has long depended on the acquisition, keying, and cleaning of scarce numerical information about the past. This article examines two areas in which economic historians are already using big data - population and environment - discussing ways in which increased frequency of observation, denser samples, and smaller geographic units allow us to analyze the past with greater precision and often to track individuals, places, and phenomena across time. We also explore promising new sources of big data: organically created economic data, high resolution images, and textual corpora.

  4. Big Data Knowledge in Global Health Education.

    Science.gov (United States)

    Olayinka, Olaniyi; Kekeh, Michele; Sheth-Chandra, Manasi; Akpinar-Elci, Muge

    The ability to synthesize and analyze massive amounts of data is critical to the success of organizations, including those that involve global health. As countries become highly interconnected, increasing the risk for pandemics and outbreaks, the demand for big data is likely to increase. This requires a global health workforce that is trained in the effective use of big data. To assess implementation of big data training in global health, we conducted a pilot survey of members of the Consortium of Universities of Global Health. More than half the respondents did not have a big data training program at their institution. Additionally, the majority agreed that big data training programs will improve global health deliverables, among other favorable outcomes. Given the observed gap and benefits, global health educators may consider investing in big data training for students seeking a career in global health. Copyright © 2017 Icahn School of Medicine at Mount Sinai. Published by Elsevier Inc. All rights reserved.

  5. GEOSS: Addressing Big Data Challenges

    Science.gov (United States)

    Nativi, S.; Craglia, M.; Ochiai, O.

    2014-12-01

    In the sector of Earth Observation, the explosion of data is due to many factors including: new satellite constellations, the increased capabilities of sensor technologies, social media, crowdsourcing, and the need for multidisciplinary and collaborative research to face Global Changes. In this area, there are many expectations and concerns about Big Data. Vendors have attempted to use this term for their commercial purposes. It is necessary to understand whether Big Data is a radical shift or an incremental change for the existing digital infrastructures. This presentation tries to explore and discuss the impact of Big Data challenges and new capabilities on the Global Earth Observation System of Systems (GEOSS) and particularly on its common digital infrastructure called GCI. GEOSS is a global and flexible network of content providers allowing decision makers to access an extraordinary range of data and information at their desk. The impact of the Big Data dimensionalities (commonly known as 'V' axes: volume, variety, velocity, veracity, visualization) on GEOSS is discussed. The main solutions and experimentation developed by GEOSS along these axes are introduced and analyzed. GEOSS is a pioneering framework for global and multidisciplinary data sharing in the Earth Observation realm; its experience on Big Data is valuable for the many lessons learned.

  6. Big data for bipolar disorder.

    Science.gov (United States)

    Monteith, Scott; Glenn, Tasha; Geddes, John; Whybrow, Peter C; Bauer, Michael

    2016-12-01

    The delivery of psychiatric care is changing with a new emphasis on integrated care, preventative measures, population health, and the biological basis of disease. Fundamental to this transformation are big data and advances in the ability to analyze these data. The impact of big data on the routine treatment of bipolar disorder today and in the near future is discussed, with examples that relate to health policy, the discovery of new associations, and the study of rare events. The primary sources of big data today are electronic medical records (EMR), claims, and registry data from providers and payers. In the near future, data created by patients from active monitoring, passive monitoring of Internet and smartphone activities, and from sensors may be integrated with the EMR. Diverse data sources from outside of medicine, such as government financial data, will be linked for research. Over the long term, genetic and imaging data will be integrated with the EMR, and there will be more emphasis on predictive models. Many technical challenges remain when analyzing big data that relates to size, heterogeneity, complexity, and unstructured text data in the EMR. Human judgement and subject matter expertise are critical parts of big data analysis, and the active participation of psychiatrists is needed throughout the analytical process.

  7. BIG DATA IN TAMIL: OPPORTUNITIES, BENEFITS AND CHALLENGES

    OpenAIRE

    R.S. Vignesh Raj; Babak Khazaei; Ashik Ali

    2015-01-01

    This paper gives an overall introduction to big data and has tried to introduce Big Data in Tamil. It discusses the potential opportunities, benefits and likely challenges from a very Tamil and Tamil Nadu perspective. The paper has also made an original contribution by proposing 'big data' terminology in Tamil. The paper further suggests a few areas to explore using big data in Tamil, on the lines of the Tamil Nadu Government 'vision 2023'. Whilst big data has something to offer everyone, it ...

  8. Big data in biomedicine.

    Science.gov (United States)

    Costa, Fabricio F

    2014-04-01

    The increasing availability and growth rate of biomedical information, also known as 'big data', provides an opportunity for future personalized medicine programs that will significantly improve patient care. Recent advances in information technology (IT) applied to biomedicine are changing the landscape of privacy and personal information, with patients getting more control of their health information. Conceivably, big data analytics is already impacting health decisions and patient care; however, specific challenges need to be addressed to integrate current discoveries into medical practice. In this article, I will discuss the major breakthroughs achieved in combining omics and clinical health data in terms of their application to personalized medicine. I will also review the challenges associated with using big data in biomedicine and translational science. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Investigating Seed Longevity of Big Sagebrush (Artemisia tridentata)

    Science.gov (United States)

    Wijayratne, Upekala C.; Pyke, David A.

    2009-01-01

    The Intermountain West is dominated by big sagebrush communities (Artemisia tridentata subspecies) that provide habitat and forage for wildlife, prevent erosion, and are economically important to recreation and livestock industries. The two most prominent subspecies of big sagebrush in this region are Wyoming big sagebrush (A. t. ssp. wyomingensis) and mountain big sagebrush (A. t. ssp. vaseyana). Increased understanding of seed bank dynamics will assist with sustainable management and persistence of sagebrush communities. For example, mountain big sagebrush may be subjected to shorter fire return intervals and prescribed fire is a tool used often to rejuvenate stands and reduce tree (Juniperus sp. or Pinus sp.) encroachment into these communities. A persistent seed bank for mountain big sagebrush would be advantageous under these circumstances. Laboratory germination trials indicate that seed dormancy in big sagebrush may be habitat-specific, with collections from colder sites being more dormant. Our objective was to investigate seed longevity of both subspecies by evaluating viability of seeds in the field with a seed retrieval experiment and sampling for seeds in situ. We chose six study sites for each subspecies. These sites were dispersed across eastern Oregon, southern Idaho, northwestern Utah, and eastern Nevada. Ninety-six polyester mesh bags, each containing 100 seeds of a subspecies, were placed at each site during November 2006. Seed bags were placed in three locations: (1) at the soil surface above litter, (2) on the soil surface beneath litter, and (3) 3 cm below the soil surface to determine whether dormancy is affected by continued darkness or environmental conditions. Subsets of seeds were examined in April and November in both 2007 and 2008 to determine seed viability dynamics. Seed bank samples were taken at each site, separated into litter and soil fractions, and assessed for number of germinable seeds in a greenhouse. Community composition data ...

  10. Big Data’s Role in Precision Public Health

    Science.gov (United States)

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts. PMID:29594091

  11. Big inquiry

    Energy Technology Data Exchange (ETDEWEB)

    Wynne, B [Lancaster Univ. (UK)]

    1979-06-28

    The recently published report entitled 'The Big Public Inquiry' from the Council for Science and Society and the Outer Circle Policy Unit is considered, with especial reference to any future enquiry which may take place into the first commercial fast breeder reactor. Proposals embodied in the report include stronger rights for objectors and an attempt is made to tackle the problem that participation in a public inquiry is far too late to be objective. It is felt by the author that the CSS/OCPU report is a constructive contribution to the debate about big technology inquiries but that it fails to understand the deeper currents in the economic and political structure of technology which so influence the consequences of whatever formal procedures are evolved.

  12. How Big Data Reshapes Knowledge for International Development

    DEFF Research Database (Denmark)

    Flyverbom, Mikkel; Madsen, Anders Koed; Rasche, Andreas

    The aim of this paper is to conceptualize and illustrate how large-scale data and algorithms condition and reshape knowledge production when addressing international development challenges. Based on a review of relevant literature on the uses of big data in the context of development, we unpack how...... digital traces from cell phone data, social media data or data from internet searches are used as sources of knowledge in this area. We draw on insights from governmentality studies and argue that big data's impact on how relevant development problems are governed revolves around (1) new techniques...... of visualizing development issues, (2) a reliance on algorithmic operations that synthesize large-scale data, and (3) novel ways of rationalizing the knowledge claims that underlie development efforts. Our discussion shows that the reliance on big data challenges some aspects of traditional ways to collect...

  13. Big data analytics with R and Hadoop

    CERN Document Server

    Prajapati, Vignesh

    2013-01-01

    Big Data Analytics with R and Hadoop is a tutorial-style book that focuses on all the powerful big data tasks that can be achieved by integrating R and Hadoop. This book is ideal for R developers who are looking for a way to perform big data analytics with Hadoop. It is also aimed at those who know Hadoop and want to build intelligent applications over big data with R packages. It would be helpful if readers have a basic knowledge of R.

  14. Big data in forensic science and medicine.

    Science.gov (United States)

    Lefèvre, Thomas

    2018-07-01

    In less than a decade, big data in medicine has become quite a phenomenon, and many biomedical disciplines have got their own tribune on the topic. Perspectives and debates are flourishing, while a consensual definition of big data is still lacking. The 3Vs paradigm is frequently evoked to define the big data principles and stands for Volume, Variety and Velocity. Even according to this paradigm, genuine big data studies are still scarce in medicine and may not meet all expectations. On the one hand, techniques usually presented as specific to big data, such as machine learning, are supposed to support the ambition of personalized, predictive and preventive medicine. These techniques are mostly far from new; the most ancient are more than 50 years old. On the other hand, several issues closely related to the properties of big data and inherited from other scientific fields, such as artificial intelligence, are often underestimated if not ignored. Besides, a few papers temper the almost unanimous big data enthusiasm and are worth attention, since they delineate what is at stake. In this context, forensic science is still awaiting its position papers, as well as a comprehensive outline of what kind of contribution big data could bring to the field. The present situation calls for definitions and actions to rationally guide research and practice in big data. It is an opportunity for grounding a truly interdisciplinary approach in forensic science and medicine that is mainly based on evidence. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  15. NASA's Big Data Task Force

    Science.gov (United States)

    Holmes, C. P.; Kinter, J. L.; Beebe, R. F.; Feigelson, E.; Hurlburt, N. E.; Mentzel, C.; Smith, G.; Tino, C.; Walker, R. J.

    2017-12-01

    Two years ago NASA established the Ad Hoc Big Data Task Force (BDTF - https://science.nasa.gov/science-committee/subcommittees/big-data-task-force), an advisory working group within the NASA Advisory Council system. The scope of the Task Force included all NASA Big Data programs, projects, missions, and activities. The Task Force focused on such topics as exploring the existing and planned evolution of NASA's science data cyber-infrastructure that supports broad access to data repositories for NASA Science Mission Directorate missions; best practices within NASA, other Federal agencies, private industry and research institutions; and Federal initiatives related to big data and data access. The BDTF has completed its two-year term and produced several recommendations plus four white papers for NASA's Science Mission Directorate. This presentation will discuss the activities and results of the Task Force, including summaries of key points from its focused study topics. The paper serves as an introduction to the papers following in this ESSI session.

  16. Big Data Technologies

    Science.gov (United States)

    Bellazzi, Riccardo; Dagliati, Arianna; Sacchi, Lucia; Segagni, Daniele

    2015-01-01

    The so-called big data revolution provides substantial opportunities to diabetes management. At least 3 important directions are currently of great interest. First, the integration of different sources of information, from primary and secondary care to administrative information, may allow depicting a novel view of patient’s care processes and of single patient’s behaviors, taking into account the multifaceted nature of chronic care. Second, the availability of novel diabetes technologies, able to gather large amounts of real-time data, requires the implementation of distributed platforms for data analysis and decision support. Finally, the inclusion of geographical and environmental information into such complex IT systems may further increase the capability of interpreting the data gathered and extract new knowledge from them. This article reviews the main concepts and definitions related to big data, it presents some efforts in health care, and discusses the potential role of big data in diabetes care. Finally, as an example, it describes the research efforts carried on in the MOSAIC project, funded by the European Commission. PMID:25910540

  17. Ecological Health and Water Quality Assessments in Big Creek Lake, AL

    Science.gov (United States)

    Childs, L. M.; Frey, J. W.; Jones, J. B.; Maki, A. E.; Brozen, M. W.; Malik, S.; Allain, M.; Mitchell, B.; Batina, M.; Brooks, A. O.

    2008-12-01

    Big Creek Lake (also known as the J.B. Converse Reservoir) serves as the water supply for the majority of residents in Mobile County, Alabama. The area surrounding the reservoir serves as a gopher tortoise mitigation bank and is protected from further development; however, previous disasters and construction have greatly affected the Big Creek Lake area. The Escatawpa Watershed drains into the lake, and of the seven drainage streams, three have received a 303(d) (impaired water bodies) designation in the past. In the adjacent ecosystem, the forest is experiencing major stress from drought and pine bark beetle infestations. Various agencies are using control methods such as pesticide treatment to eradicate the beetles, and there are many concerns about these control methods and the resulting run-off into the ecosystem. In addition, the Highway 98 construction projects cross the northern area of the lake, and the community has expressed concern about both their direct and indirect impacts on the lake. This project addresses concerns about water quality, increasing drought in the Southeastern U.S., forest health as it relates to vegetation stress, and state and federal needs for improved assessment methods, supported by remotely sensed data, to determine coastal forest susceptibility to pine bark beetles. Landsat TM, ASTER, MODIS, and EO-1/ALI imagery was employed to compute the Normalized Difference Vegetation Index (NDVI) and Normalized Difference Moisture Index (NDMI), as well as to detect concentrations of suspended solids, chlorophyll and water turbidity. This study utilizes NASA Earth Observation Systems to determine how environmental conditions and human activity relate to pine tree stress and the onset of pine beetle invasion, as well as to relate current water quality data to community concerns and gain a better understanding of human impacts upon water resources.
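    The two indices named above are simple normalized differences of reflectance bands. A minimal sketch; the reflectance values below are illustrative, not taken from the study's imagery:

    ```python
    def ndvi(nir, red):
        """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
        return (nir - red) / (nir + red)

    def ndmi(nir, swir):
        """Normalized Difference Moisture Index: (NIR - SWIR) / (NIR + SWIR)."""
        return (nir - swir) / (nir + swir)

    # Healthy vegetation is bright in NIR and dark in red, so its NDVI
    # exceeds that of drought- or beetle-stressed vegetation.
    healthy, stressed = ndvi(0.50, 0.08), ndvi(0.30, 0.12)
    ```

    Both indices range from -1 to 1, which is why per-pixel drops in NDVI or NDMI over time are a convenient proxy for the vegetation stress the study is tracking.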

  18. The Berlin Inventory of Gambling behavior - Screening (BIG-S): Validation using a clinical sample.

    Science.gov (United States)

    Wejbera, Martin; Müller, Kai W; Becker, Jan; Beutel, Manfred E

    2017-05-18

    Published diagnostic questionnaires for gambling disorder in German are either based on DSM-III criteria or focus on aspects other than lifetime prevalence. This study was designed to assess the usability of the DSM-IV-based Berlin Inventory of Gambling Behavior Screening tool in a clinical sample and to adapt it to DSM-5 criteria. In a sample of 432 patients presenting for behavioral addiction assessment at the University Medical Center Mainz, we checked the screening tool's results against the clinical diagnosis and compared a subsample of n=300 clinically diagnosed gambling disorder patients with a comparison group of n=132. The BIG-S produced a sensitivity of 99.7% and a specificity of 96.2%. The instrument's unidimensionality and the diagnostic improvements of the DSM-5 criteria were verified by exploratory and confirmatory factor analysis as well as receiver operating characteristic analysis. The BIG-S is a reliable and valid screening tool for gambling disorder and demonstrated its concise and comprehensible operationalization of current DSM-5 criteria in a clinical setting.
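    The reported sensitivity and specificity follow from the standard confusion-matrix definitions. A minimal sketch; the raw counts below are a hypothetical reconstruction consistent with the reported rates, not figures taken from the paper:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only: 299 of 300 gambling-disorder patients screening
# positive and 127 of 132 comparison patients screening negative would
# reproduce the reported 99.7% / 96.2%.
sens, spec = sensitivity_specificity(tp=299, fn=1, tn=127, fp=5)
print(round(sens * 100, 1), round(spec * 100, 1))  # → 99.7 96.2
```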

  19. Randomised trial of biofeedback training for encopresis

    NARCIS (Netherlands)

    van der Plas, R. N.; Benninga, M. A.; Redekop, W. K.; Taminiau, J. A.; Büller, H. A.

    1996-01-01

    To evaluate biofeedback training in children with encopresis and the effect on psychosocial function. Prospective controlled randomised study. PATIENT INTERVENTIONS: A multimodal treatment of six weeks. Children were randomised into two groups. Each group received dietary and toilet advice, enemas,

  20. Supervised physical exercise during pregnancy improves health perception. Randomised controlled trial

    Directory of Open Access Journals (Sweden)

    Mireia Peláez

    2013-07-01

    Aim: To investigate the influence of a moderate exercise program during pregnancy on maternal health perception. Methods: A randomised controlled trial was performed. 101 primiparous women were allocated to a control group (CG, n=51) or an exercise group (EG, n=50); 13 women (11.4%) were lost to follow-up. Women in the EG were asked to participate in a supervised exercise program from weeks 10-14 to week 36 of gestation (70-75 sessions, 55-60 min/session, three times per week). Women in the CG received usual care. All women were asked to fill in validated questionnaires about health perception and urinary incontinence before and after the intervention. Results: At the end of the intervention there were statistically significant differences between groups in health perception [very good: EG 35 (70%) vs. CG 5 (9.8%); good: EG 15 (30%) vs. CG 16 (31%); average: EG 0 (0.0%) vs. CG 26 (51%); poor: EG 0 (0.0%) vs. CG 3 (5.9%); very poor: EG 0 (0.0%) vs. CG 1 (2%); p<.001]. Moreover, women in the EG showed less urinary incontinence [ICIQ-SF score EG 0.30 (SD 1.3) vs. CG 3.1 (SD 4.1), p<.001]. Conclusions: A supervised physical exercise program during pregnancy which includes pelvic floor muscle training improves health perception and is effective for primary prevention of urinary incontinence.

  1. Increasing seed size and quality by manipulating BIG SEEDS1 in legume species.

    Science.gov (United States)

    Ge, Liangfa; Yu, Jianbin; Wang, Hongliang; Luth, Diane; Bai, Guihua; Wang, Kan; Chen, Rujin

    2016-11-01

    Plant organs, such as seeds, are primary sources of food for both humans and animals. Seed size is one of the major agronomic traits that have been selected in crop plants during their domestication. Legume seeds are a major source of dietary proteins and oils. Here, we report a conserved role for the BIG SEEDS1 (BS1) gene in the control of seed size and weight in the model legume Medicago truncatula and the grain legume soybean (Glycine max). BS1 encodes a plant-specific transcription regulator and plays a key role in the control of the size of plant organs, including seeds, seed pods, and leaves, through a regulatory module that targets primary cell proliferation. Importantly, down-regulation of BS1 orthologs in soybean by an artificial microRNA significantly increased soybean seed size, weight, and amino acid content. Our results provide a strategy for the increase in yield and seed quality in legumes.

  2. Patient satisfaction with occlusal scheme of conventional complete dentures: A randomised clinical trial (part I).

    Science.gov (United States)

    Moradpoor, H; Arabzade Hoseini, M; Savabi, O; Shirani, M

    2018-01-01

    Occlusal scheme can affect denture retention, stability, occlusal force distribution, aesthetics, masticatory function, patient comfort and general patient satisfaction with dentures. This study aimed to compare the patient satisfaction with 3 types of complete denture occlusion including fully bilateral balanced occlusion (FBBO), newly presented buccalised occlusion (BO) and lingualised occlusion (LO). In this parallel randomised clinical trial, new conventional complete dentures were fabricated for 86 volunteers. Participants were randomly allocated to 3 groups with 3 different occlusal schemes. All patients were recalled at 1 and 3 months after delivery for data collection. The 19-item version of Oral Health Impact Profile for Edentulous Patients questionnaire was used in this study. The visual analogue scale (VAS) was used for assessment of the prosthodontist's attitude towards denture quality, patient's attitude towards different occlusal schemes and evaluation of patient satisfaction. Data were analysed using the Wilcoxon signed rank test, the Kruskal-Wallis test and the post hoc Dunn test via SPSS version 18.0 (P ≤ .05). Eighty-six patients completed the study, and their data were analysed (mean age ± standard deviation = 57.78 ± 9.98 years). The only significant difference when comparing the 3 groups was physical pain, which was significantly higher in FBBO group. No significant differences were found for the VAS scores of patient and prosthodontist satisfaction or the domain scores among the 3 occlusal schemes either at 1 or at 3 months post-delivery. The VAS score of patient satisfaction and prosthodontist satisfaction increased at third compared to first month after delivery. The results of this randomised clinical trial provided evidence that BO is as effective as LO for the fabrication of complete dentures. © 2017 John Wiley & Sons Ltd.

  3. Traffic information computing platform for big data

    Energy Technology Data Exchange (ETDEWEB)

    Duan, Zongtao, E-mail: ztduan@chd.edu.cn; Li, Ying, E-mail: ztduan@chd.edu.cn; Zheng, Xibin, E-mail: ztduan@chd.edu.cn; Liu, Yan, E-mail: ztduan@chd.edu.cn; Dai, Jiting, E-mail: ztduan@chd.edu.cn; Kang, Jun, E-mail: ztduan@chd.edu.cn [Chang' an University School of Information Engineering, Xi' an, China and Shaanxi Engineering and Technical Research Center for Road and Traffic Detection, Xi' an (China)

    2014-10-06

    The big data environment creates conditions for improving the quality of traffic information services. The goal of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the characteristics of big data and of traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of architecture helps to guarantee safe and efficient traffic operation, and more intelligent and personalized traffic information services can be offered to traffic information users.

  4. Traffic information computing platform for big data

    International Nuclear Information System (INIS)

    Duan, Zongtao; Li, Ying; Zheng, Xibin; Liu, Yan; Dai, Jiting; Kang, Jun

    2014-01-01

    The big data environment creates conditions for improving the quality of traffic information services. The goal of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the characteristics of big data and of traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of architecture helps to guarantee safe and efficient traffic operation, and more intelligent and personalized traffic information services can be offered to traffic information users.

  5. Fremtidens landbrug bliver big business

    DEFF Research Database (Denmark)

    Hansen, Henning Otte

    2016-01-01

    The external conditions and competitive environment of agriculture are changing, and this will necessitate a development towards "big business", in which farms become even larger, more industrialised and more concentrated. Big business will become a dominant development in Danish agriculture - but not the only one...

  6. Quantum nature of the big bang.

    Science.gov (United States)

    Ashtekar, Abhay; Pawlowski, Tomasz; Singh, Parampreet

    2006-04-14

    Some long-standing issues concerning the quantum nature of the big bang are resolved in the context of homogeneous isotropic models with a scalar field. Specifically, the known results on the resolution of the big-bang singularity in loop quantum cosmology are significantly extended as follows: (i) the scalar field is shown to serve as an internal clock, thereby providing a detailed realization of the "emergent time" idea; (ii) the physical Hilbert space, Dirac observables, and semiclassical states are constructed rigorously; (iii) the Hamiltonian constraint is solved numerically to show that the big bang is replaced by a big bounce. Thanks to the nonperturbative, background independent methods, unlike in other approaches the quantum evolution is deterministic across the deep Planck regime.

  7. A randomised controlled trial evaluating IGF1 titration in contrast to current GH dosing strategies in children born small for gestational age

    DEFF Research Database (Denmark)

    Jensen, Rikke Beck; Thankamony, Ajay; O'Connell, Susan M

    2014-01-01

    BACKGROUND: Short children born small for gestational age (SGA) are treated with a GH dose based on body size, but treatment may lead to high levels of IGF1. The objective was to evaluate IGF1 titration of GH dose in contrast to current dosing strategies. METHODS: In the North European Small-for-Gestational-Age Study (NESGAS), 92 short pre-pubertal children born SGA were randomised after 1 year of high-dose GH treatment (67 μg/kg per day) to three different regimens: high dose (67 μg/kg per day), low dose (35 μg/kg per day) or IGF1 titration. RESULTS: The average dose during the second year of the randomised trial did not differ between the IGF1 titration group (38 μg/kg per day, s.d. 0.019) and the low-dose group (35 μg/kg per day, s.d. 0.002; P=0.46), but there was a wide variation in the IGF1 titration group (range 10-80 μg/kg per day). The IGF1 titration group had significantly lower height gain (0

  8. Up-skilling associate clinicians in Malawi in emergency obstetric, neonatal care and clinical leadership: the ETATMBA cluster randomised controlled trial.

    Science.gov (United States)

    Ellard, David R; Chimwaza, Wanangwa; Davies, David; Simkiss, Doug; Kamwendo, Francis; Mhango, Chisale; Quenby, Siobhan; Kandala, Ngianga-Bakwin; O'Hare, Joseph Paul

    2016-01-01

    The ETATMBA (Enhancing Training And Technology for Mothers and Babies in Africa) project trained associate clinicians (ACs/clinical officers) as advanced clinical leaders in emergency obstetric and neonatal care. This trial aimed to evaluate the impact of the training on obstetric health outcomes in Malawi. A cluster randomised controlled trial with 14 districts of Malawi (8 intervention, 6 control) as units of randomisation. Intervention districts housed the 46 ACs who received the training programme. The primary outcome was district (health facility-based) perinatal mortality rates. Secondary outcomes included maternal mortality ratios, neonatal mortality rate, and obstetric and birth variables. The study period was 2011-2013. Mortality rates/ratios were examined using an interrupted time series (ITS) to identify trends over time. The ITS reveals an improving trend in perinatal mortality across both groups, but better in the control group (intervention, effect -3.58, SE 2.65, CI (-9.85 to 2.69), p=0.20; control, effect -17.79, SE 6.83, CI (-33.95 to -1.64), p=0.03). Maternal mortality ratios are seen to have improved in intervention districts while worsening in the control districts (intervention, effect -38.11, SE 50.30, CI (-157.06 to 80.84), p=0.47; control, effect 11.55, SE 87.72, CI (-195.87 to 218.98), p=0.90). There was a 31% drop in neonatal mortality rate in intervention districts, while in control districts the rate rose by 2%. There were no significant differences in the other secondary outcomes. This is one of the first randomised studies looking at the effect of structured training on health outcomes in this setting. Notwithstanding a number of limitations, this study suggests that up-skilling this cadre is possible, and could impact positively on health outcomes. ISRCTN63294155.

  9. Challenges and Opportunities of Big Data in Health Care: A Systematic Review.

    Science.gov (United States)

    Kruse, Clemens Scott; Goswamy, Rishi; Raval, Yesha; Marawi, Sarah

    2016-11-21

    Big data analytics offers promise in many business sectors, and health care is looking at big data to provide answers to many age-related issues, particularly dementia and chronic disease management. The purpose of this review was to summarize the challenges faced by big data analytics and the opportunities that big data opens in health care. A total of 3 searches were performed for publications between January 1, 2010 and January 1, 2016 (PubMed/MEDLINE, CINAHL, and Google Scholar), and an assessment was made on content germane to big data in health care. From the results of the searches in research databases and Google Scholar (N=28), the authors summarized content and identified 9 and 14 themes under the categories Challenges and Opportunities, respectively. We rank-ordered and analyzed the themes based on the frequency of occurrence. The top challenges were issues of data structure, security, data standardization, storage and transfers, and managerial skills such as data governance. The top opportunities revealed were quality improvement, population management and health, early detection of disease, data quality, structure, and accessibility, improved decision making, and cost reduction. Big data analytics has the potential for positive impact and global implications; however, it must overcome some legitimate obstacles. ©Clemens Scott Kruse, Rishi Goswamy, Yesha Raval, Sarah Marawi. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 21.11.2016.

  10. Big data processing in the cloud - Challenges and platforms

    Science.gov (United States)

    Zhelev, Svetoslav; Rozeva, Anna

    2017-12-01

    Choosing the appropriate architecture and technologies for a big data project is a difficult task, which requires extensive knowledge of both the problem domain and the big data landscape. The paper analyzes the main big data architectures and the most widely implemented technologies used for processing and persisting big data. Clouds provide dynamic resource scaling, which makes them a natural fit for big data applications. Basic cloud computing service models are presented. Two architectures for processing big data are discussed, the Lambda and Kappa architectures. Technologies for big data persistence are presented and analyzed. Stream processing, the most important and most difficult aspect to manage, is outlined. The paper highlights the main advantages of the cloud and its potential problems.
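    The Lambda architecture mentioned in this abstract combines a slow, exact batch layer with a fast, incremental speed layer, merged at query time by a serving layer. A toy sketch of the idea; names and data are illustrative, not from the paper:

```python
# Toy Lambda architecture: the batch layer recomputes counts over the
# master dataset, the speed layer covers only events that arrived after
# the last batch run, and the serving layer merges both views per query.
from collections import Counter

master_dataset = ["sensor_a", "sensor_b", "sensor_a"]   # historical events
recent_events  = ["sensor_a"]                           # not yet batched

def batch_view(events):
    return Counter(events)   # recomputed from scratch: slow but exact

def speed_view(events):
    return Counter(events)   # incremental: fast, covers only recent data

def query(key):
    # Serving layer: merge the (stale) batch view with the speed view.
    return batch_view(master_dataset)[key] + speed_view(recent_events)[key]

print(query("sensor_a"))  # → 3
```

    The Kappa architecture, by contrast, drops the batch layer and reprocesses the full event log through the same streaming path when needed.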

  11. Ethics and Epistemology in Big Data Research.

    Science.gov (United States)

    Lipworth, Wendy; Mason, Paul H; Kerridge, Ian; Ioannidis, John P A

    2017-12-01

    Biomedical innovation and translation are increasingly emphasizing research using "big data." The hope is that big data methods will both speed up research and make its results more applicable to "real-world" patients and health services. While big data research has been embraced by scientists, politicians, industry, and the public, numerous ethical, organizational, and technical/methodological concerns have also been raised. With respect to technical and methodological concerns, there is a view that these will be resolved through sophisticated information technologies, predictive algorithms, and data analysis techniques. While such advances will likely go some way towards resolving technical and methodological issues, we believe that the epistemological issues raised by big data research have important ethical implications and raise questions about the very possibility of big data research achieving its goals.

  12. Victoria Stodden: Scholarly Communication in the Era of Big Data and Big Computation

    OpenAIRE

    Stodden, Victoria

    2015-01-01

    Victoria Stodden gave the keynote address for Open Access Week 2015. "Scholarly communication in the era of big data and big computation" was sponsored by the University Libraries, Computational Modeling and Data Analytics, the Department of Computer Science, the Department of Statistics, the Laboratory for Interdisciplinary Statistical Analysis (LISA), and the Virginia Bioinformatics Institute. Victoria Stodden is an associate professor in the Graduate School of Library and Information Scien...

  13. Big Data: Concept, Potentialities and Vulnerabilities

    Directory of Open Access Journals (Sweden)

    Fernando Almeida

    2018-03-01

    The evolution of information systems and the growth in the use of the Internet and social networks have caused an explosion in the amount of available data relevant to the activities of companies. Therefore, the treatment of these available data is vital to support operational, tactical and strategic decisions. This paper aims to present the concept of big data and the main technologies that support the analysis of large data volumes. The potential of big data is explored across nine sectors of activity: financial, retail, healthcare, transport, agriculture, energy, manufacturing, public, and media and entertainment. In addition, the main current opportunities, vulnerabilities and privacy challenges of big data are discussed. It was possible to conclude that, despite the potential of big data in the identified areas, some challenges still need to be considered and mitigated, namely the privacy of information, the availability of qualified human resources to work with big data, and the promotion of a data-driven organizational culture.

  14. Big data analytics a management perspective

    CERN Document Server

    Corea, Francesco

    2016-01-01

    This book is about innovation, big data, and data science seen from a business perspective. Big data is a buzzword nowadays, and there is a growing need among practitioners to understand the phenomenon better, starting from a clearly stated definition. This book aims to be a starting read for executives who want (and need) to keep pace with the technological breakthrough introduced by new analytical techniques and piles of data. Common myths about big data will be explained, and a series of different strategic approaches will be provided. By browsing the book, it will be possible to learn how to implement a big data strategy and how to use a maturity framework to monitor the progress of the data science team, as well as how to move forward from one stage to the next. Crucial challenges related to big data will be discussed, some of which are more general - such as ethics, privacy, and ownership - while others concern more specific business situations (e.g., initial public offering, growth st...

  15. Human factors in Big Data

    NARCIS (Netherlands)

    Boer, J. de

    2016-01-01

    Since 2014 I have been involved in various (research) projects that try to make the hype around Big Data more concrete and tangible for industry and government. Big Data is about multiple sources of (real-time) data that can be analysed, transformed into information and used to make 'smart' decisions.

  16. Slaves to Big Data. Or Are We?

    Directory of Open Access Journals (Sweden)

    Mireille Hildebrandt

    2013-10-01

    In this contribution, the notion of Big Data is discussed in relation to the monetisation of personal data. The claim of some proponents, as well as adversaries, that Big Data implies that ‘n = all’, meaning that we no longer need to rely on samples because we have all the data, is scrutinised and found to be both overly optimistic and unnecessarily pessimistic. A set of epistemological and ethical issues is presented, focusing on the implications of Big Data for our perception, cognition, fairness, privacy and due process. The article then looks into the idea of user-centric personal data management to investigate to what extent it provides solutions for some of the problems triggered by the Big Data conundrum. Special attention is paid to the core principle of data protection legislation, namely purpose binding. Finally, this contribution seeks to inquire into the influence of Big Data politics on self, mind and society, and asks how we can prevent ourselves from becoming slaves to Big Data.

  17. Suitability Of Nitisinone In Alkaptonuria 1 (SONIA 1): an international, multicentre, randomised, open-label, no-treatment controlled, parallel-group, dose-response study to investigate the effect of once daily nitisinone on 24-h urinary homogentisic acid excretion in patients with alkaptonuria after 4 weeks of treatment.

    Science.gov (United States)

    Ranganath, Lakshminarayan R; Milan, Anna M; Hughes, Andrew T; Dutton, John J; Fitzgerald, Richard; Briggs, Michael C; Bygott, Helen; Psarelli, Eftychia E; Cox, Trevor F; Gallagher, James A; Jarvis, Jonathan C; van Kan, Christa; Hall, Anthony K; Laan, Dinny; Olsson, Birgitta; Szamosi, Johan; Rudebeck, Mattias; Kullenberg, Torbjörn; Cronlund, Arvid; Svensson, Lennart; Junestrand, Carin; Ayoob, Hana; Timmis, Oliver G; Sireau, Nicolas; Le Quan Sang, Kim-Hanh; Genovese, Federica; Braconi, Daniela; Santucci, Annalisa; Nemethova, Martina; Zatkova, Andrea; McCaffrey, Judith; Christensen, Peter; Ross, Gordon; Imrich, Richard; Rovensky, Jozef

    2016-02-01

    Alkaptonuria (AKU) is a serious genetic disease characterised by premature spondyloarthropathy. Homogentisate-lowering therapy is being investigated for AKU. Nitisinone decreases homogentisic acid (HGA) in AKU, but the dose-response relationship has not been previously studied. Suitability Of Nitisinone In Alkaptonuria 1 (SONIA 1) was an international, multicentre, randomised, open-label, no-treatment controlled, parallel-group, dose-response study. The primary objective was to investigate the effect of different doses of nitisinone once daily on 24-h urinary HGA excretion (u-HGA24) in patients with AKU after 4 weeks of treatment. Forty patients were randomised into five groups of eight patients each, with groups receiving no treatment or 1 mg, 2 mg, 4 mg and 8 mg of nitisinone. A clear dose-response relationship was observed between nitisinone and the urinary excretion of HGA. At 4 weeks, the adjusted geometric mean u-HGA24 was 31.53 mmol, 3.26 mmol, 1.44 mmol, 0.57 mmol and 0.15 mmol for the no treatment, 1 mg, 2 mg, 4 mg and 8 mg doses, respectively. For the most efficacious dose, 8 mg daily, this corresponds to a mean reduction of u-HGA24 of 98.8% compared with baseline. An increase in tyrosine levels was seen at all doses, but the dose-response relationship was less clear than the effect on HGA. Despite tyrosinaemia, there were no safety concerns and no serious adverse events were reported over the 4 weeks of nitisinone therapy. In this study in patients with AKU, nitisinone therapy decreased urinary HGA excretion to low levels in a dose-dependent manner and was well tolerated within the studied dose range. EudraCT number: 2012-005340-24. Registered at ClinicalTrials.gov: NCT01828463. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  18. Will Organization Design Be Affected By Big Data?

    Directory of Open Access Journals (Sweden)

    Giles Slinger

    2014-12-01

    Computing power and analytical methods allow us to create, collate, and analyze more data than ever before. When datasets are unusually large in volume, velocity, and variety, they are referred to as "big data." Some observers have suggested that in order to cope with big data, (a) organizational structures will need to change and (b) the processes used to design organizations will be different. In this article, we differentiate big data from relatively slow-moving, linked people data. We argue that big data will change organizational structures as organizations pursue the opportunities presented by big data. The processes by which organizations are designed, however, will be relatively unaffected by big data. Instead, organization design processes will be more affected by the complex links found in people data.

  19. Official statistics and Big Data

    Directory of Open Access Journals (Sweden)

    Peter Struijs

    2014-07-01

    The rise of Big Data changes the context in which organisations producing official statistics operate. Big Data provides opportunities, but in order to make optimal use of Big Data, a number of challenges have to be addressed. This stimulates increased collaboration between National Statistical Institutes, Big Data holders, businesses and universities. In time, this may lead to a shift in the role of statistical institutes in the provision of high-quality and impartial statistical information to society. In this paper, the changes in context, the opportunities, the challenges and the way to collaborate are addressed. The collaboration between the various stakeholders will involve each partner building on and contributing different strengths. For national statistical offices, traditional strengths include, on the one hand, the ability to collect data and combine data sources with statistical products and, on the other hand, their focus on quality, transparency and sound methodology. In the Big Data era of competing and multiplying data sources, they continue to have a unique knowledge of official statistical production methods. And their impartiality and respect for privacy as enshrined in law uniquely position them as a trusted third party. Based on this, they may advise on the quality and validity of information of various sources. By thus positioning themselves, they will be able to play their role as key information providers in a changing society.

  20. Big Data

    OpenAIRE

    Bútora, Matúš

    2017-01-01

    The aim of this bachelor thesis is to describe the Big Data domain and the OLAP aggregation operations for decision support that are applied to it using Apache Hadoop technology. Most of the thesis is devoted to describing this technology. The last chapter deals with how the aggregation operations are applied and the issues involved in implementing them. The thesis closes with an overall evaluation and the possibilities for future use of the resulting system.

  1. Generating ekpyrotic curvature perturbations before the big bang

    International Nuclear Information System (INIS)

    Lehners, Jean-Luc; Turok, Neil; McFadden, Paul; Steinhardt, Paul J.

    2007-01-01

    We analyze a general mechanism for producing a nearly scale-invariant spectrum of cosmological curvature perturbations during a contracting phase preceding a big bang, which can be entirely described using 4D effective field theory. The mechanism, based on first producing entropic perturbations and then converting them to curvature perturbations, can be naturally incorporated in cyclic and ekpyrotic models in which the big bang is modeled as a brane collision, as well as other types of cosmological models with a pre-big bang phase. We show that the correct perturbation amplitude can be obtained and that the spectral tilt n_s tends to range from slightly blue to red, with 0.97 < n_s < 1.02 for the simplest models, a range compatible with current observations but shifted by a few percent towards the blue compared to the prediction of the simplest, large-field inflationary models

  2. BigDansing

    KAUST Repository

    Khayyat, Zuhair; Ilyas, Ihab F.; Jindal, Alekh; Madden, Samuel; Ouzzani, Mourad; Papotti, Paolo; Quiané -Ruiz, Jorge-Arnulfo; Tang, Nan; Yin, Si

    2015-01-01

    of the underlying distributed platform. BigDansing takes these rules into a series of transformations that enable distributed computations and several optimizations, such as shared scans and specialized joins operators. Experimental results on both synthetic

  3. Leveraging Mobile Network Big Data for Developmental Policy ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Some argue that big data and big data users offer advantages to generate evidence. ... Supported by IDRC, this research focused on transportation planning in urban ... Using mobile network big data for land use classification CPRsouth 2015.

  4. a randomised controlled trial of two prostaglandin regimens

    African Journals Online (AJOL)

    Design. A prospective randomised controlled trial. Setting. Department of Obstetrics and Gynae- ... hours after the original administration of either prostaglandin regimen. If abortion had not taken place 36 .... Tygerberg Hospital for permission to publish, and Upjohn (Pty) Ltd for supplying the Prepidil gel used in the study.

  5. Practice Variation in Big-4 Transparency Reports

    DEFF Research Database (Denmark)

    Girdhar, Sakshi; Klarskov Jeppesen, Kim

    2018-01-01

    Purpose: The purpose of this paper is to examine the transparency reports published by the Big-4 public accounting firms in the UK, Germany and Denmark to understand the determinants of their content within the networks of big accounting firms. Design/methodology/approach: The study draws on a qualitative research approach, in which the content of transparency reports is analyzed and semi-structured interviews are conducted with key people from the Big-4 firms who are responsible for developing the transparency reports. Findings: The findings show that the content of transparency reports is inconsistent and the transparency reporting practice is not uniform within the Big-4 networks. Differences were found in the way in which the transparency reporting practices are coordinated globally by the respective central governing bodies of the Big-4. The content of the transparency reports...

  6. Prevalence and reporting of recruitment, randomisation and treatment errors in clinical trials: A systematic review.

    Science.gov (United States)

    Yelland, Lisa N; Kahan, Brennan C; Dent, Elsa; Lee, Katherine J; Voysey, Merryn; Forbes, Andrew B; Cook, Jonathan A

    2018-06-01

    or treatment errors was found in the remaining 50 of 82 trials (61%). Based on responses from 9 of the 15 corresponding authors who were contacted regarding recruitment, randomisation and treatment errors, between 1% and 100% of the errors that occurred in their trials were reported in the trial publications. Conclusion Recruitment, randomisation and treatment errors are common in individually randomised, phase III trials published in leading medical journals, but reporting practices are inadequate and reporting standards are needed. We recommend researchers report all such errors that occurred during the trial and describe how they were handled in trial publications to improve transparency in reporting of clinical trials.

  7. Big data and biomedical informatics: a challenging opportunity.

    Science.gov (United States)

    Bellazzi, R

    2014-05-22

    Big data are receiving an increasing attention in biomedicine and healthcare. It is therefore important to understand the reason why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler to carry out unprecedented research studies and to implement new models of healthcare delivery. Therefore, it is first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is mandatory to understand where big data are present, and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries on relevant implications in terms of reproducibility of research studies and management of privacy and data access; proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept drift machine learning algorithms, which will not only contribute to big data research, but may be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions or over-enthusiasms, to fully exploit the available technologies, and to improve data processing and data management regulations.

  8. Was the big bang hot?

    International Nuclear Information System (INIS)

    Wright, E.L.

    1983-01-01

    The author considers experiments to confirm the substantial deviations from a Planck curve in the Woody and Richards spectrum of the microwave background, and search for conducting needles in our galaxy. Spectral deviations and needle-shaped grains are expected for a cold Big Bang, but are not required by a hot Big Bang. (Auth.)

  9. Passport to the Big Bang

    CERN Multimedia

    De Melis, Cinzia

    2013-01-01

    On 2 June 2013, CERN launches the Passport to the Big Bang, a scientific tourist trail through the Pays de Gex and the Canton of Geneva, at a major public event. Poster and programme.

  10. Are hamburgers harmless? : the Big Mac Index in the twenty-first century

    OpenAIRE

    Soo, Kwok Tong

    2016-01-01

    We make use of The Economist’s Big Mac Index (BMI) to investigate the Law of One Price (LOP) and whether the BMI can be used to predict future exchange rate and price changes. Deviations from Big Mac parity decay quickly, in approximately 1 year. The BMI is a better predictor of relative price changes than of exchange rate changes, and performs best when predicting a depreciation of a currency relative to the US dollar. Convergence to Big Mac parity occurs more rapidly for currencies with som...

  11. Identify too big to fail banks and capital insurance: An equilibrium approach

    OpenAIRE

    Katerina Ivanov

    2017-01-01

    The objective of this paper is to develop a rational expectation equilibrium model of capital insurance to identify too big to fail banks. The main results of this model include (1) too big to fail banks can be identified explicitly by a systemic risk measure, loss betas, of all banks in the entire financial sector; (2) the too big to fail feature can be largely justified by a high level of loss beta; (3) the capital insurance proposal benefits market participants and reduces the systemic risk; ...

  12. Big data in psychology: Introduction to the special issue.

    Science.gov (United States)

    Harlow, Lisa L; Oswald, Frederick L

    2016-12-01

    The introduction to this special issue on psychological research involving big data summarizes the highlights of 10 articles that address a number of important and inspiring perspectives, issues, and applications. Four common themes that emerge in the articles with respect to psychological research conducted in the area of big data are mentioned, including: (a) The benefits of collaboration across disciplines, such as those in the social sciences, applied statistics, and computer science. Doing so assists in grounding big data research in sound theory and practice, as well as in affording effective data retrieval and analysis. (b) Availability of large data sets on Facebook, Twitter, and other social media sites that provide a psychological window into the attitudes and behaviors of a broad spectrum of the population. (c) Identifying, addressing, and being sensitive to ethical considerations when analyzing large data sets gained from public or private sources. (d) The unavoidable necessity of validating predictive models in big data by applying a model developed on 1 dataset to a separate set of data or hold-out sample. Translational abstracts that summarize the articles in very clear and understandable terms are included in Appendix A, and a glossary of terms relevant to big data research discussed in the articles is presented in Appendix B. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
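Theme (d) above, validating a predictive model on a separate hold-out sample, can be sketched in a few lines of Python. The toy data and the trivial mean-only "model" below are invented purely for illustration:

```python
import random

def train_holdout_split(values, holdout_frac=0.3, seed=42):
    """Shuffle and split the data into a training set and a hold-out sample."""
    rng = random.Random(seed)
    shuffled = values[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout_frac))
    return shuffled[:cut], shuffled[cut:]

def fit_mean_model(train):
    """Fit the simplest possible model: always predict the training mean."""
    return sum(train) / len(train)

def mean_squared_error(prediction, holdout):
    """Validate on the hold-out sample, not on the data used for fitting."""
    return sum((y - prediction) ** 2 for y in holdout) / len(holdout)

outcomes = [float(i % 10) for i in range(100)]   # toy outcome variable
train, holdout = train_holdout_split(outcomes)
model = fit_mean_model(train)
print(mean_squared_error(model, holdout))
```

The point of the split is that the error reported at the end is computed on observations the "model" never saw, which is the validation practice the article calls unavoidable.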

  13. Keynote: Big Data, Big Opportunities

    OpenAIRE

    Borgman, Christine L.

    2014-01-01

    The enthusiasm for big data is obscuring the complexity and diversity of data in scholarship and the challenges for stewardship. Inside the black box of data are a plethora of research, technology, and policy issues. Data are not shiny objects that are easily exchanged. Rather, data are representations of observations, objects, or other entities used as evidence of phenomena for the purposes of research or scholarship. Data practices are local, varying from field to field, individual to indiv...

  14. Integrating R and Hadoop for Big Data Analysis

    OpenAIRE

    Bogdan Oancea; Raluca Mariana Dragoescu

    2014-01-01

    Analyzing and working with big data could be very difficult using classical means like relational database management systems or desktop software packages for statistics and visualization. Instead, big data requires large clusters with hundreds or even thousands of computing nodes. Official statistics is increasingly considering big data for deriving new statistics because big data sources could produce more relevant and timely statistics than traditional sources. One of the software tools ...

  15. The challenges of big data.

    Science.gov (United States)

    Mardis, Elaine R

    2016-05-01

    The largely untapped potential of big data analytics is a feeding frenzy that has been fueled by the production of many next-generation-sequencing-based data sets that are seeking to answer long-held questions about the biology of human diseases. Although these approaches are likely to be a powerful means of revealing new biological insights, there are a number of substantial challenges that currently hamper efforts to harness the power of big data. This Editorial outlines several such challenges as a means of illustrating that the path to big data revelations is paved with perils that the scientific community must overcome to pursue this important quest. © 2016. Published by The Company of Biologists Ltd.

  16. Big³. Editorial.

    Science.gov (United States)

    Lehmann, C U; Séroussi, B; Jaulent, M-C

    2014-05-22

    To provide an editorial introduction to the 2014 IMIA Yearbook of Medical Informatics with an overview of the content, the new publishing scheme, and the upcoming 25th anniversary. A brief overview of the 2014 special topic, Big Data - Smart Health Strategies, and an outline of the novel publishing model is provided in conjunction with a call for proposals to celebrate the 25th anniversary of the Yearbook. 'Big Data' has become the latest buzzword in informatics and promises new approaches and interventions that can improve health, well-being, and quality of life. This edition of the Yearbook acknowledges the fact that we have just started to explore the opportunities that 'Big Data' will bring. However, it will become apparent to the reader that its pervasive nature has invaded all aspects of biomedical informatics - some to a higher degree than others. It was our goal to provide a comprehensive view of the state of 'Big Data' today, explore its strengths, weaknesses and risks, discuss emerging trends, tools, and applications, and stimulate the development of the field through the aggregation of excellent survey papers and working group contributions to the topic. For the first time in its history, the IMIA Yearbook will be published in an open-access online format, allowing a broader readership, especially in resource-poor countries. Also for the first time, thanks to the online format, the IMIA Yearbook will be published twice in the year, with two different tracks of papers. We anticipate that the important role of the IMIA Yearbook will further increase with these changes, just in time for its 25th anniversary in 2016.

  17. Big defensins, a diverse family of antimicrobial peptides that follows different patterns of expression in hemocytes of the oyster Crassostrea gigas.

    Science.gov (United States)

    Rosa, Rafael D; Santini, Adrien; Fievet, Julie; Bulet, Philippe; Destoumieux-Garzón, Delphine; Bachère, Evelyne

    2011-01-01

    Big defensin is an antimicrobial peptide composed of a highly hydrophobic N-terminal region and a cationic C-terminal region containing six cysteine residues involved in three internal disulfide bridges. While big defensin sequences have been reported in various mollusk species, few studies have been devoted to their sequence diversity, gene organization and their expression in response to microbial infections. Using the high-throughput Digital Gene Expression approach, we have identified in Crassostrea gigas oysters several sequences coding for big defensins induced in response to a Vibrio infection. We showed that the oyster big defensin family is composed of three members (named Cg-BigDef1, Cg-BigDef2 and Cg-BigDef3) that are encoded by distinct genomic sequences. All Cg-BigDefs contain a hydrophobic N-terminal domain and a cationic C-terminal domain that resembles vertebrate β-defensins. Both domains are encoded by separate exons. We found that big defensins form a group predominantly present in mollusks and closer to vertebrate defensins than to invertebrate and fungi CSαβ-containing defensins. Moreover, we showed that Cg-BigDefs are expressed in oyster hemocytes only and follow different patterns of gene expression. While Cg-BigDef3 is non-regulated, both Cg-BigDef1 and Cg-BigDef2 transcripts are strongly induced in response to bacterial challenge. Induction was dependent on pathogen associated molecular patterns but not damage-dependent. The inducibility of Cg-BigDef1 was confirmed by HPLC and mass spectrometry, since ions with a molecular mass compatible with mature Cg-BigDef1 (10.7 kDa) were present in immune-challenged oysters only. From our biochemical data, native Cg-BigDef1 would result from the elimination of a prepropeptide sequence and the cyclization of the resulting N-terminal glutamine residue into a pyroglutamic acid. We provide here the first report showing that big defensins form a family of antimicrobial peptides diverse not only in terms

  18. Big Data in Health: a Literature Review from the Year 2005.

    Science.gov (United States)

    de la Torre Díez, Isabel; Cosgaya, Héctor Merino; Garcia-Zapirain, Begoña; López-Coronado, Miguel

    2016-09-01

    The information stored in healthcare systems has increased over the last ten years, leading it to be considered Big Data. There is a wealth of health information ready to be analysed. However, the sheer volume raises a challenge for traditional methods. The aim of this article is to conduct a cutting-edge study on Big Data in healthcare from 2005 to the present. This literature review will help researchers to know how Big Data has developed in the health industry and open up new avenues for research. Information searches have been made on various scientific databases such as Pubmed, Science Direct, Scopus and Web of Science for Big Data in healthcare. The search criteria were "Big Data" and "health" with a date range from 2005 to the present. A total of 9724 articles were found on the databases. 9515 articles were discarded as duplicates or for not having a title of interest to the study. 209 articles were read, with the resulting decision that 46 were useful for this study. 52.6 % of the articles used were found in Science Direct, 23.7 % in Pubmed, 22.1 % through Scopus and the remaining 2.6 % through the Web of Science. Big Data has undergone extremely high growth since 2011 and its use is becoming compulsory in developed nations and in an increasing number of developing nations. Big Data is a step forward and a cost reducer for public and private healthcare.

  19. Fuzzy VIKOR approach for selection of big data analyst in procurement management

    Directory of Open Access Journals (Sweden)

    Surajit Bag

    2016-07-01

    Full Text Available Background: Big data and predictive analysis have been hailed as the fourth paradigm of science. Big data and analytics are critical to the future of business sustainability. The demand for data scientists is increasing with the dynamic nature of businesses, thus making it indispensable to manage big data, derive meaningful results and interpret management decisions. Objectives: The purpose of this study was to provide a brief conceptual review of big data and analytics and further illustrate the use of a multicriteria decision-making technique in selecting the right skilled candidate for big data and analytics in procurement management. Method: It is important for firms to select and recruit the right data analyst, both in terms of skill sets and scope of analysis. Such a problem is by nature a complex, multicriteria decision-making problem, dealing with both qualitative and quantitative factors. In the current study, an application of the Fuzzy VIsekriterijumska optimizacija i KOmpromisno Resenje (VIKOR) method was used to solve the big data analyst selection problem. Results: From this study, it was identified that Technical knowledge (C1), Intellectual curiosity (C4) and Business acumen (C5) are the strongest influential criteria and must be present in the candidate for the big data and analytics job. Conclusion: Fuzzy VIKOR is well suited to this kind of multiple-criteria decision-making scenario. This study will assist human resource managers and procurement managers in selecting the right workforce for big data analytics.
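The VIKOR ranking step can be illustrated in its crisp (non-fuzzy) form. The sketch below is a generic implementation of classical VIKOR, not the paper's fuzzy variant, and the candidate scores and weights are invented for illustration:

```python
def vikor_rank(matrix, weights, benefit, v=0.5):
    """Rank candidates by classical (crisp) VIKOR.

    matrix[i][j] is the score of candidate i on criterion j; benefit[j] is True
    when larger values of criterion j are better. v trades off group utility (S)
    against individual regret (R). The full method also checks acceptable-
    advantage conditions before declaring a compromise solution; omitted here.
    """
    m, n = len(matrix), len(matrix[0])
    best = [max(r[j] for r in matrix) if benefit[j] else min(r[j] for r in matrix)
            for j in range(n)]
    worst = [min(r[j] for r in matrix) if benefit[j] else max(r[j] for r in matrix)
             for j in range(n)]
    S, R = [], []
    for row in matrix:
        # Normalized weighted distance from the best value on each criterion
        terms = [weights[j] * (best[j] - row[j]) / (best[j] - worst[j])
                 if best[j] != worst[j] else 0.0
                 for j in range(n)]
        S.append(sum(terms))   # group utility
        R.append(max(terms))   # individual regret
    Q = []
    for s, r in zip(S, R):
        qs = (s - min(S)) / (max(S) - min(S)) if max(S) != min(S) else 0.0
        qr = (r - min(R)) / (max(R) - min(R)) if max(R) != min(R) else 0.0
        Q.append(v * qs + (1 - v) * qr)
    return sorted(range(m), key=lambda i: Q[i])   # lowest Q = best compromise

# Three hypothetical analysts scored on C1, C4 and C5 (all benefit criteria)
scores = [[8, 7, 9], [6, 9, 7], [9, 6, 6]]
print(vikor_rank(scores, [0.4, 0.3, 0.3], [True, True, True]))  # → [0, 2, 1]
```

With these made-up numbers the first candidate is the best compromise: it is never far from the best score on any criterion, so both its group utility and its worst-case regret are lowest.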

  20. Pro-inflammatory fatty acid profile and colorectal cancer risk: A Mendelian randomisation analysis.

    Science.gov (United States)

    May-Wilson, Sebastian; Sud, Amit; Law, Philip J; Palin, Kimmo; Tuupanen, Sari; Gylfe, Alexandra; Hänninen, Ulrika A; Cajuso, Tatiana; Tanskanen, Tomas; Kondelin, Johanna; Kaasinen, Eevi; Sarin, Antti-Pekka; Eriksson, Johan G; Rissanen, Harri; Knekt, Paul; Pukkala, Eero; Jousilahti, Pekka; Salomaa, Veikko; Ripatti, Samuli; Palotie, Aarno; Renkonen-Sinisalo, Laura; Lepistö, Anna; Böhm, Jan; Mecklin, Jukka-Pekka; Al-Tassan, Nada A; Palles, Claire; Farrington, Susan M; Timofeeva, Maria N; Meyer, Brian F; Wakil, Salma M; Campbell, Harry; Smith, Christopher G; Idziaszczyk, Shelley; Maughan, Timothy S; Fisher, David; Kerr, Rachel; Kerr, David; Passarelli, Michael N; Figueiredo, Jane C; Buchanan, Daniel D; Win, Aung K; Hopper, John L; Jenkins, Mark A; Lindor, Noralane M; Newcomb, Polly A; Gallinger, Steven; Conti, David; Schumacher, Fred; Casey, Graham; Aaltonen, Lauri A; Cheadle, Jeremy P; Tomlinson, Ian P; Dunlop, Malcolm G; Houlston, Richard S

    2017-10-01

    While dietary fat has been established as a risk factor for colorectal cancer (CRC), associations between fatty acids (FAs) and CRC have been inconsistent. Using Mendelian randomisation (MR), we sought to evaluate associations between polyunsaturated (PUFA), monounsaturated (MUFA) and saturated FAs (SFAs) and CRC risk. We analysed genotype data on 9254 CRC cases and 18,386 controls of European ancestry. Externally weighted polygenic risk scores were generated and used to evaluate associations with CRC per one standard deviation increase in genetically defined plasma FA levels. Risk reduction was observed for the oleic and palmitoleic MUFAs (OR_OA = 0.77, 95% CI: 0.65-0.92, P = 3.9 × 10⁻³; OR_POA = 0.36, 95% CI: 0.15-0.84, P = 0.018). The PUFAs linoleic and arachidonic acid had negative and positive associations with CRC, respectively (OR_LA = 0.95, 95% CI: 0.93-0.98, P = 3.7 × 10⁻⁴; OR_AA = 1.05, 95% CI: 1.02-1.07, P = 1.7 × 10⁻⁴). The SFA stearic acid was associated with increased CRC risk (OR_SA = 1.17, 95% CI: 1.01-1.35, P = 0.041). Results from our analysis are broadly consistent with a pro-inflammatory FA profile having a detrimental effect in terms of CRC risk. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Cloud Based Big Data Infrastructure: Architectural Components and Automated Provisioning

    OpenAIRE

    Demchenko, Yuri; Turkmen, Fatih; Blanchet, Christophe; Loomis, Charles; Laat, Cees de

    2016-01-01

    This paper describes the general architecture and functional components of the cloud based Big Data Infrastructure (BDI). The proposed BDI architecture is based on the analysis of the emerging Big Data and data intensive technologies and supported by the definition of the Big Data Architecture Framework (BDAF) that defines the following components of the Big Data technologies: Big Data definition, Data Management including data lifecycle and data structures, Big Data Infrastructure (generical...

  2. Physics with Big Karl Brainstorming. Abstracts

    International Nuclear Information System (INIS)

    Machner, H.; Lieb, J.

    2000-08-01

    Before summarizing details of the meeting, a short description of the spectrometer facility Big Karl is given. The facility is essentially a new instrument using refurbished dipole magnets from its predecessor. The large-acceptance quadrupole magnets and the beam optics are new. Big Karl has a design very similar to the focussing spectrometers at MAMI (Mainz), AGOR (Groningen) and the high resolution spectrometer (HRS) in Hall A at Jefferson Laboratory, with ΔE/E = 10⁻⁴ but at a somewhat lower maximum momentum. The focal plane detectors, consisting of multiwire drift chambers and scintillating hodoscopes, are similar. Unlike HRS, Big Karl still needs Cerenkov counters and polarimeters in its focal plane; detectors which are necessary to perform some of the experiments proposed during the brainstorming. In addition, Big Karl allows emission angle reconstruction via track measurements in its focal plane with high resolution. In the following, the physics highlights and the proposed and potential experiments are summarized. During the meeting it became obvious that the physics to be explored at Big Karl can be grouped into five distinct categories, and this summary is organized accordingly. (orig.)

  3. The Big Bang (one more time)

    CERN Multimedia

    Spotts, P

    2002-01-01

    For 20 years, Paul Steinhardt has played a key role in helping to write and refine the inflationary "big bang" origin of the universe. But over the past few years, he decided to see if he could come up with a plausible alternative to the prevailing notion (1 page).

  4. Seed bank and big sagebrush plant community composition in a range margin for big sagebrush

    Science.gov (United States)

    Martyn, Trace E.; Bradford, John B.; Schlaepfer, Daniel R.; Burke, Ingrid C.; Laurenroth, William K.

    2016-01-01

    The potential influence of seed bank composition on range shifts of species due to climate change is unclear. Seed banks can provide a means of both species persistence in an area and local range expansion in the case of increasing habitat suitability, as may occur under future climate change. However, a mismatch between the seed bank and the established plant community may represent an obstacle to persistence and expansion. In big sagebrush (Artemisia tridentata) plant communities in Montana, USA, we compared the seed bank to the established plant community. There was less than a 20% similarity in the relative abundance of species between the established plant community and the seed bank. This difference was primarily driven by an overrepresentation of native annual forbs and an underrepresentation of big sagebrush in the seed bank compared to the established plant community. Even though we expect an increase in habitat suitability for big sagebrush under future climate conditions at our sites, the current mismatch between the plant community and the seed bank could impede big sagebrush range expansion into increasingly suitable habitat in the future.

  5. Application and Prospect of Big Data in Water Resources

    Science.gov (United States)

    Xi, Danchi; Xu, Xinyi

    2017-04-01

    Because of developed information technology and affordable data storage, we have entered the era of data explosion. The term "Big Data" and the technology related to it have been created and commonly applied in many fields. However, academic studies have only recently turned their attention to Big Data applications in water resources. As a result, water resource Big Data technology has not been fully developed. This paper introduces the concept of Big Data and its key technologies, including the Hadoop system and MapReduce. In addition, this paper focuses on the significance of applying big data in water resources and summarizes prior research by others. Most studies in this field only set up a theoretical frame, but we define "Water Big Data" and explain its three-dimensional properties: the time dimension, spatial dimension and intelligent dimension. Based on HBase, a classification system of Water Big Data is introduced: hydrology data, ecology data and socio-economic data. Then, after analyzing the challenges in water resources management, a series of solutions using Big Data technologies, such as data mining and web crawlers, are proposed. Finally, the prospect of applying big data in water resources is discussed; it can be predicted that, as Big Data technology keeps developing, "3D" (Data-Driven Decision) will be utilized more in water resources management in the future.
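MapReduce, mentioned above as one of the key Big Data technologies, boils down to a map phase that emits key-value pairs, a shuffle that groups them by key, and a reduce phase that aggregates each group. A minimal pure-Python imitation of those three phases (the gauge-log records are invented for illustration; a real deployment would run this distributed under Hadoop):

```python
from collections import defaultdict
from itertools import chain

def map_phase(record):
    """Map: emit one (word, 1) pair per word in a record."""
    return [(word.lower(), 1) for word in record.split()]

def shuffle_phase(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate the values of each key (here, by summing counts)."""
    return {key: sum(values) for key, values in groups.items()}

records = ["flow gauge 7 reports flow", "gauge 7 offline"]
pairs = chain.from_iterable(map_phase(r) for r in records)
counts = reduce_phase(shuffle_phase(pairs))
print(counts["flow"], counts["gauge"])   # → 2 2
```

Because map runs independently per record and reduce independently per key, both phases parallelize across cluster nodes, which is what makes the model suitable for the data volumes the abstract describes.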

  6. Big Data in food and agriculture

    Directory of Open Access Journals (Sweden)

    Kelly Bronson

    2016-06-01

    Farming is undergoing a digital revolution. Our review of current Big Data applications in the agri-food sector has revealed several collection and analytics tools that may have implications for relationships of power between players in the food system (e.g. between farmers and large corporations). For example, who retains ownership of the data generated by applications like Monsanto Corporation's Weed I.D. "app"? Are there privacy implications with the data gathered by John Deere's precision agricultural equipment? Systematically tracing the digital revolution in agriculture, and charting the affordances as well as the limitations of Big Data applied to food and agriculture, should be a broad research goal for Big Data scholarship. Such a goal brings data scholarship into conversation with food studies and allows for a focus on the material consequences of big data in society.

  7. Big data optimization recent developments and challenges

    CERN Document Server

    2016-01-01

    The main objective of this book is to provide the necessary background to work with big data by introducing novel optimization algorithms and codes capable of working in the big data setting, as well as applications of big data optimization, for interested academics and practitioners, and to benefit society, industry, academia, and government. Presenting applications in a variety of industries, this book will be useful for researchers aiming to analyse large-scale data. Several optimization algorithms for big data, including convergent parallel algorithms, the limited memory bundle algorithm, the diagonal bundle method, network analytics, and many more, are explored in this book.

  8. Una aproximación a Big Data = An approach to Big Data

    OpenAIRE

    Puyol Moreno, Javier

    2014-01-01

    Big Data can be considered a trend in the advance of technology that has opened the door to a new approach to understanding and decision-making, and is used to describe the enormous quantities of data (structured, unstructured and semi-structured) that would be too time-consuming and costly to load into a relational database for analysis. Thus, the concept of Big Data applies to all the information that cannot be processed or analysed using traditional tools...

  9. Toward a Literature-Driven Definition of Big Data in Healthcare.

    Science.gov (United States)

    Baro, Emilie; Degoul, Samuel; Beuscart, Régis; Chazard, Emmanuel

    2015-01-01

    The aim of this study was to provide a definition of big data in healthcare. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. A total of 196 papers were included. Big data can be defined as datasets with Log(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data.
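The proposed threshold can be checked mechanically. The sketch below assumes the logarithm is base 10 (an assumption; the abstract does not state the base):

```python
import math

def is_big_data(n, p):
    """Big data per the proposed definition: Log(n * p) >= 7.

    n is the number of statistical individuals and p the number of variables.
    Base 10 is assumed here; with it, the threshold amounts to a dataset of
    at least 10**7 cells (individuals times variables).
    """
    return math.log10(n * p) >= 7

print(is_big_data(10_000, 1_000))   # 10^7 cells → True
print(is_big_data(500, 100))        # 5 × 10^4 cells → False
```

Note that under this definition a modest cohort with very many variables (e.g. omics data) qualifies just as well as a huge cohort with few variables, which matches the paper's point that big data is defined by volume rather than by reuse.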

  10. Big Data Analytic, Big Step for Patient Management and Care in Puerto Rico.

    Science.gov (United States)

    Borrero, Ernesto E

    2018-01-01

    This letter provides an overview of the application of big data in the health-care system to improve quality of care, including predictive modelling for risk and resource use, precision medicine and clinical decision support, quality of care and performance measurement, and public health and research applications, among others. The author delineates the tremendous potential of big data analytics and discusses how it can be successfully implemented in clinical practice, as an important component of a learning health-care system.

  11. 45 CFR 98.47 - Nondiscrimination in employment on the basis of religion.

    Science.gov (United States)

    2010-10-01

    ... religion. 98.47 Section 98.47 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION... Requirements § 98.47 Nondiscrimination in employment on the basis of religion. (a) In general, except as... religion. (1) Child care providers that receive assistance through grants or contracts under the CCDF shall...

  12. Big Data and Biomedical Informatics: A Challenging Opportunity

    Science.gov (United States)

    2014-01-01

    Summary Big data are receiving an increasing attention in biomedicine and healthcare. It is therefore important to understand the reason why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler to carry out unprecedented research studies and to implement new models of healthcare delivery. Therefore, it is first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is mandatory to understand where big data are present, and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries on relevant implications in terms of reproducibility of research studies and management of privacy and data access; proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept drift machine learning algorithms, which will not only contribute to big data research, but may be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions or over-enthusiasms, to fully exploit the available technologies, and to improve data processing and data management regulations. PMID:24853034

  13. Big data governance an emerging imperative

    CERN Document Server

    Soares, Sunil

    2012-01-01

    Written by a leading expert in the field, this guide focuses on the convergence of two major trends in information management-big data and information governance-by taking a strategic approach oriented around business cases and industry imperatives. With the advent of new technologies, enterprises are expanding and handling very large volumes of data; this book, nontechnical in nature and geared toward business audiences, encourages the practice of establishing appropriate governance over big data initiatives and addresses how to manage and govern big data, highlighting the relevant processes,

  14. Big Data and historical social science

    Directory of Open Access Journals (Sweden)

    Peter Bearman

    2015-11-01

    “Big Data” can revolutionize historical social science if it arises from substantively important contexts and is oriented towards answering substantively important questions. Such data may be especially important for answering previously largely intractable questions about the timing and sequencing of events, and of event boundaries. That said, “Big Data” makes no difference for social scientists and historians whose accounts rest on narrative sentences. Since such accounts are the norm, the effects of Big Data on the practice of historical social science may be more limited than one might wish.

  15. SCOPE1: a randomised phase II/III multicentre clinical trial of definitive chemoradiation, with or without cetuximab, in carcinoma of the oesophagus

    International Nuclear Information System (INIS)

    Hurt, Christopher N; Nixon, Lisette S; Griffiths, Gareth O; Al-Mokhtar, Ruby; Gollins, Simon; Staffurth, John N; Phillips, Ceri J; Blazeby, Jane M; Crosby, Tom D

    2011-01-01

    Chemoradiotherapy is the standard of care for patients with oesophageal cancer unsuitable for surgery due to the presence of co-morbidity or extent of disease, and is a standard treatment option for patients with squamous cell carcinoma of the oesophagus. Modern regimens of chemoradiotherapy can lead to significant long-term survival. However the majority of patients will die of their disease, most commonly with local progression/recurrence of their tumours. Cetuximab may overcome one of the principal mechanisms of tumour radio-resistance, namely tumour repopulation, in patients treated with chemoradiotherapy. The purpose of this research is first to determine whether the addition of cetuximab to definitive chemoradiotherapy for treatment of patients with non-metastatic carcinoma of the oesophagus is active (in terms of failure-free rate), safe, and feasible within the context of a multi-centre randomised controlled trial in the UK. If the first stage is successful then the trial will continue to accrue sufficient patients to establish whether the addition of cetuximab to the standard treatment improves overall survival. SCOPE1 is a two arm, open, randomised multicentre Phase II/III trial. Eligible patients will have histologically confirmed carcinoma of the oesophagus and have been chosen to receive definitive chemoradiotherapy by an accredited multidisciplinary team including a specialist Upper GI surgeon. 420 patients will be randomised to receive definitive chemoradiotherapy with or without cetuximab using a 1:1 allocation ratio. During Phase II of the study, the trial will assess safety (toxicity), activity (failure-free rate) and feasibility (recruitment rate and protocol dose modifications/delays) in 90 patients in the experimental arm. If the experimental arm is found to be active, safe, and feasible by the Independent Data Monitoring Committee then recruitment will continue into Phase III. This second stage will recruit a further 120 patients into each arm

  16. SCOPE1: a randomised phase II/III multicentre clinical trial of definitive chemoradiation, with or without cetuximab, in carcinoma of the oesophagus

    Directory of Open Access Journals (Sweden)

    Staffurth John N

    2011-10-01

Full Text Available Abstract Background Chemoradiotherapy is the standard of care for patients with oesophageal cancer unsuitable for surgery due to the presence of co-morbidity or extent of disease, and is a standard treatment option for patients with squamous cell carcinoma of the oesophagus. Modern regimens of chemoradiotherapy can lead to significant long-term survival. However the majority of patients will die of their disease, most commonly with local progression/recurrence of their tumours. Cetuximab may overcome one of the principal mechanisms of tumour radio-resistance, namely tumour repopulation, in patients treated with chemoradiotherapy. The purpose of this research is first to determine whether the addition of cetuximab to definitive chemoradiotherapy for treatment of patients with non-metastatic carcinoma of the oesophagus is active (in terms of failure-free rate), safe, and feasible within the context of a multi-centre randomised controlled trial in the UK. If the first stage is successful then the trial will continue to accrue sufficient patients to establish whether the addition of cetuximab to the standard treatment improves overall survival. Methods/Design SCOPE1 is a two arm, open, randomised multicentre Phase II/III trial. Eligible patients will have histologically confirmed carcinoma of the oesophagus and have been chosen to receive definitive chemoradiotherapy by an accredited multidisciplinary team including a specialist Upper GI surgeon. 420 patients will be randomised to receive definitive chemoradiotherapy with or without cetuximab using a 1:1 allocation ratio. During Phase II of the study, the trial will assess safety (toxicity), activity (failure-free rate) and feasibility (recruitment rate and protocol dose modifications/delays) in 90 patients in the experimental arm. If the experimental arm is found to be active, safe, and feasible by the Independent Data Monitoring Committee then recruitment will continue into Phase III. This second

  17. The Inverted Big-Bang

    OpenAIRE

    Vaas, Ruediger

    2004-01-01

    Our universe appears to have been created not out of nothing but from a strange space-time dust. Quantum geometry (loop quantum gravity) makes it possible to avoid the ominous beginning of our universe with its physically unrealistic (i.e. infinite) curvature, extreme temperature, and energy density. This could be the long sought after explanation of the big-bang and perhaps even opens a window into a time before the big-bang: Space itself may have come from an earlier collapsing universe tha...

  18. Minsky on "Big Government"

    Directory of Open Access Journals (Sweden)

    Daniel de Santana Vasconcelos

    2014-03-01

Full Text Available The objective of this paper is to assess, in light of Minsky's main works, his view and analysis of what he called "Big Government": that huge institution which, in parallel with the "Big Bank", was capable of ensuring stability in the capitalist system and of regulating its inherently unstable financial system in the mid-20th century. In this work, we analyze how Minsky proposes an active role for the government in a complex economic system flawed by financial instability.

  19. The Information Panopticon in the Big Data Era

    Directory of Open Access Journals (Sweden)

    Martin Berner

    2014-04-01

    Full Text Available Taking advantage of big data opportunities is challenging for traditional organizations. In this article, we take a panoptic view of big data – obtaining information from more sources and making it visible to all organizational levels. We suggest that big data requires the transformation from command and control hierarchies to post-bureaucratic organizational structures wherein employees at all levels can be empowered while simultaneously being controlled. We derive propositions that show how to best exploit big data technologies in organizations.

  20. De impact van Big Data op Internationale Betrekkingen

    NARCIS (Netherlands)

    Zwitter, Andrej

    Big Data changes our daily lives, but does it also change international politics? In this contribution, Andrej Zwitter (NGIZ chair at Groningen University) argues that Big Data impacts on international relations in ways that we only now start to understand. To comprehend how Big Data influences

  1. Epidemiology in the Era of Big Data

    Science.gov (United States)

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-01-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called ‘3 Vs’: variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that, while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field’s future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future. PMID:25756221

  2. Big data and analytics strategic and organizational impacts

    CERN Document Server

    Morabito, Vincenzo

    2015-01-01

    This book presents and discusses the main strategic and organizational challenges posed by Big Data and analytics in a manner relevant to both practitioners and scholars. The first part of the book analyzes strategic issues relating to the growing relevance of Big Data and analytics for competitive advantage, which is also attributable to empowerment of activities such as consumer profiling, market segmentation, and development of new products or services. Detailed consideration is also given to the strategic impact of Big Data and analytics on innovation in domains such as government and education and to Big Data-driven business models. The second part of the book addresses the impact of Big Data and analytics on management and organizations, focusing on challenges for governance, evaluation, and change management, while the concluding part reviews real examples of Big Data and analytics innovation at the global level. The text is supported by informative illustrations and case studies, so that practitioners...

  3. 49 CFR 98.12 - Administrative sanctions.

    Science.gov (United States)

    2010-10-01

    ... an administrative sanction against a former employee who, after a final administrative decision under... imposed under subsection (a) of this section are: (1) Prohibiting the former employee from making, on... ACTIVITIES Administrative Sanctions § 98.12 Administrative sanctions. (a) The Secretary, in decisions under...

  4. Big Science and Long-tail Science

    CERN Document Server

    2008-01-01

Jim Downing and I were privileged to be the guests of Salvatore Mele at CERN yesterday and to see the ATLAS detector of the Large Hadron Collider. This is a wow experience - although I knew it was big, I hadn't realised how big.

  5. Toward a Literature-Driven Definition of Big Data in Healthcare

    Directory of Open Access Journals (Sweden)

    Emilie Baro

    2015-01-01

Full Text Available Objective. The aim of this study was to provide a definition of big data in healthcare. Methods. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. Results. A total of 196 papers were included. Big data can be defined as datasets with Log(n*p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Conclusion. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data.

  6. Toward a Literature-Driven Definition of Big Data in Healthcare

    Science.gov (United States)

    Baro, Emilie; Degoul, Samuel; Beuscart, Régis; Chazard, Emmanuel

    2015-01-01

    Objective. The aim of this study was to provide a definition of big data in healthcare. Methods. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. Results. A total of 196 papers were included. Big data can be defined as datasets with Log⁡(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Conclusion. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data. PMID:26137488
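The volume criterion in this definition can be applied directly. Below is a minimal sketch, assuming the Log in Log(n∗p) ≥ 7 is a base-10 logarithm; the dataset sizes are illustrative, not figures from the review.

```python
import math

def is_big_data(n, p):
    """Volume criterion from the definition above: log10(n * p) >= 7.
    n = number of statistical individuals, p = number of variables."""
    return math.log10(n * p) >= 7

# Illustrative sizes, not taken from the reviewed papers:
print(is_big_data(500_000, 30))  # n*p = 1.5e7, log10 ~ 7.18 -> True
print(is_big_data(10_000, 50))   # n*p = 5e5, log10 ~ 5.70 -> False
```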

  7. Group sequential designs for stepped-wedge cluster randomised trials.

    Science.gov (United States)

    Grayling, Michael J; Wason, James Ms; Mander, Adrian P

    2017-10-01

    The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. Utilising the error spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trials with interim analyses. We consider early stopping for efficacy, futility, or efficacy and futility. We describe first how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial's type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate. The addition of interim analyses into stepped-wedge cluster randomised trials could help guard against time-consuming trials conducted on poor performing treatments and also help expedite the implementation of efficacious treatments. 
In future, trialists should consider incorporating early stopping of some kind into

  8. Big-Eyed Bugs Have Big Appetite for Pests

    Science.gov (United States)

    Many kinds of arthropod natural enemies (predators and parasitoids) inhabit crop fields in Arizona and can have a large negative impact on several pest insect species that also infest these crops. Geocoris spp., commonly known as big-eyed bugs, are among the most abundant insect predators in field c...

  9. Big Data - What is it and why it matters.

    Science.gov (United States)

    Tattersall, Andy; Grant, Maria J

    2016-06-01

    Big data, like MOOCs, altmetrics and open access, is a term that has been commonplace in the library community for some time yet, despite its prevalence, many in the library and information sector remain unsure of the relationship between big data and their roles. This editorial explores what big data could mean for the day-to-day practice of health library and information workers, presenting examples of big data in action, considering the ethics of accessing big data sets and the potential for new roles for library and information workers. © 2016 Health Libraries Group.

  10. 29 CFR 1915.98 - First aid.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 7 2010-07-01 2010-07-01 false First aid. 1915.98 Section 1915.98 Labor Regulations...) OCCUPATIONAL SAFETY AND HEALTH STANDARDS FOR SHIPYARD EMPLOYMENT General Working Conditions § 1915.98 First aid...) Unless a first aid room and a qualified attendant are close at hand and prepared to render first aid to...

  11. Research on information security in big data era

    Science.gov (United States)

    Zhou, Linqi; Gu, Weihong; Huang, Cheng; Huang, Aijun; Bai, Yongbin

    2018-05-01

Big data is becoming another hotspot in the field of information technology, after cloud computing and the Internet of Things. However, existing information security methods can no longer meet the information security requirements of the era of big data. This paper analyzes the challenges to, and causes of, data security problems brought by big data, discusses the development trend of network attacks against the background of big data, and puts forward our own opinions on the development of security defenses in technology, strategy and products.

  12. BIG DATA IN BUSINESS ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Logica BANICA

    2015-06-01

Full Text Available In recent years, dealing with large amounts of data originating from social media sites and mobile communications, alongside data from business environments and institutions, has led to the definition of a new concept, known as Big Data. The economic impact of the sheer amount of data produced in the last two years has increased rapidly. It is necessary to aggregate all types of data (structured and unstructured) in order to improve current transactions, to develop new business models, to provide a real image of supply and demand and thereby to generate market advantages. Thus, companies that turn to Big Data have a competitive advantage over other firms. From the perspective of IT organizations, they must accommodate the storage and processing of Big Data, and provide analysis tools that are easily integrated into business processes. This paper aims to discuss aspects of the Big Data concept and the principles for building, organizing and analysing huge datasets in the business environment, offering a three-layer architecture based on actual software solutions. The article also refers to graphical tools for exploring and representing unstructured data, Gephi and NodeXL.

  13. Fuzzy 2-partition entropy threshold selection based on Big Bang–Big Crunch Optimization algorithm

    Directory of Open Access Journals (Sweden)

    Baljit Singh Khehra

    2015-03-01

Full Text Available The fuzzy 2-partition entropy approach has been widely used to select threshold values for image segmentation. This approach uses two parameterized fuzzy membership functions to form a fuzzy 2-partition of the image. The optimal threshold is selected by searching for an optimal combination of parameters of the membership functions such that the entropy of the fuzzy 2-partition is maximized. In this paper, a new fuzzy 2-partition entropy thresholding approach based on Big Bang–Big Crunch Optimization (BBBCO) is proposed. The new thresholding approach is called the BBBCO-based fuzzy 2-partition entropy thresholding algorithm. BBBCO is used to search for an optimal combination of parameters of the membership functions that maximizes the entropy of the fuzzy 2-partition. BBBCO is inspired by the theory of the evolution of the universe, namely the Big Bang and Big Crunch Theory. The proposed algorithm is tested on a number of standard test images. For comparison, three other algorithms, Genetic Algorithm (GA)-based, Biogeography-based Optimization (BBO)-based and recursive approaches, are also implemented. From the experimental results, it is observed that the proposed algorithm is more effective than the GA-based, BBO-based and recursion-based approaches.
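The entropy criterion this abstract describes can be sketched compactly. The snippet below is an illustration only: it uses a simple linear membership function and an exhaustive parameter search standing in for the paper's parameterized memberships and BBBCO, and the function names and toy histogram are ours, not the paper's.

```python
import math

def fuzzy_2partition_entropy(hist, a, c):
    """Entropy of the fuzzy 2-partition induced by a linear membership
    function that rises from 0 at gray level a to 1 at gray level c."""
    total = sum(hist)
    p_bright = 0.0
    for g, count in enumerate(hist):
        if g <= a:
            mu = 0.0
        elif g >= c:
            mu = 1.0
        else:
            mu = (g - a) / (c - a)
        p_bright += mu * count / total
    p_dark = 1.0 - p_bright
    # Shannon entropy of the two fuzzy class probabilities
    return -sum(p * math.log(p) for p in (p_dark, p_bright) if p > 0)

def best_threshold(hist):
    """Exhaustive search over the membership parameters (a, c); the paper
    replaces this double loop with Big Bang-Big Crunch optimization.
    The threshold is taken at the membership crossover, (a + c) // 2."""
    best_h, best_t = -1.0, 0
    levels = len(hist)
    for a in range(levels - 1):
        for c in range(a + 1, levels):
            h = fuzzy_2partition_entropy(hist, a, c)
            if h > best_h:
                best_h, best_t = h, (a + c) // 2
    return best_t

# Bimodal toy histogram over 16 gray levels, with modes at the two ends
hist = [10] * 4 + [0] * 8 + [10] * 4
print(best_threshold(hist))  # the selected threshold falls between the modes
```

Entropy is maximized when the two fuzzy class probabilities are balanced, so on this symmetric histogram the crossover lands midway between the two modes.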

  14. Big Data als Informationsquelle für regionales Arbeitsmarkt-Monitoring: Online-Stellenanzeigen analysieren mittels "Jobfeed"

    OpenAIRE

    Plaimauer, Claudia

    2016-01-01

1 Introduction. 2 Challenges in interpreting job advertisements. 3 Challenges in dealing with Big Data and semantic technologies. 4 The Big Data platform "Jobfeed". 5 Outlook. 6 References.

  15. A little big history of Tiananmen

    NARCIS (Netherlands)

    Quaedackers, E.; Grinin, L.E.; Korotayev, A.V.; Rodrigue, B.H.

    2011-01-01

    This contribution aims at demonstrating the usefulness of studying small-scale subjects such as Tiananmen, or the Gate of Heavenly Peace, in Beijing - from a Big History perspective. By studying such a ‘little big history’ of Tiananmen, previously overlooked yet fundamental explanations for why

  16. Systematic reviews of randomised clinical trials examining the effects of psychotherapeutic interventions versus "no intervention" for acute major depressive disorder and a randomised trial examining the effects of "third wave" cognitive therapy versus mentalization-based treatment for acute major depressive disorder.

    Science.gov (United States)

    Jakobsen, Janus Christian

    2014-10-01

% confidence interval -3.98 to -2.03; p = 0.00001), no significant heterogeneity between trials). Trial sequential analysis confirmed this result. The second systematic review included 12 randomised trials examining the effects of cognitive therapy versus "no intervention" for major depressive disorder. Altogether a total of 669 participants were randomised. All trials had high risk of bias. Meta-analysis showed that cognitive therapy significantly reduced depressive symptoms on the HDRS compared with "no intervention" (four trials; mean difference -3.05 (95% confidence interval, -5.23 to -0.87; p = 0.006)). Trial sequential analysis could not confirm this result. The trial protocol showed that it seemed feasible to conduct a randomised trial with low risks of bias and low risks of random errors examining the effects of "third wave" cognitive therapy versus mentalization-based therapy in a setting in the Danish healthcare system. It turned out to be much more difficult to recruit participants in the randomised trial than expected. We only included about half of the planned participants. The results from the randomised trial showed that participants randomised to "third wave" therapy compared with participants randomised to mentalization-based treatment had borderline significantly lower HDRS scores at 18 weeks in an unadjusted analysis (mean difference -4.14 score; 95% CI -8.30 to 0.03; p = 0.051). In the adjusted analysis, the difference was significant (p = 0.039). Five (22.7%) of the participants randomised to "third wave" cognitive therapy had remission at 18 weeks versus none of the participants randomised to mentalization-based treatment (p = 0.049). Sequential analysis showed that these findings could be due to random errors. No significant differences between the two groups were found regarding Beck's Depression Inventory (BDI II), Symptom Checklist 90 Revised (SCL 90-R), and The World Health Organization-Five Well-being Index 1999 (WHO 5). 
We concluded that cognitive

  17. Korean Version of the Delirium Rating Scale-Revised-98: Reliability and Validity

    Science.gov (United States)

    Ryu, Jian; Lee, Jinyoung; Kim, Hwi-Jung; Shin, Im Hee; Kim, Jeong-Lan; Trzepacz, Paula T.

    2011-01-01

    Objective The aims of the present study were 1) to standardize the validity and reliability of the Korean version of Delirium Rating Scale-Revised-98 (DRS-R98-K) and 2) to establish the optimum cut-off value, sensitivity, and specificity for discriminating delirium from other non-delirious psychiatric conditions. Methods Using DSM-IV criteria, 157 subjects (69 delirium, 29 dementia, 32 schizophrenia, and 27 other psychiatric patients) were enrolled. Subjects were evaluated using DRS-R98-K, DRS-K, Mini-Mental State Examination (MMSE-K), and Clinical Global Impression-Severity (CGI-S) scale. Results DRS-R98-K total and severity scores showed high correlations with DRS-K. They were significantly different across all groups (p=0.000). However, neither MMSE-K nor CGI-S distinguished delirium from dementia. All DRS-R98-K diagnostic items (#14-16) and items #1 and 2 significantly discriminated delirium from dementia. Cronbach's alpha coefficient revealed high internal consistency for DRS-R98-K total (r=0.91) and severity (r=0.89) scales. Interrater reliability (ICC between 0.96 and 1) was very high. Using receiver operating characteristic analysis, the area under the curve of DRS-R98-K total score was 0.948 between the delirium group and all other groups and 0.873 between the delirium and dementia groups. The best cut-off scores in DRS-R98-K total score were 18.5 and 19.5 between the delirium and the other three groups and 20.5 between the delirium and dementia groups. Conclusion We demonstrated that DRS-R98-K is a valid and reliable instrument for assessing delirium severity and diagnosis and discriminating delirium from dementia and other psychiatric disorders in Korean patients. PMID:21519534
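The sensitivity and specificity behind cut-off scores like those reported above reduce to simple counts of patients scoring above and below a candidate total score. A minimal sketch, with made-up scores rather than the study's data:

```python
def sens_spec(scores_cases, scores_controls, cutoff):
    """Sensitivity/specificity of 'score > cutoff' as a test for caseness
    (here: a DRS-R98-K total score above the cut-off suggesting delirium)."""
    tp = sum(s > cutoff for s in scores_cases)        # cases correctly flagged
    tn = sum(s <= cutoff for s in scores_controls)    # controls correctly cleared
    return tp / len(scores_cases), tn / len(scores_controls)

# Made-up illustrative totals, not data from the study
delirium = [22, 25, 19, 30, 27, 21]
dementia = [12, 17, 20, 9, 14, 11]
print(sens_spec(delirium, dementia, 18.5))  # sensitivity 1.0, specificity 5/6
```

Sweeping the cutoff over all observed scores and plotting sensitivity against 1 - specificity gives the receiver operating characteristic curve used in the analysis above.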

  18. Big-pharmaceuticalisation: clinical trials and Contract Research Organisations in India.

    Science.gov (United States)

    Sariola, Salla; Ravindran, Deapica; Kumar, Anand; Jeffery, Roger

    2015-04-01

The World Trade Organisation's Trade Related Intellectual Property Rights [TRIPS] agreement aimed to harmonise intellectual property rights and patent protection globally. In India, the signing of this agreement resulted in a sharp increase in clinical trials since 2005. The Indian government, along with larger Indian pharmaceutical companies, believed that they could change existing commercial research cultures through the promotion of basic research as well as attracting international clinical trials, and thus create an international-level, innovation-based drug industry. The effects of the growth of these outsourced and off-shored clinical trials on local commercial knowledge production in India are still unclear. What has been the impact of the increasing scale and commercialisation of clinical research on corporate science in India? In this paper we describe Big-pharmaceuticalisation in India, whereby the local pharmaceutical industry is moving from generic manufacturing to innovative research. Using conceptual frameworks of pharmaceuticalisation and innovation, this paper analyses data from research conducted in 2010-2012 and describes how Contract Research Organisations (CROs) enable outsourcing of randomised controlled trials to India. Focussing on twenty-five semi-structured interviews with CRO staff, we chart the changes in the Indian pharmaceutical industry, and the implications for local research cultures. We use Big-pharmaceuticalisation to extend the notion of pharmaceuticalisation to describe the spread of pharmaceutical research globally and illustrate how TRIPS has encouraged a concentration of capital in India, with large companies gaining increasing market share and using their market power to rewrite regulations and introduce new regulatory practices in their own interest. Contract Research Organisations, with relevant, new, epistemic skills and capacities, are both manifestations of the changes in commercial research cultures, as well as the vehicles to

  19. 46 CFR 98.30-7 - Smoking.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Smoking. 98.30-7 Section 98.30-7 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CARGO AND MISCELLANEOUS VESSELS SPECIAL CONSTRUCTION, ARRANGEMENT, AND OTHER PROVISIONS FOR CERTAIN DANGEROUS CARGOES IN BULK Portable Tanks § 98.30-7 Smoking. No person may smoke within 50 feet of a...

  20. Big Defensins, a Diverse Family of Antimicrobial Peptides That Follows Different Patterns of Expression in Hemocytes of the Oyster Crassostrea gigas

    Science.gov (United States)

    Rosa, Rafael D.; Santini, Adrien; Fievet, Julie; Bulet, Philippe; Destoumieux-Garzón, Delphine; Bachère, Evelyne

    2011-01-01

    Background Big defensin is an antimicrobial peptide composed of a highly hydrophobic N-terminal region and a cationic C-terminal region containing six cysteine residues involved in three internal disulfide bridges. While big defensin sequences have been reported in various mollusk species, few studies have been devoted to their sequence diversity, gene organization and their expression in response to microbial infections. Findings Using the high-throughput Digital Gene Expression approach, we have identified in Crassostrea gigas oysters several sequences coding for big defensins induced in response to a Vibrio infection. We showed that the oyster big defensin family is composed of three members (named Cg-BigDef1, Cg-BigDef2 and Cg-BigDef3) that are encoded by distinct genomic sequences. All Cg-BigDefs contain a hydrophobic N-terminal domain and a cationic C-terminal domain that resembles vertebrate β-defensins. Both domains are encoded by separate exons. We found that big defensins form a group predominantly present in mollusks and closer to vertebrate defensins than to invertebrate and fungi CSαβ-containing defensins. Moreover, we showed that Cg-BigDefs are expressed in oyster hemocytes only and follow different patterns of gene expression. While Cg-BigDef3 is non-regulated, both Cg-BigDef1 and Cg-BigDef2 transcripts are strongly induced in response to bacterial challenge. Induction was dependent on pathogen associated molecular patterns but not damage-dependent. The inducibility of Cg-BigDef1 was confirmed by HPLC and mass spectrometry, since ions with a molecular mass compatible with mature Cg-BigDef1 (10.7 kDa) were present in immune-challenged oysters only. From our biochemical data, native Cg-BigDef1 would result from the elimination of a prepropeptide sequence and the cyclization of the resulting N-terminal glutamine residue into a pyroglutamic acid. Conclusions We provide here the first report showing that big defensins form a family of antimicrobial

  1. Big defensins, a diverse family of antimicrobial peptides that follows different patterns of expression in hemocytes of the oyster Crassostrea gigas.

    Directory of Open Access Journals (Sweden)

    Rafael D Rosa

Full Text Available BACKGROUND: Big defensin is an antimicrobial peptide composed of a highly hydrophobic N-terminal region and a cationic C-terminal region containing six cysteine residues involved in three internal disulfide bridges. While big defensin sequences have been reported in various mollusk species, few studies have been devoted to their sequence diversity, gene organization and their expression in response to microbial infections. FINDINGS: Using the high-throughput Digital Gene Expression approach, we have identified in Crassostrea gigas oysters several sequences coding for big defensins induced in response to a Vibrio infection. We showed that the oyster big defensin family is composed of three members (named Cg-BigDef1, Cg-BigDef2 and Cg-BigDef3) that are encoded by distinct genomic sequences. All Cg-BigDefs contain a hydrophobic N-terminal domain and a cationic C-terminal domain that resembles vertebrate β-defensins. Both domains are encoded by separate exons. We found that big defensins form a group predominantly present in mollusks and closer to vertebrate defensins than to invertebrate and fungi CSαβ-containing defensins. Moreover, we showed that Cg-BigDefs are expressed in oyster hemocytes only and follow different patterns of gene expression. While Cg-BigDef3 is non-regulated, both Cg-BigDef1 and Cg-BigDef2 transcripts are strongly induced in response to bacterial challenge. Induction was dependent on pathogen associated molecular patterns but not damage-dependent. The inducibility of Cg-BigDef1 was confirmed by HPLC and mass spectrometry, since ions with a molecular mass compatible with mature Cg-BigDef1 (10.7 kDa) were present in immune-challenged oysters only. From our biochemical data, native Cg-BigDef1 would result from the elimination of a prepropeptide sequence and the cyclization of the resulting N-terminal glutamine residue into a pyroglutamic acid. CONCLUSIONS: We provide here the first report showing that big defensins form a family

  2. Addressing big data issues in Scientific Data Infrastructure

    NARCIS (Netherlands)

    Demchenko, Y.; Membrey, P.; Grosso, P.; de Laat, C.; Smari, W.W.; Fox, G.C.

    2013-01-01

    Big Data are becoming a new technology focus both in science and in industry. This paper discusses the challenges that are imposed by Big Data on the modern and future Scientific Data Infrastructure (SDI). The paper discusses a nature and definition of Big Data that include such features as Volume,

  3. Improving Healthcare Using Big Data Analytics

    Directory of Open Access Journals (Sweden)

    Revanth Sonnati

    2017-03-01

Full Text Available In everyday terms we call the current era the Modern Era, which in the field of Information Technology can also be named the era of Big Data. Our daily lives in today's world are advancing rapidly, never quenching one's thirst. The fields of science, engineering and technology are producing data at an exponential rate, leading to exabytes of data every day. Big data helps us to explore and re-invent many areas, not limited to education, health and law. The primary purpose of this paper is to provide an in-depth analysis of healthcare using big data and analytics. The main emphasis is that, while the big data being stored all the time helps us look back at the history, it is now time to emphasize analysis in order to improve medication and services. Although many big data implementations happen to be in-house developments, the implementation proposed here aims at a broader extent using Hadoop, which just happens to be the tip of the iceberg. The focus of this paper is not limited to the improvement and analysis of the data; it also focusses on the strengths and drawbacks compared to the conventional techniques available.

  4. Big Data - Smart Health Strategies

    Science.gov (United States)

    2014-01-01

    Summary Objectives To select best papers published in 2013 in the field of big data and smart health strategies, and summarize outstanding research efforts. Methods A systematic search was performed using two major bibliographic databases for relevant journal papers. The references obtained were reviewed in a two-stage process, starting with a blinded review performed by the two section editors, and followed by a peer review process operated by external reviewers recognized as experts in the field. Results The complete review process selected four best papers, illustrating various aspects of the special theme, among them: (a) using large volumes of unstructured data and, specifically, clinical notes from Electronic Health Records (EHRs) for pharmacovigilance; (b) knowledge discovery via querying large volumes of complex (both structured and unstructured) biological data using big data technologies and relevant tools; (c) methodologies for applying cloud computing and big data technologies in the field of genomics, and (d) system architectures enabling high-performance access to and processing of large datasets extracted from EHRs. Conclusions The potential of big data in biomedicine has been pinpointed in various viewpoint papers and editorials. The review of current scientific literature illustrated a variety of interesting methods and applications in the field, but still the promises exceed the current outcomes. As we are getting closer towards a solid foundation with respect to common understanding of relevant concepts and technical aspects, and the use of standardized technologies and tools, we can anticipate to reach the potential that big data offer for personalized medicine and smart health strategies in the near future. PMID:25123721

  5. About Big Data and its Challenges and Benefits in Manufacturing

    OpenAIRE

    Bogdan NEDELCU

    2013-01-01

    The aim of this article is to show the importance of Big Data and its growing influence on companies. It also shows what kinds of big data are currently generated and how much big data is estimated to be generated. We can also see how much companies are willing to invest in big data and how much they are currently gaining from it. Also shown are some major influences that big data has on one major segment of the industry (manufacturing) and the challenges that appear.

  6. Big Data Management in US Hospitals: Benefits and Barriers.

    Science.gov (United States)

    Schaeffer, Chad; Booton, Lawrence; Halleck, Jamey; Studeny, Jana; Coustasse, Alberto

    Big data has been considered as an effective tool for reducing health care costs by eliminating adverse events and reducing readmissions to hospitals. The purposes of this study were to examine the emergence of big data in the US health care industry, to evaluate a hospital's ability to effectively use complex information, and to predict the potential benefits that hospitals might realize if they are successful in using big data. The findings of the research suggest that there were a number of benefits expected by hospitals when using big data analytics, including cost savings and business intelligence. By using big data, many hospitals have recognized that there have been challenges, including lack of experience and cost of developing the analytics. Many hospitals will need to invest in the acquiring of adequate personnel with experience in big data analytics and data integration. The findings of this study suggest that the adoption, implementation, and utilization of big data technology will have a profound positive effect among health care providers.

  7. Big Data Strategy for Telco: Network Transformation

    OpenAIRE

    F. Amin; S. Feizi

    2014-01-01

    Big data has the potential to improve the quality of services; enable infrastructure that businesses depend on to adapt continually and efficiently; improve the performance of employees; help organizations better understand customers; and reduce liability risks. The analytics and marketing models of fixed and mobile operators are falling short in combating churn and declining revenue per user. Big Data presents new methods to reverse this trend and improve profitability. The benefits of Big Data and ...

  8. Big Data in Shipping - Challenges and Opportunities

    OpenAIRE

    Rødseth, Ørnulf Jan; Perera, Lokukaluge Prasad; Mo, Brage

    2016-01-01

    Big Data is getting popular in shipping, where large amounts of information are collected to better understand and improve logistics, emissions, energy consumption and maintenance. Constraints on the use of big data include the cost and quality of on-board sensors and data acquisition systems, satellite communication, data ownership and technical obstacles to effective collection and use of big data. New protocol standards may simplify the process of collecting and organizing the data, including in...

  9. [Relevance of big data for molecular diagnostics].

    Science.gov (United States)

    Bonin-Andresen, M; Smiljanovic, B; Stuhlmüller, B; Sörensen, T; Grützkau, A; Häupl, T

    2018-04-01

    Big data analysis raises the expectation that computerized algorithms may extract new knowledge from otherwise unmanageable vast data sets. What are the algorithms behind the big data discussion? In principle, high-throughput technologies in molecular research already introduced big data, and the development and application of analysis tools, into the field of rheumatology some 15 years ago. This includes especially omics technologies, such as genomics, transcriptomics and cytomics. Some basic methods of data analysis are provided along with the technology; however, functional analysis and interpretation require adaptation of existing software tools or development of new ones. For these steps, structuring and evaluating according to the biological context is extremely important and not only a mathematical problem. This aspect has to be considered much more for molecular big data than for data analyzed in health economics or epidemiology. Molecular data are structured in a first order determined by the applied technology and present quantitative characteristics that follow the principles of their biological nature. These biological dependencies have to be integrated into software solutions, which may require networks of molecular big data of the same or even different technologies in order to achieve cross-technology confirmation. Ever more extensive recording of molecular processes, also in individual patients, is generating personal big data and requires new strategies for management in order to develop data-driven individualized interpretation concepts. With this perspective in mind, translation of information derived from molecular big data will also require new specifications for education and professional competence.

  10. Are pilot trials useful for predicting randomisation and attrition rates in definitive studies: A review of publicly funded trials

    Science.gov (United States)

    Whitehead, Amy; Pottrill, Edward; Julious, Steven A; Walters, Stephen J

    2018-01-01

    Background/aims: External pilot trials are recommended for testing the feasibility of main or confirmatory trials. However, there is little evidence that progress in external pilot trials actually predicts randomisation and attrition rates in the main trial. To assess the use of external pilot trials in trial design, we compared randomisation and attrition rates in publicly funded randomised controlled trials with rates in their pilots. Methods: Randomised controlled trials for which there was an external pilot trial were identified from reports published between 2004 and 2013 in the Health Technology Assessment Journal. Data were extracted from published papers, protocols and reports. Bland–Altman plots and descriptive statistics were used to investigate the agreement of randomisation and attrition rates between the full and external pilot trials. Results: Of 561 reports, 41 were randomised controlled trials with pilot trials and 16 met criteria for a pilot trial with sufficient data. Mean attrition and randomisation rates were 21.1% and 50.4%, respectively, in the pilot trials and 16.8% and 65.2% in the main trials. There was minimal bias in the pilot trial when predicting the main trial attrition and randomisation rates. However, the variation was large: the mean difference in the attrition rate between the pilot and main trial was −4.4% with limits of agreement of −37.1% to 28.2%. Limits of agreement for randomisation rates were −47.8% to 77.5%. Conclusion: Results from external pilot trials to estimate randomisation and attrition rates should be used with caution as comparison of the difference in the rates between pilots and their associated full trials demonstrates high variability. We suggest using internal pilot trials wherever appropriate. PMID:29361833
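The Bland–Altman limits of agreement used in the review above are simply the mean of the paired differences plus or minus 1.96 sample standard deviations of those differences. A minimal sketch with invented attrition rates (not the review's data):

```python
import statistics

def bland_altman_limits(pilot_rates, main_rates):
    """Mean paired difference and 95% limits of agreement (percentage points)."""
    diffs = [p - m for p, m in zip(pilot_rates, main_rates)]
    mean_diff = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation of the differences
    return mean_diff, mean_diff - 1.96 * sd, mean_diff + 1.96 * sd

# Invented attrition rates (%) for five pilot/main trial pairs.
pilot = [25.0, 10.0, 30.0, 18.0, 22.0]
main = [20.0, 14.0, 21.0, 17.0, 12.0]
mean_diff, lower, upper = bland_altman_limits(pilot, main)
print(round(mean_diff, 1), round(lower, 1), round(upper, 1))  # 4.2 -7.2 15.6
```

Wide limits, as here, signal exactly the problem the authors report: a small mean bias can coexist with very large trial-to-trial variation.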

  11. Big data in psychology: A framework for research advancement.

    Science.gov (United States)

    Adjerid, Idris; Kelley, Ken

    2018-02-22

    The potential for big data to provide value for psychology is significant. However, the pursuit of big data remains an uncertain and risky undertaking for the average psychological researcher. In this article, we address some of this uncertainty by discussing the potential impact of big data on the type of data available for psychological research, addressing the benefits and most significant challenges that emerge from these data, and organizing a variety of research opportunities for psychology. Our article yields two central insights. First, we highlight that big data research efforts are more readily accessible than many researchers realize, particularly with the emergence of open-source research tools, digital platforms, and instrumentation. Second, we argue that opportunities for big data research are diverse and differ both in their fit for varying research goals, as well as in the challenges they bring about. Ultimately, our outlook for researchers in psychology using and benefiting from big data is cautiously optimistic. Although not all big data efforts are suited for all researchers or all areas within psychology, big data research prospects are diverse, expanding, and promising for psychology and related disciplines. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  12. 'Big data' in pharmaceutical science: challenges and opportunities.

    Science.gov (United States)

    Dossetter, Al G; Ecker, Gerhard; Laverty, Hugh; Overington, John

    2014-05-01

    Future Medicinal Chemistry invited a selection of experts to express their views on the current impact of big data in drug discovery and design, as well as speculate on future developments in the field. The topics discussed include the challenges of implementing big data technologies, maintaining the quality and privacy of data sets, and how the industry will need to adapt to welcome the big data era. Their enlightening responses provide a snapshot of the many and varied contributions being made by big data to the advancement of pharmaceutical science.

  13. Soft computing in big data processing

    CERN Document Server

    Park, Seung-Jong; Lee, Jee-Hyong

    2014-01-01

    Big data is an essential key to build a smart world as a meaning of the streaming, continuous integration of large volume and high velocity data covering from all sources to final destinations. The big data range from data mining, data analysis and decision making, by drawing statistical rules and mathematical patterns through systematical or automatically reasoning. The big data helps serve our life better, clarify our future and deliver greater value. We can discover how to capture and analyze data. Readers will be guided to processing system integrity and implementing intelligent systems. With intelligent systems, we deal with the fundamental data management and visualization challenges in effective management of dynamic and large-scale data, and efficient processing of real-time and spatio-temporal data. Advanced intelligent systems have led to managing the data monitoring, data processing and decision-making in realistic and effective way. Considering a big size of data, variety of data and frequent chan...

  14. Phantom inflation and the 'Big Trip'

    International Nuclear Information System (INIS)

    Gonzalez-Diaz, Pedro F.; Jimenez-Madrid, Jose A.

    2004-01-01

    Primordial inflation is regarded to be driven by a phantom field, which is here implemented as a scalar field satisfying an equation of state p=ωρ, with ω<-1. Aggravated by the weird properties of phantom energy, this poses a serious problem for the exit from the inflationary phase. We argue, however, in favor of the speculation that a smooth exit from the phantom inflationary phase can still be tentatively recovered by considering a multiverse scenario where the primordial phantom universe would travel in time toward a future universe filled with usual radiation, before reaching the big rip. We call this transition the 'Big Trip' and assume it to take place with the help of some form of anthropic principle which chooses our current universe as being the final destination of the time transition
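For context, the defining property of the phantom regime follows from the standard FRW continuity equation (a textbook relation, not taken from this paper):

```latex
p = \omega\rho, \qquad \omega < -1
\;\Rightarrow\; \rho + p = (1+\omega)\rho < 0,
```

```latex
\dot{\rho} + 3H(\rho + p) = 0
\;\Rightarrow\; \rho \propto a^{-3(1+\omega)},
```

so for $\omega < -1$ the energy density *grows* as the universe expands, which is what drives the super-accelerated expansion ending in the big rip.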

  15. [Structural Change, Contextuality, and Transfer in Health Promotion--Sustainable Implementation of the BIG Project].

    Science.gov (United States)

    Rütten, A; Frahsa, A; Rosenhäger, N; Wolff, A

    2015-09-01

    The BIG approach aims at promoting physical activity and health among socially disadvantaged women. BIG has been developed and sustainably implemented in Erlangen/Bavaria. Subsequently, it has been transferred to other communities and states in Germany. Crucial factors for sustainability and transfer in BIG are (1) lifestyle and policy analysis, (2) assets approach, (3) empowerment of target group, (4) enabling of policy-makers and professionals. © Georg Thieme Verlag KG Stuttgart · New York.

  16. Solution of a braneworld big crunch/big bang cosmology

    International Nuclear Information System (INIS)

    McFadden, Paul L.; Turok, Neil; Steinhardt, Paul J.

    2007-01-01

    We solve for the cosmological perturbations in a five-dimensional background consisting of two separating or colliding boundary branes, as an expansion in the collision speed V divided by the speed of light c. Our solution permits a detailed check of the validity of four-dimensional effective theory in the vicinity of the event corresponding to the big crunch/big bang singularity. We show that the four-dimensional description fails at the first nontrivial order in (V/c)². At this order, there is nontrivial mixing of the two relevant four-dimensional perturbation modes (the growing and decaying modes) as the boundary branes move from the narrowly separated limit described by Kaluza-Klein theory to the well-separated limit where gravity is confined to the positive-tension brane. We comment on the cosmological significance of the result and compute other quantities of interest in five-dimensional cosmological scenarios

  17. Heterogeneity of pituitary and plasma prolactin in man: decreased affinity of big prolactin in a radioreceptor assay and evidence for its secretion

    International Nuclear Information System (INIS)

    Garnier, P.E.; Aubert, M.L.; Kaplan, S.L.; Grumbach, M.M.

    1978-01-01

    Molecular heterogeneity of immunoreactive human PRL (IR-hPRL) plasma was assessed by exclusion chromatography in blood from 4 normal adults, 3 newborn infants, 2 late gestational women, 3 patients with primary hypothyroidism and high PRL levels, 2 with functional hyperprolactinemia, 3 with acromegaly, and 10 with PRL-secreting tumors. Three forms of PRL were detected: big-big hPRL, big hPRL, and little hPRL. In normal subjects, the proportion of big-big, big, and little hPRL components was 5.1%, 9.1%, and 85.8%, respectively, without change in the distribution after TRF stimulation. In 8 of 10 patients with PRL-secreting tumors, we detected a significantly higher proportion of big PRL. In 2 additional patients with prolactinomas, the proportion of big PRL was much higher. In 3 of 10 patients, the molecular heterogeneity of the tumor PRL was similar to that in plasma. In 1 acromegalic, there was a very high proportion of big-big hPRL. The PRL fractions were tested in a radioreceptor assay (RRA) using membranes from rabbit mammary gland. Big PRL was much less active than little PRL in the RRA. The fractions were rechromatographed after storage. Big PRL partially distributed as little or big-big PRL, while little PRL remained unchanged. Big-big PRL from tumor extract partially converted into big and little PRL. The big PRL obtained by rechromatography had low activity in the RRA. These observations suggest at least part of the receptor activity of big PRL may arise from generation of or contamination by little PRL. The decreased binding affinity of big PRL in the RRA also indicates that big PRL has little, if any, biological activity. The evidence suggests big PRL is a native PRL dimer linked by intermolecular disulfide bonds which arises in the lactotrope as a postsynthetic product or derivative and is not a true precursor prohormone

  18. [Big data and their perspectives in radiation therapy].

    Science.gov (United States)

    Guihard, Sébastien; Thariat, Juliette; Clavier, Jean-Baptiste

    2017-02-01

    The concept of big data indicates a change of scale in the use of data and data aggregation into large databases through improved computer technology. One of the current challenges in the creation of big data in the context of radiation therapy is the transformation of routine care items into dark data, i.e. data not yet collected, and the fusion of databases collecting different types of information (dose-volume histograms and toxicity data for example). Processes and infrastructures devoted to big data collection should not impact negatively on the doctor-patient relationship, the general process of care or the quality of the data collected. The use of big data requires a collective effort of physicians, physicists, software manufacturers and health authorities to create, organize and exploit big data in radiotherapy and, beyond, oncology. Big data involve a new culture to build an appropriate infrastructure legally and ethically. Processes and issues are discussed in this article. Copyright © 2016 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  19. Current applications of big data in obstetric anesthesiology.

    Science.gov (United States)

    Klumpner, Thomas T; Bauer, Melissa E; Kheterpal, Sachin

    2017-06-01

    The narrative review aims to highlight several recently published 'big data' studies pertinent to the field of obstetric anesthesiology. Big data has been used to study rare outcomes, to identify trends within the healthcare system, to identify variations in practice patterns, and to highlight potential inequalities in obstetric anesthesia care. Big data studies have helped define the risk of rare complications of obstetric anesthesia, such as the risk of neuraxial hematoma in thrombocytopenic parturients. Also, large national databases have been used to better understand trends in anesthesia-related adverse events during cesarean delivery as well as outline potential racial/ethnic disparities in obstetric anesthesia care. Finally, real-time analysis of patient data across a number of disparate health information systems through the use of sophisticated clinical decision support and surveillance systems is one promising application of big data technology on the labor and delivery unit. 'Big data' research has important implications for obstetric anesthesia care and warrants continued study. Real-time electronic surveillance is a potentially useful application of big data technology on the labor and delivery unit.

  20. Lifting gear crucial in Big Bang experiment

    CERN Multimedia

    2007-01-01

    "On November 26 2007, the most complex scientific instrument ever built will be turned on in an attempt to rerun the Big Bang - but i would never have got off the ground - litteraly - without the hundreds of hoists and cranes on site." (1/2 page)

  1. Volume and Value of Big Healthcare Data.

    Science.gov (United States)

    Dinov, Ivo D

    Modern scientific inquiries require significant data-driven evidence and trans-disciplinary expertise to extract valuable information and gain actionable knowledge about natural processes. Effective evidence-based decisions require collection, processing and interpretation of vast amounts of complex data. Moore's and Kryder's laws of exponential increase in computational power and information storage, respectively, dictate the need for rapid trans-disciplinary advances, technological innovation and effective mechanisms for managing and interrogating Big Healthcare Data. In this article, we review important aspects of Big Data analytics and discuss key questions such as: What are the challenges and opportunities associated with this biomedical, social, and healthcare data avalanche? Are there innovative statistical computing strategies to represent, model, analyze and interpret Big heterogeneous data? We present the foundation of a new compressive big data analytics (CBDA) framework for representation, modeling and inference of large, complex and heterogeneous datasets. Finally, we consider specific directions likely to impact the process of extracting information from Big healthcare data, translating that information to knowledge, and deriving appropriate actions.
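One way to picture a "compressive" analytics strategy of the kind the CBDA framework gestures at (the implementation below is a toy assumption of ours, not the authors' method) is subsample-and-aggregate: estimate a statistic on many small random subsamples and average the estimates instead of processing the full dataset at once:

```python
import random
import statistics

def subsample_aggregate(data, n_sub, frac, stat=statistics.mean, seed=0):
    """Average a statistic over many small random subsamples of the data."""
    rng = random.Random(seed)
    k = max(1, int(len(data) * frac))  # subsample size
    return statistics.mean(stat(rng.sample(data, k)) for _ in range(n_sub))

data = list(range(1000))  # stand-in for a dataset too big to process at once
est = subsample_aggregate(data, n_sub=50, frac=0.05)
# est lands close to the full-data mean of 499.5 at a fraction of the work
```

Each subsample is cheap to analyze, and averaging across subsamples recovers a stable estimate of the full-data quantity.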

  2. Indian microchip for Big Bang research in Geneva

    CERN Multimedia

    Bhabani, Soudhriti

    2007-01-01

    "A premier nuclear physics institute here has come up with India's first indigenously designed microchip that will facilitate research on the Big Bang theory in Geneva's CERN, the world's largest particle physics laboratory." (1 page)

  3. Using Big Book to Teach Things in My House

    OpenAIRE

    Effrien, Intan; Lailatus, Sa’diyah; Nuruliftitah Maja, Neneng

    2017-01-01

    The purpose of this study is to determine students' interest in learning using big book media. A big book is a larger-format version of an ordinary book, containing simple words and images that match the content of the sentences and the spelling. From this, researchers can learn about students' interest and the development of their knowledge. It also trains researchers to remain creative in developing learning media for students.

  4. Big Data Analytics Platforms analyze from startups to traditional database players

    Directory of Open Access Journals (Sweden)

    Ionut TARANU

    2015-07-01

    Full Text Available Big data analytics enables organizations to analyze a mix of structured, semi-structured and unstructured data in search of valuable business information and insights. The analytical findings can lead to more effective marketing, new revenue opportunities, better customer service, improved operational efficiency, competitive advantages over rival organizations and other business benefits. With so many emerging trends around big data and analytics, IT organizations need to create conditions that will allow analysts and data scientists to experiment. "You need a way to evaluate, prototype and eventually integrate some of these technologies into the business," says Chris Curran[1]. In this paper we review 10 top Big Data analytics platforms and compare their key features.

  5. Big Data Analytics Methodology in the Financial Industry

    Science.gov (United States)

    Lawler, James; Joseph, Anthony

    2017-01-01

    Firms in industry continue to be attracted by the benefits of Big Data Analytics. The benefits of Big Data Analytics projects may not be as evident as frequently indicated in the literature. The authors of the study evaluate factors in a customized methodology that may increase the benefits of Big Data Analytics projects. Evaluating firms in the…

  6. Heart rate acceleration with GLP-1 receptor agonists in type 2 diabetes patients : an acute and 12-week randomised, double-blind, placebo-controlled trial

    NARCIS (Netherlands)

    Smits, Mark M; Tonneijck, Lennart; Muskiet, Marcel H A; Hoekstra, T.; Kramer, Mark H H; Diamant, Michaela; van Raalte, Daniël H

    OBJECTIVE: To examine mechanisms underlying resting heart rate (RHR) increments of GLP-1 receptor agonists in type 2 diabetes patients. DESIGN: Acute and 12-week randomised, placebo-controlled, double-blind, single-centre, parallel-group trial. METHODS: In total, 57 type 2 diabetes patients

  7. Big data: survey, technologies, opportunities, and challenges.

    Science.gov (United States)

    Khan, Nawsher; Yaqoob, Ibrar; Hashem, Ibrahim Abaker Targio; Inayat, Zakira; Ali, Waleed Kamaleldin Mahmoud; Alam, Muhammad; Shiraz, Muhammad; Gani, Abdullah

    2014-01-01

    Big Data has gained much attention from the academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds the boundary range. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed on optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy stage, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in Big Data domination. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data.

  9. Opportunity and Challenges for Migrating Big Data Analytics in Cloud

    Science.gov (United States)

    Amitkumar Manekar, S.; Pradeepini, G., Dr.

    2017-08-01

    Big Data Analytics is a big word nowadays. With ever more demanding and scalable data-generation capabilities, data acquisition and storage have become a crucial issue. Cloud storage is a widely used platform, and the technology will become crucial to executives handling data powered by analytics. Nowadays the trend towards “big data-as-a-service” is talked about everywhere. On one hand, cloud-based big data analytics directly tackles ongoing issues of scale, speed, and cost; on the other, researchers are working to solve security and other real-time problems of big data migration to cloud-based platforms. This article focuses especially on finding possible ways to migrate big data to the cloud. Technology that supports coherent data migration and enables big data analytics on cloud platforms is in demand for a new era of growth. This article also gives information about available technologies and techniques for migrating big data to the cloud.

  10. Hot big bang or slow freeze?

    Science.gov (United States)

    Wetterich, C.

    2014-09-01

    We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze - a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple 'crossover model' without a big bang singularity. In the infinite past space-time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe.

  11. Big Data

    DEFF Research Database (Denmark)

    Aaen, Jon; Nielsen, Jeppe Agger

    2016-01-01

    Big Data presents itself as one of the most hyped technological innovations of our time, proclaimed to contain the seed of new, valuable operational insights for private companies and public organisations. While the optimistic announcements are many, research on Big Data in the public sector is so far limited. This article examines how the public healthcare sector can reuse and exploit an ever-growing amount of data with due regard for public values. The article is based on a case study of the use of large amounts of health data in the Danish General Practice Database (Dansk AlmenMedicinsk Database, DAMD). The analysis shows that (re)using data in new contexts is a multifaceted trade-off not only between economic rationales and quality considerations, but also control over sensitive personal data and ethical implications for the citizen. In the DAMD case, data are, on the one hand, used 'in the service of a good cause' to...

  12. Curating Big Data Made Simple: Perspectives from Scientific Communities.

    Science.gov (United States)

    Sowe, Sulayman K; Zettsu, Koji

    2014-03-01

    The digital universe is exponentially producing an unprecedented volume of data that has brought benefits as well as fundamental challenges for enterprises and scientific communities alike. This trend is inherently exciting for the development and deployment of cloud platforms to support scientific communities curating big data. The excitement stems from the fact that scientists can now access and extract value from the big data corpus, establish relationships between bits and pieces of information from many types of data, and collaborate with a diverse community of researchers from various domains. However, despite these perceived benefits, to date, little attention is focused on the people or communities who are both beneficiaries and, at the same time, producers of big data. The technical challenges posed by big data are as big as understanding the dynamics of communities working with big data, whether scientific or otherwise. Furthermore, the big data era also means that big data platforms for data-intensive research must be designed in such a way that research scientists can easily search and find data for their research, upload and download datasets for onsite/offsite use, perform computations and analysis, share their findings and research experience, and seamlessly collaborate with their colleagues. In this article, we present the architecture and design of a cloud platform that meets some of these requirements, and a big data curation model that describes how a community of earth and environmental scientists is using the platform to curate data. Motivation for developing the platform, lessons learnt in overcoming some challenges associated with supporting scientists to curate big data, and future research directions are also presented.

  13. Big data analytics in healthcare: promise and potential.

    Science.gov (United States)

    Raghupathi, Wullianallur; Raghupathi, Viju

    2014-01-01

    To describe the promise and potential of big data analytics in healthcare. The paper describes the nascent field of big data analytics in healthcare, discusses the benefits, outlines an architectural framework and methodology, describes examples reported in the literature, briefly discusses the challenges, and offers conclusions. The paper provides a broad overview of big data analytics for healthcare researchers and practitioners. Big data analytics in healthcare is evolving into a promising field for providing insight from very large data sets and improving outcomes while reducing costs. Its potential is great; however there remain challenges to overcome.

  14. The Death of the Big Men

    DEFF Research Database (Denmark)

    Martin, Keir

    2010-01-01

    Recently Tolai people of Papua New Guinea have adopted the term 'Big Shot' to describe an emerging post-colonial political elite. The emergence of the term is a negative moral evaluation of new social possibilities that have arisen as a consequence of the Big Shots' privileged position within a glo...

  15. Big data and software defined networks

    CERN Document Server

    Taheri, Javid

    2018-01-01

    Big Data analytics and Software-Defined Networking (SDN) are helping to manage the extraordinary increase in data usage and in the computer processing power provided by Cloud Data Centres (CDCs). This new book investigates areas where Big Data and SDN can help each other deliver more efficient services.

  16. Affordable moisturisers are effective in atopic eczema: A randomised ...

    African Journals Online (AJOL)

    Background. Many patients depend on moisturisers issued by public health services in the management of atopic dermatitis (AD). Methods. In a randomised controlled trial of patients with mild to moderate AD, aged 1 - 12 years, study 1 compared aqueous cream v. liquid paraffin (fragrance-free baby oil) as a soap substitute ...

  17. Big Data-Survey

    Directory of Open Access Journals (Sweden)

    P.S.G. Aruna Sri

    2016-03-01

    Full Text Available Big data is the term for any collection of data sets so large and complex that they become difficult to process using conventional data-processing applications. The challenges include analysis, capture, curation, search, sharing, storage, transfer, visualization, and privacy violations. To spot business trends, anticipate diseases, and address conflict, among other uses, we require larger data sets compared with smaller ones. Big data is difficult to work with using most relational database management systems and desktop statistics and visualization packages, requiring instead massively parallel software running on tens, hundreds, or even thousands of servers. This paper presents an overview of the Hadoop architecture, the different tools used for big data, and its security issues.

  18. USE OF BIG DATA ANALYTICS FOR CUSTOMER RELATIONSHIP MANAGEMENT: POINT OF PARITY OR SOURCE OF COMPETITIVE ADVANTAGE?

    OpenAIRE

    Suoniemi, Samppa; Meyer-Waarden, Lars; Munzel, Andreas; Zablah, Alex R.; Straub, Detmar W.

    2017-01-01

    Customer information plays a key role in managing successful relationships with valuable customers. Big data customer analytics use (CA use), i.e., the extent to which customer information derived from big data analytics guides marketing decisions, helps firms better meet customer needs for competitive advantage. This study addresses three research questions: 1. What are the key antecedents of big data customer analytics use? 2. How, and to what extent, does big data...

  19. Urban Environment in European Big Cities

    Czech Academy of Sciences Publication Activity Database

    Vaishar, Antonín; Cetkovský, Stanislav; Kallabová, Eva; Klusáček, Petr; Kolibová, Barbora; Lacina, Jan; Mikulík, Oldřich; Zapletalová, Jana

    2006-01-01

    Roč. 14, č. 1 (2006), s. 46-62 ISSN 1210-8812 Grant - others:EU(XE) EVK4-CT-2002-00086 Institutional research plan: CEZ:AV0Z3086906 Keywords : European big cities * urban environment * reurbanisation * životní prostředí * Bologna * Ljubljana * León * Brno * Leipzig Subject RIV: DE - Earth Magnetism, Geodesy, Geography

  20. Big Data Analytics, Infectious Diseases and Associated Ethical Impacts

    OpenAIRE

    Garattini, C.; Raffle, J.; Aisyah, D. N.; Sartain, F.; Kozlakidis, Z.

    2017-01-01

    The exponential accumulation, processing and accrual of big data in healthcare are only possible through an equally rapidly evolving field of big data analytics. The latter offers the capacity to rationalize, understand and use big data to serve many different purposes, from improved services modelling to prediction of treatment outcomes, to greater patient and disease stratification. In the area of infectious diseases, the application of big data analytics has introduced a number of changes ...

  1. Evaluation of Data Management Systems for Geospatial Big Data

    OpenAIRE

    Amirian, Pouria; Basiri, Anahid; Winstanley, Adam C.

    2014-01-01

    Big Data encompasses the collection, management, processing and analysis of huge amounts of data that vary in type and change with high frequency. Often the data component of Big Data has a positional component as an important part of it, in various forms such as postal address, Internet Protocol (IP) address and geographical location. If the positional components in Big Data are extensively used in storage, retrieval, analysis, processing, visualization and knowledge discovery (geospatial Big Dat...

  2. Modified big-bubble technique compared to manual dissection deep anterior lamellar keratoplasty in the treatment of keratoconus.

    Science.gov (United States)

    Knutsson, Karl Anders; Rama, Paolo; Paganoni, Giorgio

    2015-08-01

    To evaluate the clinical findings and results of manual dissection deep anterior lamellar keratoplasty (DALK) compared to a modified big-bubble DALK technique in eyes affected by keratoconus. Sixty eyes of 60 patients with keratoconus were treated with one of two surgical techniques: manual DALK (n = 30) or big-bubble DALK (n = 30). The main outcomes measured were visual acuity, corneal topographic parameters, thickness of residual stroma and endothelial cell density (ECD). Patients were examined postoperatively at 1 month, 6 months, 1 year and 1 month after suture removal. Final best spectacle-corrected visual acuity (BSCVA) measured 1 month after suture removal was 0.11 ± 0.08 LogMAR in the big-bubble group compared to 0.13 ± 0.08 in the manual DALK group (p = 0.227). In patients treated with the big-bubble technique without complications (Descemet's membrane completely bared), the stromal residue was not measurable. Mean residual stromal thickness in the manual DALK group was 30.50 ± 27.60 μm. Data analysis of the manual DALK group demonstrated a significant correlation between BSCVA and residual stromal thickness; lower residual stromal thickness correlated with better BSCVA values (Spearman ρ = 0.509, p = 0.018). Postoperative ECD was similar in both groups at all intervals, with no statistically significant differences. In both groups, ECD loss was only significant during the 1- to 6-month interval (p = 0.001 and p big-bubble DALK and manual DALK groups, respectively). Manual DALK provides comparable results to big-bubble DALK. Big-bubble DALK permits faster visual recovery and is a surgical technique that can be easily converted to manual DALK in cases of unsuccessful 'big-bubble' formation. © 2015 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
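
    The correlation reported in this record (Spearman ρ = 0.509 between residual stromal thickness and BSCVA) is a rank-based statistic. As a minimal sketch of how such a coefficient is computed, here is a pure-Python implementation; the thickness/logMAR values below are invented for illustration, not the study's measurements:

```python
def _ranks(values):
    # Average ranks: tied values share the mean of their rank positions (1-based).
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    # Spearman's rho is the Pearson correlation of the rank-transformed data.
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical illustration: thicker residual stroma tends to go with worse (higher) logMAR.
thickness = [5, 12, 20, 33, 41, 58]
logmar = [0.05, 0.10, 0.08, 0.15, 0.20, 0.18]
rho = spearman_rho(thickness, logmar)
```

    A positive rho here would match the direction reported in the abstract (lower residual thickness, better BSCVA).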

  3. 40 CFR 98.173 - Calculating GHG emissions.

    Science.gov (United States)

    2010-07-01

    ... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Iron and Steel Production § 98.173 Calculating GHG emissions... for the process as specified in paragraphs (b)(1)(i) through (b)(1)(vii) of this section. The... the gaseous fuel (kg/kg-mole). MVC = Molar volume conversion factor (849.5 scf per kg-mole at standard...

  4. A New Look at Big History

    Science.gov (United States)

    Hawkey, Kate

    2014-01-01

    The article sets out a "big history" which resonates with the priorities of our own time. A globalizing world calls for new spatial scales to underpin what the history curriculum addresses, "big history" calls for new temporal scales, while concern over climate change calls for a new look at subject boundaries. The article…

  5. West Virginia's big trees: setting the record straight

    Science.gov (United States)

    Melissa Thomas-Van Gundy; Robert. Whetsell

    2016-01-01

    People love big trees, people love to find big trees, and people love to find big trees in the place they call home. Having been suspicious for years, my coauthor and historian Rob Whetsell approached me with a species identification challenge. There are several photographs of giant trees used by many people to illustrate the past forests of West Virginia,...

  6. Sosiaalinen asiakassuhdejohtaminen ja big data

    OpenAIRE

    Toivonen, Topi-Antti

    2015-01-01

    This thesis examines social customer relationship management and the benefits that big data can bring to it. Social customer relationship management is a new term, unfamiliar to many. The study is motivated by the scarcity of research on the topic, the complete absence of Finnish-language research, and the potentially essential role social customer relationship management may play in business operations in the future. Studies on big data often concentrate on its technical side rather than on its applicat...

  7. Prevention of head louse infestation: a randomised, double-blind, cross-over study of a novel concept product, 1% 1,2-octanediol spray versus placebo.

    Science.gov (United States)

    Burgess, Ian F; Brunton, Elizabeth R; French, Rebecca; Burgess, Nazma A; Lee, Peter N

    2014-05-30

    To determine whether regular use of a spray containing 1,2-octanediol 1%, which has been shown to inhibit survival of head lice, is able to work as a preventive against establishment of new infestations. Randomised, double-blind, cross-over, community study in Cambridgeshire, UK. 63 male and female schoolchildren aged 4-16 years judged to have a high risk of recurrent infestation. Only the youngest member of a household attending school participated. Participants were treated to eliminate lice, randomised between 1% octanediol or placebo sprays for 6 weeks then crossed-over to the other spray for 6 weeks. Parents applied the sprays at least twice weekly or more frequently if the hair was washed. Investigators monitored weekly for infestation and replenished supplies of spray. The primary endpoint was the time taken until the first infestation event occurred. The secondary measure was safety of the product in regular use. Intention-to-treat analysis found a total of 32 confirmed infestations in 20 participants, with 9 of them infested while using both products. In these nine participants the time to first infestation showed a significant advantage to 1% octanediol (p=0.0129). Per-protocol analysis showed only trends because the population included was not large enough to demonstrate significance. There were no serious adverse events and only two adverse events possibly related to treatment, one was a case of transient erythema and another of a rash that resolved after 5 days. Routine use of 1% octanediol spray provided a significant level of protection from infestation. It was concluded that this product is effective if applied regularly and thoroughly. ISRCTN09524995. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  8. A Matrix Big Bang

    OpenAIRE

    Craps, Ben; Sethi, Savdeep; Verlinde, Erik

    2005-01-01

    The light-like linear dilaton background represents a particularly simple time-dependent 1/2 BPS solution of critical type IIA superstring theory in ten dimensions. Its lift to M-theory, as well as its Einstein frame metric, are singular in the sense that the geometry is geodesically incomplete and the Riemann tensor diverges along a light-like subspace of codimension one. We study this background as a model for a big bang type singularity in string theory/M-theory. We construct the dual Matr...

  9. Meta-analysis of Big Five personality traits in autism spectrum disorder.

    Science.gov (United States)

    Lodi-Smith, Jennifer; Rodgers, Jonathan D; Cunningham, Sara A; Lopata, Christopher; Thomeer, Marcus L

    2018-04-01

    The present meta-analysis synthesizes the emerging literature on the relationship of Big Five personality traits to autism spectrum disorder. Studies were included if they (1) either (a) measured autism spectrum disorder characteristics using a metric that yielded a single score quantification of the magnitude of autism spectrum disorder characteristics and/or (b) studied individuals with an autism spectrum disorder diagnosis compared to individuals without an autism spectrum disorder diagnosis and (2) measured Big Five traits in the same sample or samples. Fourteen reviewed studies include both correlational analyses and group comparisons. Eighteen effect sizes per Big Five trait were used to calculate two overall effect sizes per trait. Meta-analytic effects were calculated using random effects models. Twelve effects (per trait) from nine studies reporting correlations yielded a negative association between each Big Five personality trait and autism spectrum disorder characteristics (Fisher's z ranged from -.21 (conscientiousness) to -.50 (extraversion)). Six group contrasts (per trait) from six studies comparing individuals diagnosed with autism spectrum disorder to neurotypical individuals were also substantial (Hedges' g ranged from -.88 (conscientiousness) to -1.42 (extraversion)). The potential impact of personality on important life outcomes and new directions for future research on personality in autism spectrum disorder are discussed in light of results.
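
    The record above pools per-trait effect sizes (Fisher's z, Hedges' g) across studies with random-effects models. A common estimator for this is DerSimonian-Laird; the sketch below is a generic, illustrative implementation, and the input numbers in the demo are invented rather than taken from the meta-analysis:

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study effect sizes with a DerSimonian-Laird random-effects model.

    effects   : per-study effect sizes (e.g. Fisher's z or Hedges' g)
    variances : their within-study variances
    Returns (pooled_effect, pooled_se, tau2).
    """
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw  # fixed-effect mean
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

# Hypothetical per-study correlations (Fisher's z) and variances:
pooled, se, tau2 = dersimonian_laird([-0.50, -0.35, -0.60], [0.04, 0.06, 0.05])
```

    When the studies agree closely, tau2 shrinks to zero and the pooled estimate reduces to the fixed-effect mean.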

  10. Astroinformatics: the big data of the universe

    OpenAIRE

    Barmby, Pauline

    2016-01-01

    In astrophysics we like to think that our field was the originator of big data, back when it had to be carried around in big sky charts and books full of tables. These days, it's easier to move astrophysics data around, but we still have a lot of it, and upcoming telescope  facilities will generate even more. I discuss how astrophysicists approach big data in general, and give examples from some Western Physics & Astronomy research projects.  I also give an overview of ho...

  11. Healthcare costs in the Danish randomised controlled lung cancer CT-screening trial

    DEFF Research Database (Denmark)

    Rasmussen, J.F.; Siersma, V.; Pedersen, Jesper H.

    2014-01-01

    This registry study was nested in a randomised controlled trial (DLCST). 4104 participants, current or former heavy smokers, aged 50-70 years, were randomised to five annual low-dose CT scans or usual care during 2004-2010. Total healthcare costs and healthcare utilisation data for both the primary...... and the secondary healthcare sector were retrieved from public registries from randomisation to September 2011 and compared between (1) the CT-screening group and the control group and (2) the control group and each of the true-positive, false-positive and true-negative groups. RESULTS: The median annual costs per...... participant were significantly higher in the CT-screening group (Euros [EUR] 1342, interquartile range [IQR] 750-2980) compared with the control group (EUR 1190, IQR 590-2692). When the cost of the CT-screening programme was excluded, there was no longer a statistically significant difference...

  12. Strategies to improve retention in randomised trials

    Science.gov (United States)

    Brueton, Valerie C; Tierney, Jayne; Stenning, Sally; Harding, Seeromanie; Meredith, Sarah; Nazareth, Irwin; Rait, Greta

    2013-01-01

    Background Loss to follow-up from randomised trials can introduce bias and reduce study power, affecting the generalisability, validity and reliability of results. Many strategies are used to reduce loss to follow-up and improve retention but few have been formally evaluated. Objectives To quantify the effect of strategies to improve retention on the proportion of participants retained in randomised trials and to investigate if the effect varied by trial strategy and trial setting. Search methods We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, PreMEDLINE, EMBASE, PsycINFO, DARE, CINAHL, Campbell Collaboration's Social, Psychological, Educational and Criminological Trials Register, and ERIC. We handsearched conference proceedings and publication reference lists for eligible retention trials. We also surveyed all UK Clinical Trials Units to identify further studies. Selection criteria We included eligible retention trials of randomised or quasi-randomised evaluations of strategies to increase retention that were embedded in 'host' randomised trials from all disease areas and healthcare settings. We excluded studies aiming to increase treatment compliance. Data collection and analysis We contacted authors to supplement or confirm data that we had extracted. For retention trials, we recorded data on the method of randomisation, type of strategy evaluated, comparator, primary outcome, planned sample size, numbers randomised and numbers retained. We used risk ratios (RR) to evaluate the effectiveness of the addition of strategies to improve retention. We assessed heterogeneity between trials using the Chi2 and I2 statistics. For main trials that hosted retention trials, we extracted data on disease area, intervention, population, healthcare setting, sequence generation and allocation concealment. Main results We identified 38 eligible retention trials. Included trials evaluated six broad types of strategies to improve retention. 

  13. Recent big flare

    International Nuclear Information System (INIS)

    Moriyama, Fumio; Miyazawa, Masahide; Yamaguchi, Yoshisuke

    1978-01-01

    The features of three big solar flares observed at Tokyo Observatory are described in this paper. The active region, McMath 14943, caused a big flare on September 16, 1977. The flare appeared on both sides of a long dark line which runs along the boundary of the magnetic field. Two-ribbon structure was seen. The electron density of the flare observed at Norikura Corona Observatory was 3 × 10¹²/cc. Several arc lines which connect both bright regions of different magnetic polarity were seen in the H-α monochrome image. The active region, McMath 15056, caused a big flare on December 10, 1977. At the beginning, several bright spots were observed in the region between two main solar spots. Then, the area and the brightness increased, and the bright spots became two ribbon-shaped bands. A solar flare was observed on April 8, 1978. At first, several bright spots were seen around the solar spot in the active region, McMath 15221. Then, these bright spots developed into a large bright region. On both sides of a dark line along the magnetic neutral line, bright regions were generated. These developed into a two-ribbon flare. The time required for growth was more than one hour. A bright arc which connects two ribbons was seen, and this arc may be a loop prominence system. (Kato, T.)

  14. Big Bang Day : The Great Big Particle Adventure - 3. Origins

    CERN Multimedia

    2008-01-01

    In this series, comedian and physicist Ben Miller asks the CERN scientists what they hope to find. If the LHC is successful, it will explain the nature of the Universe around us in terms of a few simple ingredients and a few simple rules. But the Universe now was forged in a Big Bang where conditions were very different, and the rules were very different, and those early moments were crucial to determining how things turned out later. At the LHC they can recreate conditions as they were billionths of a second after the Big Bang, before atoms and nuclei existed. They can find out why matter and antimatter didn't mutually annihilate each other to leave behind a Universe of pure, brilliant light. And they can look into the very structure of space and time - the fabric of the Universe

  15. Aspirin plus dipyridamole versus aspirin alone after cerebral ischaemia of arterial origin (ESPRIT): randomised controlled trial.

    Science.gov (United States)

    Halkes, P H A; van Gijn, J; Kappelle, L J; Koudstaal, P J; Algra, A

    2006-05-20

    Results of trials of aspirin and dipyridamole combined versus aspirin alone for the secondary prevention of vascular events after ischaemic stroke of presumed arterial origin are inconsistent. Our aim was to resolve this uncertainty. We did a randomised controlled trial in which we assigned patients to aspirin (30-325 mg daily) with (n=1363) or without (n=1376) dipyridamole (200 mg twice daily) within 6 months of a transient ischaemic attack or minor stroke of presumed arterial origin. Our primary outcome event was the composite of death from all vascular causes, non-fatal stroke, non-fatal myocardial infarction, or major bleeding complication, whichever happened first. Treatment was open, but auditing of outcome events was blinded. Primary analysis was by intention to treat. This study is registered as an International Standard Randomised Controlled Trial (number ISRCTN73824458) and with ClinicalTrials.gov (NCT00161070). Mean follow-up was 3.5 years (SD 2.0). Median aspirin dose was 75 mg in both treatment groups (range 30-325); extended-release dipyridamole was used by 83% (n=1131) of patients on the combination regimen. Primary outcome events arose in 173 (13%) patients on aspirin and dipyridamole and in 216 (16%) on aspirin alone (hazard ratio 0.80, 95% CI 0.66-0.98; absolute risk reduction 1.0% per year, 95% CI 0.1-1.8). Addition of the ESPRIT data to the meta-analysis of previous trials resulted in an overall risk ratio for the composite of vascular death, stroke, or myocardial infarction of 0.82 (95% CI 0.74-0.91). Patients on aspirin and dipyridamole discontinued trial medication more often than those on aspirin alone (470 vs 184), mainly because of headache. The ESPRIT results, combined with the results of previous trials, provide sufficient evidence to prefer the combination regimen of aspirin plus dipyridamole over aspirin alone as antithrombotic therapy after cerebral ischaemia of arterial origin.
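
    The absolute risk reduction quoted in this record can be reproduced approximately from the reported counts (173/1363 vs 216/1376 events over a mean 3.5 years of follow-up). The sketch below is a crude 2x2 calculation; note the trial's own 0.80 figure is a hazard ratio from a time-to-event analysis, which a crude risk ratio only approximates:

```python
import math

def crude_risk_comparison(events_trt, n_trt, events_ctl, n_ctl, years):
    """Crude risk ratio with a log-scale 95% CI, plus annualized absolute
    risk reduction, from simple event counts (ignores censoring and timing)."""
    r_trt = events_trt / n_trt
    r_ctl = events_ctl / n_ctl
    rr = r_trt / r_ctl
    # Katz log-method standard error for the risk ratio
    se = math.sqrt(1 / events_trt - 1 / n_trt + 1 / events_ctl - 1 / n_ctl)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    arr_per_year = (r_ctl - r_trt) / years
    return rr, (lo, hi), arr_per_year

# Counts as reported in the abstract:
rr, ci, arr = crude_risk_comparison(173, 1363, 216, 1376, 3.5)
```

    This yields a risk ratio near 0.81 and roughly 0.9% absolute risk reduction per year, in the same range as the published hazard ratio of 0.80 and ARR of 1.0% per year.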

  16. Randomised clinical trial

    DEFF Research Database (Denmark)

    Reimer, C; Lødrup, A; Smith, G

    2016-01-01

    of an alginate (Gaviscon Advance, Reckitt Benckiser, Slough, UK) on reflux symptoms in patients with persistent symptoms despite once-daily PPI. Methods: This was a multicentre, randomised, placebo-controlled, 7-day double-blind trial preceded by a 7-day run-in period. Reflux symptoms were assessed using...

  17. Inflated granularity: Spatial “Big Data” and geodemographics

    Directory of Open Access Journals (Sweden)

    Craig M Dalton

    2015-08-01

    Full Text Available Data analytics, particularly the current rhetoric around “Big Data”, tend to be presented as new and innovative, emerging ahistorically to revolutionize modern life. In this article, we situate one branch of Big Data analytics, spatial Big Data, through a historical predecessor, geodemographic analysis, to help develop a critical approach to current data analytics. Spatial Big Data promises an epistemic break in marketing, a leap from targeting geodemographic areas to targeting individuals. Yet it inherits characteristics and problems from geodemographics, including a justification through the market, and a process of commodification through the black-boxing of technology. As researchers develop sustained critiques of data analytics and its effects on everyday life, we must do so with a grounding in the cultural and historical contexts from which data technologies emerged. This article and others (Barnes and Wilson, 2014) develop a historically situated, critical approach to spatial Big Data. This history illustrates connections to the critical issues of surveillance, redlining, and the production of consumer subjects and geographies. The shared histories and structural logics of spatial Big Data and geodemographics create the space for a continued critique of data analyses’ role in society.

  18. Estudiarán el Big Bang por Internet

    CERN Multimedia

    2007-01-01

    The Internet, star of the present, takes on another challenge that mixes past and future: joining the world scientific community to clarify the origins of the universe, the Big Bang. (1/2 page)

  19. Big data analysis for smart farming

    NARCIS (Netherlands)

    Kempenaar, C.; Lokhorst, C.; Bleumer, E.J.B.; Veerkamp, R.F.; Been, Th.; Evert, van F.K.; Boogaardt, M.J.; Ge, L.; Wolfert, J.; Verdouw, C.N.; Bekkum, van Michael; Feldbrugge, L.; Verhoosel, Jack P.C.; Waaij, B.D.; Persie, van M.; Noorbergen, H.

    2016-01-01

    In this report we describe results of a one-year TO2 institutes project on the development of big data technologies within the milk production chain. The goal of this project is to ‘create’ an integration platform for big data analysis for smart farming and to develop a show case. This includes both

  20. A survey on Big Data Stream Mining

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... Big Data can be static on one machine or distributed ... decision making, and process automation. Big data .... Concept Drifting: concept drifting mean the classifier .... transactions generated by a prefix tree structure. EstDec ...

  1. Emerging technology and architecture for big-data analytics

    CERN Document Server

    Chang, Chip; Yu, Hao

    2017-01-01

    This book describes the current state of the art in big-data analytics, from a technology and hardware architecture perspective. The presentation is designed to be accessible to a broad audience, with general knowledge of hardware design and some interest in big-data analytics. Coverage includes emerging technology and devices for data-analytics, circuit design for data-analytics, and architecture and algorithms to support data-analytics. Readers will benefit from the realistic context used by the authors, which demonstrates what works, what doesn’t work, and what the fundamental problems, solutions, upcoming challenges and opportunities are. Provides a single-source reference to hardware architectures for big-data analytics; Covers various levels of big-data analytics hardware design abstraction and flow, from device, to circuits and systems; Demonstrates how non-volatile memory (NVM) based hardware platforms can be a viable solution to existing challenges in hardware architecture for big-data analytics.

  2. Promoting Recruitment using Information Management Efficiently (PRIME): a stepped-wedge, cluster randomised trial of a complex recruitment intervention embedded within the REstart or Stop Antithrombotics Randomised Trial.

    Science.gov (United States)

    Maxwell, Amy E; Parker, Richard A; Drever, Jonathan; Rudd, Anthony; Dennis, Martin S; Weir, Christopher J; Al-Shahi Salman, Rustam

    2017-12-28

    Few interventions are proven to increase recruitment in clinical trials. Recruitment to RESTART, a randomised controlled trial of secondary prevention after stroke due to intracerebral haemorrhage, has been slower than expected. Therefore, we sought to investigate an intervention to boost recruitment to RESTART. We conducted a stepped-wedge, cluster randomised trial of a complex intervention to increase recruitment, embedded within the RESTART trial. The primary objective was to investigate if the PRIME complex intervention (a recruitment co-ordinator who conducts a recruitment review, provides access to bespoke stroke audit data exports, and conducts a follow-up review after 6 months) increases the recruitment rate to RESTART. We included 72 hospital sites located in England, Wales, or Scotland that were active in RESTART in June 2015. All sites began in the control state and were allocated using block randomisation stratified by hospital location (Scotland versus England/Wales) to start the complex intervention in one of 12 different months. The primary outcome was the number of patients randomised into RESTART per month per site. We quantified the effect of the complex intervention on the primary outcome using a negative binomial, mixed model adjusting for site, December/January months, site location, and background time trends in recruitment rate. We recruited and randomised 72 sites and recorded their monthly recruitment to RESTART over 24 months (March 2015 to February 2017 inclusive), providing 1728 site-months of observations for the primary analysis. The adjusted rate ratio for the number of patients randomised per month after allocation to the PRIME complex intervention versus control time before allocation to the PRIME complex intervention was 1.06 (95% confidence interval 0.55 to 2.03, p = 0.87). Although two thirds of respondents to the 6-month follow-up questionnaire agreed that the audit reports were useful, only six patients were reported to
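
    The primary outcome in this record is a rate (patients randomised per site per month), analysed with a negative binomial mixed model adjusting for site and calendar time. As a much simpler illustration of the underlying quantity, here is an unadjusted rate ratio with a Poisson-approximation 95% CI; the counts below are hypothetical, not PRIME's data:

```python
import math

def rate_ratio(events_int, exposure_int, events_ctl, exposure_ctl):
    """Unadjusted rate ratio (events per unit of exposure, e.g. site-months)
    with a log-scale 95% CI under a Poisson approximation. The trial itself
    uses a negative binomial mixed model; this crude ratio ignores clustering
    and time trends."""
    rate_i = events_int / exposure_int
    rate_c = events_ctl / exposure_ctl
    rr = rate_i / rate_c
    se = math.sqrt(1 / events_int + 1 / events_ctl)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lo, hi)

# Hypothetical counts: 60 patients over 400 intervention site-months
# vs 50 patients over 400 control site-months.
rr, ci = rate_ratio(60, 400, 50, 400)
```

    A CI that spans 1.0, as in the trial's reported 1.06 (0.55 to 2.03), indicates no demonstrable effect of the intervention on the recruitment rate.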

  3. What do Big Data do in Global Governance?

    DEFF Research Database (Denmark)

    Krause Hansen, Hans; Porter, Tony

    2017-01-01

    Two paradoxes associated with big data are relevant to global governance. First, while promising to increase the capacities of humans in governance, big data also involve an increasingly independent role for algorithms, technical artifacts, the Internet of things, and other objects, which can...... reduce the control of human actors. Second, big data involve new boundary transgressions as data are brought together from multiple sources while also creating new boundary conflicts as powerful actors seek to gain advantage by controlling big data and excluding competitors. These changes are not just...... about new data sources for global decision-makers, but instead signal more profound changes in the character of global governance....

  4. 40 CFR 98.413 - Calculating GHG emissions.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Calculating GHG emissions. 98.413 Section 98.413 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Industrial Greenhouse Gases § 98.413 Calculating...

  5. 40 CFR 98.416 - Data reporting requirements.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Data reporting requirements. 98.416 Section 98.416 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Industrial Greenhouse Gases § 98.416 Data...

  6. Big Data in Caenorhabditis elegans: quo vadis?

    Science.gov (United States)

    Hutter, Harald; Moerman, Donald

    2015-11-05

    A clear definition of what constitutes "Big Data" is difficult to identify, but we find it most useful to define Big Data as a data collection that is complete. By this criterion, researchers on Caenorhabditis elegans have a long history of collecting Big Data, since the organism was selected with the idea of obtaining a complete biological description and understanding of development. The complete wiring diagram of the nervous system, the complete cell lineage, and the complete genome sequence provide a framework to phrase and test hypotheses. Given this history, it might be surprising that the number of "complete" data sets for this organism is actually rather small--not because of lack of effort, but because most types of biological experiments are not currently amenable to complete large-scale data collection. Many are also not inherently limited, so that it becomes difficult to even define completeness. At present, we only have partial data on mutated genes and their phenotypes, gene expression, and protein-protein interaction--important data for many biological questions. Big Data can point toward unexpected correlations, and these unexpected correlations can lead to novel investigations; however, Big Data cannot establish causation. As a result, there is much excitement about Big Data, but there is also a discussion on just what Big Data contributes to solving a biological problem. Because of its relative simplicity, C. elegans is an ideal test bed to explore this issue and at the same time determine what is necessary to build a multicellular organism from a single cell. © 2015 Hutter and Moerman. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  7. 76 FR 7810 - Big Horn County Resource Advisory Committee

    Science.gov (United States)

    2011-02-11

    ..., Wyoming 82801. Comments may also be sent via e-mail to [email protected], with the words Big... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee...

  8. Hot big bang or slow freeze?

    Energy Technology Data Exchange (ETDEWEB)

    Wetterich, C.

    2014-09-07

    We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze — a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple “crossover model” without a big bang singularity. In the infinite past space–time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe.

  9. Hot big bang or slow freeze?

    International Nuclear Information System (INIS)

    Wetterich, C.

    2014-01-01

    We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze — a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple “crossover model” without a big bang singularity. In the infinite past space–time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe.

  10. Hot big bang or slow freeze?

    Directory of Open Access Journals (Sweden)

    C. Wetterich

    2014-09-01

    Full Text Available We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze — a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple “crossover model” without a big bang singularity. In the infinite past space–time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe.

  11. Big Data: Survey, Technologies, Opportunities, and Challenges

    Directory of Open Access Journals (Sweden)

    Nawsher Khan

    2014-01-01

    Full Text Available Big Data has gained much attention from the academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds the boundary range. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed on optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy stage, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in Big Data domination. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data.
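
    As a quick sanity check on the growth figures quoted in this abstract, the projected 44-fold increase in data production between 2009 and 2020 implies a compound annual growth rate of roughly 41%. The short Python sketch below works this out; the variable names are ours, not the survey's.

    ```python
    # Arithmetic check of the survey's projection: annual data production
    # in 2020 is predicted to be 44 times that of 2009.
    growth_factor = 44        # 2020 output relative to 2009
    years = 2020 - 2009       # an 11-year span

    # Compound annual growth rate implied by the projection.
    cagr = growth_factor ** (1 / years) - 1
    print(f"Implied CAGR: {cagr:.1%}")  # -> Implied CAGR: 41.1%
    ```

    Sustaining a forty-percent yearly growth rate is exactly why the abstract's challenges (transfer speed, storage, security) compound rather than stay constant.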

  12. 45 CFR 98.20 - A child's eligibility for child care services.

    Science.gov (United States)

    2010-10-01

    ..., ethnic background, sex, religious affiliation, or disability; (2) Limit parental rights provided under... 45 Public Welfare (2010-10-01). A child's eligibility for child care services. Section 98.20, Public Welfare, DEPARTMENT OF HEALTH AND HUMAN SERVICES, GENERAL ADMINISTRATION, CHILD...

  13. Pre-big bang cosmology and quantum fluctuations

    International Nuclear Information System (INIS)

    Ghosh, A.; Pollifrone, G.; Veneziano, G.

    2000-01-01

    The quantum fluctuations of a homogeneous, isotropic, open pre-big bang model are discussed. By solving exactly the equations for tensor and scalar perturbations we find that particle production is negligible during the perturbative pre-big bang phase.

  14. Toward effective software solutions for big biology

    NARCIS (Netherlands)

    Prins, Pjotr; de Ligt, Joep; Tarasov, Artem; Jansen, Ritsert C; Cuppen, Edwin; Bourne, Philip E

    2015-01-01

    Leading scientists tell us that the problem of large data and data integration, referred to as 'big data', is acute and hurting research. Recently, Snijder et al. [1] suggested a culture change in which scientists would aim to share high-dimensional data among laboratories. It is important to realize

  15. Challenges of Big Data in Educational Assessment

    Science.gov (United States)

    Gibson, David C.; Webb, Mary; Ifenthaler, Dirk

    2015-01-01

    This paper briefly discusses four measurement challenges of data science or "big data" in educational assessments that are enabled by technology: 1. Dealing with change over time via time-based data. 2. How a digital performance space's relationships interact with learner actions, communications and products. 3. How layers of…

  16. Identify too big to fail banks and capital insurance: An equilibrium approach

    Directory of Open Access Journals (Sweden)

    Katerina Ivanov

    2017-09-01

    Full Text Available The objective of this paper is to develop a rational expectation equilibrium model of capital insurance to identify too big to fail banks. The main results of this model include (1) too big to fail banks can be identified explicitly by a systemic risk measure, loss betas, of all banks in the entire financial sector; (2) the too big to fail feature can be largely justified by a high level of loss beta; (3) the capital insurance proposal benefits market participants and reduces the systemic risk; (4) the implicit guarantee subsidy can be estimated endogenously; and lastly, (5) the capital insurance proposal can be used to resolve the moral hazard issue. We implement this model and document that the too big to fail issue has been considerably reduced in the post-crisis period. As a result, the capital insurance proposal could be a useful macro-regulation innovation policy tool.
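
    The abstract does not spell out how a loss beta is computed. By analogy with an ordinary market beta, a minimal sketch (our own reading, not the paper's estimator) regresses each bank's per-period loss against the aggregate loss of the whole sector:

    ```python
    import numpy as np

    def loss_betas(losses: np.ndarray) -> np.ndarray:
        """Loss beta per bank: cov(bank loss, sector loss) / var(sector loss).

        `losses` has shape (n_periods, n_banks); each column holds one bank's
        per-period losses. This mirrors a market beta and is only one plausible
        reading of the abstract, not the paper's exact definition.
        """
        sector = losses.sum(axis=1)          # aggregate sector loss per period
        var = sector.var(ddof=1)             # match np.cov's default ddof=1
        return np.array([np.cov(losses[:, i], sector)[0, 1] / var
                         for i in range(losses.shape[1])])

    # Toy data: three banks loading differently on a common loss factor;
    # the bank with the largest loading should show the largest loss beta.
    rng = np.random.default_rng(0)
    factor = rng.normal(size=1000)
    losses = np.column_stack([
        0.5 * factor + rng.normal(size=1000),
        0.2 * factor + rng.normal(size=1000),
        1.5 * factor + rng.normal(size=1000),   # candidate "too big to fail" bank
    ])
    print(loss_betas(losses).argmax())  # -> 2 (the third bank)
    ```

    In this reading, a regulator would flag banks whose loss beta exceeds some threshold as too big to fail; the paper's equilibrium model endogenizes that threshold, which the sketch makes no attempt to do.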

  17. Baryon symmetric big bang cosmology

    Science.gov (United States)

    Stecker, F. W.

    1978-01-01

    Both the quantum theory and Einstein's theory of special relativity lead to the supposition that matter and antimatter were produced in equal quantities during the big bang. It is noted that local matter/antimatter asymmetries may be reconciled with universal symmetry by assuming (1) a slight imbalance of matter over antimatter in the early universe, annihilation, and a subsequent remainder of matter; (2) localized regions of excess for one or the other type of matter as an initial condition; and (3) an extremely dense, high temperature state with zero net baryon number; i.e., matter/antimatter symmetry. Attention is given to the third assumption, which is the simplest and the most in keeping with current knowledge of the cosmos, especially as it pertains to the universality of the 3 K background radiation. Mechanisms of galaxy formation are discussed, whereby matter and antimatter might have collided and annihilated each other, or have coexisted (and continue to coexist) at vast distances. It is pointed out that baryon symmetric big bang cosmology could probably be proved if an antinucleus could be detected in cosmic radiation.

  18. Analysis of Big Data Maturity Stage in Hospitality Industry

    OpenAIRE

    Shabani, Neda; Munir, Arslan; Bose, Avishek

    2017-01-01

    Big data analytics has an extremely significant impact on many areas in all businesses and industries including hospitality. This study aims to guide information technology (IT) professionals in hospitality on their big data expedition. In particular, the purpose of this study is to identify the maturity stage of big data in the hospitality industry in an objective way, so that hotels are able to understand their progress and realize what it will take to get to the next stage of big data matur...

  19. A Multidisciplinary Perspective of Big Data in Management Research

    OpenAIRE

    Sheng, Jie; Amankwah-Amoah, J.; Wang, X.

    2017-01-01

    In recent years, big data has emerged as one of the prominent buzzwords in business and management. In spite of the mounting body of research on big data across the social science disciplines, scholars have offered little synthesis on the current state of knowledge. To take stock of academic research that contributes to the big data revolution, this paper tracks scholarly work's perspectives on big data in the management domain over the past decade. We identify key themes emerging in manageme...

  20. A matrix big bang

    International Nuclear Information System (INIS)

    Craps, Ben; Sethi, Savdeep; Verlinde, Erik

    2005-01-01

    The light-like linear dilaton background represents a particularly simple time-dependent 1/2 BPS solution of critical type-IIA superstring theory in ten dimensions. Its lift to M-theory, as well as its Einstein frame metric, are singular in the sense that the geometry is geodesically incomplete and the Riemann tensor diverges along a light-like subspace of codimension one. We study this background as a model for a big bang type singularity in string theory/M-theory. We construct the dual Matrix theory description in terms of a (1+1)-d supersymmetric Yang-Mills theory on a time-dependent world-sheet given by the Milne orbifold of (1+1)-d Minkowski space. Our model provides a framework in which the physics of the singularity appears to be under control.