WorldWideScience

Sample records for claims database analysis

  1. Prevalence rates for depression by industry: a claims database analysis.

    Science.gov (United States)

    Wulsin, Lawson; Alterman, Toni; Timothy Bushnell, P; Li, Jia; Shen, Rui

    2014-11-01

    To estimate and interpret differences in depression prevalence rates among industries, using a large, group medical claims database. Depression cases were identified by ICD-9 diagnosis code in a population of 214,413 individuals employed during 2002-2005 by employers based in western Pennsylvania. Data were provided by Highmark, Inc. (Pittsburgh and Camp Hill, PA). Rates were adjusted for age, gender, and employee share of health care costs. National industry measures of psychological distress, work stress, and physical activity at work were also compiled from other data sources. Rates for clinical depression in 55 industries ranged from 6.9% to 16.2% (population rate = 10.45%). Industries with the highest rates tended to be those which, on the national level, require frequent or difficult interactions with the public or clients, and have high levels of stress and low levels of physical activity. Additional research is needed to help identify industries with relatively high rates of depression in other regions and on the national level, and to determine whether these differences are due in part to specific work stress exposures and physical inactivity at work. Claims database analyses may provide a cost-effective way to identify priorities for depression treatment and prevention in the workplace.
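
    The adjustment behind the industry prevalence rates above is not spelled out in this record, so the sketch below shows one conventional approach, direct age/sex standardization of claims-derived rates, on hypothetical data and column names.

      # Illustrative sketch (not the study's method): crude and directly
      # age/sex-standardized depression prevalence by industry, computed from a
      # member-level claims extract. All column names and values are hypothetical.
      import pandas as pd

      members = pd.DataFrame({
          "industry":  ["hospital"] * 4 + ["mining"] * 4,
          "agegrp":    ["<45", "<45", "45+", "45+"] * 2,
          "female":    [1, 0, 1, 0] * 2,
          "depressed": [1, 0, 1, 0, 0, 0, 1, 0],  # any ICD-9 depression code in claims
      })

      # Stratum-specific prevalence within each industry
      strata = (members.groupby(["industry", "agegrp", "female"])["depressed"]
                       .mean().rename("rate").reset_index())

      # Weights = age/sex distribution of the whole insured population
      weights = (members.groupby(["agegrp", "female"]).size()
                        .div(len(members)).rename("w").reset_index())

      adjusted = (strata.merge(weights, on=["agegrp", "female"])
                        .assign(part=lambda d: d.rate * d.w)
                        .groupby("industry")["part"].sum())
      print(adjusted)  # directly standardized prevalence per industry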

  2. Prevalence rates for depression by industry: a claims database analysis

    Science.gov (United States)

    Alterman, Toni; Bushnell, P. Timothy; Li, Jia; Shen, Rui

    2015-01-01

    Purpose To estimate and interpret differences in depression prevalence rates among industries, using a large, group medical claims database. Methods Depression cases were identified by ICD-9 diagnosis code in a population of 214,413 individuals employed during 2002–2005 by employers based in western Pennsylvania. Data were provided by Highmark, Inc. (Pittsburgh and Camp Hill, PA). Rates were adjusted for age, gender, and employee share of health care costs. National industry measures of psychological distress, work stress, and physical activity at work were also compiled from other data sources. Results Rates for clinical depression in 55 industries ranged from 6.9% to 16.2% (population rate = 10.45%). Industries with the highest rates tended to be those which, on the national level, require frequent or difficult interactions with the public or clients, and have high levels of stress and low levels of physical activity. Conclusions Additional research is needed to help identify industries with relatively high rates of depression in other regions and on the national level, and to determine whether these differences are due in part to specific work stress exposures and physical inactivity at work. Clinical significance Claims database analyses may provide a cost-effective way to identify priorities for depression treatment and prevention in the workplace. PMID:24907896

  3. Treatment patterns and health care resource utilization associated with dalfampridine extended release in multiple sclerosis: a retrospective claims database analysis

    Directory of Open Access Journals (Sweden)

    Guo A

    2016-05-01

    Full Text Available Amy Guo,1 Michael Grabner,2 Swetha Rao Palli,2 Jessica Elder,1 Matthew Sidovar,1 Peter Aupperle,1 Stephen Krieger3 1Acorda Therapeutics Inc., Ardsley, New York, NY, USA; 2HealthCore Inc., Wilmington, DE, USA; 3Corinne Goldsmith Dickinson Center for MS, Icahn School of Medicine at Mount Sinai, New York, NY, USA Background: Although previous studies have demonstrated the clinical benefits of dalfampridine extended release (D-ER) tablets in patients with multiple sclerosis (MS), there are limited real-world data on D-ER utilization and associated outcomes in patients with MS. Purpose: The objective of this study was to evaluate treatment patterns, budget impact, and health care resource utilization (HRU) associated with D-ER use in a real-world setting. Methods: A retrospective claims database analysis was conducted using the HealthCore Integrated Research DatabaseSM. Adherence (measured by medication possession ratio, or MPR) and persistence (measured by days between initial D-ER claim and discontinuation or end of follow-up) were evaluated over 1-year follow-up. Budget impact was calculated as cost per member per month (PMPM) over the available follow-up period. D-ER and control cohorts were propensity-score matched on baseline demographics, comorbidities, and MS-related resource utilization to compare walking-impairment-related HRU over follow-up. Results: Of the 2,138 MS patients identified, 1,200 were not treated with D-ER (control) and 938 were treated with D-ER. Patients were aged 51 years on average and 74% female. Approximately 82.6% of D-ER patients were adherent (MPR >80%). The estimated budget impact range of D-ER was $0.014–$0.026 PMPM. Propensity-score-matched D-ER and controls yielded 479 patients in each cohort. Postmatching comparison showed that the D-ER cohort was associated with fewer physician (21.5% vs 62.4%, P<0.0001) and other outpatient visits (22.8% vs 51.4%, P<0.0001) over the 12-month follow-up. Changes in HRU from follow

  4. Persistence with weekly and monthly bisphosphonates among postmenopausal women: analysis of a US pharmacy claims administrative database

    Directory of Open Access Journals (Sweden)

    Fan T

    2013-11-01

    Full Text Available Tao Fan, Qiaoyi Zhang, Shuvayu S Sen Global Health Outcomes, Merck, Whitehouse Station, NJ, USA Background: Bisphosphonates are available in daily, weekly, and monthly dosing formulations to treat postmenopausal osteoporosis. Some researchers suggested that adherence to monthly bisphosphonate might be different from that with weekly or daily bisphosphonate because of different dosing regimens. However, the actual persistency rates in regular practice settings are unknown. Objectives: To compare persistence rates with alendronate 70 mg once weekly (AOW), risedronate 35 mg once weekly (ROW), and ibandronate 150 mg once monthly (IOM) in a US pharmacy claims database. Methods: In this retrospective cohort study, pharmacy claims data of patients with new bisphosphonate prescriptions were extracted for women aged ≥ 50 years who had an AOW, ROW, or IOM prescription (index prescription) between December 30, 2004 and May 31, 2005 (the index period) and did not have the index Rx during the previous 12 months. Patients’ records were reviewed for at least 5 months from their index date to November 2, 2005 (the follow-up period). Patients were considered persistent if they neither discontinued (failed to refill the index Rx within a 45-day period following the last supply day of the previous dispensing) nor switched (changed to another bisphosphonate) during the follow-up period. Medication-possession ratio was defined as days with index prescription supplies/total days of follow-up. Results: Among 44,635 patients, 25,207 (56.5%) received prescriptions of AOW, 18,689 (41.9%) ROW, and 739 (1.7%) IOM as the index prescription. In all, 35.1% of AOW patients, 32.5% of ROW patients, and 30.4% of IOM patients (P < 0.0001 AOW vs ROW or IOM) had persisted with their initial therapy, whereas 64.0% of AOW, 66.4% of ROW, and 68.2% of IOM patients discontinued (P < 0.0001) during follow-up. The medication-possession ratio (days with index prescription supplies/total days of
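
    As a worked illustration of the persistence and medication-possession ratio (MPR) definitions above, the sketch below applies the 45-day refill-gap rule to one hypothetical patient's fills (switching to another bisphosphonate is omitted for brevity).

      # Worked illustration of the persistence and MPR definitions above, applied to
      # one hypothetical patient's pharmacy fills; dates and supplies are invented.
      import pandas as pd

      fills = pd.DataFrame({
          "fill_date":   pd.to_datetime(["2005-01-10", "2005-02-12", "2005-04-20"]),
          "days_supply": [30, 30, 30],
      })
      follow_up_end = pd.Timestamp("2005-11-02")
      index_date = fills["fill_date"].min()
      GAP = 45  # days allowed after the last supply day before "discontinued"

      persistent = True
      for prev, nxt in zip(fills.itertuples(), fills.iloc[1:].itertuples()):
          supply_end = prev.fill_date + pd.Timedelta(days=prev.days_supply)
          if (nxt.fill_date - supply_end).days > GAP:
              persistent = False  # refill gap exceeded 45 days
      last_end = fills["fill_date"].iloc[-1] + pd.Timedelta(days=int(fills["days_supply"].iloc[-1]))
      if (follow_up_end - last_end).days > GAP:
          persistent = False      # no refill before the end of follow-up

      # Medication-possession ratio: days supplied / total days of follow-up
      mpr = fills["days_supply"].sum() / (follow_up_end - index_date).days
      print(persistent, round(mpr, 2))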

  5. Incidence of catheter-related complications in patients with central venous or hemodialysis catheters: a health care claims database analysis.

    Science.gov (United States)

    Napalkov, Pavel; Felici, Diana M; Chu, Laura K; Jacobs, Joan R; Begelman, Susan M

    2013-10-16

    Central venous catheter (CVC) and hemodialysis (HD) catheter usage are associated with complications that occur during catheter insertion, dwell period, and removal. This study aims to identify and describe the incidence rates of catheter-related complications in a large patient population in a United States-based health care claims database after CVC or HD catheter placement. Patients in the i3 InVision DataMart® health care claims database with at least 1 CVC or HD catheter insertion claim were categorized into CVC or HD cohorts using diagnostic and procedural codes from the US Renal Data System, American College of Surgeons, and American Medical Association's Physician Performance Measures. Catheter-related complications were identified using published diagnostic and procedural codes. Incidence rates (IRs)/1000 catheter-days were calculated for complications including catheter-related bloodstream infections (CRBSIs), thrombosis, embolism, intracranial hemorrhage (ICH), major bleeding (MB), and mechanical catheter-related complications (MCRCs). Thirty percent of the CVC cohort and 54% of the HD cohort had catheter placements lasting <90 days. Catheter-related complications occurred most often during the first 90 days of catheter placement. IRs were highest for CRBSIs in both cohorts (4.0 [95% CI, 3.7-4.3] and 5.1 [95% CI, 4.7-5.6], respectively). Other IRs in CVC and HD cohorts, respectively, were thrombosis, 1.3 and 0.8; MCRCs, 0.6 and 0.7; embolism, 0.4 and 0.5; MB, 0.1 and 0.3; and ICH, 0.1 in both cohorts. Patients with cancer at baseline had significantly higher IRs for CRBSIs and thrombosis than non-cancer patients. CVC or HD catheter-related complications were most frequently seen in patients 16 years or younger. The risk of catheter-related complications is highest during the first 90 days of catheter placement in patients with CVCs and HD catheters and in younger patients (≤16 years of age) with HD catheters. Data provided in this study can be applied
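
    The incidence rates above are expressed per 1,000 catheter-days; a minimal sketch of that calculation, with an exact Poisson confidence interval and made-up counts, follows.

      # Minimal sketch of an incidence-rate calculation per 1,000 catheter-days with
      # an exact Poisson 95% CI; the counts are made up, not taken from the study.
      from scipy.stats import chi2

      def ir_per_1000(events: int, catheter_days: float):
          rate = 1000.0 * events / catheter_days
          lo = 0.0 if events == 0 else chi2.ppf(0.025, 2 * events) / 2
          hi = chi2.ppf(0.975, 2 * events + 2) / 2
          return rate, 1000.0 * lo / catheter_days, 1000.0 * hi / catheter_days

      # e.g. a hypothetical 240 CRBSIs over 60,000 catheter-days
      print([round(x, 2) for x in ir_per_1000(240, 60_000)])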

  6. Treatment patterns and healthcare resource utilization and costs in heavy menstrual bleeding: a Japanese claims database analysis.

    Science.gov (United States)

    Akiyama, Sayako; Tanaka, Erika; Cristeau, Olivier; Onishi, Yoshie; Osuga, Yutaka

    2018-06-01

    Heavy menstrual bleeding (HMB) is a highly prevalent condition, characterized by excessive menstrual blood loss and cramping, that interferes with activities of daily life. The aim of this study was to investigate treatment patterns in HMB in Japan, and to assess healthcare resource utilization and costs among women newly-diagnosed with the condition. This study retrospectively analyzed health insurance data available in the Japan Medical Data Center (JMDC) database on women aged 18-49 years who were newly-diagnosed with primary or secondary HMB. Treatment patterns were analyzed, and healthcare utilization and costs were evaluated and compared to matched controls. The study included a total of 635 patients, 210 with primary HMB and 425 with secondary HMB. In the primary HMB cohort, 60.0% of patients received one or more pharmacological or surgical treatments, compared with 76.2% in the secondary HMB cohort. The most commonly prescribed medications in all patients were hemostatic agents (28.7%), traditional Chinese medicine (TCM) (12.1%), and low-dose estrogen progestins (LEPs) (10.1%). After adjustment for patient baseline characteristics, healthcare costs were 1.93-times higher in primary HMB cases (p < .0001) and 4.44-times higher in secondary HMB cases (p < .0001) vs healthy controls. Outpatient care was the main cost driver. The main limitations of this study are related to its retrospective nature, and the fact that only reimbursed medications were captured in the source database. A substantial proportion of HMB patients did not receive the recommended treatments. Healthcare costs were considerably increased in the presence of an HMB diagnosis.

  7. Operating room fires: a closed claims analysis.

    Science.gov (United States)

    Mehta, Sonya P; Bhananker, Sanjay M; Posner, Karen L; Domino, Karen B

    2013-05-01

    To assess patterns of injury and liability associated with operating room (OR) fires, closed malpractice claims in the American Society of Anesthesiologists Closed Claims Database since 1985 were reviewed. All claims related to fires in the OR were compared with nonfire-related surgical anesthesia claims. An analysis of fire-related claims was performed to identify causative factors. There were 103 OR fire claims (1.9% of 5,297 surgical claims). Electrocautery was the ignition source in 90% of fire claims. OR fire claims more frequently involved older outpatients compared with other surgical anesthesia claims. Electrocautery-induced fires (n = 93) increased over time, and most occurred during head, neck, or upper chest procedures (high-fire-risk procedures). Oxygen served as the oxidizer in 95% of electrocautery-induced OR fires (84% with open delivery system). Most electrocautery-induced fires (n = 75, 81%) occurred during monitored anesthesia care. Oxygen was administered via an open delivery system in all high-risk procedures during monitored anesthesia care. In contrast, alcohol-containing prep solutions and volatile compounds were present in only 15% of OR fires during monitored anesthesia care. Electrocautery-induced fires during monitored anesthesia care were the most common cause of OR fire claims. Recognition of the fire triad (oxidizer, fuel, and ignition source), particularly the critical role of supplemental oxygen by an open delivery system during use of the electrocautery, is crucial to prevent OR fires. Continuing education and communication among OR personnel along with fire prevention protocols in high-fire-risk procedures may reduce the occurrence of OR fires.

  8. Drug usage patterns and treatment costs in newly-diagnosed type 2 diabetes mellitus cases, 2007 vs 2012: findings from a large US healthcare claims database analysis.

    Science.gov (United States)

    Weng, W; Liang, Y; Kimball, E S; Hobbs, T; Kong, S; Sakurada, B; Bouchard, J

    2016-07-01

    Objective To explore trends in demographics, comorbidities, anti-diabetic drug usage, and healthcare utilization costs in patients with newly-diagnosed type 2 diabetes mellitus (T2DM) using a large US claims database. Methods For the years 2007 and 2012, Truven Health Marketscan Research Databases were used to identify adults with newly-diagnosed T2DM and continuous 12-month enrollment with prescription benefits. Variables examined included patient demographics, comorbidities, inpatient utilization patterns, healthcare costs (inpatient and outpatient), drug costs, and diabetes drug claim patterns. Results Despite an increase in the overall database population between 2007-2012, the incidence of newly-diagnosed T2DM decreased from 1.1% (2007) to 0.65% (2012). Hyperlipidemia and hypertension were the most common comorbidities and increased in prevalence from 2007 to 2012. In 2007, 48.3% of newly-diagnosed T2DM patients had no claims for diabetes medications, compared with 36.2% of patients in 2012. The use of a single oral anti-diabetic drug (OAD) was the most common diabetes medication-related claim (46.2% of patients in 2007; 56.7% of patients in 2012). Among OAD monotherapy users, metformin was the most commonly used and increased from 2007 (74.7% of OAD monotherapy users) to 2012 (90.8%). Decreases were observed for sulfonylureas (14.1% to 6.2%) and thiazolidinediones (7.3% to 0.6%). Insulin, predominantly basal insulin, was used by 3.9% of patients in 2007 and 5.3% of patients in 2012. Mean total annual healthcare costs increased from $13,744 in 2007 to $15,175 in 2012, driven largely by outpatient services, although costs in all individual categories of healthcare services (inpatient and outpatient) increased. Conversely, total drug costs per patient were lower in 2012 compared with 2007. Conclusions Despite a drop in the rate of newly-diagnosed T2DM from 2007 to 2012 in the US, increased total medical costs and comorbidities per individual patient suggest that

  9. Analysis of the evidence-practice gap to facilitate proper medical care for the elderly: investigation, using databases, of utilization measures for National Database of Health Insurance Claims and Specific Health Checkups of Japan (NDB).

    Science.gov (United States)

    Nakayama, Takeo; Imanaka, Yuichi; Okuno, Yasushi; Kato, Genta; Kuroda, Tomohiro; Goto, Rei; Tanaka, Shiro; Tamura, Hiroshi; Fukuhara, Shunichi; Fukuma, Shingo; Muto, Manabu; Yanagita, Motoko; Yamamoto, Yosuke

    2017-06-06

    As Japan becomes a super-aging society, presentation of the best ways to provide medical care for the elderly, and the direction of that care, are important national issues. Elderly people have multi-morbidity with numerous medical conditions and use many medical resources for complex treatment patterns. This increases the likelihood of inappropriate medical practices and an evidence-practice gap. The present study aimed to: derive findings that are applicable to policy from an elucidation of the actual state of medical care for the elderly; establish a foundation for the utilization of the National Database of Health Insurance Claims and Specific Health Checkups of Japan (NDB); and present measures for the utilization of existing databases in parallel with NDB validation. Cross-sectional and retrospective cohort studies were conducted using the NDB built by the Ministry of Health, Labor and Welfare of Japan, private health insurance claims databases, and the Kyoto University Hospital database (including related hospitals). Medical practices (drug prescription, interventional procedures, and testing) related to four issues (potentially inappropriate medication, cancer therapy, chronic kidney disease treatment, and end-of-life care) will be described. The relationships between these issues and clinical outcomes (death, initiation of dialysis, and other adverse events) will be evaluated, if possible.

  10. Claims-based definition of death in Japanese claims database: validity and implications.

    Science.gov (United States)

    Ooba, Nobuhiro; Setoguchi, Soko; Ando, Takashi; Sato, Tsugumichi; Yamaguchi, Takuhiro; Mochizuki, Mayumi; Kubota, Kiyoshi

    2013-01-01

    For the pending National Claims Database in Japan, researchers will not have access to death information in the enrollment files. We developed and evaluated a claims-based definition of death. We used healthcare claims and enrollment data between January 2005 and August 2009 for 195,193 beneficiaries aged 20 to 74 in 3 private health insurance unions. We developed claims-based definitions of death using discharge or disease status and Charlson comorbidity index (CCI). We calculated sensitivity, specificity and positive predictive values (PPVs) using the enrollment data as a gold standard in the overall population and subgroups divided by demographic and other factors. We also assessed bias and precision in two example studies where an outcome was death. The definition based on the combination of discharge/disease status and CCI provided moderate sensitivity (around 60%) and high specificity (99.99%) and high PPVs (94.8%). In most subgroups, sensitivity of the preferred definition was also around 60% but varied from 28 to 91%. In an example study comparing death rates between two anticancer drug classes, the claims-based definition provided valid and precise hazard ratios (HRs). In another example study comparing two classes of anti-depressants, the HR with the claims-based definition was biased and had lower precision than that with the gold standard definition. The claims-based definitions of death developed in this study had high specificity and PPVs while sensitivity was around 60%. The definitions will be useful in future studies when used with attention to the possible fluctuation of sensitivity in some subpopulations.
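
    For readers unfamiliar with the validation metrics reported above, the sketch below computes sensitivity, specificity, and PPV for a hypothetical claims-based death flag against an enrollment-file gold standard.

      # Sketch: validity metrics for a claims-based death indicator, scored against
      # the enrollment file as the gold standard. The flags below are hypothetical.
      import pandas as pd

      df = pd.DataFrame({
          "claims_death": [1, 0, 1, 0, 0, 1, 0, 0],  # from discharge/disease status + CCI
          "gold_death":   [1, 0, 1, 1, 0, 1, 0, 0],  # death recorded in enrollment data
      })

      tp = ((df.claims_death == 1) & (df.gold_death == 1)).sum()
      fp = ((df.claims_death == 1) & (df.gold_death == 0)).sum()
      fn = ((df.claims_death == 0) & (df.gold_death == 1)).sum()
      tn = ((df.claims_death == 0) & (df.gold_death == 0)).sum()

      print("sensitivity:", tp / (tp + fn))  # around 0.6 in the study
      print("specificity:", tn / (tn + fp))  # 0.9999 in the study
      print("PPV:",         tp / (tp + fp))  # 0.948 in the study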

  11. Claims-Based Definition of Death in Japanese Claims Database: Validity and Implications

    Science.gov (United States)

    Ooba, Nobuhiro; Setoguchi, Soko; Ando, Takashi; Sato, Tsugumichi; Yamaguchi, Takuhiro; Mochizuki, Mayumi; Kubota, Kiyoshi

    2013-01-01

    Background For the pending National Claims Database in Japan, researchers will not have access to death information in the enrollment files. We developed and evaluated a claims-based definition of death. Methodology/Principal Findings We used healthcare claims and enrollment data between January 2005 and August 2009 for 195,193 beneficiaries aged 20 to 74 in 3 private health insurance unions. We developed claims-based definitions of death using discharge or disease status and Charlson comorbidity index (CCI). We calculated sensitivity, specificity and positive predictive values (PPVs) using the enrollment data as a gold standard in the overall population and subgroups divided by demographic and other factors. We also assessed bias and precision in two example studies where an outcome was death. The definition based on the combination of discharge/disease status and CCI provided moderate sensitivity (around 60%) and high specificity (99.99%) and high PPVs (94.8%). In most subgroups, sensitivity of the preferred definition was also around 60% but varied from 28 to 91%. In an example study comparing death rates between two anticancer drug classes, the claims-based definition provided valid and precise hazard ratios (HRs). In another example study comparing two classes of anti-depressants, the HR with the claims-based definition was biased and had lower precision than that with the gold standard definition. Conclusions/Significance The claims-based definitions of death developed in this study had high specificity and PPVs while sensitivity was around 60%. The definitions will be useful in future studies when used with attention to the possible fluctuation of sensitivity in some subpopulations. PMID:23741526

  12. Linked Patient-Reported Outcomes Data From Patients With Multiple Sclerosis Recruited on an Open Internet Platform to Health Care Claims Databases Identifies a Representative Population for Real-Life Data Analysis in Multiple Sclerosis.

    Science.gov (United States)

    Risson, Valery; Ghodge, Bhaskar; Bonzani, Ian C; Korn, Jonathan R; Medin, Jennie; Saraykar, Tanmay; Sengupta, Souvik; Saini, Deepanshu; Olson, Melvin

    2016-09-22

    An enormous amount of information relevant to public health is being generated directly by online communities. To explore the feasibility of creating a dataset that links patient-reported outcomes data, from a Web-based survey of US patients with multiple sclerosis (MS) recruited on open Internet platforms, to health care utilization information from health care claims databases. The dataset was generated by linkage analysis to a broader MS population in the United States using both pharmacy and medical claims data sources. US Facebook users with an interest in MS were alerted to a patient-reported survey by targeted advertisements. Eligibility criteria were diagnosis of MS by a specialist (primary progressive, relapsing-remitting, or secondary progressive), ≥12-month history of disease, age 18-65 years, and commercial health insurance. Participants completed a questionnaire including data on demographic and disease characteristics, current and earlier therapies, relapses, disability, health-related quality of life, and employment status and productivity. A unique anonymous profile was generated for each survey respondent. Each anonymous profile was linked to a number of medical and pharmacy claims datasets in the United States. Linkage rates were assessed and survey respondents' representativeness was evaluated based on differences in the distribution of characteristics between the linked survey population and the general MS population in the claims databases. The advertisement was placed on 1,063,973 Facebook users' pages generating 68,674 clicks, 3719 survey attempts, and 651 successfully completed surveys, of which 440 could be linked to any of the claims databases for 2014 or 2015 (67.6% linkage rate). Overall, no significant differences were found between patients who were linked and not linked for educational status, ethnicity, current or prior disease-modifying therapy (DMT) treatment, or presence of a relapse in the last 12 months. The frequencies of the

  13. Strategy for a transparent, accessible, and sustainable national claims database.

    Science.gov (United States)

    Gelburd, Robin

    2015-03-01

    The article outlines the strategy employed by FAIR Health, Inc, an independent nonprofit, to maintain a national database of over 18 billion private health insurance claims to support consumer education, payer and provider operations, policy makers, and researchers with standard and customized data sets on an economically self-sufficient basis. It explains how FAIR Health conducts all operations in-house, including data collection, security, validation, information organization, product creation, and transmission, with a commitment to objectivity and reliability in data and data products. It also describes the data elements available to researchers and the diverse studies that FAIR Health data facilitate.

  14. Database and Registry Research in Orthopaedic Surgery: Part I: Claims-Based Data.

    Science.gov (United States)

    Pugely, Andrew J; Martin, Christopher T; Harwood, Jared; Ong, Kevin L; Bozic, Kevin J; Callaghan, John J

    2015-08-05

    The use of large-scale national databases for observational research in orthopaedic surgery has grown substantially in the last decade, and the data sets can be grossly categorized as either administrative claims or clinical registries. Administrative claims data comprise the billing records associated with the delivery of health-care services. Orthopaedic researchers have used both government and private claims to describe temporal trends, geographic variation, disparities, complications, outcomes, and resource utilization associated with both musculoskeletal disease and treatment. Medicare claims comprise one of the most robust data sets used to perform orthopaedic research, with >45 million beneficiaries. The U.S. government, through the Centers for Medicare & Medicaid Services, often uses these data to drive changes in health policy. Private claims data used in orthopaedic research often comprise more heterogeneous patient demographic samples, but allow longitudinal analysis similar to that offered by Medicare claims. Discharge databases, such as the U.S. National Inpatient Sample, provide a wide national sampling of inpatient hospital stays from all payers and allow analysis of associated adverse events and resource utilization. Administrative claims data benefit from the high patient numbers obtained through a majority of hospitals. Using claims, it is possible to follow patients longitudinally throughout encounters irrespective of the location of the institution delivering health care. Some disadvantages include lack of precision of ICD-9 (International Classification of Diseases, Ninth Revision) coding schemes. Much of these data are expensive to purchase, complicated to organize, and labor-intensive to manipulate--often requiring trained specialists for analysis. Given the changing health-care environment, it is likely that databases will provide valuable information that has the potential to influence clinical practice improvement and health policy for

  15. Twelve-month discontinuation rates of levonorgestrel intrauterine system 13.5 mg and subdermal etonogestrel implant in women aged 18-44: A retrospective claims database analysis.

    Science.gov (United States)

    Law, Amy; Liao, Laura; Lin, Jay; Yaldo, Avin; Lynen, Richard

    2018-04-21

    To investigate the 12-month discontinuation rates of levonorgestrel intrauterine system 13.5 mg (LNG-IUS 13.5) and subdermal etonogestrel (ENG) implant in the US. We identified women aged 18-44 who had an insertion of LNG-IUS 13.5 or ENG implant from the MarketScan Commercial claims database (7/1/2013-9/30/2014). Women were required to have 12 months of continuous insurance coverage prior to the insertion (baseline) and at least 12-months after (follow-up). Discontinuation was defined as presence of an insurance claim for pregnancy-related services, hysterectomy, female sterilization, a claim for another contraceptive method, or removal of the index contraceptive without re-insertion within 30 days. Using Cox regression we examined the potential impact of ENG implant vs. LNG-IUS 13.5 on the likelihood for discontinuation after controlling for patient characteristics. A total of 3680 (mean age: 25.4 years) LNG-IUS 13.5 and 23,770 (mean age: 24.6 years) ENG implant users met the selection criteria. Prior to insertion, 56.6% of LNG-IUS 13.5 and 42.1% of ENG implant users had used contraceptives, with oral contraceptives being most common (LNG-IUS 13.5: 42.1%; ENG implant: 28.5%). Among users of LNG-IUS 13.5 and ENG implant, rates of discontinuation were similar during the 12-month follow-up (LNG-IUS 13.5: 24.9%; ENG implant: 24.0%). Regression results showed that women using LNG-IUS 13.5 vs. ENG implant had similar likelihood for discontinuation (hazard ratio: 0.97, 95% confidence interval: 0.90-1.05, p=.41). In the real-world US setting, women aged 18-44 using LNG-IUS 13.5 and ENG implant have similar discontinuation rates after 12 months. In the United States, women aged 18-44 using levonorgestrel intrauterine system (13.5 mg) and subdermal etonogestrel implant have similar discontinuation rates after 12 months. Copyright © 2018 Elsevier Inc. All rights reserved.
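
    A minimal sketch of the Cox regression comparison described above, using the lifelines package on invented discontinuation data, follows; the study's model controlled for a broader set of patient characteristics.

      # Minimal sketch of the Cox comparison described above, fitted with the
      # lifelines package on invented discontinuation data.
      import pandas as pd
      from lifelines import CoxPHFitter

      df = pd.DataFrame({
          "duration":     [365, 120, 365, 200, 365, 90, 310, 365, 365, 150, 365, 240],
          "discontinued": [0,   1,   0,   1,   0,   1,  1,   0,   0,   1,   0,   1],
          "eng_implant":  [1,   1,   1,   1,   1,   1,  0,   0,   0,   0,   0,   0],
          "age":          [24,  31,  27,  22,  29,  25, 33,  21,  26,  28,  35,  23],
      })  # duration = days to discontinuation or censoring; eng_implant 0 = LNG-IUS 13.5

      cph = CoxPHFitter()
      cph.fit(df, duration_col="duration", event_col="discontinued")
      print(cph.hazard_ratios_)  # an HR near 1 would mirror the reported 0.97 (0.90-1.05)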

  16. Characteristics, Treatment Patterns, and Economic Outcomes of Patients Initiating Injectable Medications for Management of Type 2 Diabetes Mellitus in Japan: Results from a Retrospective Claims Database Analysis.

    Science.gov (United States)

    Suzuki, Shuichi; Desai, Urvi; Strizek, Alena; Ivanova, Jasmina; Garcia-Horton, Viviana; Cai, Zhihong; Schmerold, Luke; Liu, Xinyue; Perez-Nieves, Magaly

    2018-04-16

    This study's objective was to describe characteristics, treatment patterns, and economic outcomes of type 2 diabetes mellitus (T2DM) patients initiating injectable antidiabetic medications in Japan. Adults (≥ 18 years) with T2DM, ≥ 2 claims for injectable antidiabetics between 1 August 2011 and 31 July 2015 (first claim = index date), no evidence of type 1 diabetes mellitus, ≤ 1 claim for insulin, no claims for GLP-1RA before index, and continuous enrollment for 6 months before (baseline) and 12 months after index (follow-up) were selected from the Japan Medical Center Database. Patient characteristics and outcomes during the baseline and follow-up periods were described overall and by provider, using the proxy setting of index medication [hospital (including outpatient departments) for specialists; clinic for general practitioner (GP)]. Of the 2683 patients included (mean age: 50 years, 67% male), 1879 (70%) initiated injectable antidiabetics with specialists and 804 (30%) with GPs. The specialist cohort had a significantly greater comorbidity burden, but lower HbA1c levels during baseline, and was more likely to receive intensified treatment at index than the GP cohort. Almost 40% of patients (almost 30% of GP cohort) did not use antidiabetics during baseline; the remaining patients received oral medications, primarily from GPs. During follow-up, patients used the index medication for approximately 7 months. Independent of specialist vs. GP setting, patients received antidiabetics and medications for T2DM-related comorbidities and complications during the baseline and follow-up periods from the same provider, primarily GPs. The overall average healthcare costs were ¥350,404 during baseline and ¥1,856,727 during follow-up. In Japan, most T2DM patients initiated injectable antidiabetics with specialists vs. GPs. There were considerable differences in characteristics of patients treated by specialists vs. GPs. After initiation, injectable

  17. Do falls and falls-injuries in hospital indicate negligent care -- and how big is the risk? A retrospective analysis of the NHS Litigation Authority Database of clinical negligence claims, resulting from falls in hospitals in England 1995 to 2006.

    Science.gov (United States)

    Oliver, D; Killick, S; Even, T; Willmott, M

    2008-12-01

    Accidental falls are very common in older hospital patients -- accounting for 32% of reported adult patient safety incidents in UK National Health Service (NHS) hospitals and occurring with similar frequency in settings internationally. In countries where the population is ageing, and care is provided in inpatient settings, falls prevention is therefore a significant and growing risk-management issue. Falls may lead to a variety of harms and costs, are cited in formal complaints and can lead to claims of clinical negligence. The NHS Litigation Authority (NHSLA) negligence claims database provides a novel opportunity to systematically analyse such (falls-related) claims made against NHS organisations in England and to learn lessons for risk-management systems and claims recording. To describe the circumstances and injuries most frequently cited in falls-related claims; to investigate any association between the financial impact (total cost), and the circumstances of or injuries resulting from falls in "closed" claims; to draw lessons for falls risk management and for future data capture on falls incidents and resulting claims analysis; to identify priorities for future research. A keyword search was run on the NHSLA claims database for April 1995 to February 2006, to identify all claims apparently relating to falls. Claims were excluded from further analysis if, on scrutiny, they had not resulted from falls, or if they were still "open" (ie, unresolved). From the narrative descriptions of closed claims (ie, those for which the financial outcome was known), we developed categories of "principal" and "secondary" injury/harm and "principal" and "contributory" circumstance of falls. For each category, it was determined whether cases had resulted in payment and what total payments (damages and costs) were awarded. The proportions of contribution-specific injuries or circumstances to the number of cases and to the overall costs incurred were compared in order to identify

  18. One-year risk of psychiatric hospitalization and associated treatment costs in bipolar disorder treated with atypical antipsychotics: a retrospective claims database analysis

    Directory of Open Access Journals (Sweden)

    Pikalov Andrei

    2011-01-01

    Full Text Available Abstract Background This study compared 1-year risk of psychiatric hospitalization and treatment costs in commercially insured patients with bipolar disorder, treated with aripiprazole, ziprasidone, olanzapine, quetiapine or risperidone. Methods This was a retrospective propensity score-matched cohort study using the Ingenix Lab/Rx integrated insurance claims dataset. Patients with bipolar disorder and 180 days of pre-index enrollment without antipsychotic exposure who received atypical antipsychotic agents were followed for up to 12 months following the initial antipsychotic prescription. The primary analysis used Cox proportional hazards regression to evaluate time-dependent risk of hospitalization, adjusting for age, sex and pre-index hospitalization. Generalized gamma regression compared post-index costs between treatment groups. Results Compared to aripiprazole, ziprasidone, olanzapine and quetiapine had higher risks for hospitalization (hazard ratios 1.96, 1.55 and 1.56, respectively). Conclusions In commercially insured adults with bipolar disorder followed for 1 year after initiation of atypical antipsychotics, treatment with aripiprazole was associated with a lower risk of psychiatric hospitalization than ziprasidone, quetiapine, olanzapine and risperidone, although this did not reach significance with the latter. Aripiprazole was also associated with significantly lower total healthcare costs than quetiapine, but not the other comparators.
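
    The propensity-score matching is only summarized above; the sketch below shows the generic two-step logic on simulated data, with hypothetical covariates and matching with replacement for brevity.

      # Generic two-step propensity-score matching sketch on simulated data:
      # (1) model treatment assignment, (2) 1:1 nearest-neighbor match on the score
      # (with replacement, for brevity). Variable names are hypothetical.
      import numpy as np
      import pandas as pd
      from sklearn.linear_model import LogisticRegression
      from sklearn.neighbors import NearestNeighbors

      rng = np.random.default_rng(0)
      n = 200
      df = pd.DataFrame({
          "aripiprazole": rng.integers(0, 2, n),   # 1 = aripiprazole, 0 = comparator
          "age":          rng.normal(45, 12, n),
          "female":       rng.integers(0, 2, n),
          "prior_hosp":   rng.integers(0, 2, n),
      })

      X = df[["age", "female", "prior_hosp"]]
      df["ps"] = LogisticRegression(max_iter=1000).fit(X, df["aripiprazole"]).predict_proba(X)[:, 1]

      treated = df[df.aripiprazole == 1]
      control = df[df.aripiprazole == 0]
      nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
      _, idx = nn.kneighbors(treated[["ps"]])
      matched_controls = control.iloc[idx.ravel()]
      print(len(treated), "treated matched to", len(matched_controls), "controls")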

  19. Hazard Analysis Database Report

    CERN Document Server

    Grams, W H

    2000-01-01

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process and consists of two major, interrelated data sets: (1) Hazard Analysis Database: Data from t...

  20. When Stepfathers Claim Stepchildren: A Conceptual Analysis

    Science.gov (United States)

    Marsiglio, William

    2004-01-01

    Abstract Guided by social constructionist and symbolic interactionist perspectives and a grounded theory method, my conceptual analysis explores stepfathers' experiences with claiming stepchildren as their own. Using in-depth interviews with a diverse sample of 36 stepfathers, my analysis focuses on paternal claiming as a core category and generates…

  1. Hazard Analysis Database Report

    Energy Technology Data Exchange (ETDEWEB)

    GAULT, G.W.

    1999-10-13

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for US Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for the Tank Waste Remediation System (TWRS) Final Safety Analysis Report (FSAR). The FSAR is part of the approved TWRS Authorization Basis (AB). This document describes, identifies, and defines the contents and structure of the TWRS FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The TWRS Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The database supports the preparation of Chapters 3, 4, and 5 of the TWRS FSAR and the USQ process and consists of two major, interrelated data sets: (1) Hazard Evaluation Database--Data from the results of the hazard evaluations; and (2) Hazard Topography Database--Data from the system familiarization and hazard identification.

  2. Technical evaluation of methods for identifying chemotherapy-induced febrile neutropenia in healthcare claims databases

    OpenAIRE

    Weycker Derek; Sofrygin Oleg; Seefeld Kim; Deeter Robert G; Legg Jason; Edelsberg John

    2013-01-01

    Abstract Background Healthcare claims databases have been used in several studies to characterize the risk and burden of chemotherapy-induced febrile neutropenia (FN) and effectiveness of colony-stimulating factors against FN. The accuracy of methods previously used to identify FN in such databases has not been formally evaluated. Methods Data comprised linked electronic medical records from Geisinger Health System and healthcare claims data from Geisinger Health Plan. Subjects were classifie...

  3. Estimation of National Colorectal-Cancer Incidence Using Claims Databases

    International Nuclear Information System (INIS)

    Quantin, C.; Benzenine, E.; Hagi, M.; Auverlot, B.; Cottenet, J.; Binquet, M.; Compain, D.

    2012-01-01

    The aim of the study was to assess the accuracy of the colorectal-cancer incidence estimated from administrative data. Methods. We selected potential incident colorectal-cancer cases in 2004-2005 French administrative data, using two alternative algorithms. The first was based only on diagnostic and procedure codes, whereas the second considered the past history of the patient. Results of both methods were assessed against two corresponding local cancer registries, acting as “gold standards.” We then constructed a multivariable regression model to estimate the corrected total number of incident colorectal-cancer cases from the whole national administrative database. Results. The first algorithm provided an estimated local incidence very close to that given by the regional registries (646 versus 645 incident cases) and had good sensitivity and positive predictive values (about 75% for both). The second algorithm overestimated the incidence by about 50% and had a poor positive predictive value of about 60%. The estimation of national incidence obtained by the first algorithm differed from that observed in 14 registries by only 2.34%. Conclusion. This study shows the usefulness of administrative databases for countries with no national cancer registry and suggests a method for correcting the estimates provided by these data.
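
    The record does not list the diagnostic and procedure codes used by the two algorithms, so the sketch below only illustrates the general shape of the first approach, case-finding restricted to patients whose first qualifying code falls inside the study window, with hypothetical codes and dates.

      # Shape of the first algorithm only (the actual code lists are not given in this
      # record): flag colorectal-cancer diagnoses and keep patients whose first such
      # code falls inside the study window, i.e. incident rather than prevalent cases.
      import pandas as pd

      claims = pd.DataFrame({
          "patient": [1, 1, 2, 3, 3],
          "date": pd.to_datetime(["2003-06-01", "2004-03-15", "2004-07-02",
                                  "2004-05-10", "2005-01-20"]),
          "icd10": ["C18", "C18", "C20", "C18", "C18"],
      })
      CRC_CODES = {"C18", "C19", "C20"}      # illustrative colorectal-cancer codes
      STUDY_START = pd.Timestamp("2004-01-01")

      crc = claims[claims.icd10.isin(CRC_CODES)]
      first_code = crc.groupby("patient")["date"].min()
      incident = first_code[first_code >= STUDY_START]
      print(incident)  # patients 2 and 3; patient 1 has an earlier code (prevalent)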

  4. The first report of Japanese antimicrobial use measured by national database based on health insurance claims data (2011-2013): comparison with sales data, and trend analysis stratified by antimicrobial category and age group.

    Science.gov (United States)

    Yamasaki, Daisuke; Tanabe, Masaki; Muraki, Yuichi; Kato, Genta; Ohmagari, Norio; Yagi, Tetsuya

    2018-04-01

    Our objective was to evaluate the utility of the national database (NDB) based on health insurance claims data for antimicrobial use (AMU) surveillance in medical institutions in Japan. The population-weighted total AMU, expressed as defined daily doses (DDDs) per 1000 inhabitants per day (DID), was measured from the NDB. The data were compared with our previous study, which measured AMU from sales data. Trend analysis of DID from 2011 to 2013 and subgroup analyses stratified by antimicrobial category and age group were performed. There was a significant linear correlation between the AMUs measured by the sales data and by the NDB. Total oral and parenteral AMUs (expressed in DID) increased 1.04-fold (from 12.654 in 2011 to 13.202 in 2013) and 1.13-fold (from 0.734 to 0.829), respectively. The percentage of the oral form among total AMU was high, at more than 94%, during the study period. AMU in the children group (0-14 years) decreased from 2011 to 2013 regardless of dosage form, whereas AMU in the working age group (15-64 years) and the elderly group (65 years and above) increased. Oral AMU in the working age group was approximately two-thirds of that in the other age groups. In contrast, parenteral AMU in the elderly group was extremely high compared to the other age groups. Trends in AMU stratified by antimicrobial category and age group were successfully measured using the NDB, which can serve as a tool to monitor outcome indices for the national action plan on antimicrobial resistance.
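
    The DID metric above is defined as DDDs per 1,000 inhabitants per day; a small arithmetic sketch follows, with invented dispensing volumes and assumed DDD values used only for illustration.

      # Arithmetic behind the DID metric: DDDs per 1,000 inhabitants per day.
      # Dispensing volumes are invented; DDD values are assumed, for illustration only.
      grams_dispensed = {"amoxicillin": 1.2e7, "levofloxacin": 9.0e5}  # g/year from claims
      ddd_grams = {"amoxicillin": 1.5, "levofloxacin": 0.5}            # assumed DDDs in grams

      population = 127_000_000  # inhabitants covered
      days = 365

      total_ddds = sum(grams_dispensed[d] / ddd_grams[d] for d in grams_dispensed)
      did = total_ddds / (population * days) * 1000
      print(round(did, 3), "DDDs per 1,000 inhabitants per day")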

  5. Technical evaluation of methods for identifying chemotherapy-induced febrile neutropenia in healthcare claims databases.

    Science.gov (United States)

    Weycker, Derek; Sofrygin, Oleg; Seefeld, Kim; Deeter, Robert G; Legg, Jason; Edelsberg, John

    2013-02-13

    Healthcare claims databases have been used in several studies to characterize the risk and burden of chemotherapy-induced febrile neutropenia (FN) and effectiveness of colony-stimulating factors against FN. The accuracy of methods previously used to identify FN in such databases has not been formally evaluated. Data comprised linked electronic medical records from Geisinger Health System and healthcare claims data from Geisinger Health Plan. Subjects were classified into subgroups based on whether or not they were hospitalized for FN per the presumptive "gold standard" (ANC-based definition) and the claims-based definition (diagnosis codes for neutropenia, fever, and/or infection). Accuracy was evaluated principally based on positive predictive value (PPV) and sensitivity. Among 357 study subjects, 82 (23%) met the gold standard for hospitalized FN. For the claims-based definition including diagnosis codes for neutropenia plus fever in any position (n=28), PPV was 100% and sensitivity was 34% (95% CI: 24-45). For the definition including neutropenia in the primary position (n=54), PPV was 87% (78-95) and sensitivity was 57% (46-68). For the definition including neutropenia in any position (n=71), PPV was 77% (68-87) and sensitivity was 67% (56-77). Patients hospitalized for chemotherapy-induced FN can be identified in healthcare claims databases--with an acceptable level of mis-classification--using diagnosis codes for neutropenia, or neutropenia plus fever.

  6. ADA perceived disability claims: a decision-tree analysis.

    Science.gov (United States)

    Draper, William R; Hawley, Carolyn E; McMahon, Brian T; Reid, Christine A; Barbir, Lara A

    2014-06-01

    The purpose of this study is to examine the possible interactions of predictor variables pertaining to perceived disability claims contained in a large governmental database. Specifically, it is a retrospective analysis of US Equal Employment Opportunity Commission (EEOC) data for the entire population of workplace discrimination claims based on the "regarded as disabled" prong of the Americans with Disabilities Act (ADA) definition of disability. The study utilized records extracted from a "master database" of over two million charges of workplace discrimination in the Integrated Mission System of the EEOC. This database includes all ADA-related discrimination allegations filed from July 26, 1992 through December 31, 2008. Chi squared automatic interaction detection (CHAID) was employed to analyze interaction effects of relevant variables, such as issue (grievance) and industry type. The research question addressed by CHAID is: What combination of factors is associated with merit outcomes for people making ADA EEOC allegations who are "regarded as" having disabilities? The CHAID analysis shows how merit outcome is predicted by the interaction of relevant variables. Issue was found to be the most prominent variable in determining merit outcome, followed by industry type, but the picture is made more complex by qualifications regarding age and race data. Although discharge was the most frequent grievance among charging parties in the perceived disability group, its merit outcome was significantly less than that for the leading factor of hiring.
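
    CHAID is not available in the common Python scientific stack, so the sketch below substitutes an ordinary CART decision tree purely to illustrate how interaction-style splits on issue and industry type can be inspected; all records are invented.

      # CHAID stand-in: an ordinary CART tree is used here only to illustrate how
      # splits on issue and industry type can be examined for interaction effects
      # on merit outcome. All data are invented.
      import pandas as pd
      from sklearn.tree import DecisionTreeClassifier, export_text

      df = pd.DataFrame({
          "issue":    ["discharge", "hiring", "discharge", "terms", "hiring", "discharge"],
          "industry": ["retail", "manufacturing", "retail", "services", "services", "manufacturing"],
          "merit":    [0, 1, 0, 0, 1, 1],
      })

      X = pd.get_dummies(df[["issue", "industry"]])
      tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, df["merit"])
      print(export_text(tree, feature_names=list(X.columns)))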

  7. Risk of Peripheral Artery Occlusive Disease in Patients with Vertigo, Tinnitus, or Sudden Deafness: A Secondary Case-Control Analysis of a Nationwide, Population-Based Health Claims Database.

    Science.gov (United States)

    Koo, Malcolm; Chen, Jin-Cherng; Hwang, Juen-Haur

    2016-01-01

    Cochleovestibular symptoms, such as vertigo, tinnitus, and sudden deafness, are common manifestations of microvascular diseases. However, it is unclear whether these symptoms occur before the diagnosis of peripheral artery occlusive disease (PAOD). Therefore, the aim of this case-control study was to investigate the risk of PAOD among patients with vertigo, tinnitus, and sudden deafness using a nationwide, population-based health claims database in Taiwan. We identified 5,340 adult patients with PAOD diagnosed between January 1, 2006 and December 31, 2010 and 16,020 controls, frequency matched on age interval, sex, and year of index date, from the Taiwan National Health Insurance Research Database. Risks of PAOD in patients with vertigo, tinnitus, or sudden deafness were separately evaluated with multivariate logistic regression analyses. Of the 5,340 patients with PAOD, 12.7%, 6.7%, and 0.3% were diagnosed with vertigo, tinnitus, and sudden deafness, respectively. In the controls, the corresponding proportions were 10.6%, 6.1%, and 0.3%. A significantly elevated risk of PAOD was observed in patients with vertigo (adjusted odds ratio = 1.12, P = 0.027) but not in those with tinnitus or sudden deafness. A modest increase in the risk of PAOD was observed among Taiwanese patients with vertigo, after adjustment for comorbidities.
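
    A minimal sketch of the adjusted-odds-ratio estimation described above follows, fitted on simulated data rather than NHIRD records; the study adjusted for a larger set of comorbidities and used frequency-matched controls.

      # Sketch of an adjusted odds ratio for PAOD given prior vertigo, fitted to
      # simulated data. Covariates and effect sizes here are invented.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n = 5000
      df = pd.DataFrame({
          "vertigo":  rng.integers(0, 2, n),
          "diabetes": rng.integers(0, 2, n),
          "age":      rng.normal(65, 10, n),
      })
      logit_p = -2.0 + 0.12 * df.vertigo + 0.5 * df.diabetes + 0.02 * (df.age - 65)
      df["paod"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

      fit = smf.logit("paod ~ vertigo + diabetes + age", data=df).fit(disp=0)
      print(np.exp(fit.params["vertigo"]))           # adjusted OR for vertigo
      print(np.exp(fit.conf_int().loc["vertigo"]))   # 95% CI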

  8. Technical evaluation of methods for identifying chemotherapy-induced febrile neutropenia in healthcare claims databases

    Directory of Open Access Journals (Sweden)

    Weycker Derek

    2013-02-01

    Full Text Available Abstract Background Healthcare claims databases have been used in several studies to characterize the risk and burden of chemotherapy-induced febrile neutropenia (FN) and effectiveness of colony-stimulating factors against FN. The accuracy of methods previously used to identify FN in such databases has not been formally evaluated. Methods Data comprised linked electronic medical records from Geisinger Health System and healthcare claims data from Geisinger Health Plan. Subjects were classified into subgroups based on whether or not they were hospitalized for FN per the presumptive “gold standard” (ANC below a defined threshold, and body temperature ≥38.3°C or receipt of antibiotics) and the claims-based definition (diagnosis codes for neutropenia, fever, and/or infection). Accuracy was evaluated principally based on positive predictive value (PPV) and sensitivity. Results Among 357 study subjects, 82 (23%) met the gold standard for hospitalized FN. For the claims-based definition including diagnosis codes for neutropenia plus fever in any position (n=28), PPV was 100% and sensitivity was 34% (95% CI: 24–45). For the definition including neutropenia in the primary position (n=54), PPV was 87% (78–95) and sensitivity was 57% (46–68). For the definition including neutropenia in any position (n=71), PPV was 77% (68–87) and sensitivity was 67% (56–77). Conclusions Patients hospitalized for chemotherapy-induced FN can be identified in healthcare claims databases--with an acceptable level of mis-classification--using diagnosis codes for neutropenia, or neutropenia plus fever.

  9. Warranty claim analysis considering human factors

    International Nuclear Information System (INIS)

    Wu Shaomin

    2011-01-01

    Warranty claims are not always due to product failures. They can also be caused by two types of human factors. On the one hand, consumers might claim warranty due to misuse and/or failures caused by various human factors. Such claims might account for more than 10% of all reported claims. On the other hand, consumers might not be bothered to claim warranty for failed items that are still under warranty, or they may claim warranty after they have experienced several intermittent failures. These two types of human factors can affect warranty claim costs. However, research in this area has received rather little attention. In this paper, we propose three models to estimate the expected warranty cost when the two types of human factors are included. We consider two types of failures: intermittent and fatal failures, which might result in different claim patterns. Consumers might report claims after a fatal failure has occurred, and upon intermittent failures they might report claims after a number of failures have occurred. Numerical examples are given to validate the results derived.
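
    The paper's three models are not reproduced in this record; the short simulation below is only a generic illustration of how the two human factors described, claims triggered by misuse and intermittent failures that go unreported, shift the expected warranty cost per item.

      # Generic illustration (not the paper's models): expected warranty cost per item
      # when misuse adds claims and some intermittent failures are never reported.
      # At most one claim per item is counted, for brevity.
      import numpy as np

      rng = np.random.default_rng(42)
      n_items, cost_per_claim, warranty_years = 100_000, 80.0, 2.0

      fatal = rng.poisson(0.05 * warranty_years, n_items)         # fatal failures -> claimed
      intermittent = rng.poisson(0.20 * warranty_years, n_items)  # may go unreported
      misuse = rng.random(n_items) < 0.03                         # misuse claims (no defect)
      reports_intermittent = rng.random(n_items) < 0.4            # share who bother to claim

      claims = (fatal > 0) | ((intermittent > 0) & reports_intermittent) | misuse
      print(f"expected warranty cost per item: {claims.mean() * cost_per_claim:.2f}")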

  10. Comparison of medical costs and healthcare resource utilization of post-menopausal women with HR+/HER2- metastatic breast cancer receiving everolimus-based therapy or chemotherapy: a retrospective claims database analysis.

    Science.gov (United States)

    Li, Nanxin; Hao, Yanni; Koo, Valerie; Fang, Anna; Peeples, Miranda; Kageleiry, Andrew; Wu, Eric Q; Guérin, Annie

    2016-01-01

    To analyze medical costs and healthcare resource utilization (HRU) associated with everolimus-based therapy or chemotherapy among post-menopausal women with hormone-receptor-positive, human-epidermal-growth-factor-receptor-2-negative (HR+/HER2-) metastatic breast cancer (mBC). Patients with HR+/HER2- mBC who discontinued a non-steroidal aromatase inhibitor and began a new line of treatment with everolimus-based therapy or chemotherapy (index therapy/index date) between July 20, 2012 and April 30, 2014 were identified from two large claims databases. All-cause, BC-related, and adverse event (AE)-related medical costs (in 2014 USD) and all-cause HRU per patient per month (PPPM) were analyzed for both treatment groups across patients' first four lines of therapies for mBC. Adjusted differences in costs and HRU between the everolimus and chemotherapy treatment groups were estimated pooling all lines and using multivariable generalized linear models, accounting for differences in patient characteristics. A total of 3298 patients were included: 902 everolimus-treated patients and 2636 chemotherapy-treated patients. Compared to chemotherapy, everolimus was associated with significantly lower all-cause medical costs (adjusted mean difference = $3455) as well as significantly lower HRU (emergency room incidence rate ratio [IRR] = 0.83; inpatient IRR = 0.74; inpatient days IRR = 0.65; outpatient IRR = 0.71; BC-related outpatient IRR = 0.57) than chemotherapy.
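
    A sketch of one common way to obtain adjusted cost differences like those above, a gamma GLM with log link fitted to simulated per-patient costs, follows; the study's exact model specification is an assumption here.

      # One common way to obtain adjusted cost comparisons: a gamma GLM with log link
      # fitted to simulated per-patient costs. Treat this only as a sketch; the
      # study's exact specification is not given in this record.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(7)
      n = 2000
      df = pd.DataFrame({
          "everolimus": rng.integers(0, 2, n),   # 1 = everolimus, 0 = chemotherapy
          "age":        rng.normal(62, 8, n),
      })
      mu = np.exp(9.5 - 0.25 * df.everolimus + 0.01 * (df.age - 62))
      df["monthly_cost"] = rng.gamma(shape=2.0, scale=mu / 2.0)

      fit = smf.glm("monthly_cost ~ everolimus + age", data=df,
                    family=sm.families.Gamma(link=sm.families.links.Log())).fit()
      print(np.exp(fit.params["everolimus"]))  # adjusted cost ratio vs chemotherapy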

  11. A Comparative Analysis of Outstanding Claim Reserves

    Directory of Open Access Journals (Sweden)

    Zlata Djuric

    2017-12-01

    Full Text Available An adequate level of technical reserves is the most important element of the key business processes of insurance companies and defines the financial viability of their activities. A qualitative assessment of the level of technical reserves is the basic support for the management of key business processes and for sound strategic and financial decision-making aimed at maximizing the viability, profitability, competitiveness, and further development of the company. Based on data on the operations of an insurance company within a single line of insurance, the methods most frequently used in practice were applied in order to determine the amplitude of deviation of the projected amounts from the actual claims. Another direction of the research focuses on actuarial practice in non-life insurance companies operating in the territory of the Republic of Serbia. The comparative analysis of the obtained projections points to the fact that the chosen methods, commonly used in actuarial practice in the Republic of Serbia, should be monitored and reviewed. The results of this multidirectional research and the detection of existing problems provide a useful framework, a stimulating mechanism, and guidelines for improving operations and better positioning insurance within the commercial and economic environment of the Republic of Serbia.
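
    The abstract does not name which reserving methods were compared; the chain-ladder calculation below is included only as one widely used example, run on an invented cumulative run-off triangle.

      # Chain-ladder shown as one widely used reserving method (an assumption; the
      # abstract does not name its methods), on an invented cumulative run-off
      # triangle (rows = accident years, columns = development years).
      import numpy as np

      tri = np.array([
          [1000., 1800., 2100., 2200.],
          [1100., 2000., 2300., np.nan],
          [1200., 2150., np.nan, np.nan],
          [1300., np.nan, np.nan, np.nan],
      ])

      n_dev = tri.shape[1]
      factors = []
      for j in range(n_dev - 1):
          known = ~np.isnan(tri[:, j + 1])
          factors.append(tri[known, j + 1].sum() / tri[known, j].sum())

      proj = tri.copy()
      for i in range(tri.shape[0]):
          for j in range(n_dev - 1):
              if np.isnan(proj[i, j + 1]):
                  proj[i, j + 1] = proj[i, j] * factors[j]  # project with development factors

      latest = np.array([row[~np.isnan(row)][-1] for row in tri])
      print("outstanding reserve per accident year:", np.round(proj[:, -1] - latest, 1))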

  12. Medical research using governments' health claims databases: with or without patients' consent?

    Science.gov (United States)

    Tsai, Feng-Jen; Junod, Valérie

    2018-03-01

    Taking advantage of its single-payer, universal insurance system, Taiwan has leveraged its exhaustive database of health claims data for research purposes. Researchers can apply to receive access to pseudonymized (coded) medical data about insured patients, notably their diagnoses, health status and treatments. In view of the strict safeguards implemented, the Taiwanese government considers that this research use does not require patients' consent (either in the form of an opt-in or in the form of an opt-out). A group of non-governmental organizations has challenged this view in the Taiwanese Courts, but to no avail. The present article reviews the arguments both against and in favor of patients' consent for re-use of their data in research. It concludes that offering patients an opt-out would be appropriate as it would best balance the important interests at issue.

  13. Immune epitope database analysis resource

    DEFF Research Database (Denmark)

    Kim, Yohan; Ponomarenko, Julia; Zhu, Zhanyang

    2012-01-01

    The immune epitope database analysis resource (IEDB-AR: http://tools.iedb.org) is a collection of tools for prediction and analysis of molecular targets of T- and B-cell immune responses (i.e. epitopes). Since its last publication in the NAR webserver issue in 2008, a new generation of peptide......, and the homology mapping tool was updated to enable mapping of discontinuous epitopes onto 3D structures. Furthermore, to serve a wider range of users, the number of ways in which IEDB-AR can be accessed has been expanded. Specifically, the predictive tools can be programmatically accessed using a web interface...

  14. Seasonality in acute liver injury? Findings in two health care claims databases

    Directory of Open Access Journals (Sweden)

    Weinstein RB

    2016-03-01

    Full Text Available Rachel B Weinstein, Martijn J Schuemie, Patrick B Ryan, Paul E Stang Epidemiology, Janssen Research and Development, LLC, Titusville, NJ, USA Background: Presumed seasonal use of acetaminophen-containing products for relief of cold/influenza (“flu”) symptoms suggests that there might also be a corresponding seasonal pattern for acute liver injury (ALI), a known clinical consequence of acetaminophen overdose. Objective: The objective of this study was to determine whether there were any temporal patterns in hospitalizations for ALI that would correspond to assumed acetaminophen use in cold/flu season. Methods: In the period 2002–2010, monthly hospitalization rates for ALI using a variety of case definitions were calculated. Data sources included the Truven MarketScan® Commercial Claims and Encounters (CCAE) and Medicare Supplemental and Coordination of Benefits (MDCR) databases. We performed a statistical test for seasonality of diagnoses using the periodic generalized linear model. To validate that the test can distinguish seasonal from nonseasonal patterns, we included two positive controls (ie, diagnoses of the common cold [acute nasopharyngitis] and influenza), believed to change with seasons, and two negative controls (female breast cancer and diabetes), believed to be insensitive to season. Results: A seasonal pattern was observed in monthly rates for common cold and influenza diagnoses, but this pattern was not observed for monthly rates of ALI, with or without comorbidities (cirrhosis or hepatitis), breast cancer, or diabetes. The statistical test for seasonality was significant for positive controls (P<0.001 for each diagnosis in both databases) and nonsignificant for ALI and negative controls. Conclusion: No seasonal pattern was observed in the diagnosis of ALI. The positive and negative controls showed the expected patterns, strengthening the validity of the statistical and visual tests used for detecting seasonality. Keywords: acute liver
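
    The periodic generalized linear model itself is not spelled out in this record; the sketch below approximates the idea with a Poisson GLM containing annual harmonic terms and a likelihood-ratio test, on simulated monthly counts.

      # Approximation of a periodicity test in the spirit described above: a Poisson
      # GLM with annual sine/cosine terms, compared to an intercept-only model via a
      # likelihood-ratio test. Counts are simulated, not MarketScan data.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf
      from scipy.stats import chi2

      rng = np.random.default_rng(3)
      months = np.arange(108)  # e.g. 2002-2010, monthly
      df = pd.DataFrame({
          "count": rng.poisson(np.exp(4 + 0.4 * np.cos(2 * np.pi * months / 12))),
          "sin12": np.sin(2 * np.pi * months / 12),
          "cos12": np.cos(2 * np.pi * months / 12),
      })

      full = smf.glm("count ~ sin12 + cos12", data=df, family=sm.families.Poisson()).fit()
      null = smf.glm("count ~ 1", data=df, family=sm.families.Poisson()).fit()
      lr = 2 * (full.llf - null.llf)  # joint test of the two harmonic terms (2 df)
      print("LR =", round(lr, 1), "p =", chi2.sf(lr, df=2))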

  15. The economic impact of GERD and PUD: examination of direct and indirect costs using a large integrated employer claims database.

    Science.gov (United States)

    Joish, Vijay N; Donaldson, Gary; Stockdale, William; Oderda, Gary M; Crawley, Joseph; Sasane, Rahul; Joshua-Gotlib, Sandra; Brixner, Diana I

    2005-04-01

    The objective of this study was to examine the work loss associated with gastroesophageal reflux disease (GERD) and peptic ulcer disease (PUD) in a large population of employed individuals in the United States (US) and to quantify the economic impact of these diseases to the employer. A proprietary database that contained workplace absence, disability and workers' compensation data in addition to prescription drug and medical claims was used to answer the objectives. Employees with a medical claim with an ICD-9 code for GERD or PUD were identified from 1 January 1997 to 31 December 2000. A cohort of controls was identified for the same time period using the method of frequency matching on age, gender, industry type, occupational status, and employment status. Work absence rates and health care costs were compared between the groups after adjusting for demographic and employment differences using analysis of covariance models. Significantly lower rates of adjusted all-cause and sickness-related absenteeism were observed in the controls compared with the disease groups; in particular, controls had, on average, 1.2 to 1.6 fewer days of all-cause absenteeism and 0.4 to 0.6 fewer days of sickness-related absenteeism than the disease groups. The incremental economic impact projected to a hypothetical employed population was estimated to be $3441 for GERD, $1374 for PUD, and $4803 for GERD + PUD per employee per year compared to employees without these diseases. Direct medical cost and work absence in employees with GERD, PUD and GERD + PUD represent a significant burden to employees and employers.

  16. PS2-15: Coding for Obesity in a Health Plan Claims Database

    OpenAIRE

    Shainline, Michael; Carter, Shelley; Von Worley, Ann; Gunter, Margaret

    2010-01-01

    Background and Aims: The Centers for Disease Control estimated the obesity rate in New Mexico for 2008 to be 25.2%. Sources estimate the following associations between obesity and type 2 diabetes (80%), cardiovascular disease (70%), and hypertension (26%). Yet obesity is infrequently coded as a secondary diagnosis among providers submitting claims. This study examines the frequency with which obesity is documented on claims forms, the relationship between age, gender, and obesity coding, and the...

  17. The New Politics of US Health Care Prices: Institutional Reconfiguration and the Emergence of All-Payer Claims Databases.

    Science.gov (United States)

    Rocco, Philip; Kelly, Andrew S; Béland, Daniel; Kinane, Michael

    2017-02-01

    Prices are a significant driver of health care cost in the United States. Existing research on the politics of health system reform has emphasized the limited nature of policy entrepreneurs' efforts at solving the problem of rising prices through direct regulation at the state level. Yet this literature fails to account for how change agents in the states gradually reconfigured the politics of prices, forging new, transparency-based policy instruments called all-payer claims databases (APCDs), which are designed to empower consumers, purchasers, and states to make informed market and policy choices. Drawing on pragmatist institutional theory, this article shows how APCDs emerged as the dominant model for reforming health care prices. While APCD advocates faced significant institutional barriers to policy change, we show how they reconfigured existing ideas, tactical repertoires, and legal-technical infrastructures to develop a politically and technologically robust reform. Our analysis has important implications for theories of how change agents overcome structural barriers to health reform. Copyright © 2017 by Duke University Press.

  18. ClaimAssociationService

    Data.gov (United States)

    Department of Veterans Affairs — Retrieves and updates a veteran's claim status and claim-rating association (claim association for current rating) from the Corporate database for a claim selected...

  19. Decreasing incidence of type 2 diabetes mellitus in the United States, 2007-2012: Epidemiologic findings from a large US claims database.

    Science.gov (United States)

    Weng, Wayne; Liang, Yuanjie; Kimball, Edward S; Hobbs, Todd; Kong, Sheldon X; Sakurada, Brian; Bouchard, Jonathan

    2016-07-01

    To explore epidemiological trends in type 2 diabetes mellitus (T2D) in the US between 2007 and 2012 using a large US claims database, with a particular focus on demographics, prevalence, newly-diagnosed cases, and comorbidities. Truven Health MarketScan® Databases were used to identify patients with claims evidence of T2D in the years 2007 and 2012. Newly-diagnosed T2D was characterized by an absence of any T2D claims or related drug claims for 6 months preceding the index claim. Demographic and comorbidity characteristics of the prevalent and new-onset T2D groups were compared and analyzed descriptively for trends over time. The overall prevalence of T2D remained stable from 2007 (1.24 million cases/15.07 million enrolled; 8.2%) to 2012 (2.04 million cases/24.52 million enrolled; 8.3%), while the percentage of newly-diagnosed cases fell dramatically from 2007 (152,252 cases; 1.1%) to 2012 (147,011 cases; 0.65%). The mean age of patients with prevalent T2D was similar in 2007 (60.6y) and 2012 (60.0y), while the mean age of newly-diagnosed T2D patients decreased by 3 years from 2007 (57.7y) to 2012 (54.8y). Hypertension and hyperlipidemia were the most common comorbidities, evident in 50-75% of T2D patients, and increased markedly from 2007 to 2012 in both prevalent and new-onset T2D populations. Cardiovascular disease decreased slightly in prevalent (-0.9%) and new-onset (-2.8%) cases. This large US health claims database analysis suggests stabilization in prevalence and declining incidence of T2D over a recent 5-year period, a downward shift in age at T2D diagnosis, but increases in several comorbidities. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. Evaluation of the treatment patterns and economic burden of dysmenorrhea in Japanese women, using a claims database

    Science.gov (United States)

    Akiyama, Sayako; Tanaka, Erika; Cristeau, Olivier; Onishi, Yoshie; Osuga, Yutaka

    2017-01-01

    Purpose This study aimed to describe treatment patterns and estimate health care resource utilization and associated costs among Japanese women with dysmenorrhea, using a claims database. Methods This was a retrospective analysis using health insurance data from the Japan Medical Data Center, assessing female patients aged 18–49 years with newly diagnosed primary or secondary dysmenorrhea. Treatment pattern analyses focused on hormonal medications, analgesics, hemostatic agents, traditional Chinese medicine (TCM), and gynecological surgeries. Data were collected on health care resource utilization and costs associated with medications, imaging procedures, and inpatient and outpatient care in both patients and matched controls. Results The analysis included 6,315 women with dysmenorrhea (3,441 primary; 2,874 secondary). The most commonly prescribed initial therapies were low-dose estrogen progestins (LEPs, 37.7%) and TCM (30.0%), with substantial differences between the primary (LEPs: 27.4%, TCM: 38.8%) and secondary (LEPs: 50.2%, TCM: 19.5%) dysmenorrhea cohorts. Surgery was conducted in a subset of patients. Women with dysmenorrhea had significantly higher mean total health care costs than matched controls within the 1-year period following diagnosis (cases, primary: 191,680 JPY [1,916 USD], secondary: 246,488 JPY [2,465 USD]; controls, primary: 83,615 JPY [836 USD], secondary: 90,711 JPY [907 USD]). Conclusion Total annual health care costs were approximately 2–3 times higher in patients with dysmenorrhea compared to women without the condition. PMID:28579813

  1. Analysis of clinical negligence claims following tonsillectomy in England 1995 to 2010.

    Science.gov (United States)

    Mathew, Rajeev; Asimacopoulos, Eleni; Walker, David; Gutierrez, Tatiana; Valentine, Peter; Pitkin, Lisa

    2012-05-01

    We determined the characteristics of medical negligence claims following tonsillectomy. Claims relating to tonsillectomy between 1995 and 2010 were obtained from the National Health Service Litigation Authority database. The number of open and closed claims was determined, and data were analyzed for primary injury claimed, outcome of claim, and associated costs. Over 15 years, there were 40 claims of clinical negligence related to tonsillectomy, representing 7.7% of all claims in otolaryngology. There were 34 closed claims, of which 32 (94%) resulted in payment of damages. Postoperative bleeding was the most common injury, with delayed recognition and treatment of bleeding alleged in most cases. Nasopharyngeal regurgitation as a result of soft palate fistulas or excessive tissue resection was the next-commonest cause of a claim. The other injuries claimed included dentoalveolar injury, burns, tonsillar remnants, and temporomandibular joint dysfunction. Inadequate informed consent was claimed in 5 cases. Clinical negligence claims following tonsillectomy have a high success rate. Although postoperative bleeding is the most common cause of negligence claims, a significant proportion of claims are due to rare complications of surgery. Informed consent should be tailored to the individual patient and should include a discussion of common and serious complications.

  2. Health claims database study of cyclosporine ophthalmic emulsion treatment patterns in dry eye patients

    Science.gov (United States)

    Stonecipher, Karl G; Chia, Jenny; Onyenwenyi, Ahunna; Villanueva, Linda; Hollander, David A

    2013-01-01

    Background Dry eye is a multifactorial, symptomatic disease associated with ocular surface inflammation and tear film hyperosmolarity. This study was designed to assess patterns of topical cyclosporine ophthalmic emulsion 0.05% (Restasis®) use in dry eye patients and determine if there were any differences in use based on whether dry eye is physician-coded as a primary or nonprimary diagnosis. Methods Records for adult patients with a diagnosis of dry eye at an outpatient visit from January 1, 2008 to December 31, 2009 were selected from Truven Health MarketScan® Research Databases. The primary endpoint was percentage of patients with at least one primary versus no primary dry eye diagnosis who filled a topical cyclosporine prescription. Data analyzed included utilization of topical corticosteroids, oral tetracyclines, and punctal plugs. Results The analysis included 576,416 patients, accounting for 875,692 dry eye outpatient visits: 74.7% were female, 64.2% were ages 40–69 years, and 84.4% had at least one primary dry eye diagnosis. During 2008–2009, 15.9% of dry eye patients with a primary diagnosis versus 6.5% with no primary diagnosis filled at least one cyclosporine prescription. For patients who filled at least one prescription, the mean months’ supply of cyclosporine filled over 12 months was 4.44. Overall, 33.9% of dry eye patients filled a prescription for topical cyclosporine, topical corticosteroid, or oral tetracycline over 2 years. Conclusion Patients with a primary dry eye diagnosis were more likely to fill a topical cyclosporine prescription. Although inflammation is key to the pathophysiology of dry eye, most patients seeing a physician for dry eye may not receive anti-inflammatory therapies. PMID:24179335

  3. Health claims database study of cyclosporine ophthalmic emulsion treatment patterns in dry eye patients.

    Science.gov (United States)

    Stonecipher, Karl G; Chia, Jenny; Onyenwenyi, Ahunna; Villanueva, Linda; Hollander, David A

    2013-01-01

    Dry eye is a multifactorial, symptomatic disease associated with ocular surface inflammation and tear film hyperosmolarity. This study was designed to assess patterns of topical cyclosporine ophthalmic emulsion 0.05% (Restasis®) use in dry eye patients and determine if there were any differences in use based on whether dry eye is physician-coded as a primary or nonprimary diagnosis. Records for adult patients with a diagnosis of dry eye at an outpatient visit from January 1, 2008 to December 31, 2009 were selected from Truven Health MarketScan® Research Databases. The primary endpoint was percentage of patients with at least one primary versus no primary dry eye diagnosis who filled a topical cyclosporine prescription. Data analyzed included utilization of topical corticosteroids, oral tetracyclines, and punctal plugs. The analysis included 576,416 patients, accounting for 875,692 dry eye outpatient visits: 74.7% were female, 64.2% were ages 40-69 years, and 84.4% had at least one primary dry eye diagnosis. During 2008-2009, 15.9% of dry eye patients with a primary diagnosis versus 6.5% with no primary diagnosis filled at least one cyclosporine prescription. For patients who filled at least one prescription, the mean months' supply of cyclosporine filled over 12 months was 4.44. Overall, 33.9% of dry eye patients filled a prescription for topical cyclosporine, topical corticosteroid, or oral tetracycline over 2 years. Patients with a primary dry eye diagnosis were more likely to fill a topical cyclosporine prescription. Although inflammation is key to the pathophysiology of dry eye, most patients seeing a physician for dry eye may not receive anti-inflammatory therapies.

  4. Health claims database study of cyclosporine ophthalmic emulsion treatment patterns in dry eye patients

    Directory of Open Access Journals (Sweden)

    Stonecipher KG

    2013-10-01

    Full Text Available Karl G Stonecipher,1 Jenny Chia,2 Ahunna Onyenwenyi,2 Linda Villanueva,2 David A Hollander2 1TLC Laser Eye Centers, Greensboro, NC, 2Allergan, Inc., Irvine, CA, USA Background: Dry eye is a multifactorial, symptomatic disease associated with ocular surface inflammation and tear film hyperosmolarity. This study was designed to assess patterns of topical cyclosporine ophthalmic emulsion 0.05% (Restasis®) use in dry eye patients and determine if there were any differences in use based on whether dry eye is physician-coded as a primary or nonprimary diagnosis. Methods: Records for adult patients with a diagnosis of dry eye at an outpatient visit from January 1, 2008 to December 31, 2009 were selected from Truven Health MarketScan® Research Databases. The primary endpoint was percentage of patients with at least one primary versus no primary dry eye diagnosis who filled a topical cyclosporine prescription. Data analyzed included utilization of topical corticosteroids, oral tetracyclines, and punctal plugs. Results: The analysis included 576,416 patients, accounting for 875,692 dry eye outpatient visits: 74.7% were female, 64.2% were ages 40-69 years, and 84.4% had at least one primary dry eye diagnosis. During 2008–2009, 15.9% of dry eye patients with a primary diagnosis versus 6.5% with no primary diagnosis filled at least one cyclosporine prescription. For patients who filled at least one prescription, the mean months’ supply of cyclosporine filled over 12 months was 4.44. Overall, 33.9% of dry eye patients filled a prescription for topical cyclosporine, topical corticosteroid, or oral tetracycline over 2 years. Conclusion: Patients with a primary dry eye diagnosis were more likely to fill a topical cyclosporine prescription. Although inflammation is key to the pathophysiology of dry eye, most patients seeing a physician for dry eye may not receive anti-inflammatory therapies. Keywords: corticosteroids, cyclosporine, dry eye syndromes

  5. An Analysis of the Number of Medical Malpractice Claims and Their Amounts.

    Directory of Open Access Journals (Sweden)

    Marco Bonetti

    Full Text Available Starting from an extensive database, pooling 9 years of data from the top three insurance brokers in Italy, and containing 38,125 reported claims due to alleged cases of medical malpractice, we use an inhomogeneous Poisson process to model the number of medical malpractice claims in Italy. The intensity of the process is allowed to vary over time, and it depends on a set of covariates, like the size of the hospital, the medical department and the complexity of the medical operations performed. We choose the combination of medical department and hospital as the unit of analysis. Together with the number of claims, we also model the associated amounts paid by insurance companies, using a two-stage regression model. In particular, we use logistic regression for the probability that a claim is closed with a zero payment, whereas, conditionally on the fact that an amount is strictly positive, we make use of lognormal regression to model it as a function of several covariates. The model produces estimates and forecasts that are relevant to both insurance companies and hospitals, for quality assurance, service improvement and cost reduction.
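
    The two-stage severity model described above can be sketched as follows: a logistic regression for the probability that a claim closes with zero payment, and a lognormal (log-scale OLS) regression for strictly positive amounts. The data and covariate names below are simulated stand-ins, not the Italian dataset.

    ```python
    # Hedged sketch of a two-stage claim-amount model: logistic for P(zero payment),
    # lognormal (OLS on log amount) for positive payments. Data are simulated.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 5000
    df = pd.DataFrame({
        "hospital_size": rng.choice(["small", "large"], n),
        "dept_risk": rng.normal(0.0, 1.0, n),        # stand-in for department complexity
    })
    p_zero = 1 / (1 + np.exp(-(-0.5 + 0.8 * (df.hospital_size == "small") - 0.4 * df.dept_risk)))
    df["zero_payment"] = rng.binomial(1, p_zero)
    log_amount = 10 + 0.3 * df.dept_risk + rng.normal(0, 0.8, n)
    df["amount"] = np.where(df.zero_payment == 1, 0.0, np.exp(log_amount))

    # Stage 1: probability that a claim closes with no payment.
    stage1 = smf.logit("zero_payment ~ C(hospital_size) + dept_risk", data=df).fit(disp=0)
    # Stage 2: lognormal regression on claims with a strictly positive payment.
    pos = df[df.amount > 0].assign(log_amount=lambda d: np.log(d.amount))
    stage2 = smf.ols("log_amount ~ C(hospital_size) + dept_risk", data=pos).fit()

    # Expected payment per claim combines both stages (with a lognormal mean correction).
    expected = (1 - stage1.predict(df)) * np.exp(stage2.predict(df) + stage2.scale / 2)
    print(expected.mean())
    ```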

  6. Secondary Analysis for Results Tracking Database

    Data.gov (United States)

    US Agency for International Development — The Secondary Analysis and Results Tracking (SART) activity provides support for the development of two databases to manage secondary and third-party data, data...

  7. Testing consumer perception of nutrient content claims using conjoint analysis.

    Science.gov (United States)

    Drewnowski, Adam; Moskowitz, Howard; Reisner, Michele; Krieger, Bert

    2010-05-01

    The US Food and Drug Administration (FDA) proposes to establish standardized and mandatory criteria upon which front-of-pack (FOP) nutrition labelling must be based. The present study aimed to estimate the relative contribution of declared amounts of different nutrients to the perception of the overall 'healthfulness' of foods by the consumer. Protein, fibre, vitamin A, vitamin C, calcium and iron were nutrients to encourage. Total fat, saturated fat, cholesterol, total and added sugar, and sodium were the nutrients to limit. Two content claims per nutrient used the FDA-approved language. An online consumer panel (n 320) exposed to multiple messages (n 48) rated the healthfulness of each hypothetical food product. Utility functions were constructed using conjoint analysis, based on multiple logistic regression and maximum likelihood estimation. Consumer perception of healthfulness was most strongly driven by the declared presence of protein, fibre, calcium and vitamin C and by the declared total absence of saturated fat and sodium. For this adult panel, total and added sugar had lower utilities and contributed less to the perception of healthfulness. There were major differences between women and men. Conjoint analysis can lead to a better understanding of how consumers process information about the full nutrition profile of a product, and is a powerful tool for the testing of nutrient content claims. Such studies can help the FDA develop science-based criteria for nutrient profiling that underlies FOP and shelf labelling.
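
    A minimal sketch of this conjoint-style analysis: binary "healthful" ratings are regressed on the presence or absence of each content claim with a logistic model, and the fitted coefficients act as attribute utilities. The product profiles, attribute names and effect sizes below are hypothetical.

    ```python
    # Hedged sketch of conjoint analysis via logistic regression (not the authors' code).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n = 320 * 48                                     # panelists x hypothetical products
    profiles = pd.DataFrame({
        "protein_claim": rng.integers(0, 2, n),      # 1 = claim shown on the package
        "fibre_claim": rng.integers(0, 2, n),
        "sat_fat_free": rng.integers(0, 2, n),
        "added_sugar_free": rng.integers(0, 2, n),
    })
    true_util = (0.9 * profiles.protein_claim + 0.8 * profiles.fibre_claim
                 + 0.7 * profiles.sat_fat_free + 0.3 * profiles.added_sugar_free - 1.2)
    profiles["healthful"] = rng.binomial(1, 1 / (1 + np.exp(-true_util)))

    fit = smf.logit("healthful ~ protein_claim + fibre_claim + sat_fat_free + added_sugar_free",
                    data=profiles).fit(disp=0)
    print(fit.params)   # estimated utilities: larger values drive perceived healthfulness
    ```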

  8. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  9. [The essentials of workplace analysis for examining occupational disability claims].

    Science.gov (United States)

    Wachholz, St

    2015-12-01

    The insurance branch that covers the risk of occupational disability ranks among the most important private entities for offering security as far as the limitation or loss of one's ability to work is concerned. The financial risk of the insurer, the existential concerns and expectations of the claimant, as well as the legal framework and the need for a careful interdisciplinary evaluation, necessitate a professional review and assessment of the facts conducted with a sense of both responsibility and sensitivity. Carefully deliberated and sustainable decisions benefit both insurers and the insured. In order to achieve this, an opinion is required in many--and especially the more complex--cases from an external medical expert, which in turn can only be plausible and conclusive when based on a comprehensive review of the claimant's working environment and its particular (and often unique) requirements. This article is intended to increase the reader's understanding of the coherencies of workplace analysis and medical assessments, as required by insurance law and legislation. In addition, the article delivers valuable clues and guidance, both for medical experts and claims managers at insurance companies. Primarily, the claimant's occupation, as conceived in the terms and conditions of the insurance companies, is explained. The reader is then introduced to the various criteria to be considered when a claimant has several jobs at the same time, is self-employed, could be transferred to another job, is simply unable to commute to the workplace, or is prevented from working due to legal restrictions related to an illness. The article goes on to address the crucial aspect of how the degree of disability is to be measured under different circumstances, namely using the quantitative and the qualitative approach. As a reliable method for obtaining the essential data regarding the claimant's specific working conditions, which are required by both the medical expert and the

  10. Treatment patterns in hyperlipidaemia patients based on administrative claim databases in Japan.

    Science.gov (United States)

    Wake, Mayumi; Onishi, Yoshie; Guelfucci, Florent; Oh, Akinori; Hiroi, Shinzo; Shimasaki, Yukio; Teramoto, Tamio

    2018-05-01

    Real-world evidence on treatment of hyperlipidaemia (HLD) in Japan is limited. We aimed to describe treatment patterns, persistence with, and adherence to treatment in Japanese patients with HLD. Retrospective analyses of adult HLD patients receiving drug therapy in 2014-2015 were conducted using the Japan Medical Data Center (JMDC) and Medical Data Vision (MDV) databases. Depending on their HLD treatment history, individuals were categorised as untreated (UT) or previously treated (PT), and were followed for at least 12 months. Outcomes of interest included prescribing patterns of HLD drug classes, persistence with treatment at 12 months, and adherence to treatment. Data for 49,582 and 53,865 patients from the JMDC and MDV databases, respectively, were analysed. First-line HLD prescriptions for UT patients were predominantly for moderate statins (JMDC: 75.9%, MDV: 77.0%). PT patients most commonly received combination therapy (JMDC: 43.9%, MDV: 52.6%). Approximately half of the UT patients discontinued treatment during observation. Within each cohort, persistence rates were lower in UT patients than in PT patients (JMDC: 45.0% vs. 77.5%; MDV: 51.9% vs. 85.3%). Adherence was ≥80% across almost all HLD drug classes, and was slightly lower in the JMDC cohort than MDV cohort. Most common prescriptions were moderate statins in UT patients and combination therapy in PT patients. The high discontinuation rate of HLD therapy in UT patients warrants further investigation and identification of methods to encourage and support long-term persistence. Copyright © 2018. Published by Elsevier B.V.

  11. Analysis of 11 years of clinical negligence claims in esophagogastric cancer in England.

    Science.gov (United States)

    Ratnasingham, K; Stroud, L; Knight, J; Preston, S R; Sultan, J

    2017-04-01

    In the National Health Service (NHS), clinical negligence claims and associated compensations are constantly rising. The aim of this study is to identify the size, trends, and causes of litigation claims in relation to esophagogastric (EG) cancer in the NHS. Data requests were submitted to the NHS Litigation Authority (NHSLA) for the period of January 2003 to December 2013. Data were reviewed, categorized clinically, and analyzed in terms of causes and costs behind claims. In this time period, there were 163 claims identified from the NHSLA database. Ninety-five (58.3%) claims were successful, with a payout of £6.25 million. An increasing overall claim frequency and success rate were found over the last few years. The majority of claims related to gastric cancer (84; 88.4%). The commonest cause of complaint in successful claims was delay or failure in diagnosis (21.1%) and treatment (17.9%). There were only 10.5% successful intraoperative claims, of which 50% were due to unnecessary or additional procedures. The frequency and success rates of malpractice claims in EG cancer are rising. Failure or delay in diagnosis and treatment of EG malignancy is the most common cause of successful litigation claims. The findings further reinforce the need to improve early diagnosis. © The Authors 2017. Published by Oxford University Press on behalf of International Society for Diseases of the Esophagus. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. Cervical spinal cord, root, and bony spine injuries: a closed claims analysis.

    Science.gov (United States)

    Hindman, Bradley J; Palecek, John P; Posner, Karen L; Traynelis, Vincent C; Lee, Lorri A; Sawin, Paul D; Tredway, Trent L; Todd, Michael M; Domino, Karen B

    2011-04-01

    The aim of this study was to characterize cervical cord, root, and bony spine claims in the American Society of Anesthesiologists Closed Claims database to formulate hypotheses regarding mechanisms of injury. All general anesthesia claims (1970-2007) in the Closed Claims database were searched to identify cervical injuries. Three independent teams, each consisting of an anesthesiologist and neurosurgeon, used a standardized review form to extract data from claim summaries and judge probable contributors to injury. Cervical injury claims (n = 48; mean ± SD age 47 ± 15 yr; 73% male) comprised less than 1% of all general anesthesia claims. When compared with other general anesthesia claims (19%), cervical injury claims were more often permanent and disabling (69%). Although anatomic abnormalities (eg, cervical stenosis) were often present, cord injuries usually occurred in the absence of traumatic injury (81%) or cervical spine instability (76%). Cord injury occurred with cervical spine (65%) and noncervical spine (35%) procedures. Twenty-four percent of cord injuries were associated with the sitting position. Probable contributors to cord injury included anatomic abnormalities (81%), direct surgical complications (24% [38%, cervical spine procedures]), preprocedural symptomatic cord injury (19%), intraoperative head/neck position (19%), and airway management (11%). Most cervical cord injuries occurred in the absence of traumatic injury, instability, and airway difficulties. Cervical spine procedures and/or sitting procedures appear to predominate. In the absence of instability, cervical spondylosis was the most common factor associated with cord injury.

  13. Analysis of large databases in vascular surgery.

    Science.gov (United States)

    Nguyen, Louis L; Barshes, Neal R

    2010-09-01

    Large databases can be a rich source of clinical and administrative information on broad populations. These datasets are characterized by demographic and clinical data for over 1000 patients from multiple institutions. Since they are often collected and funded for other purposes, their use for secondary analysis increases their utility at relatively low costs. Advantages of large databases as a source include the very large numbers of available patients and their related medical information. Disadvantages include lack of detailed clinical information and absence of causal descriptions. Researchers working with large databases should also be mindful of data structure design and inherent limitations to large databases, such as treatment bias and systemic sampling errors. Notwithstanding these limitations, several important studies have been published in vascular care using large databases. They represent timely, "real-world" analyses of questions that may be too difficult or costly to address using prospective randomized methods. Large databases will be an increasingly important analytical resource as we focus on improving national health care efficacy in the setting of limited resources.

  14. An anomaly analysis framework for database systems

    NARCIS (Netherlands)

    Vavilis, S.; Egner, A.I.; Petkovic, M.; Zannone, N.

    2015-01-01

    Anomaly detection systems are usually employed to monitor database activities in order to detect security incidents. These systems raise an alert when anomalous activities are detected. The raised alerts have to be analyzed to timely respond to the security incidents. Their analysis, however, is

  15. Gender Disparities in Ghana National Health Insurance Claims: An Econometric Analysis

    Directory of Open Access Journals (Sweden)

    Samuel Antwi

    2014-01-01

    Full Text Available The objective of this study was to examine gender disparities in Ghana National Health Insurance claims. Data were collected from policyholders of the Ghana National Health Insurance Scheme using the National Health Insurance database and the patients' attendance register of the Koforidua Regional Hospital, from 1st January to 31st December 2011. Generalized linear regression (GLR) models and SPSS version 17.0 were used for the analysis. Among men, younger people preferred attending hospital for treatment compared with their older counterparts; likewise, among women, younger women favored attending hospital for treatment compared with their older counterparts. Among men, various levels of income had a strong impact on the propensity to make an insurance claim, whereas among women only the highest income level did so, compared with the lowest income level. Men who had completed senior high school education were less likely to make an insurance claim than their counterparts with basic or no education, whereas it was women with basic education who preferred using the hospital compared with their more educated counterparts. It is suggested that the government should consider building more health centers, clinics and CHPS compounds in at least every community, to help reduce the travel time in accessing health care. The Ministry of Health and the Ghana Health Service should engage older citizens by encouraging them to use hospitals when they are sick, instead of other alternative care providers.

  16. Edge database analysis for extrapolation to ITER

    International Nuclear Information System (INIS)

    Shimada, M.; Janeschitz, G.; Stambaugh, R.D.

    1999-01-01

    An edge database has been archived to facilitate cross-machine comparisons of SOL and edge pedestal characteristics, and to enable comparison with theoretical models with an aim to extrapolate to ITER. The SOL decay lengths of power, density and temperature become broader for increasing density and q95. The power decay length is predicted to be 1.4-3.5 cm (L-mode) and 1.4-2.7 cm (H-mode) at the midplane in ITER. Analysis of Type I ELMs suggests that each giant ELM on ITER would exceed the ablation threshold of the divertor plates. Theoretical models are proposed for the H-mode transition, for Type I and Type III ELMs and are compared with the edge pedestal database. (author)

  17. Health and nutrition content claims on websites advertising infant formula available in Australia: A content analysis.

    Science.gov (United States)

    Berry, Nina J; Gribble, Karleen D

    2017-10-01

    The use of health and nutrition content claims in infant formula advertising is restricted by many governments in response to WHO policies and WHA resolutions. The purpose of this study was to determine whether such prohibited claims could be observed in Australian websites that advertise infant formula products. A comprehensive internet search was conducted to identify websites that advertise infant formula available for purchase in Australia. Content analysis was used to identify prohibited claims. The coding frame was closely aligned with the provisions of the Australian and New Zealand Food Standard Code, which prohibits these claims. The outcome measures were the presence of health claims, nutrition content claims, or references to the nutritional content of human milk. Web pages advertising 25 unique infant formula products available for purchase in Australia were identified. Every advertisement (100%) contained at least one health claim. Eighteen (72%) also contained at least one nutrition content claim. Three web pages (12%) advertising brands associated with infant formula products referenced the nutritional content of human milk. All of these claims appear in spite of national regulations prohibiting them indicating a failure of monitoring and/or enforcement. Where countries have enacted instruments to prohibit health and other claims in infant formula advertising, the marketing of infant formula must be actively monitored to be effective. © 2016 John Wiley & Sons Ltd.

  18. Online E-cigarette Marketing Claims: A Systematic Content and Legal Analysis.

    Science.gov (United States)

    Klein, Elizabeth G; Berman, Micah; Hemmerich, Natalie; Carlson, Cristen; Htut, SuSandi; Slater, Michael

    2016-07-01

    Electronic nicotine delivery systems (ENDS), or e-cigarettes, are heavily marketed online. The purpose of our study was to perform a systematic identification and evaluation of claims made within ENDS retailer and manufacturer websites, and the legal status of such claims. We employed a systematic search protocol with popular search engines using 6 terms: (1) e-cigarettes; (2) e-cigs; (3) e-juice; (4) e-liquid; (5) e-hookah; and (6) vape pen. We analyzed English-language websites where ENDS are sold for implicit and explicit health-related claims. A legal analysis determined whether such claims are permissible under the US Food and Drug Administration's regulations. The vast majority of ENDS manufacturer (N = 78) and retailer (N = 32) websites made at least one health-related claim (77% and 65%, respectively). Modified risk claims and secondhand smoke-related claims were most prevalent, with an average of 2 claims per site. Health-related claims are plentiful within ENDS manufacturer and retailer websites. Results demonstrate that these sites focus on potential benefits while minimizing or eliminating information about possible harmful effects of ENDS. These claims are subject to the current regulatory authority by the FDA, and pose a risk of misinforming consumers.

  19. Effectiveness of influenza vaccination for children in Japan: Four-year observational study using a large-scale claims database.

    Science.gov (United States)

    Shibata, Natsumi; Kimura, Shinya; Hoshino, Takahiro; Takeuchi, Masato; Urushihara, Hisashi

    2018-05-11

    To date, few large-scale comparative effectiveness studies of influenza vaccination have been conducted in Japan, since marketing authorization for influenza vaccines in Japan has been granted based only on the results of seroconversion and safety in small-sized populations in clinical trial phases not on the vaccine effectiveness. We evaluated the clinical effectiveness of influenza vaccination for children aged 1-15 years in Japan throughout four influenza seasons from 2010 to 2014 in the real world setting. We conducted a cohort study using a large-scale claims database for employee health care insurance plans covering more than 3 million people, including enrollees and their dependents. Vaccination status was identified using plan records for the influenza vaccination subsidies. The effectiveness of influenza vaccination in preventing influenza and its complications was evaluated. To control confounding related to influenza vaccination, odds ratios (OR) were calculated by applying a doubly robust method using the propensity score for vaccination. Total study population throughout the four consecutive influenza seasons was over 116,000. Vaccination rate was higher in younger children and in the recent influenza seasons. Throughout the four seasons, the estimated ORs for influenza onset were statistically significant and ranged from 0.797 to 0.894 after doubly robust adjustment. On age stratification, significant ORs were observed in younger children. Additionally, ORs for influenza complication outcomes, such as pneumonia, hospitalization with influenza and respiratory tract diseases, were significantly reduced, except for hospitalization with influenza in the 2010/2011 and 2012/2013 seasons. We confirmed the clinical effectiveness of influenza vaccination in children aged 1-15 years from the 2010/2011 to 2013/2014 influenza seasons. Influenza vaccine significantly prevented the onset of influenza and was effective in reducing its secondary complications
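
    The doubly robust adjustment mentioned above can be illustrated with an augmented inverse-probability-weighting (AIPW) estimator, which combines a propensity model for vaccination with outcome models for influenza onset so that the estimate remains consistent if either model is correctly specified. Everything below (variables, effect sizes, data) is simulated for illustration only.

    ```python
    # Hedged AIPW (doubly robust) sketch with simulated vaccination data (scikit-learn).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    n = 20_000
    age = rng.uniform(1, 15, n)
    comorbid = rng.binomial(1, 0.2, n)
    X = np.column_stack([age, comorbid])

    p_vax = 1 / (1 + np.exp(-(-0.8 + 0.05 * age + 0.5 * comorbid)))     # confounded uptake
    vax = rng.binomial(1, p_vax)
    p_flu = 1 / (1 + np.exp(-(-1.5 - 0.4 * vax + 0.03 * age + 0.6 * comorbid)))
    flu = rng.binomial(1, p_flu)

    # Propensity model and outcome models (fit separately in vaccinated / unvaccinated).
    e = LogisticRegression().fit(X, vax).predict_proba(X)[:, 1]
    m1 = LogisticRegression().fit(X[vax == 1], flu[vax == 1]).predict_proba(X)[:, 1]
    m0 = LogisticRegression().fit(X[vax == 0], flu[vax == 0]).predict_proba(X)[:, 1]

    # AIPW estimates of counterfactual influenza risks under vaccination / no vaccination.
    risk1 = np.mean(vax * flu / e + (1 - vax / e) * m1)
    risk0 = np.mean((1 - vax) * flu / (1 - e) + (1 - (1 - vax) / (1 - e)) * m0)
    print("OR:", (risk1 / (1 - risk1)) / (risk0 / (1 - risk0)))
    ```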

  20. [Approaching the "evidence-practice gap" in pharmaceutical risk management: analysis of healthcare claim data].

    Science.gov (United States)

    Nakayama, Takeo

    2012-01-01

    The concept of evidence-based medicine (EBM) has spread among healthcare professionals in recent years; at the same time, the underuse of useful clinical evidence has become an important problem. This is called the evidence-practice gap. The major concern about the evidence-practice gap is insufficient implementation of evidence-based effective treatment; however, the perspective can be extended to measures to improve drug safety and to prevent drug-related adverse events. First, this article reviews the characteristics of receipt (healthcare claims) databases and their usefulness for pharmacoepidemiologic research. Second, as a real example of an evidence-practice gap study using a receipt database, the case of ergot-derived anti-Parkinson drugs, for which a risk of valvulopathy has been identified, is introduced. The receipt analysis showed that more than 70% of Parkinson's disease patients prescribed cabergoline or pergolide did not undergo echocardiography despite the revised product label recommendation. Finally, issues of pharmaceutical risk management and risk communication are discussed.

  1. Content analysis of false and misleading claims in television advertising for prescription and nonprescription drugs.

    Science.gov (United States)

    Faerber, Adrienne E; Kreling, David H

    2014-01-01

    False and misleading advertising for drugs can harm consumers and the healthcare system, and previous research has demonstrated that physician-targeted drug advertisements may be misleading. However, there is a dearth of research comparing consumer-targeted drug advertising to evidence to evaluate whether misleading or false information is being presented in these ads. To compare claims in consumer-targeted television drug advertising to evidence, in order to evaluate the frequency of false or misleading television drug advertising targeted to consumers. A content analysis of a cross-section of television advertisements for prescription and nonprescription drugs aired from 2008 through 2010. We analyzed commercial segments containing prescription and nonprescription drug advertisements randomly selected from the Vanderbilt Television News Archive, a census of national news broadcasts. For each advertisement, the most-emphasized claim in each ad was identified based on claim iteration, mode of communication, duration and placement. This claim was then compared to evidence by trained coders, and categorized as being objectively true, potentially misleading, or false. Potentially misleading claims omitted important information, exaggerated information, made lifestyle associations, or expressed opinions. False claims were factually false or unsubstantiated. Of the most emphasized claims in prescription (n = 84) and nonprescription (n = 84) drug advertisements, 33 % were objectively true, 57 % were potentially misleading and 10 % were false. In prescription drug ads, there were more objectively true claims (43 %) and fewer false claims (2 %) than in nonprescription drug ads (23 % objectively true, 7 % false). There were similar numbers of potentially misleading claims in prescription (55 %) and nonprescription (61 %) drug ads. Potentially misleading claims are prevalent throughout consumer-targeted prescription and nonprescription drug advertising on

  2. Petroleum taxation under uncertainty: contingent claims analysis with an application to Norway

    International Nuclear Information System (INIS)

    Lund, D.

    1992-01-01

    Contingent claims analysis provides a useful tool for analysing the value of tax claims, and of companies' after-tax values, under uncertainty. The method is presented and applied to the analysis of how Norwegian petroleum taxes affect company behaviour. Few results can be derived analytically. A numerical approach is suggested, with a stylized description of production possibilities. The Norwegian taxes are found to have strong distortionary effects. The relation to other methods and problems connected with the application are discussed. (Author)
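
    A stylized numerical sketch of the contingent-claims idea: the tax take on an uncertain future field cash flow resembles an option on the oil price and can be valued by risk-neutral Monte Carlo. The price process, tax rate and cost figures below are illustrative assumptions, not the Norwegian petroleum tax system.

    ```python
    # Hedged Monte Carlo sketch: valuing a tax claim as a contingent claim on the oil price.
    import numpy as np

    rng = np.random.default_rng(11)
    n_paths, T, r, sigma = 100_000, 5.0, 0.04, 0.30    # horizon, risk-free rate, volatility
    p0, unit_cost, output, tax_rate = 70.0, 40.0, 1.0e6, 0.5   # illustrative assumptions

    # Risk-neutral terminal oil price under geometric Brownian motion.
    z = rng.standard_normal(n_paths)
    p_T = p0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

    pretax = output * (p_T - unit_cost)                # field cash flow at time T
    tax = tax_rate * np.maximum(pretax, 0.0)           # tax paid only on positive profit
    discount = np.exp(-r * T)

    print("value of tax claim     :", discount * tax.mean())
    print("after-tax company value:", discount * (pretax - tax).mean())
    ```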

  3. The cost of respirable coal mine dust: an analysis based on new black lung claims

    Energy Technology Data Exchange (ETDEWEB)

    Page, S.J.; Organiscak, J.A.; Lichtman, K. [US Bureau of Mines, Pittsburgh, PA (United States). Dept. of the Interior

    1997-12-01

    The article provides a summation of the monetary costs of new compensation claims associated with levels of unmitigated respirable coal mine dust and the resultant lung disease known as black lung and compares these compensation costs to the cost of dust control technology research by the US Bureau of Mines. It presents an analysis of these expenditures and projects these costs over the period from 1991 to 2010, based on projected future new claims which are assumed to be approved for federal and state benefit payment. Since current and future dust control research efforts cannot change past claim histories, a valid comparison of future research spending with other incurred costs must examine only the cost of future new claims. The bias of old claim costs was eliminated in this analysis by examining only claims since 1980. The results estimate that for an expected 339 new approved claims annually from 1991 to 2010, the Federal Trust Fund costs will be 985 million dollars. During this same period, state black lung compensation is estimated to be 18.2 billion dollars. The Bureau of Mines dust control research expenditures are estimated as 0.44% of the projected future black lung-related costs. 9 refs., 4 figs., 3 tabs.

  4. Characteristics and healthcare utilisation patterns of high-cost beneficiaries in the Netherlands: a cross-sectional claims database study

    NARCIS (Netherlands)

    Wammes, J.J.G.; Tanke, M.A.C.; Jonkers, W.; Westert, G.P.; Wees, P.J. van der; Jeurissen, P.P.T.

    2017-01-01

    OBJECTIVE: To determine medical needs, demographic characteristics and healthcare utilisation patterns of the top 1% and top 2%-5% high-cost beneficiaries in the Netherlands. DESIGN: Cross-sectional study using 1 year of claims data. We broke down high-cost beneficiaries by demographics, the most

  5. Analysis of the presence of nutrient claims on labels of ultra-processed foods directed at children and of the perception of kids on such claims

    Directory of Open Access Journals (Sweden)

    Natália Durigon ZUCCHI

    Full Text Available Objective: To characterize the presence of nutrient claims on the front-of-pack labels of ultra-processed foods directed at children and gain insight into children's views about the presence of marketing strategies and nutrient claims on labels of ultra-processed foods. Methods: Analysis of images (front panel, nutrition facts table, and ingredients list) of labels from 535 packaged foods with marketing strategies directed at children obtained in an audit-type survey conducted at a large Brazilian supermarket store. Food products with ultra-processed characteristics were identified, and the nutrient claims were quantified and described. Focus groups were conducted with children aged 8-10 years. Results: A total of 472 (88.0%) of the 535 packaged foods directed at children were classified as ultra-processed. Of these, 220 (46.6%) had one or more nutrient claims on their front-of-pack label (n=321), most (n=236, 73.5%) claiming the presence/increased quantities of vitamins and minerals. The most common 'free/reduced' content claim regarded trans fat content (n=48). The focus groups allowed the identification of a noticeable influence of nutrition claims on children, who considered the emphasis important but were confused by the meaning and focus of such claims. Conclusion: Highlighted nutrient claims on the packages of ultra-processed foods were common and seemed to influence the children's perception of the products' quality as a whole. The results indicate the need to thoroughly review the legislation on nutrient claims on the packages of ultra-processed foods.

  6. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  7. Analysis of medication-related malpractice claims: causes, preventability, and costs.

    Science.gov (United States)

    Rothschild, Jeffrey M; Federico, Frank A; Gandhi, Tejal K; Kaushal, Rainu; Williams, Deborah H; Bates, David W

    2002-11-25

    Adverse drug events (ADEs) may lead to serious injury and may result in malpractice claims. While ADEs resulting in claims are not representative of all ADEs, such data provide a useful resource for studying ADEs. Therefore, we conducted a review of medication-related malpractice claims to study their frequency, nature, and costs and to assess the human factor failures associated with preventable ADEs. We also assessed the potential benefits of proved effective ADE prevention strategies on ADE claims prevention. We conducted a retrospective analysis of a New England malpractice insurance company claims records from January 1, 1990, to December 31, 1999. Cases were electronically screened for possible ADEs and followed up by independent review of abstracts by 2 physician reviewers (T.K.G. and R.K.). Additional in-depth claims file reviews identified potential human factor failures associated with ADEs. Adverse drug events represented 6.3% (129/2040) of claims. Adverse drug events were judged preventable in 73% (n = 94) of the cases and were nearly evenly divided between outpatient and inpatient settings. The most frequently involved medication classes were antibiotics, antidepressants or antipsychotics, cardiovascular drugs, and anticoagulants. Among these ADEs, 46% were life threatening or fatal. System deficiencies and performance errors were the most frequent cause of preventable ADEs. The mean costs of defending malpractice claims due to ADEs were comparable for nonpreventable inpatient and outpatient ADEs and preventable outpatient ADEs (mean, $64,700-74,200), but costs were considerably greater for preventable inpatient ADEs (mean, $376,500). Adverse drug events associated with malpractice claims were often severe, costly, and preventable, and about half occurred in outpatients. Many interventions could potentially have prevented ADEs, with error proofing and process standardization covering the greatest proportion of events.

  8. Prevalence of and risk for gastrointestinal bleeding and peptic ulcerative disorders in a cohort of HIV patients from a U.S. healthcare claims database.

    Directory of Open Access Journals (Sweden)

    Emily Bratton

    Full Text Available The primary study objectives were to estimate the frequencies and rates of gastrointestinal bleeding and peptic ulcerative disorder in HIV-positive patients compared with age- and sex-matched HIV-negative subjects. Data from a US insurance claims database were used for this analysis. Among 89,207 patients with HIV, 9.0% had a GI bleed, 1.0% had an upper gastrointestinal bleed, 5.6% had a lower gastrointestinal bleed, 1.9% had a peptic ulcerative disorder diagnosis, and 0.6% had both gastrointestinal/peptic ulcerative disorder. Among 267,615 HIV-negative subjects, the respective frequencies were 6.9%, 0.6%, 4.3%, 1.4%, and 0.4% (p<0.0001 for each diagnosis subcategory). After combining effect measure modifiers into comedication and comorbidity strata, gastrointestinal bleeding hazard ratios (HRs) were higher for HIV-positive patients without comedication/comorbidity and those with comedication alone (HR, 2.73; 95% confidence interval [CI], 2.62-2.84 and HR, 1.59; 95% CI, 1.47-1.71, respectively). The rate of peptic ulcerative disorder among those without a history of ulcers and no comorbidity/comedication was also elevated (HR, 2.72; 95% CI, 2.48-2.99). Hazard ratios of gastrointestinal bleeding and peptic ulcerative disorder without a history of ulcers were lower among patients infected with HIV with comedication/comorbidity (HR, 0.64; 95% CI, 0.56-0.73 and HR, 0.46; 95% CI, 0.33-0.65, respectively). Rates of gastrointestinal bleeding plus peptic ulcerative disorder followed a similar pattern. In summary, the rates of gastrointestinal/peptic ulcerative disorder events comparing HIV-infected subjects to non-HIV-infected subjects were differential based on comorbidity and comedication status.

  9. Premium analysis for copula model: A case study for Malaysian motor insurance claims

    Science.gov (United States)

    Resti, Yulia; Ismail, Noriszura; Jaaman, Saiful Hafizah

    2014-06-01

    This study performs premium analysis for copula models with regression marginals. For illustration purpose, the copula models are fitted to the Malaysian motor insurance claims data. In this study, we consider copula models from Archimedean and Elliptical families, and marginal distributions of Gamma and Inverse Gaussian regression models. The simulated results from independent model, which is obtained from fitting regression models separately to each claim category, and dependent model, which is obtained from fitting copula models to all claim categories, are compared. The results show that the dependent model using Frank copula is the best model since the risk premiums estimated under this model are closely approximate to the actual claims experience relative to the other copula models.
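
    A small sketch of the dependent model described above, assuming a Frank copula linking two claim categories with Gamma severity marginals (the parameters and claim categories are invented): pairs are drawn by conditional inversion and an aggregate pure premium is taken as the mean total claim.

    ```python
    # Hedged sketch: simulating dependent claim severities with a Frank copula and
    # Gamma marginals, then estimating an aggregate pure premium. Parameters are invented.
    import numpy as np
    from scipy import stats

    def frank_pair(n, theta, rng):
        """Sample (u, v) from a Frank copula by conditional inversion."""
        u = rng.uniform(size=n)
        w = rng.uniform(size=n)
        num = w * (np.exp(-theta) - 1.0)
        den = w + (1.0 - w) * np.exp(-theta * u)
        v = -np.log1p(num / den) / theta
        return u, v

    rng = np.random.default_rng(5)
    u, v = frank_pair(200_000, theta=4.0, rng=rng)     # theta > 0: positive dependence

    # Gamma severity marginals for two claim categories (e.g. own damage / third party).
    x_own = stats.gamma.ppf(u, a=2.0, scale=1_500.0)
    x_third = stats.gamma.ppf(v, a=1.5, scale=2_500.0)

    total = x_own + x_third
    print("dependent pure premium   :", total.mean())
    print("rank correlation (check) :", stats.spearmanr(x_own, x_third)[0])
    ```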

  10. Analysis of Cloud-Based Database Systems

    Science.gov (United States)

    2015-06-01

    deploying the VM, we installed SQL Server 2014 relational database management software (RDBMS) and restored a copy of the PYTHON database onto the server ...management views within SQL Server , we retrieved lists of the most commonly executed queries, the percentage of reads versus writes, as well as...Monitor. This gave us data regarding resource utilization and queueing. The second tool we used was the SQL Server Profiler provided by Microsoft

  11. Analysis of commercial and public bioactivity databases.

    Science.gov (United States)

    Tiikkainen, Pekka; Franke, Lutz

    2012-02-27

    Activity data for small molecules are invaluable in chemoinformatics. Various bioactivity databases exist containing detailed information of target proteins and quantitative binding data for small molecules extracted from journals and patents. In the current work, we have merged several public and commercial bioactivity databases into one bioactivity metabase. The molecular presentation, target information, and activity data of the vendor databases were standardized. The main motivation of the work was to create a single relational database which allows fast and simple data retrieval by in-house scientists. Second, we wanted to know the amount of overlap between databases by commercial and public vendors to see whether the former contain data complementing the latter. Third, we quantified the degree of inconsistency between data sources by comparing data points derived from the same scientific article cited by more than one vendor. We found that each data source contains unique data which is due to different scientific articles cited by the vendors. When comparing data derived from the same article we found that inconsistencies between the vendors are common. In conclusion, using databases of different vendors is still useful since the data overlap is not complete. It should be noted that this can be partially explained by the inconsistencies and errors in the source data.

  12. DIII-D physics analysis database

    International Nuclear Information System (INIS)

    Bramson, G.; Schissel, D.P.; DeBoo, J.C.; St John, H.

    1990-10-01

    Since June 1986 the DIII-D tokamak has had over 16,000 discharges accumulating more than 250 Gigabytes of raw data (currently over 30 Mbytes per discharge). The centralized DIII-D databases and the associated support software described earlier provide the means to extract, analyze, store, and display reduced sets of data for specific physics issues. The confinement, stability, transition, and cleanliness databases consist of more than 7500 records of basic reduced diagnostic datasets. Each database record corresponds to a specific snapshot in time for a selected discharge. Recently some profile datasets have been implemented. Diagnostic data are fit by a cubic spline or a parabola by the in-house ENERGY code to provide density, temperature, radiated power, effective charge (Zeff), and rotation velocity profiles. These fits are stored in the profile datasets which are inputs for the ONETWO code which computes transport data. 3 refs., 4 figs
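
    The profile-fitting step mentioned above (cubic-spline fits to diagnostic data) can be illustrated with a short SciPy sketch; the simulated temperature profile and smoothing factor are hypothetical and unrelated to the in-house ENERGY code.

    ```python
    # Hedged sketch: smoothing a noisy radial temperature profile with a cubic spline.
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(2)
    rho = np.linspace(0.0, 1.0, 40)                      # normalized minor radius
    te_true = 3.0 * (1.0 - rho**2) ** 1.5 + 0.05         # keV, invented profile shape
    te_meas = te_true + rng.normal(0.0, 0.1, rho.size)   # noisy "diagnostic" data

    spline = UnivariateSpline(rho, te_meas, k=3, s=0.4)  # cubic smoothing spline
    print("fitted Te at rho=0.5 (keV):", float(spline(0.5)))
    ```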

  13. Statistical analysis of the ASME KIc database

    International Nuclear Information System (INIS)

    Sokolov, M.A.

    1998-01-01

    The American Society of Mechanical Engineers (ASME) KIc curve is a function of test temperature (T) normalized to a reference nil-ductility temperature, RTNDT, namely, T - RTNDT. It was constructed as the lower boundary to the available KIc database. Being a lower bound to the unique but limited database, the ASME KIc curve concept does not address probability. However, continuing advances in fracture mechanics have led to the use of the Weibull distribution function to model the scatter of fracture toughness values in the transition range. The Weibull statistic/master curve approach was applied to analyze the current ASME KIc database. It is shown that the Weibull distribution function models the scatter in KIc data from different materials very well, while the temperature dependence is described by the master curve. Probabilistic-based tolerance-bound curves are suggested to describe lower-bound KIc values.
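
    A hedged sketch of the Weibull/master-curve idea: toughness values at one test temperature are fitted with a three-parameter Weibull distribution whose threshold (20 MPa√m) and shape (4) are held fixed, as in the master-curve methodology, and a median toughness is derived from the fitted scale. The data below are simulated, not the ASME database.

    ```python
    # Hedged sketch: Weibull fit of simulated fracture-toughness data with the
    # master-curve convention of a fixed threshold (20 MPa*sqrt(m)) and shape 4.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)
    K0_true = 110.0                                    # invented scale parameter
    kic = stats.weibull_min.rvs(4, loc=20.0, scale=K0_true - 20.0,
                                size=30, random_state=rng)

    # Fit with shape and threshold held fixed; only the scale is estimated.
    shape, loc, scale = stats.weibull_min.fit(kic, fc=4, floc=20.0)
    K0_hat = loc + scale
    K_median = loc + scale * np.log(2) ** 0.25         # median of the fitted distribution
    print(f"estimated K0 = {K0_hat:.1f}, median KIc = {K_median:.1f} (MPa*sqrt(m))")
    ```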

  14. IPAD: the Integrated Pathway Analysis Database for Systematic Enrichment Analysis.

    Science.gov (United States)

    Zhang, Fan; Drabier, Renee

    2012-01-01

    Next-Generation Sequencing (NGS) technologies and Genome-Wide Association Studies (GWAS) generate millions of reads and hundreds of datasets, and there is an urgent need for a better way to accurately interpret and distill such large amounts of data. Extensive pathway and network analysis allow for the discovery of highly significant pathways from a set of disease vs. healthy samples in the NGS and GWAS. Knowledge of activation of these processes will lead to elucidation of the complex biological pathways affected by drug treatment, to patient stratification studies of new and existing drug treatments, and to understanding the underlying anti-cancer drug effects. There are approximately 141 biological human pathway resources as of Jan 2012 according to the Pathguide database. However, most currently available resources do not contain disease, drug or organ specificity information such as disease-pathway, drug-pathway, and organ-pathway associations. Systematically integrating pathway, disease, drug and organ specificity together becomes increasingly crucial for understanding the interrelationships between signaling, metabolic and regulatory pathway, drug action, disease susceptibility, and organ specificity from high-throughput omics data (genomics, transcriptomics, proteomics and metabolomics). We designed the Integrated Pathway Analysis Database for Systematic Enrichment Analysis (IPAD, http://bioinfo.hsc.unt.edu/ipad), defining inter-association between pathway, disease, drug and organ specificity, based on six criteria: 1) comprehensive pathway coverage; 2) gene/protein to pathway/disease/drug/organ association; 3) inter-association between pathway, disease, drug, and organ; 4) multiple and quantitative measurement of enrichment and inter-association; 5) assessment of enrichment and inter-association analysis with the context of the existing biological knowledge and a "gold standard" constructed from reputable and reliable sources; and 6) cross-linking of

  15. Plant databases and data analysis tools

    Science.gov (United States)

    It is anticipated that the coming years will see the generation of large datasets including diagnostic markers in several plant species with emphasis on crop plants. To use these datasets effectively in any plant breeding program, it is essential to have the information available via public database...

  16. ANALYSIS OF DECREASE MACHINABILITY POSSIBLE CAUSES FOR CLAIMED ALLOY

    Directory of Open Access Journals (Sweden)

    Nataša Náprstková

    2016-09-01

    Full Text Available The Faculty of Production Technology and Management is often asked by companies to solve specific technical tasks. One of these tasks was the analysis of an aluminum alloy with worsened machinability: rods made from this alloy unexpectedly produced significantly worse (longer) chips during machining. The alloy was the subject of a complaint and, of course, this created economic damage. The company was therefore interested in the causes of this change in the alloy's behavior, both to defend itself better in possible future complaint procedures and to avoid mistakes in the production of the material. At the faculty, analyses that could contribute to identifying the cause of the worsened machinability were carried out.

  17. INCIDENCE AND PREVALENCE OF ACROMEGALY IN THE UNITED STATES: A CLAIMS-BASED ANALYSIS.

    Science.gov (United States)

    Broder, Michael S; Chang, Eunice; Cherepanov, Dasha; Neary, Maureen P; Ludlam, William H

    2016-11-01

    Acromegaly, a rare endocrine disorder, results from excessive growth hormone secretion, leading to multisystem-associated morbidities. Using 2 large nationwide databases, we estimated the annual incidence and prevalence of acromegaly in the U.S. We used 2008 to 2013 data from the Truven Health MarketScan ® Commercial Claims and Encounters Database and the IMS Health PharMetrics healthcare insurance claims database, identifying health plan enrollees with ≥2 claims for acromegaly (International Classification of Diseases, 9th Revision, Clinical Modification [ICD-9-CM] code 253.0), or 1 claim with acromegaly and 1 claim for pituitary tumor, pituitary surgery, or cranial stereotactic radiosurgery. Annual incidence was calculated for each year from 2009 to 2013, and prevalence in 2013. Estimates were stratified by age and sex. Incidence was up to 11.7 cases per million person-years (PMPY) in MarketScan and 9.6 cases PMPY in PharMetrics. Rates were similar by sex but typically lowest in ≤17 year olds and higher in >24 year olds. The prevalence estimates were 87.8 and 71.0 per million per year in MarketScan and PharMetrics, respectively. Prevalence consistently increased with age but was similar by sex in each database. The current U.S. incidence of acromegaly may be up to 4 times higher and prevalence may be up to 50% higher than previously reported in European studies. Our findings correspond with the estimates reported by a recent U.S. study that used a single managed care database, supporting the robustness of these estimates in this population. Our study indicates there are approximately 3,000 new cases of acromegaly per year, with a prevalence of about 25,000 acromegaly patients in the U.S. Abbreviations: CT = computed tomography; GH = growth hormone; IGF-1 = insulin-like growth factor 1; ICD-9-CM = International Classification of Diseases, 9th Revision, Clinical Modification; MRI = magnetic resonance imaging; PMPY = per million person-years.
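
    The incidence and prevalence figures quoted above reduce to simple rate arithmetic. A minimal Python sketch follows; the case counts and denominators are hypothetical round numbers chosen only to show the per-million scaling, not values taken from either claims database:

        def rate_per_million_person_years(cases, person_years):
            """Incidence expressed per million person-years (PMPY)."""
            return 1e6 * cases / person_years

        def prevalence_per_million(cases, population):
            """Prevalence expressed per million enrollees."""
            return 1e6 * cases / population

        # Hypothetical counts, for illustration only.
        print(rate_per_million_person_years(cases=117, person_years=10_000_000))  # 11.7 PMPY
        print(prevalence_per_million(cases=878, population=10_000_000))           # 87.8 per million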

  18. A 12-year analysis of closed medical malpractice claims of the Taiwan civil court

    Science.gov (United States)

    Hwang, Chi-Yuan; Wu, Chien-Hung; Cheng, Fu-Cheng; Yen, Yung-Lin; Wu, Kuan-Han

    2018-01-01

    Abstract Malpractice lawsuits cause increased physician stress and decreased career satisfaction, which might result in defensive medicine for avoiding litigation. It is, consequently, important to learn from previous malpractice claims. The aim of this study was to examine the epidemiologic factors related to medical malpractice claims, identify specialties at high risk of such claims, and determine which clinical errors tend to lead to medical malpractice lawsuits, by analyzing closed malpractice claims in the civil courts of Taiwan. The current analysis reviewed the verdicts of the Taiwan judicial system in a retrospective study using the population-based databank, focusing on 946 closed medical claims between 2002 and 2013. Among these medical malpractice claims, only 14.1% of the verdicts were against clinicians, with a mean indemnity payment of $83,350. The most common single specialty involved was obstetrics (10.7%), while the surgery group accounted for approximately 40% of the cases. In total, 46.3% of the patients named in the claims had either died or been gravely injured. Compared to the $75,632 indemnity for deceased patients, the mean indemnity payment for plaintiffs with grave outcomes was approximately 4.5 times higher. The diagnosis groups at high risk of malpractice litigation were infectious diseases (7.3%), malignancies (7.2%), and limb fractures (4.9%). A relatively low success rate was found in claims concerning undiagnosed congenital anomalies (4.5%) and infectious diseases (5.8%). A surgery dispute was the most frequent argument in civil malpractice claims (38.8%), followed by diagnosis error (19.3%). Clinicians won 85.9% of the cases against them, but they spent an average of 4.7 years to reach final adjudication. Increased public education to prevent unrealistic expectations among patients is recommended to decrease frivolous lawsuits. Further investigation to improve the lengthy judicial process is

  19. Nuclear materials thermo-physical property database and property analysis using the database

    International Nuclear Information System (INIS)

    Jeong, Yeong Seok

    2002-02-01

    Thermo-physical properties of nuclear materials must be available and well understood for the evaluation and analysis of steady-state and accident conditions of commercial and research reactors. In this study, a thermo-physical property database and home page for nuclear materials were developed. Using this database, the thermal conductivity, heat capacity, enthalpy, and linear thermal expansion of fuel and cladding materials were analyzed, and the thermo-property models used in nuclear fuel performance evaluation codes were compared with the experimental data in the database. The comparison of the thermo-property models for UO2 fuel and cladding in the major performance evaluation codes shows that both are similar to the data in the database

  20. The risk of malignancy among biologic-naïve pediatric psoriasis patients: A retrospective cohort study in a US claims database.

    Science.gov (United States)

    Gu, Yun; Nordstrom, Beth L

    2017-08-01

    Little published literature exists regarding malignancy risk in pediatric psoriasis patients. To compare malignancy risk in biologic-naïve pediatric psoriasis patients with a matched pediatric population without psoriasis. This retrospective cohort study used IMS LifeLink Health Plan Claims data covering 1998-2008. Cancer incidence was compared with the US Surveillance, Epidemiology, and End Results (SEER) data using standardized incidence ratios (SIR), and between cohorts using Cox models. Among 9045 pediatric psoriasis patients and 77,206 comparators, 18 probable or highly probable cancers were identified. Pediatric psoriasis patients had a nonsignificantly lower incidence than comparators (hazard ratio [HR] 0.43, 95% confidence interval [CI] 0.05-3.54). The HR increased to 1.67 (95% CI 0.54-5.18) when cancer diagnosed during the first 90 days of follow-up was included. The pediatric psoriasis cohort had a significantly increased lymphoma rate compared with SEER (SIR 5.42, 95% CI 1.62-12.94), but no significant increase relative to the comparator cohort. Misclassification of disease and outcome might have occurred with patients in the claims database. Patients with pediatric psoriasis showed no significant increase in overall cancer risk compared with those without psoriasis. A potential increased risk for lymphoma was observed when compared with the general population. Copyright © 2017 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.
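
    The comparison against SEER described above uses a standardized incidence ratio (SIR), which is an observed-to-expected count ratio with a Poisson interval on the observed count. A hedged Python sketch follows (requires SciPy); the observed and expected counts are illustrative placeholders, not the study's data:

        from scipy.stats import chi2

        def sir_with_ci(observed, expected, alpha=0.05):
            """Standardized incidence ratio (observed/expected) with an exact
            Poisson confidence interval on the observed count."""
            sir = observed / expected
            lower = chi2.ppf(alpha / 2, 2 * observed) / (2 * expected) if observed > 0 else 0.0
            upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
            return sir, lower, upper

        # Illustrative counts: 5 observed lymphomas vs. an expected count from registry rates.
        print(sir_with_ci(observed=5, expected=0.92))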

  1. [Quality in Revision Arthroplasty: A Comparison between Claims Data Analysis and External Quality Assurance].

    Science.gov (United States)

    Wessling, M; Gravius, S; Gebert, C; Smektala, R; Günster, C; Hardes, J; Rhomberg, I; Koller, D

    2016-02-01

    External quality assurance for revisions of total knee arthroplasty (TKA) and total hip arthroplasty (THA) is carried out through the AQUA institute in Germany. Data are collected by the providers and are analyzed based on predefined quality indicators from the hospital stay in which the revision was performed. The present study explores the possibility of adding routine data analysis to the existing external quality assurance (EQS). Differences between the methods are displayed. The study aims to quantify the benefit of an additional analysis that allows patients to be followed up beyond the hospitalization itself. All persons insured in an AOK sickness fund formed the population for analysis. Revisions were identified using the same algorithm as the existing external quality assurance. Adverse events were defined according to the AQUA indicators for the years 2008 to 2011. The hospital stay in which the revision took place and a follow-up of 30 days were included. For re-operation and dislocation we also defined a 365-day interval for additional follow-up. The results were compared to the external quality control reports. Almost all indicators showed more events in the claims data analysis than in external quality control. Major differences are seen for dislocation (EQS SD: 1.87 % vs. claims data [cd] SD: 2.06 %, cd+30 d: 2.91 %, cd+365 d: 7.27 %) and reoperation (hip revision: EQS SD: 5.88 % vs. claims data SD: 8.79 %, cd+30 d: 9.82 %, cd+365 d: 15.0 %; knee revision: EQS SD: 3.21 % vs. claims data SD: 4.07 %, cd+30 d: 4.6 %, cd+365 d: 15.43 %). Claims data showed additional adverse events for all indicators after the initial hospital stay, rising to 77 % of all events. The number of adverse events differs between the existing external quality control and our claims data analysis. Claims data offer the opportunity to complement existing methods of quality control through a longer follow-up, during which many complications become evident. Georg

  2. Development and validation of an algorithm for identifying urinary retention in a cohort of patients with epilepsy in a large US administrative claims database.

    Science.gov (United States)

    Quinlan, Scott C; Cheng, Wendy Y; Ishihara, Lianna; Irizarry, Michael C; Holick, Crystal N; Duh, Mei Sheng

    2016-04-01

    The aim of this study was to develop and validate an insurance claims-based algorithm for identifying urinary retention (UR) in epilepsy patients receiving antiepileptic drugs to facilitate safety monitoring. Data from the HealthCore Integrated Research Database(SM) in 2008-2011 (retrospective) and 2012-2013 (prospective) were used to identify epilepsy patients with UR. During the retrospective phase, three algorithms identified potential UR: (i) UR diagnosis code with a catheterization procedure code; (ii) UR diagnosis code alone; or (iii) diagnosis with UR-related symptoms. Medical records for 50 randomly selected patients satisfying ≥1 algorithm were reviewed by urologists to ascertain UR status. Positive predictive value (PPV) and 95% confidence intervals (CI) were calculated for the three component algorithms and the overall algorithm (defined as satisfying ≥1 component algorithms). Algorithms were refined using urologist review notes. In the prospective phase, the UR algorithm was refined using medical records for an additional 150 cases. In the retrospective phase, the PPV of the overall algorithm was 72.0% (95%CI: 57.5-83.8%). Algorithm 3 performed poorly and was dropped. Algorithm 1 was unchanged; urinary incontinence and cystitis were added as exclusionary diagnoses to Algorithm 2. The PPV for the modified overall algorithm was 89.2% (74.6-97.0%). In the prospective phase, the PPV for the modified overall algorithm was 76.0% (68.4-82.6%). Upon adding overactive bladder, nocturia and urinary frequency as exclusionary diagnoses, the PPV for the final overall algorithm was 81.9% (73.7-88.4%). The current UR algorithm yielded a PPV > 80% and could be used for more accurate identification of UR among epilepsy patients in a large claims database. Copyright © 2016 John Wiley & Sons, Ltd.
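
    The positive predictive value reported above is simply the share of algorithm-flagged patients confirmed by chart review, with a binomial confidence interval. A small Python sketch is shown below, using statsmodels' exact (Clopper-Pearson) interval, which may differ slightly from the interval method the authors used; the 36-of-50 example is illustrative:

        from statsmodels.stats.proportion import proportion_confint

        def ppv_with_ci(true_positives, flagged, alpha=0.05):
            """Positive predictive value of a claims algorithm with an exact
            (Clopper-Pearson) confidence interval."""
            ppv = true_positives / flagged
            low, high = proportion_confint(true_positives, flagged, alpha=alpha, method="beta")
            return ppv, low, high

        # 36 confirmed cases out of 50 reviewed charts -> PPV = 72% (illustrative).
        print(ppv_with_ci(36, 50))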

  3. Physics analysis database for the DIII-D tokamak

    International Nuclear Information System (INIS)

    Schissel, D.P.; Bramson, G.; DeBoo, J.C.

    1986-01-01

    The authors report on a centralized database for handling reduced data for physics analysis implemented for the DIII-D tokamak. Each database record corresponds to a specific snapshot in time for a selected discharge. Features of the database environment include automatic updating, data integrity checks, and data traceability. Reduced data from each diagnostic comprises a dedicated data bank (a subset of the database) with quality assurance provided by a physicist. These data banks will be used to create profile banks which will be input to a transport code to create a transport bank. Access to the database is initially through FORTRAN programs. One user interface, PLOTN, is a command driven program to select and display data subsets. Another user interface, PROF, compares and displays profiles. The database is implemented on a Digital Equipment Corporation VAX 8600 running VMS

  4. Analysis of the Romanian Insurance Market Based on Ensuring and Exercising Consumers` Right to Claim

    Directory of Open Access Journals (Sweden)

    Dan Armeanu

    2014-05-01

    Full Text Available In the financial market of insurance, consumer protection represents an important component contributing to the stability, discipline and efficiency of the market. In this respect, the activity of educating and informing insurance consumers about ensuring and exercising their right to claim plays a leading role in the mechanism of consumer protection. This study aims to improve the decision-making capacity of financial services consumers in the Romanian insurance market through better information on ensuring and exercising their right to claim under the legislation. Thus, by applying three data analysis techniques – principal components analysis, cluster analysis and discriminant analysis – to the data regarding the petitions that were registered by the 41 insurance companies which operated in the Romanian market in 2012, a classification that assesses the transparency of the insurance market is obtained, resulting in better information for consumers and, hence, improved protection through a reduction in the volume of transactions that are harmful to consumers
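
    A minimal Python sketch of the three-technique pipeline mentioned above (principal components analysis, cluster analysis, discriminant analysis) is given below, using scikit-learn on a randomly generated stand-in for the 41-insurer petition matrix; it is not the authors' implementation, and the six indicator columns are hypothetical:

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(0)
        X = rng.poisson(lam=5.0, size=(41, 6)).astype(float)  # 41 insurers x 6 hypothetical petition indicators

        Xs = StandardScaler().fit_transform(X)
        scores = PCA(n_components=2).fit_transform(Xs)                                 # principal components analysis
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)   # cluster analysis
        lda = LinearDiscriminantAnalysis().fit(Xs, labels)                             # discriminant analysis on the clusters
        print(lda.score(Xs, labels))  # how well the original indicators separate the groups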

  5. Analysis/design of tensile property database system

    International Nuclear Information System (INIS)

    Park, S. J.; Kim, D. H.; Jeon, I.; Lyu, W. S.

    2001-01-01

    Constructing a database from the data produced by tensile experiments can increase the application of test results. Basic data can easily be obtained from the database when a new experiment is being prepared, and high-quality results can be produced by comparison with the previous data. To construct the database, the analysis and design work must be made more specific, after which the various requirements of customers can be served with the best quality. In this thesis, the analysis and design were performed to develop the database for tensile extension properties

  6. Incorporating the Last Four Digits of Social Security Numbers Substantially Improves Linking Patient Data from De-identified Hospital Claims Databases.

    Science.gov (United States)

    Naessens, James M; Visscher, Sue L; Peterson, Stephanie M; Swanson, Kristi M; Johnson, Matthew G; Rahman, Parvez A; Schindler, Joe; Sonneborn, Mark; Fry, Donald E; Pine, Michael

    2015-08-01

    Assess algorithms for linking patients across de-identified databases without compromising confidentiality. Hospital discharges from 11 Mayo Clinic hospitals during January 2008-September 2012 (assessment and validation data). Minnesota death certificates and hospital discharges from 2009 to 2012 for entire state (application data). Cross-sectional assessment of sensitivity and positive predictive value (PPV) for four linking algorithms tested by identifying readmissions and posthospital mortality on the assessment data with application to statewide data. De-identified claims included patient gender, birthdate, and zip code. Assessment records were matched with institutional sources containing unique identifiers and the last four digits of Social Security number (SSNL4). Gender, birthdate, and five-digit zip code identified readmissions with a sensitivity of 98.0 percent and a PPV of 97.7 percent and identified postdischarge mortality with 84.4 percent sensitivity and 98.9 percent PPV. Inclusion of SSNL4 produced nearly perfect identification of readmissions and deaths. When applied statewide, regions bordering states with unavailable hospital discharge data had lower rates. Addition of SSNL4 to administrative data, accompanied by appropriate data use and data release policies, can enable trusted repositories to link data with nearly perfect accuracy without compromising patient confidentiality. States maintaining centralized de-identified databases should add SSNL4 to data specifications. © Health Research and Educational Trust.
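
    A toy Python sketch of the deterministic linkage idea is shown below: a hashed match key built from gender, birthdate, and five-digit ZIP code, optionally strengthened with SSNL4. The two records and the SHA-256 hashing choice are illustrative assumptions, not the authors' actual algorithm:

        import hashlib

        def link_key(gender, birthdate, zip5, ssnl4=None):
            """De-identified deterministic match key from gender, birthdate and ZIP,
            optionally strengthened with the last four digits of the SSN (SSNL4)."""
            parts = [gender, birthdate, zip5] + ([ssnl4] if ssnl4 else [])
            return hashlib.sha256("|".join(parts).encode()).hexdigest()

        # Two hypothetical discharge records that collide on the basic key
        a = ("F", "1948-03-14", "55901", "1234")
        b = ("F", "1948-03-14", "55901", "9876")

        print(link_key(*a[:3]) == link_key(*b[:3]))  # True  -> false match without SSNL4
        print(link_key(*a) == link_key(*b))          # False -> resolved once SSNL4 is added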

  7. Immune Epitope Database and Analysis Resource (IEDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — This repository contains antibody/B cell and T cell epitope information and epitope prediction and analysis tools for use by the research community worldwide. Immune...

  8. Lawsuits After Primary and Revision Total Hip Arthroplasties: A Malpractice Claims Analysis.

    Science.gov (United States)

    Patterson, Diana C; Grelsamer, Ronald P; Bronson, Michael J; Moucha, Calin S

    2017-10-01

    As the prevalence of total hip arthroplasty (THA) expands, so too will complications and patient dissatisfaction. The goal of this study was to identify the common etiologies of malpractice suits and costs of claims after primary and revision THAs. Analysis of 115 malpractice claims filed for allegedly negligent primary and revision THA surgeries by orthopedic surgeons insured by a large New York state malpractice carrier between 1983 and 2011. The incidence of malpractice claims filed for negligent THA procedures is only 0.15% per year in our population. In primary cases, nerve injury ("foot drop") was the most frequent allegation with 27 claims. Negligent surgery causing dislocation was alleged in 18 and leg length discrepancy in 14. Medical complications were also reported, including 3 thromboembolic events and 6 deaths. In revision cases, dislocation and infection were the most common source of suits. The average indemnity payment was $386,153 and the largest single settlement was $4.1 million for an arterial injury resulting in amputation after a primary hip replacement. The average litigation cost to the insurer was $61,833. Nerve injury, dislocation, and leg length discrepancy are the most common reasons for malpractice claims after primary THA. Orthopedic surgeons should continue to focus on minimizing the occurrence of these complications while adequately incorporating details about the risks and limitations of surgery into their preoperative education. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Analysis of functionality free CASE-tools databases design

    Directory of Open Access Journals (Sweden)

    A. V. Gavrilov

    2016-01-01

    Full Text Available Introducing CASE technologies for database design into the educational process requires institutions to incur significant costs for the purchase of software. A possible solution could be the use of free software counterparts. At the same time, this kind of substitution should be based on an even-handed comparison of the functional characteristics and operational features of these programs. The purpose of the article is a review of free and non-profit CASE-tools for database design, as well as their classification on the basis of an analysis of their functionality. When writing this article, materials from the official websites of the tool developers were used. Evaluation of the functional characteristics of CASE-tools for database design was made exclusively empirically, through direct work with the software products. Analysis of the functionality of the tools allows two categories of CASE-tools for database design to be distinguished. The first category includes systems with a basic set of features and tools. The most important basic functions of these systems are: management of connections to database servers; visual tools to create and modify database objects (tables, views, triggers, procedures); the ability to enter and edit data in table mode; user and privilege management tools; an SQL-code editor; and means of exporting/importing data. CASE-systems of the first category can be used to design and develop simple databases, to manage data, and as a means of administering a database server. A distinctive feature of the second category of CASE-tools for database design (full-featured systems) is the presence of a visual designer, allowing the construction of a database model and the automatic creation of the database on the server based on this model. CASE-systems of this category can be used for the design and development of databases of any structural complexity, as well as a database server administration tool. The article concluded that the

  10. Impact of statins on risk of new onset diabetes mellitus: a population-based cohort study using the Korean National Health Insurance claims database

    Directory of Open Access Journals (Sweden)

    Lee J

    2016-10-01

    Full Text Available Jimin Lee,1 Yoojin Noh,1 Sooyoung Shin,1 Hong-Seok Lim,2 Rae Woong Park,3 Soo Kyung Bae,4 Euichaul Oh,4 Grace Juyun Kim,5 Ju Han Kim,5 Sukhyang Lee1 1Division of Clinical Pharmacy, College of Pharmacy, Ajou University, Suwon, South Korea; 2Department of Cardiology, School of Medicine, Ajou University, Suwon, South Korea; 3Department of Biomedical Informatics, School of Medicine, Ajou University, Suwon, South Korea; 4Division of Pharmaceutical Sciences, College of Pharmacy, The Catholic University of Korea, Bucheon, South Korea; 5Division of Biomedical Informatics, College of Medicine, Seoul National University, Seoul, South Korea Abstract: Statin therapy is beneficial in reducing cardiovascular events and mortalities in patients with atherosclerotic cardiovascular diseases. Yet, there have been concerns of increased risk of diabetes with statin use. This study aimed to evaluate the association between statins and new onset diabetes mellitus (NODM) in patients with ischemic heart disease (IHD), utilizing the Korean Health Insurance Review and Assessment Service claims database. Among adult patients with preexisting IHD, new statin users and matched nonstatin users were identified on a 1:1 ratio using proportionate stratified random sampling by sex and age. They were subsequently propensity score matched further on age and comorbidities to reduce selection bias. Overall incidence rates, cumulative rates and hazard ratios (HRs) between statin use and occurrence of NODM were estimated. Subgroup analyses were performed according to sex, age groups, and the individual agents and intensities of statins. A total of 156,360 patients (94,370 statin users and 61,990 nonstatin users) were included in the analysis. The incidence rates of NODM were 7.8% and 4.8% in the statin users and nonstatin users, respectively. The risk of NODM was higher among statin users (crude HR 2.01, 95% confidence interval [CI] 1.93–2.10; adjusted HR 1
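
    The design described above, propensity-score matching of statin users to non-users followed by a Cox model for new-onset diabetes, can be sketched in Python with scikit-learn and lifelines as below; the simulated cohort, the covariate list, and 1:1 nearest-neighbour matching with replacement are simplifying assumptions for illustration only:

        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import NearestNeighbors
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(1)
        n = 2000
        df = pd.DataFrame({"statin": rng.integers(0, 2, n),
                           "age": rng.normal(62, 10, n),
                           "male": rng.integers(0, 2, n),
                           "hypertension": rng.integers(0, 2, n)})
        df["time"] = rng.exponential(900, n)                      # simulated follow-up, days
        df["nodm"] = rng.binomial(1, 0.06 + 0.02 * df["statin"])  # simulated new-onset diabetes

        # 1) Propensity score: probability of statin use given baseline covariates
        covs = ["age", "male", "hypertension"]
        ps_model = LogisticRegression(max_iter=1000).fit(df[covs], df["statin"])
        df["ps"] = ps_model.predict_proba(df[covs])[:, 1]

        # 2) 1:1 nearest-neighbour match of non-users to users on the score (with replacement, for brevity)
        treated, control = df[df["statin"] == 1], df[df["statin"] == 0]
        nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
        idx = nn.kneighbors(treated[["ps"]], return_distance=False).ravel()
        matched = pd.concat([treated, control.iloc[idx]]).reset_index(drop=True)

        # 3) Cox proportional hazards model for NODM on the matched cohort
        cph = CoxPHFitter().fit(matched[["time", "nodm", "statin"] + covs],
                                duration_col="time", event_col="nodm")
        print(cph.hazard_ratios_["statin"])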

  11. Utilization of genetic tests: analysis of gene-specific billing in Medicare claims data.

    Science.gov (United States)

    Lynch, Julie A; Berse, Brygida; Dotson, W David; Khoury, Muin J; Coomer, Nicole; Kautter, John

    2017-08-01

    We examined the utilization of precision medicine tests among Medicare beneficiaries through analysis of gene-specific tier 1 and 2 billing codes developed by the American Medical Association in 2012. We conducted a retrospective cross-sectional study. The primary source of data was 2013 Medicare 100% fee-for-service claims. We identified claims billed for each laboratory test, the number of patients tested, expenditures, and the diagnostic codes indicated for testing. We analyzed variations in testing by patient demographics and region of the country. Pharmacogenetic tests were billed most frequently, accounting for 48% of the expenditures for new codes. The most common indications for testing were breast cancer, long-term use of medications, and disorders of lipid metabolism. There was underutilization of guideline-recommended tumor mutation tests (e.g., epidermal growth factor receptor) and substantial overutilization of a test discouraged by guidelines (methylenetetrahydrofolate reductase). Methodology-based tier 2 codes represented 15% of all claims billed with the new codes. The highest rate of testing per beneficiary was in Mississippi and the lowest rate was in Alaska. Gene-specific billing codes significantly improved our ability to conduct population-level research of precision medicine. Analysis of these data in conjunction with clinical records should be conducted to validate findings.Genet Med advance online publication 26 January 2017.

  12. Database tools for enhanced analysis of TMX-U data

    International Nuclear Information System (INIS)

    Stewart, M.E.; Carter, M.R.; Casper, T.A.; Meyer, W.H.; Perkins, D.E.; Whitney, D.M.

    1986-01-01

    A commercial database software package has been used to create several databases and tools that assist and enhance the ability of experimental physicists to analyze data from the Tandem Mirror Experiment-Upgrade (TMX-U) experiment. This software runs on a DEC-20 computer in M-Division's User Service Center at Lawrence Livermore National Laboratory (LLNL), where data can be analyzed off line from the main TMX-U acquisition computers. When combined with interactive data analysis programs, these tools provide the capability to do batch-style processing or interactive data analysis on the computers in the USC or the supercomputers of the National Magnetic Fusion Energy Computer Center (NMFECC) in addition to the normal processing done by the TMX-U acquisition system. One database tool provides highly reduced data for searching and correlation analysis of several diagnostic signals within a single shot or over many shots. A second database tool provides retrieval and storage of unreduced data for use in detailed analysis of one or more diagnostic signals. We will show how these database tools form the core of an evolving off-line data analysis environment on the USC computers

  14. Using SQL Databases for Sequence Similarity Searching and Analysis.

    Science.gov (United States)

    Pearson, William R; Mackey, Aaron J

    2017-09-13

    Relational databases can integrate diverse types of information and manage large sets of similarity search results, greatly simplifying genome-scale analyses. By focusing on taxonomic subsets of sequences, relational databases can reduce the size and redundancy of sequence libraries and improve the statistical significance of homologs. In addition, by loading similarity search results into a relational database, it becomes possible to explore and summarize the relationships between all of the proteins in an organism and those in other biological kingdoms. This unit describes how to use relational databases to improve the efficiency of sequence similarity searching and demonstrates various large-scale genomic analyses of homology-related data. It also describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. The unit also introduces search_demo, a database that stores sequence similarity search results. The search_demo database is then used to explore the evolutionary relationships between E. coli proteins and proteins in other organisms in a large-scale comparative genomic analysis. © 2017 by John Wiley & Sons, Inc.
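
    The seqdb_demo and search_demo schemas referenced in this record are not reproduced here, but the general pattern of loading similarity-search hits into a relational database and summarizing them with SQL can be sketched with Python's built-in sqlite3 module; the table layout and accession strings below are invented for the example:

        import sqlite3

        con = sqlite3.connect(":memory:")  # in-memory demo database
        con.executescript("""
        CREATE TABLE protein (acc TEXT PRIMARY KEY, taxon TEXT);
        CREATE TABLE hit (query_acc TEXT, subject_acc TEXT, evalue REAL,
                          FOREIGN KEY (subject_acc) REFERENCES protein (acc));
        """)
        con.executemany("INSERT INTO protein VALUES (?, ?)",
                        [("ECOLI_001", "E. coli"), ("HUMAN_001", "H. sapiens"), ("YEAST_001", "S. cerevisiae")])
        con.executemany("INSERT INTO hit VALUES (?, ?, ?)",
                        [("ECOLI_001", "HUMAN_001", 1e-30), ("ECOLI_001", "YEAST_001", 1e-12)])

        # Per query, count significant homologs in each taxon
        for row in con.execute("""
            SELECT h.query_acc, p.taxon, COUNT(*) AS n
            FROM hit h JOIN protein p ON p.acc = h.subject_acc
            WHERE h.evalue < 1e-5
            GROUP BY h.query_acc, p.taxon"""):
            print(row)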

  15. Immune epitope database analysis resource (IEDB-AR)

    DEFF Research Database (Denmark)

    Zhang, Qing; Wang, Peng; Kim, Yohan

    2008-01-01

    We present a new release of the immune epitope database analysis resource (IEDB-AR, http://tools.immuneepitope.org), a repository of web-based tools for the prediction and analysis of immune epitopes. New functionalities have been added to most of the previously implemented tools, and a total...

  16. Analysis of Patent Databases Using VxInsight

    Energy Technology Data Exchange (ETDEWEB)

    BOYACK,KEVIN W.; WYLIE,BRIAN N.; DAVIDSON,GEORGE S.; JOHNSON,DAVID K.

    2000-12-12

    We present the application of a new knowledge visualization tool, VxInsight, to the mapping and analysis of patent databases. Patent data are mined and placed in a database, relationships between the patents are identified, primarily using the citation and classification structures, then the patents are clustered using a proprietary force-directed placement algorithm. Related patents cluster together to produce a 3-D landscape view of the tens of thousands of patents. The user can navigate the landscape by zooming into or out of regions of interest. Querying the underlying database places a colored marker on each patent matching the query. Automatically generated labels, showing landscape content, update continually upon zooming. Optionally, citation links between patents may be shown on the landscape. The combination of these features enables powerful analyses of patent databases.

  17. Increased Risk of Hospitalization for Heart Failure with Newly Prescribed Dipeptidyl Peptidase-4 Inhibitors and Pioglitazone Using the Korean Health Insurance Claims Database

    Directory of Open Access Journals (Sweden)

    Sunghwan Suh

    2015-06-01

    Full Text Available Background: We assessed the association of dipeptidyl peptidase 4 inhibitors (DPP4i) with hospitalization for heart failure (HF) using the Korean Health Insurance claims database. Methods: We collected data on newly prescribed sitagliptin, vildagliptin, and pioglitazone between January 1, 2009 and December 31, 2012 (mean follow-up of 336.8 days) for 935,519 patients with diabetes (518,614 males and 416,905 females) aged 40 to 79 years (mean age of 59.4 years). Results: During the study, 998 patients were hospitalized for primary HF (115.7 per 100,000 patient-years). The incidence rate of hospitalization for HF was 117.7 per 100,000 patient-years among patients on pioglitazone, 105.7 for sitagliptin, and 135.8 for vildagliptin. The hospitalization rate for HF was greatest in the first 30 days after starting the medication, which corresponded to a significantly higher incidence at days 0 to 30 compared with days 31 to 360 for all three drugs. The hazard ratios were 1.85 (pioglitazone), 2.00 (sitagliptin), and 1.79 (vildagliptin). The incidence of hospitalization for HF did not differ between the drugs for any time period. Conclusion: This study showed an increase in hospitalization for HF in the initial 30 days of DPP4i and pioglitazone use compared with the subsequent follow-up period. However, the differences between the drugs were not significant.

  18. Comparing deep neural network and other machine learning algorithms for stroke prediction in a large-scale population-based electronic medical claims database.

    Science.gov (United States)

    Chen-Ying Hung; Wei-Chen Chen; Po-Tsun Lai; Ching-Heng Lin; Chi-Chun Lee

    2017-07-01

    Electronic medical claims (EMCs) can be used to accurately predict the occurrence of a variety of diseases, which can contribute to precise medical interventions. While there is a growing interest in the application of machine learning (ML) techniques to address clinical problems, the use of deep learning in healthcare has only recently gained attention. Deep learning, such as the deep neural network (DNN), has achieved impressive results in the areas of speech recognition, computer vision, and natural language processing in recent years. However, deep learning is often difficult to comprehend due to the complexities of its framework. Furthermore, this method has not yet been demonstrated to achieve better performance than other conventional ML algorithms in disease prediction tasks using EMCs. In this study, we utilize a large population-based EMC database of around 800,000 patients to compare a DNN with three other ML approaches for predicting 5-year stroke occurrence. The results show that the DNN and gradient boosting decision tree (GBDT) achieve similarly high prediction accuracies that are better than those of the logistic regression (LR) and support vector machine (SVM) approaches. Meanwhile, the DNN achieves optimal results using smaller amounts of patient data than the GBDT method.
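
    A compressed Python sketch of the kind of model comparison described above is shown below, using scikit-learn's gradient boosting and logistic regression on synthetic, imbalanced data standing in for claims-derived features; the DNN and SVM arms and the real EMC data are omitted for brevity, so this illustrates the comparison workflow rather than reproducing the study:

        from sklearn.datasets import make_classification
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Synthetic stand-in for claims-derived features (diagnoses, drugs, demographics)
        X, y = make_classification(n_samples=5000, n_features=50, n_informative=10,
                                   weights=[0.95, 0.05], random_state=0)

        for name, model in [("GBDT", GradientBoostingClassifier(random_state=0)),
                            ("LR", LogisticRegression(max_iter=1000))]:
            auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
            print(name, "mean AUC:", round(auc, 3))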

  19. Analysis of technologies databases use in physical education and sport

    Directory of Open Access Journals (Sweden)

    Usychenko V.V.

    2010-03-01

    Full Text Available The scientific, methodological and specialized literature is analyzed and systematized. Questions concerning the use of database technology in the system of athlete preparation are raised. The need to apply technologies for the rapid processing of large arrays of sports information is shown, drawing on experience with computer-aided technologies for recording and analyzing the results of testing training-process parameters. The influence of these technologies on training and competition activity is considered. A database, «Athlete», is presented; it contains anthropometric and myometric indexes of highly qualified bodybuilding athletes.

  20. A data analysis expert system for large established distributed databases

    Science.gov (United States)

    Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick

    1987-01-01

    A design for a natural language database interface system, called the Deductively Augmented NASA Management Decision support System (DANMDS), is presented. The DANMDS system components have been chosen on the basis of the following considerations: maximal employment of the existing NASA IBM-PC computers and supporting software; local structuring and storing of external data via the entity-relationship model; a natural easy-to-use error-free database query language; user ability to alter query language vocabulary and data analysis heuristic; and significant artificial intelligence data analysis heuristic techniques that allow the system to become progressively and automatically more useful.

  1. High Performance Descriptive Semantic Analysis of Semantic Graph Databases

    Energy Technology Data Exchange (ETDEWEB)

    Joslyn, Cliff A.; Adolf, Robert D.; al-Saffar, Sinan; Feo, John T.; Haglin, David J.; Mackey, Greg E.; Mizell, David W.

    2011-06-02

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization, and optimization. But data size and computational complexity make it increasingly necessary to bring high performance computational resources to bear on this task. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multi-threaded architecture of the Cray XMT platform, conventional servers, and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction, and typed paths for the Billion Triple Challenge 2010 dataset.

  2. RNA STRAND: The RNA Secondary Structure and Statistical Analysis Database

    Directory of Open Access Journals (Sweden)

    Andronescu Mirela

    2008-08-01

    Full Text Available Abstract Background: The ability to access, search and analyse secondary structures of a large set of known RNA molecules is very important for deriving improved RNA energy models, for evaluating computational predictions of RNA secondary structures and for a better understanding of RNA folding. Currently there is no database that can easily provide these capabilities for almost all RNA molecules with known secondary structures. Results: In this paper we describe RNA STRAND – the RNA secondary STRucture and statistical ANalysis Database, a curated database containing known secondary structures of any type and organism. Our new database provides a wide collection of known RNA secondary structures drawn from public databases, searchable and downloadable in a common format. Comprehensive statistical information on the secondary structures in our database is provided using the RNA Secondary Structure Analyser, a new tool we have developed to analyse RNA secondary structures. The information thus obtained is valuable for understanding to what extent and with what probability certain structural motifs can appear. We outline several ways in which the data provided in RNA STRAND can facilitate research on RNA structure, including the improvement of RNA energy models and evaluation of secondary structure prediction programs. In order to keep up-to-date with new RNA secondary structure experiments, we offer the necessary tools to add solved RNA secondary structures to our database and invite researchers to contribute to RNA STRAND. Conclusion: RNA STRAND is a carefully assembled database of trusted RNA secondary structures, with easy on-line tools for searching, analyzing and downloading user selected entries, and is publicly available at http://www.rnasoft.ca/strand.

  3. Assessment of the SFC database for analysis and modeling

    Science.gov (United States)

    Centeno, Martha A.

    1994-01-01

    SFC is one of the four clusters that make up the Integrated Work Control System (IWCS), which will integrate the shuttle processing databases at Kennedy Space Center (KSC). The IWCS framework will enable communication among the four clusters and add new data collection protocols. The Shop Floor Control (SFC) module has been operational for two and a half years; however, at this stage, automatic links to the other 3 modules have not been implemented yet, except for a partial link to IOS (CASPR). SFC revolves around a DB/2 database with PFORMS acting as the database management system (DBMS). PFORMS is an off-the-shelf DB/2 application that provides a set of data entry screens and query forms. The main dynamic entity in the SFC and IOS database is a task; thus, the physical storage location and update privileges are driven by the status of the WAD. As we explored the SFC values, we realized that there was much to do before actually engaging in continuous analysis of the SFC data. Half way into this effort, it was realized that full scale analysis would have to be a future third phase of this effort. So, we concentrated on getting to know the contents of the database, and in establishing an initial set of tools to start the continuous analysis process. Specifically, we set out to: (1) provide specific procedures for statistical models, so as to enhance the TP-OAO office analysis and modeling capabilities; (2) design a data exchange interface; (3) prototype the interface to provide inputs to SCRAM; and (4) design a modeling database. These objectives were set with the expectation that, if met, they would provide former TP-OAO engineers with tools that would help them demonstrate the importance of process-based analyses. The latter, in return, will help them obtain the cooperation of various organizations in charting out their individual processes.

  4. Analysis of the claim to distinct nursing ethics: normative and nonnormative approaches.

    Science.gov (United States)

    Twomey, J G

    1989-04-01

    Nursing ethics has been declared to exist only as a subset of medical ethics. If this statement is to be refuted, any defense of a claim to a discrete nursing ethic must clarify what type of moral theory is being held as distinctly a nursing ethic. Several examples of nursing ethical theory are used to challenge Veatch's view that nursing ethics is a subset of medical ethics. The methodology used to provide the analysis is to group the ethical theories studied under the topics of normative and nonnormative ethics to provide for appropriate inquiry.

  5. Considerations for the analysis of longitudinal electronic health records linked to claims data to study the effectiveness and safety of drugs.

    Science.gov (United States)

    Lin, K J; Schneeweiss, S

    2016-08-01

    Health insurance claims and electronic health records (EHR) databases have been considered the preferred data sources with which to study drug safety and effectiveness in routine care. Linking claims data to EHR allows researchers to leverage the complementary advantages of each data source to enhance study validity. We propose a framework to evaluate the need for supplementing claims data with EHR and vice versa to optimize outcome ascertainment, exposure assessment, and confounding adjustment. © 2016 American Society for Clinical Pharmacology and Therapeutics.

  6. Causal Analysis of Databases Concerning Electromagnetism and Health

    Directory of Open Access Journals (Sweden)

    Kristian Alonso-Stenberg

    2016-12-01

    Full Text Available In this article, we conducted a causal analysis of a system extracted from a database of current data in the telecommunications domain, namely the Eurobarometer 73.3 database, which arose from a survey of 26,602 EU citizens on the potential health effects that electromagnetic fields can produce. To determine the cause-effect relationships between variables, we represented these data by a directed graph to which a qualitative version of the theory of discrete chaos can be applied to highlight causal circuits and attractors, as these are basic elements of system behavior.

  7. Construction of an integrated database to support genomic sequence analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gilbert, W.; Overbeek, R.

    1994-11-01

    The central goal of this project is to develop an integrated database to support comparative analysis of genomes including DNA sequence data, protein sequence data, gene expression data and metabolism data. In developing the logic-based system GenoBase, a broader integration of available data was achieved due to assistance from collaborators. Current goals are to easily include new forms of data as they become available and to easily navigate through the ensemble of objects described within the database. This report comments on progress made in these areas.

  8. Analysis of technologies databases use in physical education and sport

    OpenAIRE

    Usychenko V.V.; Byshevets N.G.

    2010-01-01

    The scientific, methodological and specialized literature is analyzed and systematized. Questions concerning the use of database technology in the system of athlete preparation are raised. The need to apply technologies for the rapid processing of large arrays of sports information is shown, drawing on experience with computer-aided technologies for recording and analyzing the results of testing training-process parameters. The question of the influence of these technologies is ...

  9. Epidemiology of overdose episodes from the period prior to hospitalization for drug poisoning until discharge in Japan: An exploratory descriptive study using a nationwide claims database.

    Science.gov (United States)

    Okumura, Yasuyuki; Sakata, Nobuo; Takahashi, Kunihiko; Nishi, Daisuke; Tachimori, Hisateru

    2017-08-01

    Little is known about the nationwide epidemiology of the annual rate, causative substance, and clinical course of overdose-related admission. We aimed to describe the epidemiology of overdose episodes from the period prior to hospitalization for drug poisoning until discharge to home. We assessed all cases of admission due to overdose (21,663 episodes) in Japan from October 2012 through September 2013 using the National Database of Health Insurance Claims and Specific Health Checkups of Japan. The annual rate of overdose admission was 17.0 per 100,000 population. Women exhibited two peaks in admission rates at 19-34 years (40.9 per 100,000) and ≥75 years (27.8 per 100,000). Men exhibited one peak in the admission rate at ≥75 years (23.7 per 100,000). Within 90 days prior to overdose, ≥60% and ≥9% of patients aged 19-49 years received a prescription for benzodiazepines and barbiturates, respectively. In addition, 59% of patients aged ≥75 years received a prescription for benzodiazepines prior to overdose, 47% had a history of congestive heart failure, and 24% had a diagnosis of poisoning by cardiovascular drugs. The proportion of patients with recent psychiatric treatments decreased with age (65.1% in those aged 35-49 years and 13.9% in those aged ≥75 years). The findings emphasize the need for overdose prevention programs that focus on psychiatric patients aged 19-49 years who are prescribed benzodiazepines or barbiturates and on non-psychiatric patients aged ≥75 years who are prescribed benzodiazepines or digitalis. Copyright © 2017 The Authors. Production and hosting by Elsevier B.V. All rights reserved.

  10. Hyaluronic Acid Injections Are Associated with Delay of Total Knee Replacement Surgery in Patients with Knee Osteoarthritis: Evidence from a Large U.S. Health Claims Database.

    Directory of Open Access Journals (Sweden)

    Roy Altman

    Full Text Available The growing prevalence of osteoarthritis (OA) and the medical costs associated with total knee replacement (TKR) surgery for end-stage OA motivate a search for agents that can delay OA progression. We test a hypothesis that hyaluronic acid (HA) injection is associated with delay of TKR in a dose-dependent manner. We retrospectively evaluated records in an administrative claims database of ~79 million patients, to identify all patients with knee OA who received TKR during a 6-year period. Only patients with continuous plan enrollment from diagnosis until TKR were included, so that complete medical records were available. OA diagnosis was the index event and we evaluated time-to-TKR as a function of the number of HA injections. The database included 182,022 patients with knee OA who had TKR; 50,349 (27.7%) of these patients were classified as HA Users, receiving ≥1 courses of HA prior to TKR, while 131,673 patients (72.3%) were HA Non-users prior to TKR, receiving no HA. Cox proportional hazards modelling shows that TKR risk decreases as a function of the number of HA injection courses, if patient age, gender, and disease comorbidity are used as background covariates. Multiple HA injections are therefore associated with delay of TKR (all, P < 0.0001). Half of HA Non-users had a TKR by 114 days post-diagnosis of knee OA, whereas half of HA Users had a TKR by 484 days post-diagnosis (χ2 = 19,769; p < 0.0001). Patients who received no HA had a mean time-to-TKR of 0.7 years; with one course of HA, the mean time to TKR was 1.4 years (χ2 = 13,725; p < 0.0001); patients who received ≥5 courses delayed TKR by 3.6 years (χ2 = 19,935; p < 0.0001). HA injection in patients with knee OA is associated with a dose-dependent increase in time-to-TKR.
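
    The time-to-TKR comparison described above is a survival analysis: median time to TKR by HA exposure, plus a Cox model with the number of HA courses as the exposure. A hedged Python sketch using lifelines and simulated data follows; the simulated follow-up times and covariates are not derived from the study:

        import numpy as np
        import pandas as pd
        from lifelines import KaplanMeierFitter, CoxPHFitter

        rng = np.random.default_rng(2)
        n = 5000
        df = pd.DataFrame({"ha_courses": rng.integers(0, 6, n),
                           "age": rng.normal(65, 9, n),
                           "male": rng.integers(0, 2, n)})
        # Simulated days from OA diagnosis to TKR, longer with more HA courses
        df["days_to_tkr"] = rng.exponential(200 * (1 + 0.5 * df["ha_courses"]))
        df["tkr"] = 1  # every simulated patient reaches TKR, as in the study cohort

        # Median time-to-TKR for HA non-users vs. users
        kmf = KaplanMeierFitter()
        for is_user, grp in df.groupby(df["ha_courses"] > 0):
            kmf.fit(grp["days_to_tkr"], grp["tkr"])
            print("HA user" if is_user else "HA non-user", "median days:", kmf.median_survival_time_)

        # Hazard of TKR per additional HA course, adjusted for age and sex
        cph = CoxPHFitter().fit(df[["days_to_tkr", "tkr", "ha_courses", "age", "male"]],
                                duration_col="days_to_tkr", event_col="tkr")
        print(cph.hazard_ratios_["ha_courses"])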

  11. SINEBase: a database and tool for SINE analysis.

    Science.gov (United States)

    Vassetzky, Nikita S; Kramerov, Dmitri A

    2013-01-01

    SINEBase (http://sines.eimb.ru) integrates the revisited body of knowledge about short interspersed elements (SINEs). A set of formal definitions concerning SINEs was introduced. All available sequence data were screened through these definitions and the genetic elements misidentified as SINEs were discarded. As a result, 175 SINE families have been recognized in animals, flowering plants and green algae. These families were classified by the modular structure of their nucleotide sequences and the frequencies of different patterns were evaluated. These data formed the basis for the database of SINEs. The SINEBase website can be used in two ways: first, to explore the database of SINE families, and second, to analyse candidate SINE sequences using specifically developed tools. This article presents an overview of the database and the process of SINE identification and analysis.

  12. Characterization analysis database system (CADS). A system overview

    International Nuclear Information System (INIS)

    1997-12-01

    The CADS database is a standardized, quality-assured, and configuration-controlled data management system developed to assist in the task of characterizing the DOE surplus HEU material. Characterization of the surplus HEU inventory includes identifying the specific material; gathering existing data about the inventory; defining the processing steps that may be necessary to prepare the material for transfer to a blending site; and, ultimately, developing a range of the preliminary cost estimates for those processing steps. Characterization focuses on producing commercial reactor fuel as the final step in material disposition. Based on the project analysis results, the final determination will be made as to the viability of the disposition path for each particular item of HEU. The purpose of this document is to provide an informational overview of the CADS database, its evolution, and its current capabilities. This document describes the purpose of CADS, the system requirements it fulfills, the database structure, and the operational guidelines of the system

  13. Infant-Directed Media: An Analysis of Product Information and Claims

    Science.gov (United States)

    Fenstermacher, Susan K.; Barr, Rachel; Salerno, Katherine; Garcia, Amaya; Shwery, Clay E.; Calvert, Sandra L.; Linebarger, Deborah L.

    2010-01-01

    Infant DVDs typically have titles and even company names that imply some educational benefit. It is not known whether these educational claims are reflected in actual content. The present study examined this question. Of 686 claims (across 58 programs) listed on packaging, websites and promotional materials, implicit claims were most frequent…

  14. Lessons to be learned: a retrospective analysis of physiotherapy injury claims.

    Science.gov (United States)

    Johnson, Gillian M; Skinner, Margot A; Stephen, Rachel E

    2012-08-01

    Retrospective, descriptive analysis. To describe the prevalence and nature of insurance claims for injuries attributed to physiotherapy care. In New Zealand, a national insurance scheme, the Accident Compensation Corporation, provides comprehensive, no-fault personal injury coverage. The patterns of injury sustained during physiotherapy care have not previously been described. De-identified data for all injuries registered with the Accident Compensation Corporation from 2005 to 2010 and attributed to physiotherapy were accessed. Prevalence patterns (percentages) of new-claim data were determined for physiotherapy intervention category, injury site, nature of injury, age, and sex. A subcategory, exercise-related injuries, was analyzed according to injury site and whether the injury was related (primary) or unrelated (secondary) to the intended therapeutic goal. There were 279 claims related to physiotherapy care filed with the Accident Compensation Corporation during the studied reporting period. Injury was attributed predominantly to exercise (n = 88, 31.5% of cases) and manual therapy (n = 74, 26.5% of cases). The prevalence of events categorized as exercise related was greatest in those who were 55 to 59 years of age (n = 14, 16.3%) and greater in females (n = 47, 54.7%). Of the exercise-related injuries, 39.8% were in the lower-limb region and 35.2% were categorized as sprains/strains. Injuries attributed to exercise exceeded those linked to other therapies provided by physiotherapists, yet exercise therapy rarely features as a cause of adverse events reported to the physiotherapy profession. The proportion of exercise-related injury events underlines the need for ensuring safe and careful consideration of exercise prescription. Harm, level 4.

  15. Complications After Mastectomy and Immediate Breast Reconstruction for Breast Cancer: A Claims-Based Analysis

    Science.gov (United States)

    Jagsi, Reshma; Jiang, Jing; Momoh, Adeyiza O.; Alderman, Amy; Giordano, Sharon H.; Buchholz, Thomas A.; Pierce, Lori J.; Kronowitz, Steven J.; Smith, Benjamin D.

    2016-01-01

    Objective: To evaluate complications after post-mastectomy breast reconstruction, particularly in the setting of adjuvant radiotherapy. Summary Background Data: Most studies of complications after breast reconstruction have been conducted at centers of excellence; relatively little is known about complication rates in radiated patients treated in the broader community. This information is relevant for breast cancer patients' decision-making. Methods: Using the claims-based MarketScan database, we described complications in 14,894 women undergoing mastectomy for breast cancer from 1998-2007 who received immediate autologous reconstruction (n=2637), immediate implant-based reconstruction (n=3007), or no reconstruction within the first two postoperative years (n=9250). We used a generalized estimating equation to evaluate associations between complications and radiotherapy over time. Results: Wound complications were diagnosed within the first two postoperative years in 2.3% of patients without reconstruction, 4.4% with implants, and 9.5% with autologous reconstruction. Infection was diagnosed more often in reconstructed patients, both with implants and with autologous reconstruction (20.7%). Radiation was associated with an increased risk of implant removal in patients with implant reconstruction (OR 1.48). Conclusions: Complication risks after immediate breast reconstruction differ by approach. Radiation therapy appears to modestly increase certain risks, including infection and implant removal. PMID:25876011
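
    The generalized estimating equation mentioned in the Methods can be sketched with statsmodels as below; the simulated two-year repeated-measures data, the exchangeable working correlation, and the logistic link are assumptions made for illustration and are not the authors' exact specification:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        n_patients = 1500
        base = pd.DataFrame({"patient_id": np.arange(n_patients),
                             "radiation": rng.integers(0, 2, n_patients),
                             "implant": rng.integers(0, 2, n_patients)})
        # One row per patient per follow-up year (repeated measures -> GEE clustered on patient)
        df = base.loc[base.index.repeat(2)].reset_index(drop=True)
        df["year"] = np.tile([1, 2], n_patients)
        logit = -3.0 + 0.4 * df["radiation"] + 0.2 * df["implant"]
        df["infection"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # simulated complication indicator

        res = smf.gee("infection ~ radiation + implant + year", groups="patient_id",
                      data=df, family=sm.families.Binomial(),
                      cov_struct=sm.cov_struct.Exchangeable()).fit()
        print(np.exp(res.params["radiation"]))  # odds ratio for radiotherapy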

  16. Adult attention-deficit hyperactivity disorder: A database analysis of South African private health insurance

    Directory of Open Access Journals (Sweden)

    Renata Schoeman

    2017-01-01

    Full Text Available Background: Adult attention-deficit hyperactivity disorder (ADHD) is a chronic, costly and debilitating disorder. In South Africa (SA), access to funding for care and treatment of ADHD is limited, and research is lacking. Aim: This study aimed to establish the current situation with regard to the psychiatric management of and funding for treatment of adult ADHD in the private sector in SA. Methods: A diagnostically refined retrospective claims database analysis was conducted. We examined the prevalence, costs and funding profile of claims over a 2-year period for adult beneficiaries with possible ADHD of a large medical administrator in SA. Results: The prevalence of adult ADHD was lower than published international rates. The presence of adult ADHD increased the prevalence of comorbidity and doubled the health care costs of beneficiaries. Contrary to public belief, comorbidities (including their medicine costs) rather than psychiatric services or medicines were the main cost drivers. Conclusion: The current private health insurance funding model for ADHD limits access to funding. This affects early diagnosis and optimal treatment, thereby escalating long-term costs. Improved outcomes are possible if patients suffering from ADHD receive timely and accurate diagnosis, and receive chronic and comprehensive care. Balanced regulation is proposed to minimise the risk to both medical schemes and patients. A collaborative approach between stakeholders is needed to develop an alternative cost-effective funding model to improve access to treatment and quality of life for adults with ADHD in SA.

  17. Consumers' Views Regarding Health Claims on Food Packages. Contextual Analysis by Means of Computer Support

    Directory of Open Access Journals (Sweden)

    Eva Gunilla Svederberg

    2002-01-01

    Full Text Available Several studies have shown consumers to generally have only a limited understanding of the nutritional information on packaged-food labels. This suggests it is difficult for them to select properly between different foods on the basis of such information. As a basis for information on the requirements of groups of consumers, the present study aimed at investigating how, when presented with health claims and other nutritional information on the labels of food products, consumers' thinking about foods is affected by various background factors as well as by various types of food-related experiences. Semi-structured interviews of 30 consumers in Sweden—men and women aged 25 to 64, with and without food-related health problems—were carried out. The interviews were tape-recorded and were transcribed word-for-word. In the analysis of the interview data, the qualitative methodology of contextual analysis was utilised. For the purpose of method development, the computer programme Atlas.ti was used to support the analysis. The objective of this article is to show step by step how the analysis was carried out. In connection with the analysis, some results are presented. However, the focus in the article is on methodology. The conclusion drawn is that Atlas.ti has qualities that can facilitate the contextual analysis of the interview data. URN: urn:nbn:de:0114-fqs0201109

  18. Evaluation of Real-World Experience with Tofacitinib Compared with Adalimumab, Etanercept, and Abatacept in RA Patients with 1 Previous Biologic DMARD: Data from a U.S. Administrative Claims Database.

    Science.gov (United States)

    Harnett, James; Gerber, Robert; Gruben, David; Koenig, Andrew S; Chen, Connie

    2016-12-01

    Real-world data comparing tofacitinib with biologic disease-modifying antirheumatic drugs (bDMARDs) are limited. To compare characteristics, treatment patterns, and costs of patients with rheumatoid arthritis (RA) receiving tofacitinib versus the most common bDMARDs (adalimumab [ADA], etanercept [ETN], and abatacept [ABA]) following a single bDMARD in a U.S. administrative claims database. This study was a retrospective cohort analysis of patients aged ≥ 18 years with an RA diagnosis (ICD-9-CM codes 714.0x-714.4x; 714.81) and 1 previous bDMARD filling ≥ 1 tofacitinib or bDMARD claim in the Truven MarketScan Commercial and Medicare Supplemental claims databases (November 1, 2012-October 31, 2014). Monotherapy was defined as absence of conventional synthetic DMARDs within 90 days post-index. Persistence was evaluated using a 60-day gap. Adherence was assessed using proportion of days covered (PDC). RA-related total, pharmacy, and medical costs were evaluated in the 12-month pre- and post-index periods. Treatment patterns and costs were adjusted using linear models including a common set of clinically relevant variables of interest (e.g., previous RA treatments), which were assessed separately using t-tests and chi-squared tests. Overall, 392 patients initiated tofacitinib; 178 patients initiated ADA; 118 patients initiated ETN; and 191 patients initiated ABA. Tofacitinib patients were older versus ADA patients (P = 0.0153) and had a lower proportion of Medicare supplemental patients versus ABA patients (P = 0.0095). Twelve-month pre-index bDMARD use was greater in tofacitinib patients (77.6%) versus bDMARD cohorts (47.6%-59.6%). Tofacitinib patients had greater 12-month pre-index RA-related total costs versus the bDMARD cohorts. A similar proportion of patients (all P > 0.10) were persistent with tofacitinib (42.6%) versus ADA (37.6%), ETN (42.4%), and ABA (43.5%). Mean PDC was 0.55 for tofacitinib versus 0.57 (ADA), 0.59 (ETN), and 0.44 (ABA; P = 0.0003). Adjusted analyses
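
    The adherence and persistence definitions used here (proportion of days covered, medication possession ratio, and a 60-day permissible gap) translate directly into claims-processing code. The pandas sketch below uses an invented fill-record layout rather than the Truven MarketScan schema.

      # Sketch: proportion of days covered (PDC), medication possession ratio (MPR),
      # and gap-based persistence from pharmacy fill records. Data are hypothetical.
      import pandas as pd

      FOLLOW_UP_DAYS = 365
      GAP_DAYS = 60

      fills = pd.DataFrame({
          "patient_id": [1, 1, 1, 2, 2],
          "fill_date": pd.to_datetime(["2013-01-01", "2013-02-05", "2013-05-01",
                                       "2013-01-10", "2013-04-20"]),
          "days_supply": [30, 30, 30, 30, 30],
      })

      def adherence_metrics(group: pd.DataFrame) -> pd.Series:
          group = group.sort_values("fill_date")
          start = group["fill_date"].iloc[0]
          window = pd.date_range(start, periods=FOLLOW_UP_DAYS)
          covered = pd.Series(False, index=window)
          for _, row in group.iterrows():
              supply_days = pd.date_range(row["fill_date"], periods=row["days_supply"])
              covered.loc[covered.index.intersection(supply_days)] = True
          pdc = covered.mean()                                          # unique covered days / window
          mpr = min(group["days_supply"].sum() / FOLLOW_UP_DAYS, 1.0)   # summed supply / window
          # Persistence: days on therapy until the first supply gap longer than GAP_DAYS.
          persistent_days = FOLLOW_UP_DAYS
          prev_end = None
          for _, row in group.iterrows():
              if prev_end is not None and (row["fill_date"] - prev_end).days > GAP_DAYS:
                  persistent_days = (prev_end - start).days
                  break
              fill_end = row["fill_date"] + pd.Timedelta(days=int(row["days_supply"]))
              prev_end = fill_end if prev_end is None else max(prev_end, fill_end)
          return pd.Series({"pdc": round(pdc, 2), "mpr": round(mpr, 2),
                            "persistent_days": persistent_days})

      print(fills.groupby("patient_id").apply(adherence_metrics))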

  19. Treatment patterns in multiple sclerosis: administrative claims analysis over 10 years.

    Science.gov (United States)

    Oleen-Burkey, MerriKay; Cyhaniuk, Anissa; Swallow, Eric

    2013-01-01

    Treatment patterns for the MS disease-modifying therapies (DMT) have changed over time. The objective of this study was to examine and describe treatment patterns in MS over a 10-year period. MS patients who filled a DMT prescription between January 1, 2001 and December 31, 2010 were identified from Clinformatics for DataMart affiliated with OptumInsight. Two cohorts were identified: those with a DMT prescription in 2003 and those with a DMT prescription in 2008. Treatment patterns were examined 2 years before and after the anchor prescriptions for each cohort. Comparing treatment patterns prior to the two anchor prescriptions, interferon-beta (IFNβ)-1a IM (Avonex) and IFNβ-1b (Betaseron) gained the most users in 2001-2003, while IFNβ-1a IM and IFNβ-1a SC (Rebif) gained the most users from 2006-2008. In the 2 years following the two anchor prescriptions, treatment patterns changed. From 2003-2005, 21.2% of IFNβ-1a SC users and more than 15.0% of IFNβ-1a IM and IFNβ-1b users changed to another interferon or glatiramer acetate (GA; Copaxone), while 12.5% of GA users changed to an interferon, most often IFNβ-1a SC. From 2008-2010 the largest proportion of changes from each of the interferons and natalizumab (NZ; Tysabri) were to GA, while those switching from GA were most often changed to IFNβ-1a SC. Those with a 2008 anchor prescription for NZ were most often changed (57%) to GA. In retrospective database analyses the presence of a claim for a filled prescription does not indicate that the drug was consumed, and reasons for changes in therapy are not available. The study design looking forward and backward from the anchor prescriptions may have contributed to differences in the proportion of patients seen with no observable change in DMT. Claims-based data are also constrained by coverage limitations that determine the data available and limit the generalizability of results to managed care patients. Changes in treatment patterns in the first half of the
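
    Identifying whether a patient changed DMT after an anchor prescription, as done in this analysis, reduces to comparing the anchor agent with that patient's later fills. The pandas sketch below assumes a simplified claims layout; the column names and drug labels are illustrative only.

      # Sketch: classify whether each patient changed disease-modifying therapy (DMT)
      # after an anchor prescription year. Layout and drug names are hypothetical.
      import pandas as pd

      ANCHOR_YEAR = 2008

      claims = pd.DataFrame({
          "patient_id": [1, 1, 1, 2, 2],
          "fill_date": pd.to_datetime(["2008-02-01", "2008-06-01", "2009-01-15",
                                       "2008-03-10", "2009-06-20"]),
          "dmt": ["IFNb-1a IM", "IFNb-1a IM", "glatiramer acetate",
                  "natalizumab", "natalizumab"],
      })

      def switch_status(group: pd.DataFrame) -> str:
          group = group.sort_values("fill_date")
          anchor = group[group["fill_date"].dt.year == ANCHOR_YEAR].iloc[0]  # assumes an anchor fill exists
          later = group[group["fill_date"] > anchor["fill_date"]]
          changed = later[later["dmt"] != anchor["dmt"]]
          if changed.empty:
              return "no observable change (" + anchor["dmt"] + ")"
          return "changed from " + anchor["dmt"] + " to " + changed.iloc[0]["dmt"]

      print(claims.groupby("patient_id").apply(switch_status))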

  20. Computerized comprehensive data analysis of Lung Imaging Database Consortium (LIDC)

    International Nuclear Information System (INIS)

    Tan Jun; Pu Jiantao; Zheng Bin; Wang Xingwei; Leader, Joseph K.

    2010-01-01

    Purpose: Lung Image Database Consortium (LIDC) is the largest public CT image database of lung nodules. In this study, the authors present a comprehensive and up-to-date analysis of this dynamically growing database with the help of a computerized tool, aiming to assist researchers in optimally using this database for lung cancer related investigations. Methods: The authors developed a computer scheme to automatically match the nodule outlines marked manually by radiologists on CT images. A large variety of characteristics regarding the annotated nodules in the database, including volume, spiculation level, elongation, interobserver variability, as well as the intersection of delineated nodule voxels and overlapping ratio between the same nodules marked by different radiologists, are automatically calculated and summarized. The scheme was applied to analyze all 157 examinations with complete annotation data currently available in the LIDC dataset. Results: The scheme summarizes the statistical distributions of the abovementioned geometric and diagnostic features. Among the 391 nodules, (1) 365 (93.35%) have principal axis length ≤20 mm; (2) 120, 75, 76, and 120 were marked by one, two, three, and four radiologists, respectively; and (3) 122 (32.48%) have the maximum volume overlapping ratios ≥80% for the delineations of two radiologists, while 198 (50.64%) have the maximum volume overlapping ratios <60%. The results also showed that 72.89% of the nodules were assessed with a malignancy score between 2 and 4, and only 7.93% of these nodules were considered severely malignant (malignancy ≥4). Conclusions: This study demonstrates that LIDC contains examinations covering a diverse distribution of nodule characteristics, and it can be a useful resource for assessing the performance of nodule detection and/or segmentation schemes.
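
    The volume overlap ratio summarized in this record can be computed as a set comparison of the voxel coordinates delineated by two readers. The function below is a generic illustration (here, intersection volume divided by the smaller delineated volume) rather than the authors' actual scheme.

      # Sketch: overlap ratio between two nodule delineations given as arrays of
      # voxel coordinates (z, y, x). Generic illustration, not the LIDC tool itself.
      import numpy as np

      def overlap_ratio(voxels_a: np.ndarray, voxels_b: np.ndarray) -> float:
          """Intersection volume divided by the smaller delineated volume."""
          set_a = {tuple(v) for v in voxels_a}
          set_b = {tuple(v) for v in voxels_b}
          return len(set_a & set_b) / min(len(set_a), len(set_b))

      # Two toy delineations of the same nodule by different readers
      reader1 = np.array([(0, y, x) for y in range(10) for x in range(10)])
      reader2 = np.array([(0, y, x) for y in range(2, 12) for x in range(2, 12)])
      print(round(overlap_ratio(reader1, reader2), 2))  # 0.64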

  1. Cluster Analysis of the International Stellarator Confinement Database

    International Nuclear Information System (INIS)

    Kus, A.; Dinklage, A.; Preuss, R.; Ascasibar, E.; Harris, J. H.; Okamura, S.; Yamada, H.; Sano, F.; Stroth, U.; Talmadge, J.

    2008-01-01

    The heterogeneous structure of the collected data is one of the problems that occur during the derivation of scalings for energy confinement time, and its analysis turns out to be a broad and complicated matter. The International Stellarator Confinement Database [1], ISCDB for short, comprises in its latest version 21 a total of 3647 observations from 8 experimental devices, 2067 of which have so far been completed for upcoming analyses. For confinement scaling studies, 1933 observations were chosen as the standard dataset. Here we describe a statistical method of cluster analysis for the identification of possible cohesive substructures in ISCDB and present some preliminary results
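
    The record does not specify the clustering algorithm, so the sketch below uses k-means on standardized variables purely as a stand-in to show how cohesive substructures might be sought in a heterogeneous confinement dataset; the variable names and synthetic data are assumptions, not ISCDB columns.

      # Sketch: searching for cohesive substructures with k-means after standardization.
      # Variables and data are synthetic stand-ins, not ISCDB columns.
      import numpy as np
      import pandas as pd
      from sklearn.cluster import KMeans
      from sklearn.metrics import silhouette_score
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(1)
      obs = pd.DataFrame({
          "log_density": np.concatenate([rng.normal(0.0, 0.3, 600), rng.normal(1.2, 0.3, 400)]),
          "log_power":   np.concatenate([rng.normal(0.0, 0.4, 600), rng.normal(0.8, 0.4, 400)]),
          "log_field":   np.concatenate([rng.normal(0.5, 0.2, 600), rng.normal(0.1, 0.2, 400)]),
      })
      X = StandardScaler().fit_transform(obs)

      for k in range(2, 6):  # compare a few candidate cluster counts
          labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
          print(k, round(silhouette_score(X, labels), 3))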

  2. Fallacies in English Department Students’ Claims: A Rhetorical Analysis of Critical Thinking

    OpenAIRE

    Rohmani Nur Indah; Agung Wiranata Kusuma

    2015-01-01

    Abstract: This study focuses on the fallacies found in English department students’ claims of fact, value, and policy. It employs a qualitative design, since the object is an authentic reflection of critical thinking in written form, to understand the varieties of fallacies. The data are the sentences in the claims written by students of UIN Maulana Malik Ibrahim Malang who took a critical writing course. On claims of fact, the fallacies found include hasty generalization, irrelevan...

  3. Data analysis and pattern recognition in multiple databases

    CERN Document Server

    Adhikari, Animesh; Pedrycz, Witold

    2014-01-01

    Pattern recognition in data is a well known classical problem that falls under the ambit of data analysis. As we need to handle different data, the nature of patterns, their recognition and the types of data analyses are bound to change. Since the number of data collection channels increases in the recent time and becomes more diversified, many real-world data mining tasks can easily acquire multiple databases from various sources. In these cases, data mining becomes more challenging for several essential reasons. We may encounter sensitive data originating from different sources - those cannot be amalgamated. Even if we are allowed to place different data together, we are certainly not able to analyse them when local identities of patterns are required to be retained. Thus, pattern recognition in multiple databases gives rise to a suite of new, challenging problems different from those encountered before. Association rule mining, global pattern discovery, and mining patterns of select items provide different...

  4. Negligence claims following non-union and malunion of long bone fractures: An analysis of 15 years of data.

    Science.gov (United States)

    Metcalfe, C W; Harrison, W D; Nayagam, S; Narayan, B

    2016-10-01

    Non-unions and malunions are recognised to be complications of the treatment of long bone fractures. No previous work has looked at the implications of these complications from a medicolegal perspective. A complete database of litigation claims in Trauma and Orthopaedic Surgery was obtained from the NHS Litigation Authority. Two separate complications of the treatment of long bone fractures were examined: i) non-union and ii) acquired deformity. The type of complaint, whether defended or not, and costs were analysed. There were 97 claims related to non-union and 32 related to postoperative limb deformity. The total cost was £8.2 million over a 15-year period in England and Wales. Femoral and tibial non-unions were more expensive, particularly if they resulted in amputation. Rotational deformity cost nearly twice as much as angulation deformities. The cosmetic appearance of rotational malalignment and amputation results in higher compensation; this reinforces an outward perception of outcome as being more important than harmful effects. Notwithstanding the limitations of this database, there are clinical lessons to be gained from these litigation claims. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  5. Brain Tumor Database, a free relational database for collection and analysis of brain tumor patient information.

    Science.gov (United States)

    Bergamino, Maurizio; Hamilton, David J; Castelletti, Lara; Barletta, Laura; Castellan, Lucio

    2015-03-01

    In this study, we describe the development and utilization of a relational database designed to manage the clinical and radiological data of patients with brain tumors. The Brain Tumor Database was implemented using MySQL v.5.0, while the graphical user interface was created using PHP and HTML, thus making it easily accessible through a web browser. This web-based approach allows for multiple institutions to potentially access the database. The BT Database can record brain tumor patient information (e.g. clinical features, anatomical attributes, and radiological characteristics) and be used for clinical and research purposes. Analytic tools to automatically generate statistics and different plots are provided. The BT Database is a free and powerful user-friendly tool with a wide range of possible clinical and research applications in neurology and neurosurgery. The BT Database graphical user interface source code and manual are freely available at http://tumorsdatabase.altervista.org. © The Author(s) 2013.

  6. Database for Simulation of Electron Spectra for Surface Analysis (SESSA)

    Science.gov (United States)

    SRD 100 Database for Simulation of Electron Spectra for Surface Analysis (SESSA) (PC database for purchase). This database has been designed to facilitate quantitative interpretation of Auger-electron and X-ray photoelectron spectra and to improve the accuracy of quantitation in routine analysis. The database contains all physical data needed to perform quantitative interpretation of an electron spectrum for a thin-film specimen of given composition. A simulation module provides an estimate of peak intensities as well as the energy and angular distributions of the emitted electron flux.

  7. Analysis of Closed Claims in the Clinical Management of Rheumatoid Arthritis in Japan

    Directory of Open Access Journals (Sweden)

    Yasuhiro Otaki

    2017-01-01

    Conclusions: The characteristics of malpractice claims associated with RA management, including the high frequency of medication-related allegations, breakdowns in the assessment process, and high claim numbers among patients older than 60 years, suggest the importance of caution exercised by physicians when administering immunosuppressants for the clinical treatment of RA.

  8. A 12-year analysis of closed medical malpractice claims of the Taiwan civil court: A retrospective study.

    Science.gov (United States)

    Hwang, Chi-Yuan; Wu, Chien-Hung; Cheng, Fu-Cheng; Yen, Yung-Lin; Wu, Kuan-Han

    2018-03-01

    Malpractice lawsuits cause increased physician stress and decreased career satisfaction, which might result in defensive medicine for avoiding litigation. It is, consequently, important to learn from the experience of previous malpractice claims. The aim of this study was to examine the epidemiologic factors related to medical malpractice claims, identify specialties at high risk of such claims, and determine which clinical errors tend to lead to medical malpractice lawsuits, by analyzing closed malpractice claims in the civil courts of Taiwan. The current analysis reviewed the verdicts of the Taiwan judicial system from a retrospective study using a population-based databank, focusing on 946 closed medical claims between 2002 and 2013. Among these medical malpractice claims, only 14.1% of the verdicts were against clinicians, with a mean indemnity payment of $83,350. The most common single specialty involved was obstetrics (10.7%), while the surgery group accounted for approximately 40% of the cases. In total, 46.3% of the patients named in the claims had either died or been gravely injured. Compared to the $75,632 indemnity for deceased patients, the mean indemnity payment for plaintiffs with grave outcomes was approximately 4.5 times higher. The diagnosis groups at high risk of malpractice litigation were infectious diseases (7.3%), malignancies (7.2%), and limb fractures (4.9%). A relatively low success rate was found in claims concerning undiagnosed congenital anomalies (4.5%) and infectious diseases (5.8%). A surgery dispute was the most frequent argument in civil malpractice claims (38.8%), followed by diagnosis error (19.3%). Clinicians represented 85.9% of the defendants who won their cases, but they spent an average of 4.7 years to reach final adjudication. Increased public education to prevent unrealistic expectations among patients is recommended to decrease frivolous lawsuits. Further investigation to improve the lengthy judicial process is also

  9. Claims in vapour device (e-cigarette) regulation: A Narrative Policy Framework analysis.

    Science.gov (United States)

    O'Leary, Renée; Borland, Ron; Stockwell, Tim; MacDonald, Marjorie

    2017-06-01

    The electronic cigarette or e-cigarette (vapour device) is a consumer product undergoing rapid growth, and governments have been adopting regulations on the sale of the devices and their nicotine liquids. Competing claims about vapour devices have ignited a contentious debate in the public health community. What claims have been taken up in the state arena, and how have they possibly influenced regulatory outcomes? This study utilized Narrative Policy Framework to analyze the claims made about vapour devices in legislation recommendation reports from Queensland Australia, Canada, and the European Union, and the 2016 deeming rule legislation from the United States, and examined the claims and the regulatory outcomes in these jurisdictions. The vast majority of claims in the policy documents represented vapour devices as a threat: an unsafe product harming the health of vapour device users, a gateway product promoting youth tobacco uptake, and a quasi-tobacco product impeding tobacco control. The opportunity for vapour devices to promote cessation or reduce exposure to toxins was very rarely presented, and these positive claims were not discussed at all in two of the four documents studied. The dominant claims of vapour devices as a public health threat have supported regulations that have limited their potential as a harm reduction strategy. Future policy debates should evaluate the opportunities for vapour devices to decrease the health and social burdens of the tobacco epidemic. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. State analysis requirements database for engineering complex embedded systems

    Science.gov (United States)

    Bennett, Matthew B.; Rasmussen, Robert D.; Ingham, Michel D.

    2004-01-01

    It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering tool called the State Analysis Database, which provides a tool for capturing system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using the State Analysis Database.

  11. Treatment Patterns, Complications, and Health Care Utilization Among Endometriosis Patients Undergoing a Laparoscopy or a Hysterectomy: A Retrospective Claims Analysis.

    Science.gov (United States)

    Surrey, Eric S; Soliman, Ahmed M; Yang, Hongbo; Du, Ella Xiaoyan; Su, Bowdoin

    2017-11-01

    Hysterectomy and laparoscopy are common surgical procedures used for the treatment of endometriosis. This study compares outcomes for women who received either procedure within the first year post initial surgery. The study used data from the Truven Health MarketScan claims databases from 2004 to 2013 to identify women aged 18-49 years who received an endometriosis-related laparoscopy or hysterectomy. Patients were excluded if they did not have continuous insurance coverage from 1 year before through 1 year after their endometriosis-related procedure, if they were diagnosed with uterine fibroids prior to or on the date of surgery (i.e., index date), or if they had a hysterectomy prior to the index date. The descriptive analyses examined differences between patients with an endometriosis-related laparoscopy or hysterectomy in regard to medications prescribed, complications, and hospitalizations during the immediate year post procedure. The final sample consisted of 24,915 women who underwent a hysterectomy and 37,308 who underwent a laparoscopy. Results revealed significant differences between the cohorts, with women who received a laparoscopy more likely to be prescribed a GnRH agonist, progestin, danazol, or an opioid analgesic in the immediate year post procedure compared to women who underwent a hysterectomy. In contrast, women who underwent a hysterectomy generally had higher complication rates. Index hospitalization rates and length of stay (LOS) were higher for women who had a hysterectomy, while post-index hospitalization rates and LOS were higher for women who had a laparoscopy. For both cohorts, post-procedure complications were associated with significantly higher hospitalization rates and longer LOS. This study indicated significantly different 1-year post-surgical outcomes for patients who underwent an endometriosis-related hysterectomy relative to a laparoscopy. Furthermore, the endometriosis patients in this analysis had a considerable risk of

  12. Injuries to New Zealanders participating in adventure tourism and adventure sports: an analysis of Accident Compensation Corporation (ACC) claims.

    Science.gov (United States)

    Bentley, Tim; Macky, Keith; Edwards, Jo

    2006-12-15

    The aim of this study was to examine the involvement of adventure tourism and adventure sports activity in injury claims made to the Accident Compensation Corporation (ACC). Epidemiological analysis of ACC claims for the period, July 2004 to June 2005, where adventure activities were involved in the injury. 18,697 adventure tourism and adventure sports injury claims were identified from the data, representing 28 activity sectors. Injuries were most common during the summer months, and were most frequently located in the major population centres. The majority of injuries were incurred by claimants in the 20-50 years age groups, although claimants over 50 years of age had highest claims costs. Males incurred 60% of all claims. Four activities (horse riding, mountain biking, tramping/hiking, and surfing) were responsible for approximately 60% of all adventure tourism and adventure sports-related injuries. Slips, trips, and falls were the most common injury initiating events, and injuries were most often to the back/spine, shoulder, and knee. These findings suggest the need to investigate whether regulatory intervention in the form of codes of practice for high injury count activities such as horse riding and mountain biking may be necessary. Health promotion messages and education programs should focus on these and other high-injury risk areas. Improved risk management practices are required for commercial adventure tourism and adventure sports operators in New Zealand if safety is to be improved across this sector.

  13. Initial Experience With Tofacitinib in Clinical Practice: Treatment Patterns and Costs of Tofacitinib Administered as Monotherapy or in Combination With Conventional Synthetic DMARDs in 2 US Health Care Claims Databases.

    Science.gov (United States)

    Harnett, James; Curtis, Jeffrey R; Gerber, Robert; Gruben, David; Koenig, Andrew

    2016-06-01

    Tofacitinib is an oral Janus kinase inhibitor indicated for the treatment of rheumatoid arthritis (RA). Tofacitinib can be administered as a monotherapy or in combination with conventional synthetic disease-modifying antirheumatic drugs (DMARDs). This study describes RA patients' characteristics, treatment patterns, and costs for those initiating tofacitinib treatment as monotherapy or combination therapy, using US claims data from clinical practice. A retrospective cohort analysis of patients aged ≥18 years with RA (International Classification of Diseases, Ninth Revision code 714.xx) and with ≥1 tofacitinib claim in the Truven Marketscan (TM) or the Optum Clinformatics (OC) database. Index was defined as the first tofacitinib fill date (November 2012-June 2014). Patients were continuously enrolled for ≥12 months before and after index. Adherence was assessed using the proportion of days covered (PDC) and medication possession ratio (MPR). Persistence was evaluated using a 1.5× days' supply gap or switch. All-cause and RA-related costs in the 12-month pre- and post-index periods were evaluated. Unadjusted and adjusted analyses were conducted on data on treatment patterns and costs stratified by monotherapy status. A total of 337 (TM) and 118 (OC) tofacitinib patients met the selection criteria; 52.2% (TM) and 50.8% (OC) received monotherapy and 83.7% (TM) and 76.3% (OC) had pre-index biologic DMARD experience. Twelve-month mean PDC values were 0.56 (TM) and 0.53 (OC), and 12-month mean MPR was 0.84 (TM) and 0.80 (OC), with persistence of 140.0 (TM) and 124.6 (OC) days. Between 12-month pre- and post-index periods, mean (SD) 12-month RA-related medical costs decreased by $5784 ($31,832) in TM and $6103 ($25,897) in OC. These findings add to the tofacitinib knowledge base and will enable informed clinical and policy decision making based on valuable datasets independent of randomized controlled trials. Copyright © 2016 Elsevier HS Journals, Inc. All rights reserved.

  14. Bioinformatics Database Tools in Analysis of Genetics of Neurodevelopmental Disorders

    Directory of Open Access Journals (Sweden)

    Dibyashree Mallik

    2017-10-01

    Full Text Available Bioinformatics tools are now used in many sectors of biology. Many questions regarding neurodevelopmental disorders, which have recently emerged as a major health issue, can be addressed using bioinformatics databases. Schizophrenia is one such mental disorder and has become a major threat to young people, as it is mostly seen during late adolescence or early adulthood. Databases like DISGENET, GWAS, PHARMGKB, and DRUGBANK hold huge repositories of genes associated with schizophrenia. Many genes are reported to be associated with schizophrenia, but approximately 200 genes were found to be present in at least one of these databases. After a further screening process, 20 genes were found to be highly associated with each other and to be common to many other diseases as well. All of them also serve as common target genes of many antipsychotic drugs. Analysis of their biological properties and molecular functions showed that these 20 genes are mostly involved in biological regulation processes and have receptor activity, belonging mainly to the receptor protein class. Among these 20 genes, CYP2C9, CYP3A4, DRD2, HTR1A, and HTR2A are shown to be main target genes of most antipsychotic drugs and are associated with more than 40% of the diseases. The basic findings of the present study indicate that a suitable combination drug could be designed by targeting these genes for better treatment of schizophrenia.

  15. Statin use and risk of cholecystectomy - A case-control analysis using Swiss claims data.

    Science.gov (United States)

    Biétry, Fabienne A; Reich, Oliver; Schwenkglenks, Matthias; Meier, Christoph R

    2016-12-01

    Using claims data from the Helsana Group, a large Swiss health insurance provider, we examined the association between statin use and the risk of cholecystectomy in a case-control analysis. We identified 2,200 cholecystectomy cases between 2013 and 2014 and matched 4 controls to each case on age, sex, index date and canton. We categorized statin users into current or past users (last prescription ≤ 180 or > 180 days before the index date, respectively) and classified medication use by duration based on number of prescriptions before the index date. We applied conditional logistic regression analyses to calculate odds ratios (ORs) with 95% confidence intervals (CIs) and adjusted the analyses for history of cardiovascular diseases and for use of estrogens, fibrates and other lipid-lowering agents. The adjusted OR (aOR) for cholecystectomy was 0.85 (95% CI: 0.74, 0.99) for current statin users compared to non-users. Long-term current statin use (5-19 prescriptions) was associated with a reduced OR (aOR 0.77, 95% CI: 0.65, 0.92). However, neither short-term current use nor past statin use affected the risk of cholecystectomy. The study supports the previously raised hypothesis that long-term statin use reduces the risk of cholecystectomy.
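
    For a 1:4 matched case-control design such as this one, conditional logistic regression conditions on the matched stratum. The statsmodels sketch below uses simulated strata and invented covariate names, not the Helsana claims extract.

      # Sketch: conditional logistic regression for a matched case-control analysis,
      # with one case and four controls per stratum. Data are simulated.
      import numpy as np
      import pandas as pd
      from statsmodels.discrete.conditional_models import ConditionalLogit

      rng = np.random.default_rng(2)
      rows = []
      for stratum in range(400):
          for is_case in (1, 0, 0, 0, 0):            # 1 case + 4 matched controls
              current_statin = rng.binomial(1, 0.25 if is_case else 0.30)
              cvd_history = rng.binomial(1, 0.20)
              rows.append((stratum, is_case, current_statin, cvd_history))
      df = pd.DataFrame(rows, columns=["stratum", "case", "current_statin", "cvd_history"])

      model = ConditionalLogit(df["case"], df[["current_statin", "cvd_history"]],
                               groups=df["stratum"])
      res = model.fit()
      print(np.exp(res.params).round(2))             # adjusted odds ratios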

  16. A Critical Analysis of Claims and Their Authenticity in Indian Drug Promotional Advertisements

    Directory of Open Access Journals (Sweden)

    Gurpreet Kaur Randhawa

    2015-01-01

    Full Text Available Introduction. Drug promotional advertisements (DPAs) form a major marketing technique of pharmaceutical companies for promoting their products and disseminating ambiguous drug information which can affect the prescribing patterns of physicians. Drug information includes product characteristics and various marketing claims with supporting references to increase its credibility and authenticity. Material and Methods. An observational study was carried out on fifty printed drug advertisement brochures which were collected from different OPDs of Guru Nanak Dev Hospital attached to Government Medical College, Amritsar, India. These advertisements were analyzed and claims were categorized into true, false, exaggerated, vague, and controversial on criteria as reported by Rohraa et al. (2006). References of DPAs in support of the claims were critically analyzed for their retrievability from the web and validity pertaining to claims. Results. Out of 209 claims from 50 advertisements, only 46% were found to be true, 21% false, 16% vague, 7% exaggerated, and 10% controversial in nature. Out of 160 references given in support of claims, 49 (30%) of references were irretrievable. Out of 111 (70%) retrievable references, 92 (83%) references were found valid. Conclusion. Drug information provided in the DPAs was biased, incomplete, unauthentic, and unreliable, with references exhibiting questionable credibility.

  17. BLM Colorado Mining Claims Closed

    Data.gov (United States)

    Department of the Interior — Shapefile Format –This data set consists of closed mining claim records extracted from BLM’s LR2000 database. These records contain case attributes as well as legal...

  18. BLM Colorado Mining Claims Active

    Data.gov (United States)

    Department of the Interior — Shapefile Format –This data set consists of active mining claim records extracted from BLM’s LR2000 database. These records contain case attributes as well as legal...

  19. The Database and Data Analysis Software of Radiation Monitoring System

    International Nuclear Information System (INIS)

    Wang Weizhen; Li Jianmin; Wang Xiaobing; Hua Zhengdong; Xu Xunjiang

    2009-01-01

    Shanghai Synchrotron Radiation Facility (SSRF for short) is a third-generation light source under construction in China, comprising a 150 MeV injector, a 3.5 GeV booster, a 3.5 GeV storage ring, and a number of beamline stations. The data are fetched by the monitoring computer from collecting modules at the front end and saved in a MySQL database on the managing computer. The data analysis software is written in Python, a scripting language, to query, summarize, and plot the data of a given monitoring channel over a given period and export them to an external file. In addition, warning events can be queried separately. The website for historical and real-time data query and plotting is written in PHP. (authors)
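
    The query-summarize-plot-export workflow described here can be sketched with standard Python tooling (pymysql, pandas, matplotlib). The table and column names below are assumptions, since the record does not give the schema, and the snippet expects a reachable MySQL server.

      # Sketch: query one monitoring channel over a time window from MySQL,
      # summarize and plot it, and export to a file. Schema names are assumptions.
      import pandas as pd
      import pymysql
      import matplotlib.pyplot as plt

      conn = pymysql.connect(host="localhost", user="reader", password="secret",
                             database="radiation_monitoring")
      query = """
          SELECT read_time, dose_rate
          FROM channel_readings
          WHERE channel_id = %s AND read_time BETWEEN %s AND %s
          ORDER BY read_time
      """
      df = pd.read_sql(query, conn, params=("HALL_A_01", "2009-01-01", "2009-01-31"))
      conn.close()

      print(df["dose_rate"].describe())                # summarize the selected period
      df.to_csv("HALL_A_01_jan2009.csv", index=False)  # export to an external file

      plt.plot(df["read_time"], df["dose_rate"])
      plt.xlabel("time")
      plt.ylabel("dose rate")
      plt.title("Channel HALL_A_01, January 2009")
      plt.savefig("HALL_A_01_jan2009.png")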

  20. Occupational eye injury and risk reduction: Kentucky workers' compensation claim analysis 1994-2003.

    Science.gov (United States)

    McCall, B P; Horwitz, I B; Taylor, O A

    2009-06-01

    Occupational eye injuries are a significant source of injury in the workplace. Little population-based research in the area has been conducted, and is necessary for developing and prioritizing effective interventions. Workers' compensation data from the state of Kentucky for the years 1994-2003 were analysed by demographics, injury nature and cause, cost, and occupational and industrial characteristics. The US Bureau of Labor Statistics' Current Population Survey was utilised to compute injury rates for demographic and occupational groups. There were 10,545 claims of ocular injury, representing 6.29 claims per 10,000 workers on average annually. A substantial drop in the claim rate was found after the state passed monetary penalties for injuries caused by employer negligence or OSHA violations. Claims by men were over three times more likely than those by women to have associated claim costs (OR 0.52; 95% CI 0.32 to 0.85; p = 0.009). The highest eye injury rates per 10,000 of 13.46 (95% CI 12.86 to 14.07) were found for the helpers/labourers occupation, and of 19.95 (95% CI 18.73 to 21.17) for the construction industry. The total cost of claim payments over the period was over $3,480,000, and average cost per claim approximated $331. Eye injuries remain a significant risk to worker health, especially among men in jobs requiring intensive manual labour. Evidence showed that increased legislative regulation led to a decline in eye injuries, which was consistent with other recent findings in the area. Additionally, targeting groups most at risk, increasing worker training, providing effective eye protection equipment, and developing workplace safety cultures may together reduce occupational eye injuries.

  1. Analysis list - ChIP-Atlas | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available switchLanguage; BLAST Search Image Search Home About Archive Update History Data ...://ftp.biosciencedbc.jp/archive/chip-atlas/LATEST/chip_atlas_analysis_list.zip File size: 44.8 KB Simple sea...e class. About This Database Database Description Download License Update History of This Database Site Policy | Contact Us Analysis list - ChIP-Atlas | LSDB Archive ...

  2. Association of Attorney Advertising and FDA Action with Prescription Claims: A Time Series Segmented Regression Analysis.

    Science.gov (United States)

    Tippett, Elizabeth C; Chen, Brian K

    2015-12-01

    Attorneys sponsor television advertisements that include repeated warnings about adverse drug events to solicit consumers for lawsuits against drug manufacturers. The relationship between such advertising, safety actions by the US Food and Drug Administration (FDA), and healthcare use is unknown. To investigate the relationship between attorney advertising, FDA actions, and prescription drug claims. The study examined total users per month and prescription rates for seven drugs with substantial attorney advertising volume and FDA or other safety interventions during 2009. Segmented regression analysis was used to detect pre-intervention trends, post-intervention level changes, and changes in post-intervention trends relative to the pre-intervention trends in the use of these seven drugs, using advertising volume, media hits, and the number of Medicare enrollees as covariates. Data for these variables were obtained from the Center for Medicare and Medicaid Services, Kantar Media, and LexisNexis. Several types of safety actions were associated with reductions in drug users and/or prescription rates, particularly for fentanyl, varenicline, and paroxetine. In most cases, attorney advertising volume rose in conjunction with major safety actions. Attorney advertising volume was positively correlated with prescription rates in five of seven drugs, likely because advertising volume began rising before safety actions, when prescription rates were still increasing. On the other hand, attorney advertising had mixed associations with the number of users per month. Regulatory and safety actions likely reduced the number of users and/or prescription rates for some drugs. Attorneys may have strategically chosen to begin advertising adverse drug events prior to major safety actions, but we found little evidence that attorney advertising reduced drug use. Further research is needed to better understand how consumers and physicians respond to attorney advertising.
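
    A time series segmented regression of the kind described in this record can be written as an ordinary least squares model with a pre-intervention trend, a post-intervention level change, and a post-intervention trend change. The monthly data below are simulated and the variable names are illustrative, not the study's Medicare series.

      # Sketch: interrupted time series / segmented regression around one intervention.
      # Monthly counts are simulated; this is not the study's claims extract.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(3)
      months = np.arange(36)                       # three years of monthly observations
      intervention = 18                            # e.g., month of an FDA safety action
      post = (months >= intervention).astype(int)
      time_after = np.where(post == 1, months - intervention, 0)

      users = (5000 + 40 * months                  # rising pre-intervention trend
               - 600 * post                        # immediate level drop at the intervention
               - 25 * time_after                   # flatter trend afterwards
               + rng.normal(0, 120, months.size))

      df = pd.DataFrame({"users": users, "month": months,
                         "post": post, "time_after": time_after})
      fit = smf.ols("users ~ month + post + time_after", data=df).fit()
      print(fit.params.round(1))   # 'post' = level change, 'time_after' = trend change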

  3. Predicting Consumer Effort in Finding and Paying for Health Care: Expert Interviews and Claims Data Analysis.

    Science.gov (United States)

    Long, Sandra; Monsen, Karen A; Pieczkiewicz, David; Wolfson, Julian; Khairat, Saif

    2017-10-12

    For consumers to accept and use a health care information system, it must be easy to use, and the consumer must perceive it as being free from effort. Finding health care providers and paying for care are tasks that must be done to access treatment. These tasks require effort on the part of the consumer and can be frustrating when the goal of the consumer is primarily to receive treatments for better health. The aim of this study was to determine the factors that result in consumer effort when finding accessible health care. Having an understanding of these factors will help define requirements when designing health information systems. A panel of 12 subject matter experts was consulted and the data from 60 million medical claims were used to determine the factors contributing to effort. Approximately 60 million claims were processed by the health care insurance organization in a 12-month duration with the population defined. Over 292 million diagnoses from claims were used to validate the panel input. The results of the study showed that the number of people in the consumer's household, number of visits to providers outside the consumer's insurance network, number of adjusted and denied medical claims, and number of consumer inquiries are a proxy for the level of effort in finding and paying for care. The effort level, so measured and weighted per expert panel recommendations, differed by diagnosis. This study provides an understanding of how consumers must put forth effort when engaging with a health care system to access care. For higher satisfaction and acceptance results, health care payers ideally will design and develop systems that facilitate an understanding of how to avoid denied claims, educate on the payment of claims to avoid adjustments, and quickly find providers of affordable care. ©Sandra Long, Karen A. Monsen, David Pieczkiewicz, Julian Wolfson, Saif Khairat. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 12.10.2017.

  4. DRES Database of Methods for the Analysis of Chemical Warfare Agents

    National Research Council Canada - National Science Library

    D'Agostino, Paul

    1997-01-01

    .... Update of the database continues as an ongoing effort, and the DRES Database of Methods for the Analysis of Chemical Warfare Agents is available in hardcopy form or as a softcopy ProCite or WordPerfect file...

  5. Adherence to tobramycin inhaled powder vs inhaled solution in patients with cystic fibrosis: analysis of US insurance claims data

    Directory of Open Access Journals (Sweden)

    Hamed K

    2017-04-01

    Full Text Available Kamal Hamed,1 Valentino Conti,2 Hengfeng Tian,1 Emil Loefroth3 1Novartis Pharmaceuticals Corporation, East Hanover, NJ, USA; 2Novartis Global Service Center, Dublin, Ireland; 3Novartis Sverige AB, Täby, Sweden Purpose: Tobramycin inhalation powder (TIP), the first dry-powder inhaled antibiotic for pulmonary Pseudomonas aeruginosa infection, is associated with reduced treatment burden, increased patient satisfaction, and higher self-reported adherence for cystic fibrosis (CF) patients. We compared adherence in CF patients newly treated with TIP with those newly treated with the traditional tobramycin inhalation solution (TIS), using US insurance claims data. Patients and methods: From the Truven MarketScan® database, we identified CF patients chronically infected with P. aeruginosa who had been prescribed TIP between May 1, 2013 and December 31, 2014, or TIS between September 1, 2010 and April 30, 2012, with at least 12 months of continuous medical and pharmacy benefits prior to and following prescription. TIP and TIS adherence levels were assessed. Results: A total of 145 eligible patients were identified for the TIP cohort and 306 for the TIS cohort. Significant differences in age distribution (25.0 vs 21.9 years for TIP vs TIS, respectively; P=0.017), type of health plan (P=0.014), employment status (72.4% vs 63.4% of TIP vs TIS patients in full-time employment; P=0.008), and some comorbidities were observed between the two cohorts. Although a univariate analysis found no significant differences between TIP and TIS (odds ratio [OR] 1.411, 95% confidence interval [CI] 0.949–2.098), TIP was moderately associated with higher adherence levels compared with TIS in a multivariable analysis, once various demographic and clinical characteristics were adjusted for. These included geographic location (OR: 1.566, CI: 1.016–2.413) and certain comorbidities. Conclusion: This study of US patient data supports previous findings that TIP is associated with better

  6. PATRIC, the bacterial bioinformatics database and analysis resource

    Science.gov (United States)

    Wattam, Alice R.; Abraham, David; Dalay, Oral; Disz, Terry L.; Driscoll, Timothy; Gabbard, Joseph L.; Gillespie, Joseph J.; Gough, Roger; Hix, Deborah; Kenyon, Ronald; Machi, Dustin; Mao, Chunhong; Nordberg, Eric K.; Olson, Robert; Overbeek, Ross; Pusch, Gordon D.; Shukla, Maulik; Schulman, Julie; Stevens, Rick L.; Sullivan, Daniel E.; Vonstein, Veronika; Warren, Andrew; Will, Rebecca; Wilson, Meredith J.C.; Yoo, Hyun Seung; Zhang, Chengdong; Zhang, Yan; Sobral, Bruno W.

    2014-01-01

    The Pathosystems Resource Integration Center (PATRIC) is the all-bacterial Bioinformatics Resource Center (BRC) (http://www.patricbrc.org). A joint effort by two of the original National Institute of Allergy and Infectious Diseases-funded BRCs, PATRIC provides researchers with an online resource that stores and integrates a variety of data types [e.g. genomics, transcriptomics, protein–protein interactions (PPIs), three-dimensional protein structures and sequence typing data] and associated metadata. Datatypes are summarized for individual genomes and across taxonomic levels. All genomes in PATRIC, currently more than 10 000, are consistently annotated using RAST, the Rapid Annotations using Subsystems Technology. Summaries of different data types are also provided for individual genes, where comparisons of different annotations are available, and also include available transcriptomic data. PATRIC provides a variety of ways for researchers to find data of interest and a private workspace where they can store both genomic and gene associations, and their own private data. Both private and public data can be analyzed together using a suite of tools to perform comparative genomic or transcriptomic analysis. PATRIC also includes integrated information related to disease and PPIs. All the data and integrated analysis and visualization tools are freely available. This manuscript describes updates to the PATRIC since its initial report in the 2007 NAR Database Issue. PMID:24225323

  7. PATRIC, the bacterial bioinformatics database and analysis resource.

    Science.gov (United States)

    Wattam, Alice R; Abraham, David; Dalay, Oral; Disz, Terry L; Driscoll, Timothy; Gabbard, Joseph L; Gillespie, Joseph J; Gough, Roger; Hix, Deborah; Kenyon, Ronald; Machi, Dustin; Mao, Chunhong; Nordberg, Eric K; Olson, Robert; Overbeek, Ross; Pusch, Gordon D; Shukla, Maulik; Schulman, Julie; Stevens, Rick L; Sullivan, Daniel E; Vonstein, Veronika; Warren, Andrew; Will, Rebecca; Wilson, Meredith J C; Yoo, Hyun Seung; Zhang, Chengdong; Zhang, Yan; Sobral, Bruno W

    2014-01-01

    The Pathosystems Resource Integration Center (PATRIC) is the all-bacterial Bioinformatics Resource Center (BRC) (http://www.patricbrc.org). A joint effort by two of the original National Institute of Allergy and Infectious Diseases-funded BRCs, PATRIC provides researchers with an online resource that stores and integrates a variety of data types [e.g. genomics, transcriptomics, protein-protein interactions (PPIs), three-dimensional protein structures and sequence typing data] and associated metadata. Datatypes are summarized for individual genomes and across taxonomic levels. All genomes in PATRIC, currently more than 10,000, are consistently annotated using RAST, the Rapid Annotations using Subsystems Technology. Summaries of different data types are also provided for individual genes, where comparisons of different annotations are available, and also include available transcriptomic data. PATRIC provides a variety of ways for researchers to find data of interest and a private workspace where they can store both genomic and gene associations, and their own private data. Both private and public data can be analyzed together using a suite of tools to perform comparative genomic or transcriptomic analysis. PATRIC also includes integrated information related to disease and PPIs. All the data and integrated analysis and visualization tools are freely available. This manuscript describes updates to the PATRIC since its initial report in the 2007 NAR Database Issue.

  8. Development of Database for Accident Analysis in Indian Mines

    Science.gov (United States)

    Tripathy, Debi Prasad; Guru Raghavendra Reddy, K.

    2016-10-01

    Mining is a hazardous industry, and the high accident rates associated with underground mining are a cause of deep concern. Technological developments notwithstanding, the rates of fatal accidents and reportable incidents have not shown corresponding declines. This paper argues that adoption of appropriate safety standards by both mine management and the government may result in an appreciable reduction in accident frequency. This can be achieved by using technology to improve working conditions and by sensitising workers and managers about the causes and prevention of accidents. Inputs required for a detailed analysis of an accident include information on location, time, type, cost of accident, victim, nature of injury, personal and environmental factors, etc. Such information can be generated from data available in the standard coded accident report form. This paper presents a web-based application for accident analysis in Indian mines during 2001-2013. An Intranet-based (TCP/IP) accident database prototype (SafeStat), developed by the authors, is also discussed.

  9. Pension regulation and the market value of pension liabilities - A contingent claims analysis using Parisian options

    NARCIS (Netherlands)

    Broeders, D.; Chen, A.

    2008-01-01

    We analyze the market-consistent valuation of pension liabilities in a contingent claim framework whereby a knock-out barrier feature is applied to capture early regulatory closure of a pension plan. We investigate two cases which we call "immediate closure procedure" and "delayed closure

  10. Analysis of audiometric database shows evidence of employee fraud

    Science.gov (United States)

    Erdreich, John

    2003-10-01

    Following a lengthy strike, several hundred delivery drivers filed workers compensation claims for occupational hearing loss. We were asked to evaluate the noise exposure of the drivers during their in-plant tasks. In-plant exposures were not predictive of any hearing loss. A comparison of audiometric data for the claimants revealed consistent hearing loss independent of duration of employment or age. These discrepancies between observations and common understanding of dose-response relationships between noise exposure and hearing loss led to further investigation, ultimately resulting in the dismissal of all claims against the employer who then filed an action against the claimant's attorneys and physician under the Racketeering in Corrupt Organizations Act (RICO). The details of the legal complaint, which reads like a detective novel, can be found at the United States District Court for the Southern District of New York [93 Civ. 7222 (LAP)].

  11. Content analysis of e-cigarette products, promotions, prices and claims on Internet tobacco vendor websites, 2013-2014.

    Science.gov (United States)

    Williams, Rebecca S; Derrick, Jason; Liebman, Aliza K; LaFleur, Kevin

    2017-11-03

    To identify the population of Internet e-cigarette vendors (IEVs) and conduct content analysis of products sold and IEVs' promotional, claims and pricing practices. Multiple sources were used to identify IEV websites, primarily complex search algorithms scanning over 180 million websites. In 2013, 32 446 websites were manually screened, identifying 980 IEVs, with the 281 most popular selected for content analysis. This methodology yielded 31 239 websites for manual screening in 2014, identifying 3096 IEVs, with 283 selected for content analysis. While the majority of IEVs (71.9%) were US based in 2013, this dropped to 64.3% in 2014, with IEVs located in at least 38 countries, and 12% providing location indicators reflecting two or more countries, complicating jurisdictional determinations. Reflecting the retail market, IEVs are transitioning from offering disposable and 'cigalike' e-cigarettes to larger tank and "mod" systems. Flavored e-cigarettes were available from 85.9% of IEVs in 2014, with fruit and candy flavors being most popular. Most vendors (76.5%) made health claims in 2013, dropping to 43.1% in 2014. Some IEVs featured conflicting claims about whether or not e-cigarettes aid in smoking cessation. There was wide variation in pricing, with e-cigarettes available for as little as one dollar, well within the affordable range for adults and teens. The number of Internet e-cigarette vendors grew threefold from 2013 to 2014, far surpassing the number of Internet cigarette vendors (N=775) at the 2004 height of that industry. New and expanded regulations for online e-cigarette sales are needed, including restrictions on flavors and marketing claims. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  12. Interaction between policy measures. Analysis tool in the MURE database

    Energy Technology Data Exchange (ETDEWEB)

    Boonekamp, P.G.M. [ECN Policy Studies, Petten (Netherlands); Faberi, S. [Institute of Studies for the Integration of Systems ISIS, Rome (Italy)

    2013-12-15

    The ODYSSEE database on energy efficiency indicators (www.odyssee-indicators.org) has been set up to enable the monitoring and evaluation of realised energy efficiency improvements and related energy savings. The database covers the 27 EU countries as well as Norway and Croatia and data are available from 1990 on. This report describes how sets of mutually consistent impacts for packages as well as individual policy measures can be determined in the MURE database (MURE is the French abbreviation for Mesures d'Utilisation Rationnelle de l'Energie)

  13. Performance analysis of different database in new internet mapping system

    Science.gov (United States)

    Yao, Xing; Su, Wei; Gao, Shuai

    2017-03-01

    In the Mapping System of the New Internet, massive numbers of mapping entries between AID and RID need to be stored, added, updated, and deleted. To cope with the large volume of mapping-entry update and query requests, the Mapping System of the New Internet must use a high-performance database. In this paper, we focus on the performance of three typical databases, Redis, SQLite, and MySQL; the results show that mapping systems based on different databases can be adapted to different needs according to the actual situation.

  14. Effects of Nursing Home Residency on Diabetes Care in Individuals with Dementia: An Explorative Analysis Based on German Claims Data

    Directory of Open Access Journals (Sweden)

    Larissa Schwarzkopf

    2017-02-01

    Full Text Available Aims: This claims data-based study compares the intensity of diabetes care in community dwellers and nursing home residents with dementia. Methods: Delivery of diabetes-related medical examinations (DRMEs) was compared via logistic regression in 1,604 community dwellers and 1,010 nursing home residents with dementia. The intra-individual effect of nursing home transfer was evaluated within mixed models. Results: Delivery of DRMEs decreases with increasing care dependency, with more community-living individuals receiving DRMEs. Moreover, DRME provision decreases after nursing home transfer. Conclusion: Dementia patients receive fewer DRMEs than recommended, especially in cases of higher care dependency and particularly in nursing homes. This suggests a lack of awareness regarding the specific challenges of combined diabetes and dementia care.

  15. Good for your health? An analysis of the requirements for scientific substantiation in European health claims regulation

    Directory of Open Access Journals (Sweden)

    Oliver Todt

    2016-05-01

    Full Text Available Objective. To identify the various types of evidence, as well as their relative importance in European health claims regulation, in order to analyze the consequences for consumer protection of the requirements for scientific substantiation in this regulation. Materials and methods. Qualitative analysis of various documents relevant to the regulatory process, particularly as to the implications of the standards of proof for the functional food market, as well as consumer behavior. Results. European regulation defines a hierarchy of evidence that turns randomized controlled trials into a necessary and sufficient condition for health claim authorizations. Conclusions. Consumer protection can be interpreted in different manners. High standards of proof protect consumers from false information about the health outcomes of functional foods, while lower standards lead to more, albeit less accurate information about such outcomes being available to consumers.

  16. A Data Analysis Expert System For Large Established Distributed Databases

    Science.gov (United States)

    Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick

    1987-05-01

    The purpose of this work is to analyze the applicability of artificial intelligence techniques for developing a user-friendly, parallel interface to large isolated, incompatible NASA databases for the purpose of assisting the management decision process. To carry out this work, a survey was conducted to establish the data access requirements of several key NASA user groups. In addition, current NASA database access methods were evaluated. The results of this work are presented in the form of a design for a natural language database interface system, called the Deductively Augmented NASA Management Decision Support System (DANMDS). This design is feasible principally because of recently announced commercial hardware and software product developments which allow cross-vendor compatibility. The goal of the DANMDS system is commensurate with the central dilemma confronting most large companies and institutions in America, the retrieval of information from large, established, incompatible database systems. The DANMDS system implementation would represent a significant first step toward this problem's resolution.

  17. Design database for quantitative trait loci (QTL) data warehouse, data mining, and meta-analysis.

    Science.gov (United States)

    Hu, Zhi-Liang; Reecy, James M; Wu, Xiao-Lin

    2012-01-01

    A database can be used to warehouse quantitative trait loci (QTL) data from multiple sources for comparison, genomic data mining, and meta-analysis. A robust database design involves sound data structure logistics, meaningful data transformations, normalization, and proper user interface designs. This chapter starts with a brief review of relational database basics and concentrates on issues associated with curation of QTL data into a relational database, with emphasis on the principles of data normalization and structure optimization. In addition, some simple examples of QTL data mining and meta-analysis are included. These examples are provided to help readers better understand the potential and importance of sound database design.

  18. Constitution of an incident database suited to statistical analysis and examples

    International Nuclear Information System (INIS)

    Verpeaux, J.L.

    1990-01-01

    The Nuclear Protection and Safety Institute (IPSN) has set up and is developing an incident database, which is used for the management and analysis of incidents encountered in French PWR plants. IPSN has already carried out several statistical analyses of incidents and safety-significant events, and is improving its database on the basis of the experience gained from these studies. A description of the analysis method and of the developed database is presented.

  19. What does the U.S. Medicare administrative claims database tell us about initial antiepileptic drug treatment for older adults with new-onset epilepsy?

    Science.gov (United States)

    Martin, Roy C; Faught, Edward; Szaflarski, Jerzy P; Richman, Joshua; Funkhouser, Ellen; Piper, Kendra; Juarez, Lucia; Dai, Chen; Pisu, Maria

    2017-04-01

    Disparities in epilepsy treatment are not uncommon; therefore, we examined population-based estimates of initial antiepileptic drugs (AEDs) in new-onset epilepsy among racial/ethnic minority groups of older US Medicare beneficiaries. We conducted retrospective analyses of 2008-2010 Medicare administrative claims for a 5% random sample of beneficiaries augmented for minority representation. New-onset epilepsy cases in 2009 had ≥1 International Classification of Diseases, Ninth Revision (ICD-9) 345.x or ≥2 ICD-9 780.3x, and ≥1 AED, AND no seizure/epilepsy claim codes or AEDs in preceding 365 days. We examined AED use and concordance with Quality Indicators of Epilepsy Treatment (QUIET) 6 (monotherapy as initial treatment = ≥30 day first prescription with no other concomitant AEDs), and prompt AED treatment (first AED within 30 days of diagnosis). Logistic regression examined likelihood of prompt treatment by demographic (race/ethnicity, gender, age), clinical (number of comorbid conditions, neurology care, index event occurring in the emergency room (ER)), and economic (Part D coverage phase, eligibility for Part D Low Income Subsidy [LIS], and ZIP code level poverty) factors. Over 1 year of follow-up, 79.6% of 3,706 new epilepsy cases had one AED only (77.89% of whites vs. 89% of American Indian/Alaska Native [AI/AN]). Levetiracetam was the most commonly prescribed AED (45.5%: from 24.6% AI/AN to 55.0% whites). The second most common was phenytoin (30.6%: from 18.8% Asians to 43.1% AI/AN). QUIET 6 concordance was 94.7% (93.9% for whites to 97.3% of AI/AN). Only 50% received prompt AED therapy (49.6% whites to 53.9% AI/AN). Race/ethnicity was not significantly associated with AED patterns, monotherapy use, or prompt treatment. Monotherapy is common across all racial/ethnic groups of older adults with new-onset epilepsy, older AEDs are commonly prescribed, and treatment is frequently delayed. Further studies on reasons for treatment delays are warranted

  20. A large pharmacy claims-based descriptive analysis of patients with migraine and associated pharmacologic treatment patterns

    Directory of Open Access Journals (Sweden)

    Muzina DJ

    2011-11-01

    Full Text Available David J Muzina, William Chen, Steven J Bowlin; Medco Health Solutions Inc and Medco Research Institute, LLC, Franklin Lakes, NJ, USA. Purpose: To investigate drug use, prescribing patterns, and comorbidities among patients with migraine in a large pharmacy claims database. Methods: 104,625 migraine subjects (identified according to the criteria in the International Classification of Diseases, Ninth Revision [ICD-9] for migraine or migraine-specific acute medication use) and an equal number of control patients were selected from a de-identified claims database; the prevalence of patients with migraine-specific claims was determined. Patient demographics, migraine-related medication use, other psychotropic medication use, and comorbidities over a 12-month period were compared between the migraine population and the control group and between migraine subgroups. Results: Of the study population, 3.5% had a migraine diagnosis according to the ICD-9 or received a migraine-specific acute medication. Compared with controls, migraine patients had significantly greater disease comorbidity and higher use of prescription nonsteroidal anti-inflammatory drugs and controlled painkillers; they were also more likely to receive medications used to prevent migraines and other nonmigraine psychotropic medications, such as anxiolytics and hypnotics. Among migraine patients, 66% received acute migraine-specific medication while only 20% received US Food and Drug Administration–approved migraine preventive therapy. Notably, one-third of high triptan users did not receive any kind of preventive medication. Multiple medical and psychiatric comorbidities were observed at higher rates among migraine sufferers. In addition to significantly higher utilization of antidepressants compared with controls, migraine patients also received significantly more other psychotropic drugs by a factor of 2:1. Conclusion: Acute migraine medications are commonly used and frequently dispensed at

  1. Healthfulness and nutritional composition of Canadian prepackaged foods with and without sugar claims.

    Science.gov (United States)

    Bernstein, Jodi T; Franco-Arellano, Beatriz; Schermel, Alyssa; Labonté, Marie-Ève; L'Abbé, Mary R

    2017-11-01

    The objective of this study was to evaluate differences in calories, nutrient content, overall healthfulness, and use of sweetener ingredients between products with and without sugar claims. Consumers assume products with sugar claims are healthier and lower in calories. It is therefore important that claims be found on comparatively healthier items. This study is a cross-sectional analysis of the University of Toronto's 2013 Food Label Database. Subcategories where at least 5% of products (and n ≥ 5) carried a sugar claim were included (n = 3048). Differences in median calorie content, nutrient content, and overall healthfulness, using the Food Standards Australia/New Zealand Nutrient Profiling Scoring criterion, between products with and without sugar claims were determined. The proportion of products with and without claims that had excess free sugar levels (≥10% of calories from free sugar) and that contained sweeteners was also determined. Almost half (48%) of products with sugar claims contained excess free sugar, and a greater proportion contained sweeteners than products without such claims (30% vs 5%, χ² = 338.6, p < 0.001). Products with sugar claims had lower free sugar and calorie contents than products without claims. At the subcategory level, reductions in free sugar contents were not always met with similar reductions in calorie contents. This study highlights concerns with regards to the nutritional composition of products bearing sugar claims. Findings can support educational messaging to assist consumer interpretation of sugar claims and can inform changes in nutrition policies, for example, permitting sugar claims only on products with calorie reductions and without excess free sugar.

  2. A Serial Analysis of Gene Expression (SAGE) database analysis of chemosensitivity

    DEFF Research Database (Denmark)

    Stein, Wilfred D; Litman, Thomas; Fojo, Tito

    2004-01-01

    are their corresponding solid tumors. We used the Serial Analysis of Gene Expression (SAGE) database to identify differences between solid tumors and cell lines, hoping to detect genes that could potentially explain differences in drug sensitivity. SAGE libraries were available for both solid tumors and cell lines from...

  3. Efficient hemodynamic event detection utilizing relational databases and wavelet analysis

    Science.gov (United States)

    Saeed, M.; Mark, R. G.

    2001-01-01

    Development of a temporal query framework for time-oriented medical databases has hitherto been a challenging problem. We describe a novel method for the detection of hemodynamic events in multiparameter trends utilizing wavelet coefficients in a MySQL relational database. Storage of the wavelet coefficients allowed for a compact representation of the trends, and provided robust descriptors for the dynamics of the parameter time series. A data model was developed to allow for simplified queries along several dimensions and time scales. Of particular importance, the data model and wavelet framework allowed for queries to be processed with minimal table-join operations. A web-based search engine was developed to allow for user-defined queries. Typical queries required between 0.01 and 0.02 seconds, with at least two orders of magnitude improvement in speed over conventional queries. This powerful and innovative structure will facilitate research on large-scale time-oriented medical databases.
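
    As a rough illustration of the storage-and-query idea described in this record, the sketch below computes wavelet coefficients for a synthetic hemodynamic trend with the PyWavelets package and stores them in a relational table, with SQLite standing in for the MySQL system the authors used. The table layout, the signal, and the query threshold are assumptions made for the example, not the paper's actual schema.

        # Sketch: store wavelet coefficients of a hemodynamic trend in a relational
        # table and query for high-energy (event-like) segments.  SQLite stands in
        # for MySQL; the schema and threshold are illustrative assumptions.
        import sqlite3
        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        trend = np.cumsum(rng.normal(0, 1, 1024))          # synthetic mean-pressure trend
        coeffs = pywt.wavedec(trend, "db4", level=4)        # multi-level discrete wavelet transform

        con = sqlite3.connect(":memory:")
        con.execute("""CREATE TABLE wavelet_coeffs (
                           signal_id INTEGER, level INTEGER,
                           position INTEGER, value REAL)""")
        for level, arr in enumerate(coeffs):
            con.executemany(
                "INSERT INTO wavelet_coeffs VALUES (1, ?, ?, ?)",
                [(level, i, float(v)) for i, v in enumerate(arr)])

        # "Event detection" as a plain SQL filter: coarse-level coefficients whose
        # magnitude exceeds a threshold flag abrupt changes in the trend.
        rows = con.execute("""SELECT level, position, value
                              FROM wavelet_coeffs
                              WHERE level <= 2 AND ABS(value) > 25
                              ORDER BY ABS(value) DESC""").fetchall()
        print(rows[:5])

    Because the coefficients are stored row-wise by level and position, event-style searches reduce to simple filters on one table, which is the property the abstract credits for avoiding table-join operations.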

  4. Etiology of work-related electrical injuries: a narrative analysis of workers' compensation claims.

    Science.gov (United States)

    Lombardi, David A; Matz, Simon; Brennan, Melanye J; Smith, Gordon S; Courtney, Theodore K

    2009-10-01

    The purpose of this study was to provide new insight into the etiology of primarily nonfatal, work-related electrical injuries. We developed a multistage, case-selection algorithm to identify electrical-related injuries from workers' compensation claims and a customized coding taxonomy to identify pre-injury circumstances. Workers' compensation claims routinely collected over a 1-year period from a large U.S. insurance provider were used to identify electrical-related injuries using an algorithm that evaluated: coded injury cause information, nature of injury, "accident" description, and injury description narratives. Concurrently, a customized coding taxonomy for these narratives was developed to abstract the activity, source, initiating process, mechanism, vector, and voltage. Among the 586,567 reported claims during 2002, electrical-related injuries accounted for 1283 (0.22%) of nonfatal claims and 15 fatalities (1.2% of electrical). Most (72.3%) were male, with an average age of 36, working in services (33.4%), manufacturing (24.7%), retail trade (17.3%), and construction (7.2%). Body part(s) injured most often were the hands, fingers, or wrist (34.9%); multiple body parts/systems (25.0%); and the lower/upper arm, elbow, shoulder, and upper extremities (19.2%). The leading activities were conducting manual tasks (55.1%); working with machinery, appliances, or equipment; working with electrical wire; and operating powered or nonpowered hand tools. Primary injury sources were appliances and office equipment (24.4%); wires, cables/cords (18.0%); machines and other equipment (11.8%); fixtures, bulbs, and switches (10.4%); and lightning (4.3%). No vector was identified in 85% of cases, and the work process was initiated by others in less than 1% of cases. Injury narratives provide valuable information to overcome some of the limitations of precoded data, especially for identifying additional injury cases and for supplementing traditional epidemiologic data for further

  5. Evaluating parallel relational databases for medical data analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Rintoul, Mark Daniel; Wilson, Andrew T.

    2012-03-01

    Hospitals have always generated and consumed large amounts of data concerning patients, treatment and outcomes. As computers and networks have permeated the hospital environment it has become feasible to collect and organize all of this data. This raises naturally the question of how to deal with the resulting mountain of information. In this report we detail a proof-of-concept test using two commercially available parallel database systems to analyze a set of real, de-identified medical records. We examine database scalability as data sizes increase as well as responsiveness under load from multiple users.

  6. Longitudinal analysis of meta-analysis literatures in the database of ISI Web of Science.

    Science.gov (United States)

    Zhu, Changtai; Jiang, Ting; Cao, Hao; Sun, Wenguang; Chen, Zhong; Liu, Jinming

    2015-01-01

    Meta-analysis is regarded as an important form of evidence for scientific decision making. The ISI Web of Science database collects a large number of high-quality publications, including meta-analyses, so it is useful to understand the general characteristics of this literature in order to outline the perspective of meta-analysis. In the present study, we summarized and clarified features of these publications in the ISI Web of Science database. We retrieved meta-analysis publications from the database, including SCI-E, SSCI, A&HCI, CPCI-S, CPCI-SSH, CCR-E, and IC. The annual growth rate, literature category, language, funding, index citation, agencies, and countries/territories of the meta-analysis literature were analyzed. A total of 95,719 records, which account for 0.38% (99% CI: 0.38%-0.39%) of all publications, were found in the database. From 1997 to 2012, the annual growth rate of meta-analysis publications was 18.18%. The publications spanned many categories, languages, funding sources, citations, publishing agencies, and countries/territories. Notably, the citation frequencies of meta-analyses were significantly higher than those of other publication types such as multi-centre studies, randomized controlled trials, cohort studies, case-control studies, and case reports. Meta-analysis has become more and more prominent in recent years. In the future, in order to promote the validity of meta-analysis, the CONSORT and PRISMA standards should be continuously popularized in the field of evidence-based medicine.

  7. The prevalence and ingredient cost of chronic comorbidity in the Irish elderly population with medication treated type 2 diabetes: A retrospective cross-sectional study using a national pharmacy claims database

    Directory of Open Access Journals (Sweden)

    O’Shea Miriam

    2013-01-01

    Full Text Available Abstract. Background: Comorbidity in patients with diabetes is associated with poorer health and increased cost. The aim of this study was to investigate the prevalence and ingredient cost of comorbidity in patients ≥65 years with and without medication-treated type 2 diabetes using a national pharmacy claims database. Methods: The Irish Health Service Executive Primary Care Reimbursement Service pharmacy claims database, which includes all prescribing to individuals covered by the General Medical Services (GMS) scheme, was used to identify the study population (≥65 years). Patients with medication-treated type 2 diabetes (T2DM) were identified using the prescription of oral anti-hyperglycaemic agents, alone or in combination with insulin, as a proxy for disease diagnosis. The prevalence and ingredient prescribing cost of treated chronic comorbidity in the study population with and without medication-treated T2DM were ascertained using a modified version of the RxRiskV index, a prescription-based comorbidity index. The association between T2DM and comorbid conditions was assessed using logistic regression adjusting for age and sex. Bootstrapping was used to ascertain the mean annual ingredient cost of treated comorbidity. Results: In 2010, 43,165 of 445,180 GMS-eligible individuals (9.7%) were identified as having received medication for T2DM. The median number of comorbid conditions was significantly higher in those with T2DM than in those without (median 5 vs. 3, respectively). Conclusions: Individuals with T2DM were more likely to have a higher number of treated comorbid conditions than those without, and this was associated with higher ingredient costs. This has important policy and economic consequences for the planning and provision of future health services in Ireland, given the expected increase in T2DM and other chronic conditions.
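
    As a minimal sketch of the bootstrap step mentioned in this record, the following resamples a synthetic skewed cost vector to obtain a percentile confidence interval for the mean annual ingredient cost; the data and number of resamples are invented for illustration only.

        # Sketch: percentile-bootstrap estimate of a mean annual ingredient cost.
        # The cost vector is synthetic; only the resampling logic mirrors the
        # method described in the abstract.
        import numpy as np

        rng = np.random.default_rng(42)
        annual_costs = rng.lognormal(mean=6.5, sigma=0.8, size=5000)  # skewed, like claims costs

        n_boot = 2000
        boot_means = np.array([
            rng.choice(annual_costs, size=annual_costs.size, replace=True).mean()
            for _ in range(n_boot)
        ])

        point_estimate = annual_costs.mean()
        ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
        print(f"mean cost ≈ {point_estimate:.0f}, 95% bootstrap CI ({ci_low:.0f}, {ci_high:.0f})")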

  8. Clinical outcomes in low risk coronary artery disease patients treated with different limus-based drug-eluting stents--a nationwide retrospective cohort study using insurance claims database.

    Directory of Open Access Journals (Sweden)

    Chao-Lun Lai

    Full Text Available The clinical outcomes of different limus-based drug-eluting stents (DES) in a real-world setting have not been well defined. The aim of this study was to investigate the clinical outcomes of three different limus-based DES, namely the sirolimus-eluting stent (SES), Endeavor zotarolimus-eluting stent (E-ZES), and everolimus-eluting stent (EES), using a national insurance claims database. We identified all patients who received implantation of a single SES, E-ZES or EES between January 1, 2007 and December 31, 2009 from the National Health Insurance claims database, Taiwan. Follow-up was through December 31, 2011 for all selected clinical outcomes. The primary end-point was all-cause mortality. Secondary end-points included acute coronary events, heart failure needing hospitalization, and cerebrovascular disease. A Cox regression model adjusting for baseline characteristics was used to compare the relative risks of different outcomes among the three different limus-based DES. In total, 6,584 patients were evaluated (n=2142 for SES, n=3445 for E-ZES, and n=997 for EES). After adjusting for baseline characteristics, we found no statistically significant difference in the risk of all-cause mortality in the three DES groups (adjusted hazard ratio [HR]: 1.14, 95% confidence interval [CI]: 0.94-1.38, p=0.20 in the E-ZES group compared with the SES group; adjusted HR: 0.77, 95% CI: 0.54-1.10, p=0.15 in the EES group compared with the SES group). Similarly, we found no difference among the three stent groups in risks of acute coronary events, heart failure needing hospitalization, and cerebrovascular disease. In conclusion, we observed no difference in all-cause mortality, acute coronary events, heart failure needing hospitalization, and cerebrovascular disease in patients treated with SES, E-ZES, and EES in a real-world population-based setting in Taiwan.
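
    A hedged sketch of the adjusted comparison described in this record, using the lifelines package on simulated data; the covariates, follow-up times, and coding of the stent groups (with SES as the reference) are placeholders rather than the national claims variables.

        # Sketch: Cox proportional hazards comparing stent groups, adjusted for
        # baseline covariates.  Data are simulated; SES is the reference group.
        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(1)
        n = 3000
        df = pd.DataFrame({
            "time": rng.exponential(scale=48, size=n),      # months of follow-up
            "event": rng.integers(0, 2, size=n),             # 1 = death observed
            "age": rng.normal(67, 10, size=n),
            "diabetes": rng.integers(0, 2, size=n),
            "stent": rng.choice(["SES", "E-ZES", "EES"], size=n),
        })
        df["stent_EZES"] = (df["stent"] == "E-ZES").astype(int)
        df["stent_EES"] = (df["stent"] == "EES").astype(int)   # SES is the reference
        df = df.drop(columns=["stent"])

        cph = CoxPHFitter()
        cph.fit(df, duration_col="time", event_col="event")
        print(cph.summary)    # adjusted hazard ratios appear in the exp(coef) column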

  9. Mortality and comorbidities in patients with multiple sclerosis compared with a population without multiple sclerosis: An observational study using the US Department of Defense administrative claims database.

    Science.gov (United States)

    Capkun, Gorana; Dahlke, Frank; Lahoz, Raquel; Nordstrom, Beth; Tilson, Hugh H; Cutter, Gary; Bischof, Dorina; Moore, Alan; Simeone, Jason; Fraeman, Kathy; Bancken, Fabrice; Geissbühler, Yvonne; Wagner, Michael; Cohan, Stanley

    2015-11-01

    Data are limited for mortality and comorbidities in patients with multiple sclerosis (MS). Compare mortality rates and event rates for comorbidities in MS (n=15,684) and non-MS (n=78,420) cohorts from the US Department of Defense (DoD) database. Comorbidities and all-cause mortality were assessed using the database. Causes of death (CoDs) were assessed through linkage with the National Death Index. Cohorts were compared using mortality (MRR) and event (ERR) rate ratios. All-cause mortality was 2.9-fold higher in the MS versus non-MS cohort (MRR, 95% confidence interval [CI]: 2.9, 2.7-3.2). Frequent CoDs in the MS versus non-MS cohort were infectious diseases (6.2, 4.2-9.4), diseases of the nervous (5.8, 3.7-9.0), respiratory (5.0, 3.9-6.4) and circulatory (2.1, 1.7-2.7) systems and suicide (2.6, 1.3-5.2). Comorbidities including sepsis (ERR, 95% CI: 5.7, 5.1-6.3), ischemic stroke (3.8, 3.5-4.2), attempted suicide (2.4, 1.3-4.5) and ulcerative colitis (2.0, 1.7-2.3), were higher in the MS versus non-MS cohort. The rate of cancers was also higher in the MS versus the non-MS cohort, including lymphoproliferative disorders (2.2, 1.9-2.6) and melanoma (1.7, 1.4-2.0). Rates of mortality and several comorbidities are higher in the MS versus non-MS cohort. Early recognition and management of comorbidities may reduce premature mortality and improve quality of life in patients with MS. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
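
    As a worked example of the mortality rate ratio (MRR) statistic reported in this record, the snippet below computes a rate ratio with a Wald confidence interval on the log scale; the event counts and person-years are invented numbers, not the DoD cohort figures.

        # Sketch: mortality rate ratio (MRR) with a log-scale Wald confidence
        # interval.  Counts and person-years are illustrative only.
        import math

        deaths_ms, pyears_ms = 600, 60000.0          # MS cohort (invented)
        deaths_ctrl, pyears_ctrl = 1000, 290000.0    # matched non-MS cohort (invented)

        rate_ms = deaths_ms / pyears_ms
        rate_ctrl = deaths_ctrl / pyears_ctrl
        mrr = rate_ms / rate_ctrl

        se_log = math.sqrt(1 / deaths_ms + 1 / deaths_ctrl)
        ci_low = math.exp(math.log(mrr) - 1.96 * se_log)
        ci_high = math.exp(math.log(mrr) + 1.96 * se_log)
        print(f"MRR = {mrr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")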

  10. Maternal deaths databases analysis: Ecuador 2003-2013

    Directory of Open Access Journals (Sweden)

    Antonio Pino

    2016-08-01

    Full Text Available Background: The maternal mortality ratio in Ecuador is the only millennium goal on which national agencies are still making strong efforts to reach the 2015 target. The purpose of the study was to process national maternal death databases to identify specific patterns of association among the variables included in the death certificate. Design and methods: The study processed mortality databases published yearly by the National Census and Statistics Institute (INEC). The data analysed were exclusively maternal deaths and correspond to the 2003-2013 period, accessible through INEC's website. Comparisons are based on the number of deaths and use an ecological approach for geographical coincidences. Results: The study identified associations among variables in the national maternal mortality databases, showing that dying at home or in a place other than a hospital is closely related to women's socioeconomic characteristics; there was also an association with the absence of a public health facility. In addition, dying in a place other than the usual residence could mean that women and families were searching for, or were referred to, a higher level of care when they faced complications. Conclusions: Ecuadorian maternal deaths showed patterns of inequity in health status, health care provision and health risks. No single predominant factor clearly explains the associations found when processing the national databases; every pattern of health system development may have played a role in maternal mortality, or factors other than those registered by the statistics system may remain hidden. Some random influences might not yet be captured in an explanatory model.

  11. Analysis of a virtual memory model for maintaining database views

    Science.gov (United States)

    Kinsley, Kathryn C.; Hughes, Charles E.

    1992-01-01

    This paper presents an analytical model for predicting the performance of a new support strategy for database views. This strategy, called the virtual method, is compared with traditional methods for supporting views. The analytical model's predictions of improved performance by the virtual method are then validated by comparing these results with those achieved in an experimental implementation.

  12. Analysis of a Bibliographic Database Enhanced with a Library Classification.

    Science.gov (United States)

    Drabenstott, Karen Markey; And Others

    1990-01-01

    Describes a project that examined the effects of incorporating subject terms from the Dewey Decimal Classification (DDC) into a bibliographic database. It is concluded that the incorporation of DDC and possibly other library classifications into online catalogs can enhance subject access and provide additional subject searching strategies. (11…

  13. A Quantitative Analysis of the Extrinsic and Intrinsic Turnover Factors of Relational Database Support Professionals

    Science.gov (United States)

    Takusi, Gabriel Samuto

    2010-01-01

    This quantitative analysis explored the intrinsic and extrinsic turnover factors of relational database support specialists. Two hundred and nine relational database support specialists were surveyed for this research. The research was conducted based on Hackman and Oldham's (1980) Job Diagnostic Survey. Regression analysis and a univariate ANOVA…

  14. Medication errors: an analysis comparing PHICO's closed claims data and PHICO's Event Reporting Trending System (PERTS).

    Science.gov (United States)

    Benjamin, David M; Pendrak, Robert F

    2003-07-01

    Clinical pharmacologists are all dedicated to improving the use of medications and decreasing medication errors and adverse drug reactions. However, quality improvement requires that some significant parameters of quality be categorized, measured, and tracked to provide benchmarks to which future data (performance) can be compared. One of the best ways to accumulate data on medication errors and adverse drug reactions is to look at medical malpractice data compiled by the insurance industry. Using data from PHICO insurance company, PHICO's Closed Claims Data, and PHICO's Event Reporting Trending System (PERTS), this article examines the significance and trends of the claims and events reported between 1996 and 1998. Those who misread history are doomed to repeat the mistakes of the past. From a quality improvement perspective, the categorization of the claims and events is useful for reengineering integrated medication delivery, particularly in a hospital setting, and for redesigning drug administration protocols on low therapeutic index medications and "high-risk" drugs. Demonstrable evidence of quality improvement is being required by state laws and by accreditation agencies. The state of Florida requires that quality improvement data be posted quarterly on the Web sites of the health care facilities. Other states have followed suit. The insurance industry is concerned with costs, and medication errors cost money. Even excluding costs of litigation, an adverse drug reaction may cost up to $2500 in hospital resources, and a preventable medication error may cost almost $4700. To monitor costs and assess risk, insurance companies want to know what errors are made and where the system has broken down, permitting the error to occur. Recording and evaluating reliable data on adverse drug events is the first step in improving the quality of pharmacotherapy and increasing patient safety. Cost savings and quality improvement evolve on parallel paths. The PHICO data

  15. Conformation analysis - ConfC | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data name: Conformation analysis (ConfC). DOI: 10.18908/lsdba.n...bdc00400-005. Description of data contents: results of conformation analysis for PDB files (raw data). File size: 63.9 MB. Number of data entries: 352.

  16. The Usage Analysis of Databases at Ankara University Digital Library

    Directory of Open Access Journals (Sweden)

    Sacit Arslantekin

    2006-12-01

    Full Text Available Developments in information and communication technologies have changed and diversified the resources and services offered by libraries, and these changes continue to develop rapidly throughout the world. In Turkey, remarkable developments in this field, especially in university and special libraries, are worth consideration. To benefit from existing and forthcoming developments in electronic libraries, the databases used by clients should be well documented and followed closely. Making electronic databases widely available increases the productivity of scientific and social information, which is the ultimate goal. The article first discusses electronic resources management and the effect of consortial developments in the field, and then evaluates the results of a survey on database use completed by faculty members at Ankara University.

  17. Complications and patient-injury after ankle fracture surgery. -A closed claim analysis with data from the Patient Compensation Association in Denmark

    DEFF Research Database (Denmark)

    Bjørslev, Naja; Ebskov, Lars Bo; Mersø, Camilla

    2018-01-01

    BACKGROUND: The Patient Compensation Association (PCA) receives claims for financial compensation from patients who believe they have sustained damage from their treatment in the Danish health care system. In this study, we analysed closed claims in which patients suffered injuries due to the surgical treatment of their ankle fracture. We identified causal factors contributing to these injuries and to malpractice, as well as the economic consequences of these damages. METHODS: Fifty-one approved closed claims from the PCA database from the years 2004-2009 were analysed in a retrospective, systematic manner. RESULTS: General recommendations regarding ORIF were not followed in 21/49 of the perioperative damages. The pronation fracture was the most common. The patients received an average compensation of USD 17,561 each. CONCLUSION: Managing the complex ankle fracture requires considerable experience...

  18. Health Care Service Utilization of Dementia Patients before and after Institutionalization: A Claims Data Analysis

    Directory of Open Access Journals (Sweden)

    Larissa Schwarzkopf

    2014-06-01

    Full Text Available Background: Community-based and institutional dementia care has been compared in cross-sectional studies, but longitudinal information on the effect of institutionalization on health care service utilization is sparse. Methods: We analyzed claims data from 651 dementia patients via Generalized Estimation Equations to assess health care service utilization profiles and corresponding expenditures from four quarters before to four quarters after institutionalization. Results: In all domains, utilization increased in the quarter of institutionalization. Afterwards, the use of drugs, medical aids, and non-physician services (e.g., occupational therapy and physiotherapy remained elevated, but use of in- and outpatient treatment decreased. Cost of care showed corresponding profiles. Conclusion: Institutional dementia care seems to be associated with an increased demand for supportive services but not necessarily for specialized medical care.

  19. Comparative analysis of cloud cover databases for CORDEX-AFRICA

    Science.gov (United States)

    Enríquez, A.; Taima-Hernández, D.; González, A.; Pérez, J. C.; Díaz, J. P.; Expósito, F. J.

    2012-04-01

    The main objective of the CORDEX program (COordinated Regional climate Downscaling Experiment) [1] is the production of regional climate change scenarios at a global scale, contributing to the IPCC (Intergovernmental Panel on Climate Change) AR5 (5th Assessment Report). Within this project, Africa is a key region because of the current lack of data. In this study, the cloud cover information obtained from five well-known databases (ERA-40, ERA-Interim, ISCCP, NCEP and CRU) over the CORDEX-AFRICA domain is analyzed for the period 1984-2000 in order to determine the similarity between them. To analyze the accuracy and consistency of the climate databases, statistical techniques such as the correlation coefficient (r), root mean square (RMS) differences and a defined skill score (SS), based on the difference between the areas of the probability density functions (PDFs) associated with the study parameters [2], were applied. In this way we determine which databases agree well in different regions and which do not, establishing an appropriate framework that could be used to validate the AR5 models in historical simulations.
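
    A small sketch of the kind of PDF-overlap skill score described in this record, computed alongside r and the RMS difference on synthetic cloud-cover series; the binning and the data are illustrative assumptions rather than the study's configuration.

        # Sketch: compare two cloud-cover databases over a region with (i) the
        # correlation coefficient, (ii) the RMS difference, and (iii) a skill score
        # based on the common area of their binned probability density functions
        # (SS = 1 means identical distributions).  Data are synthetic.
        import numpy as np

        rng = np.random.default_rng(7)
        cc_db1 = np.clip(rng.normal(0.55, 0.15, 2000), 0, 1)   # fractional cloud cover
        cc_db2 = np.clip(rng.normal(0.60, 0.18, 2000), 0, 1)

        r = np.corrcoef(cc_db1, cc_db2)[0, 1]
        rms = np.sqrt(np.mean((cc_db1 - cc_db2) ** 2))

        bins = np.linspace(0, 1, 21)
        p1, _ = np.histogram(cc_db1, bins=bins, density=True)
        p2, _ = np.histogram(cc_db2, bins=bins, density=True)
        bin_width = bins[1] - bins[0]
        skill_score = np.sum(np.minimum(p1, p2)) * bin_width    # shared PDF area

        print(f"r = {r:.2f}, RMS = {rms:.2f}, SS = {skill_score:.2f}")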

  20. Analysis of treatment patterns and persistence on branded and generic medications in major depressive disorder using retrospective claims data

    Directory of Open Access Journals (Sweden)

    Solem CT

    2016-10-01

    Full Text Available Caitlyn T Solem,1 Ahmed Shelbaya,2,3 Yin Wan,1 Chinmay G Deshpande,1 Jose Alvir,2 Elizabeth Pappadopulos2 1Pharmerit International, Real World Evidence/Data Analytics, Bethesda, MD, USA; 2Pfizer, Inc., Global Health Outcomes, New York, NY, USA; 3Department of Epidemiology, Columbia University Mailman School of Public Health, New York, NY, USA. Background: In major depressive disorder (MDD), treatment persistence is critical to optimize symptom remission, functional recovery, and health care costs. Desvenlafaxine tends to have fewer drug interactions and better tolerability than other MDD drugs; however, its use has not been assessed in the real world. Objective: The aim of the present study is to compare medication persistence and concomitant MDD drug use with branded desvenlafaxine (Pristiq®) compared with antidepressant drug groups classified as (1) branded selective serotonin reuptake inhibitors (SSRIs; ie, escitalopram [Lexapro™]) and selective serotonin–norepinephrine reuptake inhibitors (SNRIs; ie, venlafaxine [Effexor®], duloxetine [Cymbalta®]) and (2) generic SSRIs/SNRIs (ie, escitalopram, citalopram, venlafaxine, fluvoxamine, fluoxetine, sertraline, paroxetine, and duloxetine). Patients and methods: MDD patients (ICD-9-CM codes 296.2, 296.3), with ≥2 prescription fills for study drugs and 12-month preindex continuous enrollment, from the MarketScan Commercial Claims and Encounters Database (2009–2013) were included. Time-to-treatment discontinuation (prescription gap ≥45 days) was assessed using the Kaplan–Meier curve and Cox model. Concomitant MDD drug use was compared. Results: Of the 273,514 patients included, 14,379 patients were initiated with branded desvenlafaxine, 50,937 patients with other branded SSRIs/SNRIs, and 208,198 patients with generic SSRIs/SNRIs. The number of weeks to treatment discontinuation for branded desvenlafaxine was longer (40.7 [95% CI: 39.3, 42.0]) compared with other branded
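
    The persistence definition used in this record (discontinuation at a prescription gap of ≥45 days) can be operationalized roughly as in the sketch below; the claims layout and fill dates are invented for the example.

        # Sketch: derive time-to-discontinuation from prescription fills, defining
        # discontinuation as a gap of >= 45 days between the end of one fill's
        # supply and the next fill.  The fills table is a made-up example.
        import pandas as pd

        fills = pd.DataFrame({
            "patient_id": [1, 1, 1, 2, 2],
            "fill_date": pd.to_datetime(
                ["2011-01-05", "2011-02-03", "2011-05-01", "2011-03-10", "2011-04-12"]),
            "days_supply": [30, 30, 30, 30, 30],
        })

        def time_to_discontinuation(group, gap_days=45):
            group = group.sort_values("fill_date")
            supply_end = group["fill_date"] + pd.to_timedelta(group["days_supply"], unit="D")
            next_fill = group["fill_date"].shift(-1)
            gaps = (next_fill - supply_end).dt.days
            start = group["fill_date"].iloc[0]
            for end, gap in zip(supply_end, gaps):
                # a missing gap marks the last observed fill (treated here as the end
                # of follow-up); a gap >= 45 days marks discontinuation
                if pd.isna(gap) or gap >= gap_days:
                    return (end - start).days
            return (supply_end.iloc[-1] - start).days

        persistence_days = fills.groupby("patient_id").apply(time_to_discontinuation)
        print(persistence_days)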

  1. Consumer attitudes and understanding of low-sodium claims on food: an analysis of healthy and hypertensive individuals.

    Science.gov (United States)

    Wong, Christina L; Arcand, JoAnne; Mendoza, Julio; Henson, Spencer J; Qi, Ying; Lou, Wendy; L'Abbé, Mary R

    2013-06-01

    Sodium-related claims on food labels should facilitate lower-sodium food choices; however, consumer attitudes and understanding of such claims are unknown. We evaluated consumer attitudes and understanding of different types of sodium claims and the effect of having hypertension on responses to such claims. Canadian consumers (n = 506), with and without hypertension, completed an online survey that contained a randomized mock-package experiment, which tested 4 packages that differed only by the claims they carried as follows: 3 sodium claims (disease risk reduction, function, and nutrient-content claims) and a tastes-great claim (control). Participants answered the same questions on attitudes and understanding of claims after seeing each package. Food packages with any sodium claim resulted in more positive attitudes toward the claim and the product healthfulness than did packages with the taste control claim, although all mock packages were identical nutritionally. Having hypertension increased ratings related to product healthfulness and purchase intentions, but there was no difference in reported understanding between hypertensives and normotensives. In general, participants attributed additional health benefits to low-sodium products beyond the well-established relation of sodium and hypertension. Sodium claims have the potential to facilitate lower-sodium food choices. However, we caution that consumers do not seem to differentiate between different types of claims, but the nutritional profiles of foods that carry different sodium claims can potentially differ greatly in the current labeling environment. Additional educational efforts are needed to ensure that consumers do not attribute inappropriate health benefits to foods with low-sodium claims. This trial was registered at clinicaltrials.gov as NCT01764724.

  2. The incidence of Grey Literature in online databases : a quantitative analysis

    OpenAIRE

    Luzi, Daniela (CNR-ISRDS); GreyNet, Grey Literature Network Service

    1994-01-01

    This study aims to verify the diffusion and distribution of Grey Literature (GL) documents in commercially available online databases. It has been undertaken due to the growing importance of GL in the field of information and documentation, on the one hand, and the increasing supply of online databases, on the other hand. The work is divided into two parts. The first provides the results of a previous quantitative analysis of databases containing GL documents. Using a top-down methodology, i....

  3. Analysis of quality data based on national clinical databases

    DEFF Research Database (Denmark)

    Utzon, Jan; Petri, A.L.; Christophersen, S.

    2009-01-01

    There is little agreement on the philosophy of measuring clinical quality in health care. How data should be analyzed and transformed into healthcare information is an ongoing discussion. To accept a difference in quality between health departments as a real difference, one should consider to which extent the selection of patients, random variation, confounding and inconsistency may have influenced the results. The aim of this article is to summarize aspects of clinical healthcare data analyses provided by the national clinical quality databases and to show how data may be presented in a way which is understandable to readers without specialised knowledge of statistics. Publication date: 14 September 2009.

  4. Analysis of quality data based on national clinical databases

    DEFF Research Database (Denmark)

    Utzon, Jan; Petri, A.L.; Christophersen, S.

    2009-01-01

    There is little agreement on the philosophy of measuring clinical quality in health care. How data should be analyzed and transformed into healthcare information is an ongoing discussion. To accept a difference in quality between health departments as a real difference, one should consider to which extent the selection of patients, random variation, confounding and inconsistency may have influenced the results. The aim of this article is to summarize aspects of clinical healthcare data analyses provided by the national clinical quality databases and to show how data may be presented in a way which is understandable to readers without specialised knowledge of statistics.

  5. Cluster analysis and its application to healthcare claims data: a study of end-stage renal disease patients who initiated hemodialysis.

    Science.gov (United States)

    Liao, Minlei; Li, Yunfeng; Kianifard, Farid; Obi, Engels; Arcona, Stephen

    2016-03-02

    Cluster analysis (CA) is a frequently used applied statistical technique that helps to reveal hidden structures and "clusters" found in large data sets. However, this method has not been widely used in large healthcare claims databases where the distribution of expenditure data is commonly severely skewed. The purpose of this study was to identify cost change patterns of patients with end-stage renal disease (ESRD) who initiated hemodialysis (HD) by applying different clustering methods. A retrospective, cross-sectional, observational study was conducted using the Truven Health MarketScan® Research Databases. Patients aged ≥18 years with ≥2 ESRD diagnoses who initiated HD between 2008 and 2010 were included. The K-means CA method and hierarchical CA with various linkage methods were applied to all-cause costs within baseline (12-months pre-HD) and follow-up periods (12-months post-HD) to identify clusters. Demographic, clinical, and cost information was extracted from both periods, and then examined by cluster. A total of 18,380 patients were identified. Meaningful all-cause cost clusters were generated using K-means CA and hierarchical CA with either flexible beta or Ward's methods. Based on cluster sample sizes and change of cost patterns, the K-means CA method and 4 clusters were selected: Cluster 1: Average to High (n = 113); Cluster 2: Very High to High (n = 89); Cluster 3: Average to Average (n = 16,624); or Cluster 4: Increasing Costs, High at Both Points (n = 1554). Median cost changes in the 12-month pre-HD and post-HD periods increased from $185,070 to $884,605 for Cluster 1 (Average to High), decreased from $910,930 to $157,997 for Cluster 2 (Very High to High), were relatively stable and remained low from $15,168 to $13,026 for Cluster 3 (Average to Average), and increased from $57,909 to $193,140 for Cluster 4 (Increasing Costs, High at Both Points). Relatively stable costs after starting HD were associated with more stable scores
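
    As a rough sketch of the clustering step described in this record, the snippet below applies K-means with four clusters to log-scaled pre- and post-HD costs on simulated data; the cost distributions and preprocessing choices are assumptions for illustration, not the study's pipeline.

        # Sketch: cluster patients by pre- vs post-HD cost (log-scaled because claims
        # costs are heavily skewed).  The cost matrix is simulated; k=4 echoes the
        # number of clusters reported in the abstract.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(3)
        pre_cost = rng.lognormal(mean=9.5, sigma=1.2, size=5000)
        post_cost = pre_cost * rng.lognormal(mean=0.3, sigma=0.9, size=5000)
        X = np.log1p(np.column_stack([pre_cost, post_cost]))

        X_std = StandardScaler().fit_transform(X)
        km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_std)

        for k in range(4):
            members = km.labels_ == k
            print(f"cluster {k}: n={members.sum()}, "
                  f"median pre=${np.median(pre_cost[members]):,.0f}, "
                  f"median post=${np.median(post_cost[members]):,.0f}")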

  6. Population-Level Prediction of Type 2 Diabetes From Claims Data and Analysis of Risk Factors.

    Science.gov (United States)

    Razavian, Narges; Blecker, Saul; Schmidt, Ann Marie; Smith-McLallen, Aaron; Nigam, Somesh; Sontag, David

    2015-12-01

    We present a new approach to population health, in which data-driven predictive models are learned for outcomes such as type 2 diabetes. Our approach enables risk assessment from readily available electronic claims data on large populations, without additional screening cost. The proposed model uncovers early- and late-stage risk factors. Using administrative claims, pharmacy records, healthcare utilization, and laboratory results of 4.1 million individuals between 2005 and 2009, an initial set of 42,000 variables was derived that together describe the full health status and history of every individual. Machine learning was then used to methodically enhance the predictive variable set and fit models predicting onset of type 2 diabetes in 2009-2011, 2010-2012, and 2011-2013. We compared the enhanced model with a parsimonious model consisting of known diabetes risk factors in a real-world environment, where missing values are common and prevalent. Furthermore, we analyzed novel and known risk factors emerging from the model at different age groups and at different stages before the onset. The parsimonious model using 21 classic diabetes risk factors resulted in an area under the ROC curve (AUC) of 0.75 for diabetes prediction within a 2-year window following the baseline. The enhanced model increased the AUC to 0.80, with about 900 variables selected as predictive (the difference between AUCs was statistically significant). Similar improvements were observed for models predicting diabetes onset 1-3 years and 2-4 years after baseline. The enhanced model improved positive predictive value by at least 50% and identified novel surrogate risk factors for type 2 diabetes, such as chronic liver disease (odds ratio [OR] 3.71), high alanine aminotransferase (OR 2.26), esophageal reflux (OR 1.85), and history of acute bronchitis (OR 1.45). Liver risk factors emerge later in the process of diabetes development compared with obesity-related factors such as hypertension and high hemoglobin A1c. In conclusion, population-level risk
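
    A minimal sketch of the evaluation idea in this record: fit a regularized logistic model on a claims-style binary feature matrix and report the held-out AUC. The features, labels, and model settings are simulated stand-ins; the actual study used a far larger engineered variable set and its own modeling pipeline.

        # Sketch: L1-regularized logistic regression on sparse binary claim
        # indicators, evaluated by area under the ROC curve on held-out data.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(5)
        n, p = 20000, 200
        X = rng.binomial(1, 0.05, size=(n, p)).astype(float)   # binary claim indicators
        true_beta = np.zeros(p)
        true_beta[:15] = rng.normal(0.8, 0.2, 15)               # a few truly predictive codes
        logit = X @ true_beta - 2.5
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
        print(f"held-out AUC = {auc:.2f}")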

  7. Adherence to combined Lamivudine + Zidovudine versus individual components: a community-based retrospective medicaid claims analysis.

    Science.gov (United States)

    Legorreta, A; Yu, A; Chernicoff, H; Gilmore, A; Jordan, J; Rosenzweig, J C

    2005-11-01

    Adherence to a fixed-dose combination of dual nucleoside antiretroviral therapy was compared between human immunodeficiency virus (HIV)-infected patients newly started on a fixed-dose combination of lamivudine (3TC) 150 mg/zidovudine (ZDV) 300 mg versus its components taken as separate pills. Medicaid pharmacy claims data were used for analyses. To examine the association between treatment group and medication adherence, three types of multivariate regressions were employed. In addition, all regressions were conducted for the whole population using data from 1995 to 2001 as well as for a subpopulation, which excluded data prior to September 1997. Model covariates included patient characteristics, healthcare utilization, and non-study antiretroviral therapy use. The likelihood of ≥95% adherence among patients on combination therapy was three times greater than among patients taking 3TC and ZDV in separate pills. Also, combination therapy patients had on average 1.4 fewer adherence failures per year of follow-up and nearly double the time to adherence failure compared to the separate pills group. Consistency among study results suggests that fixed-dose combination therapies such as lamivudine (3TC) 150 mg/zidovudine (ZDV) 300 mg should be considered when prescribing HIV treatment that includes an appropriate dual nucleoside.
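
    The adherence measure compared in this record can be illustrated with a simple medication possession ratio (MPR) calculation; the fills table below is a toy example rather than Medicaid data, and the 365-day window and ≥95% threshold mirror the abstract's definition.

        # Sketch: compute a medication possession ratio (MPR) over a fixed follow-up
        # window and flag >=95% adherence.
        import pandas as pd

        followup_days = 365
        fills = pd.DataFrame({
            "patient_id": [1, 1, 1, 2, 2],
            "regimen":    ["fixed-dose", "fixed-dose", "fixed-dose", "separate", "separate"],
            "days_supply": [90, 90, 90, 30, 30],
        })

        mpr = (fills.groupby(["patient_id", "regimen"])["days_supply"].sum()
                    .div(followup_days)
                    .clip(upper=1.0)          # cap at 100% possession
                    .rename("mpr")
                    .reset_index())
        mpr["adherent_95"] = mpr["mpr"] >= 0.95
        print(mpr)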

  8. A retrospective claims analysis of combination therapy in the treatment of adult attention-deficit/hyperactivity disorder (ADHD

    Directory of Open Access Journals (Sweden)

    Pohl Gerhardt M

    2009-06-01

    Full Text Available Abstract. Background: Combination therapy in managing psychiatric disorders is not uncommon. While combination therapy has been documented for depression and schizophrenia, little is known about combination therapy practices in managing attention-deficit/hyperactivity disorder (ADHD). This study seeks to quantify the combination use of ADHD medications and to understand predictors of combination therapy. Methods: Prescription dispensing events were drawn from a U.S. national claims database including over 80 managed-care plans. Patients studied were age 18 or over with at least 1 medical claim with a diagnosis of ADHD (International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM] code 314.0), a pharmacy claim for ADHD medication during the study period July 2003 to June 2004, and continuous enrollment 6 months prior to and throughout the study period. Dispensing events were grouped into 6 categories: atomoxetine (ATX), long-acting stimulants (LAS), intermediate-acting stimulants (IAS), short-acting stimulants (SAS), bupropion (BUP), and Alpha-2 Adrenergic Agonists (A2A). Events were assigned to calendar months, and months with combined use from multiple categories within patient were identified. Predictors of combination therapy for LAS and for ATX were modeled for patients covered by commercial plans using logistic regression in a generalized estimating equations framework to adjust for within-patient correlation between months of observation. Factors included age, gender, presence of the hyperactive component of ADHD, prior diagnoses for psychiatric disorders, claims history of recent psychiatric visit, insurance plan type, and geographic region. Results: There were 18,609 patients identified representing a total of 11,886 months of therapy with ATX; 40,949 months with LAS; 13,622 months with IAS; 38,141 months with SAS; 22,087 months with BUP; and 1,916 months with A2A. Combination therapy was present in 19.7% of continuing

  9. A retrospective claims analysis of combination therapy in the treatment of adult attention-deficit/hyperactivity disorder (ADHD).

    Science.gov (United States)

    Pohl, Gerhardt M; Van Brunt, David L; Ye, Wenyu; Stoops, William W; Johnston, Joseph A

    2009-06-08

    Combination therapy in managing psychiatric disorders is not uncommon. While combination therapy has been documented for depression and schizophrenia, little is known about combination therapy practices in managing attention-deficit/hyperactivity disorder (ADHD). This study seeks to quantify the combination use of ADHD medications and to understand predictors of combination therapy. Prescription dispensing events were drawn from a U.S. national claims database including over 80 managed-care plans. Patients studied were age 18 or over with at least 1 medical claim with a diagnosis of ADHD (International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM] code 314.0), a pharmacy claim for ADHD medication during the study period July 2003 to June 2004, and continuous enrollment 6 months prior to and throughout the study period. Dispensing events were grouped into 6 categories: atomoxetine (ATX), long-acting stimulants (LAS), intermediate-acting stimulants (IAS), short-acting stimulants (SAS), bupropion (BUP), and Alpha-2 Adrenergic Agonists (A2A). Events were assigned to calendar months, and months with combined use from multiple categories within patient were identified. Predictors of combination therapy for LAS and for ATX were modeled for patients covered by commercial plans using logistic regression in a generalized estimating equations framework to adjust for within-patient correlation between months of observation. Factors included age, gender, presence of the hyperactive component of ADHD, prior diagnoses for psychiatric disorders, claims history of recent psychiatric visit, insurance plan type, and geographic region. There were 18,609 patients identified representing a total of 11,886 months of therapy with ATX; 40,949 months with LAS; 13,622 months with IAS; 38,141 months with SAS; 22,087 months with BUP; and 1,916 months with A2A. Combination therapy was present in 19.7% of continuing months (months after the first month of therapy
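
    A hedged sketch of the month-level GEE logistic model described in this record, using statsmodels with an exchangeable working correlation to account for repeated months within patients; the variable names and simulated data are placeholders for the claims-derived covariates.

        # Sketch: month-level logistic model of combination therapy fit with GEE to
        # adjust for within-patient correlation across months of observation.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(11)
        n_patients, n_months = 500, 12
        df = pd.DataFrame({
            "patient_id": np.repeat(np.arange(n_patients), n_months),
            "age": np.repeat(rng.integers(18, 65, n_patients), n_months),
            "female": np.repeat(rng.integers(0, 2, n_patients), n_months),
            "prior_depression": np.repeat(rng.integers(0, 2, n_patients), n_months),
        })
        logit = -2.0 + 0.02 * (df["age"] - 40) + 0.5 * df["prior_depression"]
        df["combo_month"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

        model = smf.gee("combo_month ~ age + female + prior_depression",
                        groups="patient_id", data=df,
                        family=sm.families.Binomial(),
                        cov_struct=sm.cov_struct.Exchangeable())
        result = model.fit()
        print(result.summary())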

  10. Claiming Community

    DEFF Research Database (Denmark)

    Jensen, Steffen Bo

    As its point of departure this working paper takes the multitude of different uses and meanings of the concept of community in local politics in Cape Town. Instead of attempting to define it in substantive terms, the paper takes a social constructivist approach to the study of community and of what is termed community work. First, the paper explores how community has become a governmental strategy, employed by the apartheid regime as well as, although in different ways, by post-apartheid local government. Secondly, the paper explores the ways in which community becomes the means by which local residents lay claim on the state, as well as how it enters into local power struggles between different political groups within the township. In the third part, the paper explores how the meanings of community and the struggles to realise it have changed as South Africa, nationally and locally, has become...

  11. Integration of TGS and CTEN assays using the CTENFIT analysis and databasing program

    International Nuclear Information System (INIS)

    Estep, R.

    2000-01-01

    The CTENFIT program, written for Windows 9x/NT in C++, performs databasing and analysis of combined thermal/epithermal neutron (CTEN) passive and active neutron assay data and integrates that with isotopics results and gamma-ray data from methods such as tomographic gamma scanning (TGS). The binary database is reflected in a companion Excel database that allows extensive customization via Visual Basic for Applications macros. Automated analysis options make the analysis of the data transparent to the assay system operator. Various record browsers and information displays simplify record-keeping tasks.

  12. An analysis of the warning letters issued by the FDA to pharmaceutical manufacturers regarding misleading health outcomes claims

    Directory of Open Access Journals (Sweden)

    Chatterjee S

    2012-12-01

    Full Text Available Objective: To evaluate the number and type of warning letters issued by the US Food and Drug Administration (FDA) to pharmaceutical manufacturers for promotional violations. Methods: Two reviewers downloaded, printed and independently evaluated warning letters issued by the FDA to pharmaceutical manufacturers from years 2003-2008. Misleading claims were broadly classified as clinical, Quality-of-Life (QoL), and economic claims. Clinical claims included claims regarding unsubstantiated efficacy, safety and tolerability, superiority, broadening of indication and/or omission of risk information. QoL claims included unsubstantiated quality of life and/or health-related quality of life claims. Economic claims included any form of claim made on behalf of the pharmaceutical companies related to cost superiority of or cost savings from the drug compared to other drugs in the market. Results: In the 6-year study period, 65 warning letters were issued by the FDA, which contained 144 clinical, three QoL, and one economic claim. On average, 11 warning letters were issued per year. Omission of risk information was the most frequent violation (30.6%), followed by unsubstantiated efficacy claims (18.6%). Warning letters were primarily directed to manufacturers of cardiovascular (14.6%), anti-microbial (14.6%), and CNS (12.5%) drugs. The majority of the claims referenced in warning letters concerned promotional materials directed to physicians (57%). Conclusion: The study found that misleading clinical outcome claims formed the majority of the promotional violations, and the majority of the claims were directed to physicians. Since inadequate promotion of medications may lead to irrational prescribing, the study emphasizes the importance of disseminating reliable, credible, and scientific information to patients, and more importantly, physicians to protect public health.

  13. PACSY, a relational database management system for protein structure and chemical shift analysis.

    Science.gov (United States)

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L

    2012-10-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.
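
    As an illustration of querying linked relational tables of the kind this record describes, the sketch below builds a miniature two-table layout and joins coordinates with chemical shifts through a shared key. The table and column names are invented for the example and are not PACSY's real schema; SQLite stands in for MySQL/PostgreSQL.

        # Sketch: a toy two-table layout in the spirit of key-linked PACSY tables,
        # queried with a join.  Schema and values are illustrative only.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE coordinates (
            key_id INTEGER PRIMARY KEY, pdb_id TEXT, res_name TEXT,
            atom_name TEXT, x REAL, y REAL, z REAL);
        CREATE TABLE chemical_shifts (
            key_id INTEGER REFERENCES coordinates(key_id), shift_ppm REAL);
        INSERT INTO coordinates VALUES (1, '1UBQ', 'ALA', 'CA', 26.3, 24.9, 2.5);
        INSERT INTO chemical_shifts VALUES (1, 52.4);
        """)

        # Combine structural and chemical-shift information through the shared key.
        query = """
        SELECT c.pdb_id, c.res_name, c.atom_name, s.shift_ppm
        FROM coordinates AS c
        JOIN chemical_shifts AS s ON s.key_id = c.key_id
        WHERE c.res_name = 'ALA' AND c.atom_name = 'CA';
        """
        print(con.execute(query).fetchall())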

  14. PACSY, a relational database management system for protein structure and chemical shift analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Woonghee, E-mail: whlee@nmrfam.wisc.edu [University of Wisconsin-Madison, National Magnetic Resonance Facility at Madison, and Biochemistry Department (United States); Yu, Wookyung [Center for Proteome Biophysics, Pusan National University, Department of Physics (Korea, Republic of); Kim, Suhkmann [Pusan National University, Department of Chemistry and Chemistry Institute for Functional Materials (Korea, Republic of); Chang, Iksoo [Center for Proteome Biophysics, Pusan National University, Department of Physics (Korea, Republic of); Lee, Weontae, E-mail: wlee@spin.yonsei.ac.kr [Yonsei University, Structural Biochemistry and Molecular Biophysics Laboratory, Department of Biochemistry (Korea, Republic of); Markley, John L., E-mail: markley@nmrfam.wisc.edu [University of Wisconsin-Madison, National Magnetic Resonance Facility at Madison, and Biochemistry Department (United States)

    2012-10-15

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.

  15. PACSY, a relational database management system for protein structure and chemical shift analysis

    Science.gov (United States)

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo

    2012-01-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu. PMID:22903636

  16. PACSY, a relational database management system for protein structure and chemical shift analysis

    International Nuclear Information System (INIS)

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L.

    2012-01-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.

  17. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  18. Hearing Impairment Affects Dementia Incidence. An Analysis Based on Longitudinal Health Claims Data in Germany

    Science.gov (United States)

    Teipel, Stefan; Óvári, Attila; Kilimann, Ingo; Witt, Gabriele; Doblhammer, Gabriele

    2016-01-01

    Recent research has revealed an association between hearing impairment and dementia. The objective of this study is to determine the effect of hearing impairment on dementia incidence in a longitudinal study, and whether ear, nose, and throat (ENT) specialist care, care level, institutionalization, or depression mediates or moderates this pathway. The present study used a longitudinal sample of 154,783 persons aged 65 and older from claims data of the largest German health insurer, containing 14,602 incident dementia diagnoses between 2006 and 2010. Dementia and hearing impairment diagnoses were defined according to International Classification of Diseases, Tenth Revision, codes. We used a Kaplan-Meier estimator and Cox proportional hazards models to explore the effect of hearing impairment on dementia incidence, controlling for ENT specialist care, care level, institutionalization, and depression. Gender, age, and comorbidities were controlled for as potential confounders. Patients with bilateral hearing impairment (HR = 1.43) had higher risks of dementia incidence than patients without hearing impairment. We found no significant effect for unilateral hearing impairment or other diseases of the ear. The effect of hearing impairment was only partly mediated through ENT specialist utilization. Significant interactions between hearing impairment and specialist care, care level, and institutionalization, respectively, indicated moderating effects. We discuss possible explanations for these effects. This study underlines the importance of the association between hearing impairment and dementia. Preserving hearing ability may maintain social participation and may reduce the burden associated with dementia. The particular impact of hearing aid use should be the subject of further investigations, as it offers potential for intervention on the pathway to dementia. PMID:27391486
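
    For readers unfamiliar with the modelling step, the sketch below shows a minimal Cox proportional hazards fit of the kind described (dementia incidence versus hearing impairment, adjusted for covariates), using the lifelines package on invented toy data. Column names and values are assumptions for illustration, not the German claims data.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Invented stand-in for the claims data: follow-up time in years, an
# incident-dementia indicator, and covariates named in the abstract.
df = pd.DataFrame({
    "years_followed":    [4.0, 2.5, 5.0, 1.2, 3.8, 4.6, 2.1, 3.3],
    "dementia":          [0,   1,   0,   1,   0,   1,   0,   1],
    "bilateral_hearing": [0,   1,   0,   1,   1,   0,   0,   1],
    "age":               [70,  82,  68,  79,  75,  84,  72,  88],
    "depression":        [0,   1,   0,   0,   1,   1,   0,   1],
})

# Cox proportional hazards regression: hazard of incident dementia as a
# function of bilateral hearing impairment, adjusted for age and depression.
cph = CoxPHFitter()
cph.fit(df, duration_col="years_followed", event_col="dementia")
cph.print_summary()  # the exp(coef) column gives the hazard ratios
```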

  19. Analysis of Landslide Hazard Impact Using the Landslide Database for Germany

    Science.gov (United States)

    Klose, M.; Damm, B.

    2014-12-01

    The Federal Republic of Germany has long been among the few European countries that lack a national landslide database. Systematic collection and inventorying of landslide data has a considerable research history in Germany, but one focused on the development of databases with only local or regional coverage. This has changed in recent years with the launch of a database initiative aimed at closing the data gap existing at the national level. The present contribution reports on this project, which is based on a landslide database that has evolved over the last 15 years into a database covering large parts of Germany. A strategy of systematic retrieval, extraction, and fusion of landslide data is at the heart of the methodology, providing the basis for a database with broad potential for application. The database offers a data pool of more than 4,200 landslide data sets with over 13,000 single data files and dates back to the 12th century. All types of landslides are covered by the database, which stores not only core attributes but also various complementary data, including data on landslide causes, impacts, and mitigation. The current database migration to PostgreSQL/PostGIS is focused on unlocking the full scientific potential of the database, while enabling data sharing and knowledge transfer via a web GIS platform. In this contribution, the goals and research strategy of the database project are highlighted first, with a summary of best practices in database development providing perspective. Next, the focus is on key aspects of the methodology, followed by the results of different case studies in the German Central Uplands. The case study results exemplify database application in the analysis of vulnerability to landslides, impact statistics, and hazard or cost modeling.

  20. Utilization and Expenditure of Hospital Admission in Patients with Autism Spectrum Disorder: National Health Insurance Claims Database Analysis

    Science.gov (United States)

    Lin, Jin-Ding; Hung, Wen-Jiu; Lin, Lan-Ping; Lai, Chia-Im

    2011-01-01

    Few studies provide information on health care access and utilization by people with autism spectrum disorders (ASD). The present study describes a general profile of hospital admissions and medical costs among people with ASD, and analyzes the determinants of medical cost. A retrospective study was employed to analyze…

  1. Land Condition Trend Analysis Avian Database: Ecological Guild-based Summaries

    National Research Council Canada - National Science Library

    Schreiber, Eric

    1998-01-01

    Land Condition Trend Analysis (LCTA) bird database documentation capabilities are often limited to the generation of installation-wide species checklists, estimates of relative abundance, and evidence of breeding activity...

  2. Cost benefit analysis of power plant database integration

    International Nuclear Information System (INIS)

    Wilber, B.E.; Cimento, A.; Stuart, R.

    1988-01-01

    A cost-benefit analysis of plant-wide data integration allows utility management to evaluate integration and automation benefits from an economic perspective. With this evaluation, the utility can determine both the quantitative and qualitative savings that can be expected from data integration. The cost-benefit analysis is then a planning tool that helps the utility develop a focused long-term implementation strategy that will yield significant near-term benefits. This paper presents a flexible cost-benefit analysis methodology that is both simple to use and yields accurate, verifiable results. Included in this paper are a list of parameters to consider, a procedure for performing the cost-savings analysis, and samples of this procedure when applied to a utility. A case study is presented involving a specific utility where this procedure was applied; that utility's uses of the cost-benefit analysis are also described

  3. Distribution and drivers of costs in type 2 diabetes mellitus treated with oral hypoglycemic agents: a retrospective claims data analysis.

    Science.gov (United States)

    Bron, Morgan; Guerin, Annie; Latremouille-Viau, Dominick; Ionescu-Ittu, Raluca; Viswanathan, Prabhakar; Lopez, Claudia; Wu, Eric Q

    2014-09-01

    To describe the distribution of costs and to identify the drivers of high costs among adult patients with type 2 diabetes mellitus (T2DM) receiving oral hypoglycemic agents. T2DM patients using oral hypoglycemic agents and having HbA1c test data were identified from the Truven MarketScan databases of Commercial and Medicare Supplemental insurance claims (2004-2010). All-cause and diabetes-related annual direct healthcare costs were measured and reported by cost component. The 25% most costly patients in the study sample were defined as high-cost patients. Drivers of high costs were identified in multivariate logistic regressions. Total 1-year all-cause costs for the 4104 study patients were $55,599,311 (mean cost per patient = $13,548). Diabetes-related costs accounted for 33.8% of all-cause costs (mean cost per patient = $4583). Medical service costs accounted for the majority of all-cause and diabetes-related total costs (63.7% and 59.5%, respectively), with a minority of patients incurring >80% of these costs (23.5% and 14.7%, respectively). Within the medical claims, inpatient admission for diabetes complications was the strongest cost driver for both all-cause (OR = 13.5, 95% CI = 8.1-23.6) and diabetes-related costs (OR = 9.7, 95% CI = 6.3-15.1), with macrovascular complications accounting for most inpatient admissions. Other cost drivers included heavier hypoglycemic agent use, diabetes complications, and chronic diseases. The study reports a conservative estimate of the share of diabetes-related costs in total costs. The findings of this study apply mainly to T2DM patients under 65 years of age. Among the T2DM patients receiving oral hypoglycemic agents, 23.5% of patients incurred 80% of the all-cause healthcare costs, with these costs being driven by inpatient admissions, complications of diabetes, and chronic diseases. Interventions targeting inpatient admissions and/or complications of diabetes may contribute to the decrease of the

  4. Development of a computational database for application in Probabilistic Safety Analysis of nuclear research reactors

    International Nuclear Information System (INIS)

    Macedo, Vagner dos Santos

    2016-01-01

    The objective of this work is to present the computational database that was developed to store technical information and process data on component operation, failure, and maintenance for the nuclear research reactors located at the Nuclear and Energy Research Institute (Instituto de Pesquisas Energéticas e Nucleares, IPEN), in São Paulo, Brazil. Data extracted from this database may be applied in the Probabilistic Safety Analysis of these research reactors or in less complex quantitative assessments related to safety, reliability, availability, and maintainability of these facilities. This database may be accessed by users of the corporate network, named IPEN intranet. Professionals who require access to the database must be duly registered by the system administrator so that they will be able to consult and handle the information. The logical model adopted to represent the database structure is an entity-relationship model, in accordance with the protocols installed on the IPEN intranet. The open-source relational database management system MySQL, which is based on the Structured Query Language (SQL), was used in the development of this work. The PHP programming language was adopted to allow users to handle the database. Finally, the main result of this work was the creation of a web application for the component reliability database, named PSADB, specifically developed for the research reactors of IPEN; furthermore, the database management system provides relevant information efficiently. (author)
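
    A minimal sketch of the sort of component-reliability table and query such a database supports is shown below. The table layout, column names, and records are hypothetical illustrations, not the actual PSADB schema, and SQLite is used instead of MySQL so the snippet is self-contained.

```python
import sqlite3

# Rough sketch of a component-reliability table of the kind the abstract
# describes (operation, failure and maintenance records for PSA use).
# Names and values below are invented placeholders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE failure_event (
    component_id     TEXT,
    reactor          TEXT,
    failure_mode     TEXT,
    hours_in_service REAL
);
INSERT INTO failure_event VALUES
    ('PUMP-01',  'REACTOR-A', 'fails to start', 12000),
    ('PUMP-01',  'REACTOR-A', 'fails to run',   15500),
    ('VALVE-07', 'REACTOR-A', 'fails to open',  30000);
""")

# A typical PSA-oriented query: failure counts and a crude failure rate
# (failures per accumulated service hour) per component.
for row in conn.execute("""
    SELECT component_id,
           COUNT(*)                         AS n_failures,
           COUNT(*) / SUM(hours_in_service) AS failures_per_hour
    FROM failure_event
    GROUP BY component_id
"""):
    print(row)
```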

  5. Using Earthquake Analysis to Expand the Oklahoma Fault Database

    Science.gov (United States)

    Chang, J. C.; Evans, S. C.; Walter, J. I.

    2017-12-01

    The Oklahoma Geological Survey (OGS) is compiling a comprehensive Oklahoma Fault Database (OFD), which includes faults mapped in OGS publications, university thesis maps, and industry-contributed shapefiles. The OFD includes nearly 20,000 fault segments, but the work is far from complete. The OGS plans on incorporating other sources of data into the OFD, such as new faults from earthquake sequence analyses, geologic field mapping, active-source seismic surveys, and potential-fields modeling. A comparison of Oklahoma seismicity and the OFD reveals that earthquakes in the state appear to nucleate on mostly unmapped or unknown faults. Here, we present faults derived from earthquake sequence analyses. From 2015 to present, there has been a five-fold increase in real-time seismic stations in Oklahoma, which has greatly expanded and densified the state's seismic network. The current seismic network not only improves our threshold for locating weaker earthquakes, but also allows us to better constrain focal plane solutions (FPS) from first-motion analyses. Using nodal planes from the FPS, HypoDD relocation, and historic seismic data, we can elucidate these previously unmapped seismogenic faults. As the OFD is a primary resource for various scientific investigations, the inclusion of seismogenic faults improves further derivative studies, particularly with respect to seismic hazards. Our primary focus is on four areas of interest, which have had M5+ earthquakes in recent Oklahoma history: Pawnee (M5.8), Prague (M5.7), Fairview (M5.1), and Cushing (M5.0). Subsequent areas of interest will include seismically active, data-rich areas, such as the central and north-central parts of the state.

  6. Analysis of a global database containing tritium in precipitation

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R. L. [Savannah River Site (SRS), Aiken, SC (United States); Rabun, R. L. [Savannah River Site (SRS), Aiken, SC (United States); Heath, M. [Savannah River Site (SRS), Aiken, SC (United States)

    2016-02-17

    The International Atomic Energy Agency (IAEA) directed the collection of tritium in water samples from the mid-1950s to 2009. The Global Network of Isotopes in Precipitation (GNIP) examined the airborne movement of isotope releases to the environment, with the objective of collecting spatial data on the isotope content of precipitation across the globe. The initial motivation was to monitor atmospheric thermonuclear test fallout through tritium, deuterium, and oxygen isotope concentrations, but after the 1970s the focus changed to an observation network of stable hydrogen and oxygen isotope data for hydrologic studies. The GNIP database provides a wealth of tritium data collected over a long period of time. The work performed here primarily examined data features of the past 30 years (after much of the effect of above-ground nuclear testing in the late 1950s to early 1960s had decayed away), revealing potentially unknown tritium sources. The available data at GNIP were reorganized to allow for evaluation of trends in the data both temporally and spatially. Several interesting cases were revealed, including relatively high measured concentrations in the Atlantic and Indian Oceans, Russia, and Norway, as well as an increase in background concentration at a collector in South Korea after 2004. Recent data from the stations in the southeastern United States nearest to the Savannah River Site do not indicate any high values. Meteorological impacts have not been considered in this study. Further research to assess the likely source locations of interesting cases using transport simulations and/or literature searches is warranted.
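
    A hedged sketch of the kind of reorganization described, grouping tritium measurements by station and year to inspect temporal trends, is shown below with pandas. The column names and values are invented placeholders, not the GNIP file format.

```python
import pandas as pd

# Invented stand-in for a GNIP extract: one row per monthly precipitation
# sample, with station, date, and tritium activity (tritium units, TU).
records = pd.DataFrame({
    "station":    ["Vienna", "Vienna", "Ottawa", "Ottawa", "Ottawa"],
    "date":       pd.to_datetime(["1985-01-15", "1986-02-15",
                                  "1985-03-15", "1986-04-15", "1987-05-15"]),
    "tritium_tu": [18.2, 16.9, 25.1, 23.4, 22.0],
})

# Reorganize for temporal analysis: mean tritium per station per year,
# which makes long-term trends and anomalous years easy to spot.
records["year"] = records["date"].dt.year
annual = (records.groupby(["station", "year"])["tritium_tu"]
                 .mean()
                 .unstack("year"))
print(annual)
```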

  7. IMG: the integrated microbial genomes database and comparative analysis system

    Science.gov (United States)

    Markowitz, Victor M.; Chen, I-Min A.; Palaniappan, Krishna; Chu, Ken; Szeto, Ernest; Grechkin, Yuri; Ratner, Anna; Jacob, Biju; Huang, Jinghua; Williams, Peter; Huntemann, Marcel; Anderson, Iain; Mavromatis, Konstantinos; Ivanova, Natalia N.; Kyrpides, Nikos C.

    2012-01-01

    The Integrated Microbial Genomes (IMG) system serves as a community resource for comparative analysis of publicly available genomes in a comprehensive integrated context. IMG integrates publicly available draft and complete genomes from all three domains of life with a large number of plasmids and viruses. IMG provides tools and viewers for analyzing and reviewing the annotations of genes and genomes in a comparative context. IMG's data content and analytical capabilities have been continuously extended through regular updates since its first release in March 2005. IMG is available at http://img.jgi.doe.gov. Companion IMG systems provide support for expert review of genome annotations (IMG/ER: http://img.jgi.doe.gov/er), teaching courses and training in microbial genome analysis (IMG/EDU: http://img.jgi.doe.gov/edu) and analysis of genomes related to the Human Microbiome Project (IMG/HMP: http://www.hmpdacc-resources.org/img_hmp). PMID:22194640

  8. Model Adequacy Analysis of Matching Record Versions in Nosql Databases

    Directory of Open Access Journals (Sweden)

    E. V. Tsviashchenko

    2015-01-01

    Full Text Available The article investigates a model of matching record versions. The goal of this work is to analyse the adequacy of the model, which allows estimating the distribution of the time a user spends processing record versions and the distribution of the record-version count. The second variant of the model was used, in which the time a client needs to process record versions depends explicitly on the number of updates performed by other users between the sequential updates performed by the current client. In order to prove the adequacy of the model, a real experiment was conducted in a cloud cluster of 10 virtual nodes provided by DigitalOcean. Ubuntu Server 14.04 was used as the operating system (OS), and the NoSQL system Riak was chosen for the experiments. Riak versions 2.0 and later provide the "dotted version vectors" (DVV) option, an extension of the classic vector clock. Its use guarantees that the number of versions simultaneously stored in the DB will not exceed the number of clients operating on a record in parallel, which is very important when conducting experiments. The application was developed with the Java library provided by Riak, and the processes run directly on the nodes. Two records were used in the experiment: Z, the record whose versions are handled by clients, and RZ, a service record containing record-update counters. The application algorithm can be briefly described as follows: every client reads the versions of record Z, processes its updates using the RZ counters, and saves the processed record in the database while old versions are deleted from the DB. The client then rereads the RZ record and increments the update counters for the other clients. After that, the client rereads the Z record, saves the necessary statistics, and processes the results. In the case of a conflict arising from simultaneous updates of the RZ record, the client obtains all versions of that

  9. Domain fusion analysis by applying relational algebra to protein sequence and domain databases.

    Science.gov (United States)

    Truong, Kevin; Ikura, Mitsuhiko

    2003-05-06

    Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, TIGRFAMs, and amalgamated domain databases like InterPro continue to grow in size and quality, a computational method to perform domain fusion analysis that leverages these efforts will become increasingly powerful. This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From the scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypotheses. Results can be viewed at http://calcium.uhnres.utoronto.ca/pi. As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time.
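
    The core relational-algebra idea, find a protein carrying two domains (a fusion) and then look for organisms where the same two domains sit on separate proteins, can be expressed as ordinary SQL joins. The sketch below does this on a toy protein_domain table; all names and rows are invented for illustration and do not reflect the SWISS-PROT+TrEMBL or Pfam schemas.

```python
import sqlite3

# Toy protein-domain assignments: (organism, protein, domain).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE protein_domain (organism TEXT, protein TEXT, domain TEXT);
INSERT INTO protein_domain VALUES
    ('E_coli',       'P_fused', 'DomA'),
    ('E_coli',       'P_fused', 'DomB'),
    ('S_cerevisiae', 'P1',      'DomA'),
    ('S_cerevisiae', 'P2',      'DomB');
""")

# Step 1 (self-join): domain pairs that co-occur on one protein, i.e. fusions.
# Step 2: separate proteins elsewhere carrying those two domains individually
# are predicted to be functionally linked.
query = """
WITH fusions AS (
    SELECT a.domain AS dom1, b.domain AS dom2
    FROM protein_domain a
    JOIN protein_domain b
      ON a.protein = b.protein AND a.organism = b.organism
     AND a.domain < b.domain
)
SELECT f.dom1, f.dom2, p1.organism, p1.protein AS partner1, p2.protein AS partner2
FROM fusions f
JOIN protein_domain p1 ON p1.domain = f.dom1
JOIN protein_domain p2 ON p2.domain = f.dom2
                      AND p2.organism = p1.organism
                      AND p2.protein <> p1.protein;
"""
for row in conn.execute(query):
    print(row)  # ('DomA', 'DomB', 'S_cerevisiae', 'P1', 'P2')
```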

  10. A human friendly reporting and database system for brain PET analysis

    International Nuclear Information System (INIS)

    Jamzad, M.; Ishii, Kenji; Toyama, Hinako; Senda, Michio

    1996-01-01

    We have developed a human-friendly reporting and database system for clinical brain PET (positron emission tomography) scans, which enables statistical analysis of the qualitative information obtained from image interpretation. Our system consists of a Brain PET Data (Input) Tool and a Report Writing Tool. In the Brain PET Data Tool, findings and interpretations are input by selecting menu icons in a window panel instead of writing free text. This method of input enables on-line data entry into, and updating of, the database by means of pre-defined consistent terms, which facilitates statistical data analysis. The Report Writing Tool generates a one-page report of natural English sentences semi-automatically by using the above input information and the patient information obtained from our PET center's main database. It also has a function for selecting keywords from the report text, so that a set of keywords can be saved in the database for further analysis. By means of this system, we can store the data related to patient information and the visual interpretation of the PET examination while writing clinical reports in daily work. The database files in our system can be accessed by means of commercially available database software. We have used the 4th Dimension database, which runs on a Macintosh computer, and analyzed 95 cases of 18F-FDG brain PET studies. The results showed high specificity of parietal hypometabolism for Alzheimer's patients. (author)

  11. Marine Jurisdictions Database

    National Research Council Canada - National Science Library

    Goldsmith, Roger

    1998-01-01

    The purpose of this project was to take the data gathered for the Maritime Claims chart and create a Maritime Jurisdictions digital database suitable for use with oceanographic mission planning objectives...

  12. DEAP: A Database for Emotion Analysis Using Physiological Signals

    NARCIS (Netherlands)

    Koelstra, Sander; Mühl, C.; Soleymani, Mohammad; Lee, Jung Seok; Yazdani, Ashkan; Ebrahimi, Touradj; Pun, Thierry; Nijholt, Antinus; Patras, Ioannis

    2012-01-01

    We present a multimodal dataset for the analysis of human affective states. The electroencephalogram (EEG) and peripheral physiological signals of 32 participants were recorded as each watched 40 one-minute long excerpts of music videos. Participants rated each video in terms of the levels of

  13. A database analysis of information on multiply charged ions

    International Nuclear Information System (INIS)

    Delcroix, J.L.

    1989-01-01

    A statistical analysis of data related to multiply charged ions is performed in the GAPHYOR database: overall statistics by ionization degree from q=1 to q=99, 'historical' development from 1975 to 1987, and the distribution (for q ≥ 5) over physical processes (energy levels, charge exchange, ...) and chemical elements

  14. Data-base tools for enhanced analysis of TMX-U data

    International Nuclear Information System (INIS)

    Stewart, M.E.; Carter, M.R.; Casper, T.A.; Meyer, W.H.; Perkins, D.E.; Whitney, D.M.

    1986-01-01

    The authors use a commercial data-base software package to create several data-base products that enhance the ability of experimental physicists to analyze data from the TMX-U experiment. This software resides on a Dec-20 computer in M-Division's user service center (USC), where data can be analyzed separately from the main acquisition computers. When these data-base tools are combined with interactive data analysis programs, physicists can perform automated (batch-style) processing or interactive data analysis on the computers in the USC or on the supercomputers of the NMFECC, in addition to the normal processing done on the acquisition system. One data-base tool provides highly reduced data for searching and correlation analysis of several diagnostic signals for a single shot or many shots. A second data-base tool provides retrieval and storage of unreduced data for detailed analysis of one or more diagnostic signals. The authors report how these data-base tools form the core of an evolving off-line data-analysis environment on the USC computers

  15. Costs of conservative management of early-stage prostate cancer compared to radical prostatectomy–a claims data analysis

    Directory of Open Access Journals (Sweden)

    Alina Brandes

    2016-11-01

    Full Text Available Abstract Background Due to widespread PSA testing, incidence rates of localized prostate cancer are increasing, but curative treatment is often not required. Overtreatment imposes a substantial economic burden on health care systems. We compared the direct medical costs of conservative management and radical therapy for the management of early-stage prostate cancer in routine care. Methods An observational study design is chosen, based on claims data of a German statutory health insurance fund for the years 2008-2011. Three hundred fifty-three age-matched men diagnosed with prostate cancer and treated with conservative management or radical prostatectomy are included. Individuals with diagnoses of metastases or treatment of advanced prostate cancer are excluded. In an excess-cost approach, direct medical costs are considered from an insured-community perspective for in- and outpatient care, pharmaceuticals, physiotherapy, and assistive technologies. Generalized linear models adjust for comorbidity by Charlson comorbidity score, and the recycled-predictions method calculates per capita costs per treatment strategy. Results After a follow-up of 2.5 years, per capita costs of conservative management are €6611 lower than costs of prostatectomy ([−9734; −3547], p < 0.0001). Complications increase costs of assistive technologies by 30% (p = 0.0182), but do not influence any other costs. Results are robust to cost outliers and incidence of prostate cancer diagnosis. The short time horizon does not allow assessing long-term consequences of conservative management. Conclusions At a time horizon of 2.5 years, conservative management is preferable to radical prostatectomy in terms of costs. Claims data analysis is limited in the selection of comparable treatment groups, as clinical information is scarce and bias due to non-randomization can only be partly mitigated by matching and confounder adjustment.
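
    As a hedged sketch of the analysis pattern named in the abstract (a generalized linear model adjusted for Charlson score, followed by recycled predictions to obtain adjusted per-capita costs per strategy), the snippet below uses statsmodels on simulated data. Variable names, the Gamma/log-link choice, and all numbers are assumptions for illustration, not the study's claims data or model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated claims extract: treatment (1 = radical prostatectomy,
# 0 = conservative management), Charlson score, and 2.5-year costs.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "prostatectomy": rng.integers(0, 2, n),
    "charlson": rng.integers(0, 5, n),
})
df["cost"] = np.exp(8.5 + 0.6 * df["prostatectomy"] + 0.2 * df["charlson"]
                    + rng.normal(0, 0.3, n))

# Gamma GLM with log link, a common choice for right-skewed cost data.
model = smf.glm("cost ~ prostatectomy + charlson", data=df,
                family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# Recycled predictions: predict everyone's cost under each strategy and
# average, holding the observed comorbidity distribution fixed.
as_if = df.copy()
as_if["prostatectomy"] = 0
cost_conservative = model.predict(as_if).mean()
as_if["prostatectomy"] = 1
cost_surgery = model.predict(as_if).mean()
print(f"adjusted per-capita difference: {cost_surgery - cost_conservative:.0f}")
```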

  16. Subgroup Analysis of Trials Is Rarely Easy (SATIRE: a study protocol for a systematic review to characterize the analysis, reporting, and claim of subgroup effects in randomized trials

    Directory of Open Access Journals (Sweden)

    Malaga German

    2009-11-01

    Full Text Available Abstract Background Subgroup analyses in randomized trials examine whether effects of interventions differ between subgroups of study populations according to characteristics of patients or interventions. However, findings from subgroup analyses may be misleading, potentially resulting in suboptimal clinical and health decision making. Few studies have investigated the reporting and conduct of subgroup analyses, and a number of important questions remain unanswered. The objectives of this study are: (1) to describe the reporting of subgroup analyses and claims of subgroup effects in randomized controlled trials, (2) to assess study characteristics associated with reporting of subgroup analyses and with claims of subgroup effects, and (3) to examine the analysis and interpretation of subgroup effects for each study's primary outcome. Methods We will conduct a systematic review of 464 randomized controlled human trials published in 2007 in the 118 Core Clinical Journals defined by the National Library of Medicine. We will randomly select journal articles, stratified in a 1:1 ratio by higher-impact versus lower-impact journals. According to 2007 ISI total citations, we consider the New England Journal of Medicine, JAMA, Lancet, Annals of Internal Medicine, and BMJ to be higher-impact journals. Teams of two reviewers will independently screen full texts of reports for eligibility and abstract data, using standardized, pilot-tested extraction forms. We will conduct univariable and multivariable logistic regression analyses to examine the association of pre-specified study characteristics with reporting of subgroup analyses and with claims of subgroup effects for the primary and any other outcomes. Discussion A clear understanding of subgroup analyses, as currently conducted and reported in published randomized controlled trials, will reveal both strengths and weaknesses of this practice. Our findings will contribute to a set of recommendations to optimize

  17. Scanning an individual monitoring database for multiple occurrences using bi-gram analysis

    International Nuclear Information System (INIS)

    Van Dijk, J. W. E.

    2007-01-01

    Maintaining the integrity of the databases is one of the important aspects of quality assurance at individual monitoring services and national dose registers. This paper presents a method for finding and preventing duplicate entries in such databases, which can occur, e.g., because of variable spelling or misspelling of a name. The method is based on bi-gram text analysis techniques. The method can also be used for retrieving dose data in historical databases in the framework of dose reconstruction efforts for persons of whom the spelling of the name as originally entered, possibly decades ago, is uncertain. (authors)
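
    A minimal, standard-library sketch of the bi-gram idea is given below: names are reduced to sets of character bi-grams and compared with a Dice similarity score, and pairs above a threshold are flagged as possible duplicates. The threshold and example names are illustrative assumptions, not values from the paper.

```python
def bigrams(name: str) -> set:
    """Character bi-grams of a normalized name, e.g. 'jan' -> {'ja', 'an'}."""
    s = "".join(name.lower().split())
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dice_similarity(a: str, b: str) -> float:
    """Dice coefficient of the two bi-gram sets (1.0 = identical)."""
    ba, bb = bigrams(a), bigrams(b)
    if not ba or not bb:
        return 0.0
    return 2 * len(ba & bb) / (len(ba) + len(bb))

# Scan a register for pairs whose names look like spelling variants.
register = ["Jansen, P.", "Janssen, P.", "Smith, A.", "Smyth, A.", "Li, Q."]
THRESHOLD = 0.7  # illustrative cut-off; tune against known duplicates
for i, x in enumerate(register):
    for y in register[i + 1:]:
        score = dice_similarity(x, y)
        if score >= THRESHOLD:
            print(f"possible duplicate: {x!r} ~ {y!r} (similarity {score:.2f})")
```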

  18. What can we learn from patient claims? - A retrospective analysis of incidence and patterns of adverse events after orthopaedic procedures in Sweden

    Directory of Open Access Journals (Sweden)

    Öhrn Annica

    2012-01-01

    Full Text Available Abstract Background Objective data on the incidence and pattern of adverse events after orthopaedic surgical procedures remain scarce, secondary to the reluctance toward comprehensive reporting of surgical complications. The aim of this study was to analyze the nature of adverse events after orthopaedic surgery reported to a national database for patient claims in Sweden. Methods In this retrospective review, data from two Swedish national databases covering a 4-year period were analyzed: the "County Councils' Mutual Insurance Company", a national no-fault insurance system for patient claims, and the "National Patient Register at the National Board of Health and Welfare". Results A total of 6,029 patient claims filed after orthopaedic surgery were assessed during the study period. Of those, 3,336 (55%) were determined to be adverse events, which received financial compensation. Hospital-acquired infections and sepsis were the most common causes of adverse events (n = 741; 22%). The surgical procedure that caused the highest rate of adverse events was "decompression of spinal cord and nerve roots" (code ABC**), with 168 adverse events in 17,507 hospital discharges (1%). One in five (36 of 168; 21.4%) injured patients was seriously disabled or died. Conclusions We conclude that patients undergoing spinal surgery run the highest risk of being severely injured and that these patients also experienced a high degree of serious disability. The most common adverse event was related to hospital-acquired infections. Claims data obtained in a no-fault system have a high potential for identifying adverse events and learning from them.

  19. Comparative analysis of perioperative complications between a multicenter prospective cervical deformity database and the Nationwide Inpatient Sample database.

    Science.gov (United States)

    Passias, Peter G; Horn, Samantha R; Jalai, Cyrus M; Poorman, Gregory; Bono, Olivia J; Ramchandran, Subaraman; Smith, Justin S; Scheer, Justin K; Sciubba, Daniel M; Hamilton, D Kojo; Mundis, Gregory; Oh, Cheongeun; Klineberg, Eric O; Lafage, Virginie; Shaffrey, Christopher I; Ames, Christopher P

    2017-11-01

    Complication rates for adult cervical deformity are poorly characterized given the complexity and heterogeneity of cases. To compare perioperative complication rates following adult cervical deformity corrective surgery between a prospective multicenter database for patients with cervical deformity (PCD) and the Nationwide Inpatient Sample (NIS). Retrospective review of prospective databases. A total of 11,501 adult patients with cervical deformity (11,379 patients from the NIS and 122 patients from the PCD database). Perioperative medical and surgical complications. The NIS was queried (2001-2013) for cervical deformity discharges for patients ≥18 years undergoing cervical fusions using International Classification of Disease, Ninth Revision (ICD-9) coding. Patients ≥18 years from the PCD database (2013-2015) were selected. Equivalent complications were identified and rates were compared between the two databases, with Bonferroni correction applied. A total of 11,379 patients from the NIS database and 122 patients from the PCD database were identified. Patients from the PCD database were older (62.49 vs. 55.15 years) than patients from the NIS database. The PCD database had an increased risk of reporting overall complications relative to the NIS (odds ratio: 2.81, confidence interval: 1.81-4.38). Only device-related complications were greater in the NIS (7.1% vs. 1.1%, p=.007). Patients from the PCD database displayed higher rates of peripheral vascular (0.8% vs. 0.1%, p=.001) and gastrointestinal (GI) (2.5% vs. 0.2%) complications, while rates of other complications did not differ significantly between the databases (p>.004). Based on surgical approach, the PCD reported higher GI and neurologic complication rates for combined anterior-posterior procedures. Overall, the PCD database revealed higher overall and individual complication rates and higher data granularity. The nationwide database may underestimate complications of patients with adult cervical deformity (ACD), particularly in regard to perioperative surgical details, owing to coding and deformity generalizations. The surgeon-maintained database

  20. National Geo-Database for Biofuel Simulations and Regional Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Izaurralde, Roberto C.; Zhang, Xuesong; Sahajpal, Ritvik; Manowitz, David H.

    2012-04-01

    performance of EPIC and, when necessary, improve its parameterization. We investigated three scenarios. In the first, we simulated a historical (current) baseline scenario composed mainly of corn-, soybean-, and wheat-based rotations as grown on existing croplands east of the Rocky Mountains in 30 states. In the second scenario, we simulated a modified baseline in which we harvested corn and wheat residues to supply feedstocks to potential cellulosic ethanol biorefineries distributed within the study area. In the third scenario, we simulated the productivity of perennial cropping systems such as switchgrass or perennial mixtures grown on either marginal or Conservation Reserve Program (CRP) lands. In all cases we evaluated the environmental impacts (e.g., soil carbon changes, soil erosion, nitrate leaching, etc.) associated with the practices. In summary, we have reported on the development of a spatially explicit national geodatabase for conducting biofuel simulation studies and provided initial simulation results on the potential of annual and perennial cropping systems to serve as feedstocks for the production of cellulosic ethanol. To accomplish this, we employed sophisticated spatial analysis methods in combination with the process-based biogeochemical model EPIC. This work provided the opportunity to test the hypothesis that marginal lands can serve as sources of cellulosic feedstocks and thus help avoid potential conflicts between bioenergy and food production systems. This work, we believe, opens the door for further analysis of the characteristics of cellulosic feedstocks as major contributors to the development of a sustainable bioenergy economy.

  1. Expectations in the field of the internet and health: an analysis of claims about social networking sites in clinical literature.

    Science.gov (United States)

    Koteyko, Nelya; Hunt, Daniel; Gunter, Barrie

    2015-03-01

    This article adopts a critical sociological perspective to examine the expectations surrounding the uses of social networking sites (SNSs) articulated in the domain of clinical literature. This emerging body of articles and commentaries responds to the recent significant growth in SNS use, and constitutes a venue in which the meanings of SNSs and their relation to health are negotiated. Our analysis indicates how clinical writing configures the role of SNSs in health care through a range of metaphorical constructions that frame SNSs as a tool, a conduit for information and a traversable space. The use of such metaphors serves not only to describe the new affordances offered by SNSs but also posits distinct lay and professional practices, while reviving a range of celebratory claims about the Internet and health critiqued in sociological literature. These metaphorical descriptions characterise SNS content as essentially controllable by autonomous users while reiterating existing arguments that e-health is both inherently empowering and risky. Our analysis calls for a close attention to these understandings of SNSs as they have the potential to shape future online initiatives, most notably by anticipating successful professional interventions while marginalising the factors that influence users' online and offline practices and contexts. © 2015 The Authors. Sociology of Health & Illness published by John Wiley & Sons Ltd on behalf of Foundation for Sociology of Health & Illness.

  2. Expectations in the field of the Internet and health: an analysis of claims about social networking sites in clinical literature

    Science.gov (United States)

    Koteyko, Nelya; Hunt, Daniel; Gunter, Barrie

    2015-01-01

    This article adopts a critical sociological perspective to examine the expectations surrounding the uses of social networking sites (SNSs) articulated in the domain of clinical literature. This emerging body of articles and commentaries responds to the recent significant growth in SNS use, and constitutes a venue in which the meanings of SNSs and their relation to health are negotiated. Our analysis indicates how clinical writing configures the role of SNSs in health care through a range of metaphorical constructions that frame SNSs as a tool, a conduit for information and a traversable space. The use of such metaphors serves not only to describe the new affordances offered by SNSs but also posits distinct lay and professional practices, while reviving a range of celebratory claims about the Internet and health critiqued in sociological literature. These metaphorical descriptions characterise SNS content as essentially controllable by autonomous users while reiterating existing arguments that e-health is both inherently empowering and risky. Our analysis calls for a close attention to these understandings of SNSs as they have the potential to shape future online initiatives, most notably by anticipating successful professional interventions while marginalising the factors that influence users’ online and offline practices and contexts. PMID:25847533

  3. Risk of new acute myocardial infarction hospitalization associated with use of oral and parenteral non-steroidal anti-inflammation drugs (NSAIDs: a case-crossover study of Taiwan's National Health Insurance claims database and review of current evidence

    Directory of Open Access Journals (Sweden)

    Shau Wen-Yi

    2012-02-01

    Full Text Available Abstract Background Previous studies have documented the increased cardiovascular risk associated with the use of some nonsteroidal anti-inflammatory drugs (NSAIDs). Despite this, many old NSAIDs are still prescribed worldwide. Most of the studies to date have focused on specific oral drugs or been limited by the number of cases examined. We studied the risk of new acute myocardial infarction (AMI) hospitalization with current use of a variety of oral and parenteral NSAIDs in a nationwide population, and compared our results with existing evidence. Methods We conducted a case-crossover study using Taiwan's National Health Insurance claims database, identifying patients with new AMI hospitalized in 2006. The 1-30 days and 91-120 days prior to the admission were defined as the case and matched control periods for each patient, respectively. Use of NSAIDs during the respective periods was compared using conditional logistic regression and adjusted for use of co-medications. Results 8,354 new AMI hospitalization patients fulfilled the study criteria. 14 oral and 3 parenteral NSAIDs were selected based on the drug utilization profile among 13.7 million NSAID users. The adjusted odds ratios, aOR (95% confidence interval), for the risk of AMI with current use of oral and parenteral non-selective NSAIDs were 1.42 (1.29, 1.56) and 3.35 (2.50, 4.47), respectively, and the risk was significantly greater for parenteral than for oral drugs (significant p for interaction). Conclusions The collective evidence revealed a tendency toward increased AMI risk with current use of some NSAIDs. A higher AMI risk associated with use of parenteral NSAIDs was observed in the present study. Ketorolac had the highest associated risk among both the oral and parenteral NSAIDs studied. Though further investigation to confirm the association is warranted, prescribing physicians and the general public should be cautious about the potential risk of AMI when using NSAIDs.
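
    For a case-crossover design with one case window and one control window per patient and a binary exposure, the conditional-logistic odds ratio reduces to the ratio of discordant exposure patterns. The sketch below computes that matched-pair estimate with a Wald-type confidence interval on invented counts; it is a simplified illustration of the design, not the study's adjusted analysis.

```python
import math

# Each record: (used_nsaid_in_case_window, used_nsaid_in_control_window)
# for one patient; case window = 1-30 days before AMI admission,
# control window = 91-120 days before admission. Invented toy counts.
patients = [(1, 0)] * 60 + [(0, 1)] * 40 + [(1, 1)] * 25 + [(0, 0)] * 75

case_only = sum(1 for c, k in patients if c == 1 and k == 0)
control_only = sum(1 for c, k in patients if c == 0 and k == 1)

# Matched-pair (conditional logistic) odds ratio for a binary exposure,
# with a Wald-type 95% confidence interval on the log scale.
odds_ratio = case_only / control_only
se_log_or = math.sqrt(1 / case_only + 1 / control_only)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```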

  4. Database tools for enhanced analysis of TMX-U data. Revision 1

    International Nuclear Information System (INIS)

    Stewart, M.E.; Carter, M.R.; Casper, T.A.; Meyer, W.H.; Perkins, D.E.; Whitney, D.M.

    1986-01-01

    A commercial database software package has been used to create several databases and tools that assist and enhance the ability of experimental physicists to analyze data from the Tandem Mirror Experiment-Upgrade (TMX-U) experiment. This software runs on a DEC-20 computer in M-Division's User Service Center at Lawrence Livermore National Laboratory (LLNL), where data can be analyzed offline from the main TMX-U acquisition computers. When combined with interactive data analysis programs, these tools provide the capability to do batch-style processing or interactive data analysis on the computers in the USC or the supercomputers of the National Magnetic Fusion Energy Computer Center (NMFECC) in addition to the normal processing done by the TMX-U acquisition system. One database tool provides highly reduced data for searching and correlation analysis of several diagnostic signals within a single shot or over many shots. A second database tool provides retrieval and storage of unreduced data for use in detailed analysis of one or more diagnostic signals. We will show how these database tools form the core of an evolving offline data analysis environment on the USC computers

  5. Medicare Part D Claims Data

    Data.gov (United States)

    U.S. Department of Health & Human Services — This page contains information on Part D claims data for the purposes of research, analysis, reporting, and public health functions. These data will also be used to...

  6. Toward a public analysis database for LHC new physics searches using M ADA NALYSIS 5

    Science.gov (United States)

    Dumont, B.; Fuks, B.; Kraml, S.; Bein, S.; Chalons, G.; Conte, E.; Kulkarni, S.; Sengupta, D.; Wymant, C.

    2015-02-01

    We present the implementation, in the MadAnalysis 5 framework, of several ATLAS and CMS searches for supersymmetry in data recorded during the first run of the LHC. We provide extensive details on the validation of our implementations and propose to create a public analysis database within this framework.

  7. Administrative claims analysis of asthma-related health care utilization for patients who received inhaled corticosteroids with either montelukast or salmeterol as combination therapy.

    Science.gov (United States)

    Allen-Ramey, Felicia C; Bukstein, Don; Luskin, Allan; Sajjan, Shiva G; Markson, Leona E

    2006-05-01

    To compare asthma-related health care resource utilization among a matched cohort of asthma patients using inhaled corticosteroids (ICSs) plus either montelukast (MON) or salmeterol (SAL) as combination therapy for asthma, during a time prior to the availability of fixed-dose combinations of ICS/SAL. A retrospective analysis using the PHARMetrics patient-centric claims database was conducted for the period preceding the market introduction of combination fluticasone-SAL in September 2000. Patients had to meet the following criteria for inclusion in the study: they had to be between the ages of 4 and 55 years; they had to have been continuously enrolled for 2 years; they had to have initiated ICS/MON or ICS/SAL therapy between July 1, 1998, and June 30, 1999; and they had to have had either (a) a diagnosis of asthma (based on International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes of 493.xx) for 2 outpatient visits, 1 or more emergency department (ED) visits, or 1 or more hospitalizations within 1 year or (b) pharmacy claim records that contained a National Drug Code for an antiasthma medication (beta-agonist, theophylline, ICS, cromolyn, or leukotriene) 2 or more times within 1 year. ICS/MON and ICS/SAL patients were matched 1 to 1 on age and propensity score. Outcomes included asthma-related hospitalizations and ED visits with ICD-9-CM codes of 493.xx, and oral corticosteroid (OCS) fills and short-acting beta-agonist (SABA) fills. Multivariate regression analyses were performed. Subgroup analyses based on sequential or concurrent initiation of combination therapy were also conducted. A total of 1,216 patients were matched (ICS/MON = 608; ICS/SAL = 608). Decreased odds of ED visits and/or hospitalizations were observed with ICS/MON (adjusted odds ratio [OR] = 0.58; 95% confidence interval [CI], 0.35-0.98) versus ICS/SAL. The odds of postindex OCS fills were not different for ICS/MON and ICS/SAL patients (adjusted OR = 1.04; 95
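
    The matching step described (1-to-1 on age and propensity score) can be sketched as logistic-regression propensity scores followed by greedy nearest-neighbour matching, as below with scikit-learn and NumPy. The covariates, data, and matching rule are simplified assumptions for illustration, not the PHARMetrics analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented cohort: ages within the study's 4-55 year inclusion range plus a
# toy comorbidity count; treated = 1 for ICS + montelukast, 0 for ICS + salmeterol.
rng = np.random.default_rng(1)
n = 400
age = rng.integers(4, 56, n)
comorbidity = rng.integers(0, 4, n)
treated = rng.integers(0, 2, n)

# Step 1: propensity score = estimated P(treatment | covariates).
X = np.column_stack([age, comorbidity])
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: greedy 1:1 nearest-neighbour matching on the propensity score,
# without replacement (a simplification of the study's age + score match).
treated_idx = np.where(treated == 1)[0]
control_idx = list(np.where(treated == 0)[0])
pairs = []
for t in treated_idx:
    if not control_idx:
        break
    j = min(control_idx, key=lambda c: abs(ps[c] - ps[t]))
    pairs.append((t, j))
    control_idx.remove(j)

print(f"matched {len(pairs)} treated/control pairs")
```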

  8. rCAD: A Novel Database Schema for the Comparative Analysis of RNA.

    Science.gov (United States)

    Ozer, Stuart; Doshi, Kishore J; Xu, Weijia; Gutell, Robin R

    2011-12-31

    Beyond its direct involvement in protein synthesis with mRNA, tRNA, and rRNA, RNA is now being appreciated for its significance in the overall metabolism and regulation of the cell. Comparative analysis has been very effective in the identification and characterization of RNA molecules, including the accurate prediction of their secondary structure. We are developing an integrative, scalable data management and analysis system, the RNA Comparative Analysis Database (rCAD), implemented with SQL Server to support RNA comparative analysis. The platform-agnostic database schema of rCAD captures the essential relationships between the different dimensions of information for RNA comparative analysis datasets. The rCAD implementation enables a variety of comparative analysis manipulations with multiple integrated data dimensions for advanced RNA comparative analysis workflows. In this paper, we describe details of the rCAD schema design and illustrate its usefulness with two usage scenarios.
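
    To make the schema idea concrete, the toy sketch below stores aligned RNA sequences as (sequence, alignment column, nucleotide) rows and asks for the base composition of one alignment column, a simple instance of the column-wise comparative queries such a schema enables. The table and column names are hypothetical, and SQLite stands in for the SQL Server implementation described.

```python
import sqlite3

# Hypothetical rCAD-like table: one row per nucleotide per aligned column.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE alignment_cell (seq_id TEXT, col_index INTEGER, base TEXT);
INSERT INTO alignment_cell VALUES
    ('ecoli_16S', 42, 'G'), ('tth_16S', 42, 'G'), ('bsub_16S', 42, 'A'),
    ('ecoli_16S', 43, 'C'), ('tth_16S', 43, 'C'), ('bsub_16S', 43, 'C');
""")

# Base composition of alignment column 42 across all sequences -- the kind
# of per-column comparative statistic used when assessing conservation.
for base, count in conn.execute("""
    SELECT base, COUNT(*) FROM alignment_cell
    WHERE col_index = 42
    GROUP BY base ORDER BY COUNT(*) DESC
"""):
    print(base, count)
```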

  9. mESAdb: microRNA expression and sequence analysis database.

    Science.gov (United States)

    Kaya, Koray D; Karakülah, Gökhan; Yakicier, Cengiz M; Acar, Aybar C; Konu, Ozlen

    2011-01-01

    microRNA expression and sequence analysis database (http://konulab.fen.bilkent.edu.tr/mirna/) (mESAdb) is a regularly updated database for the multivariate analysis of sequences and expression of microRNAs from multiple taxa. mESAdb is modular and has a user interface implemented in PHP and JavaScript and coupled with statistical analysis and visualization packages written for the R language. The database primarily comprises mature microRNA sequences and their target data, along with selected human, mouse and zebrafish expression data sets. mESAdb analysis modules allow (i) mining of microRNA expression data sets for subsets of microRNAs selected manually or by motif; (ii) pair-wise multivariate analysis of expression data sets within and between taxa; and (iii) association of microRNA subsets with annotation databases, HUGE Navigator, KEGG and GO. The use of existing and customized R packages facilitates future addition of data sets and analysis tools. Furthermore, the ability to upload and analyze user-specified data sets makes mESAdb an interactive and expandable analysis tool for microRNA sequence and expression data.

  10. Academic impact of a public electronic health database: bibliometric analysis of studies using the general practice research database.

    Directory of Open Access Journals (Sweden)

    Yu-Chun Chen

    Full Text Available BACKGROUND: Studies that use electronic health databases as research material are becoming popular, but the influence of a single electronic health database has not yet been well investigated. The United Kingdom's General Practice Research Database (GPRD) is one of the few electronic health databases publicly available to academic researchers. This study analyzed studies that used the GPRD to demonstrate the scientific production and academic impact generated by a single public health database. METHODOLOGY AND FINDINGS: A total of 749 studies published between 1995 and 2009 with 'General Practice Research Database' as their topic, defined as GPRD studies, were extracted from Web of Science. By the end of 2009, the GPRD had attracted 1251 authors from 22 countries and been used extensively in 749 studies published in 193 journals across 58 study fields. Each GPRD study was cited 2.7 times by successive studies. Moreover, the total number of GPRD studies increased rapidly, and it is expected to reach 1500 by 2015, twice the number accumulated by the end of 2009. Since 17 of the most prolific authors (1.4% of all authors) contributed nearly half (47.9%) of GPRD studies, success in conducting GPRD studies may accumulate. The GPRD was used mainly in, but not limited to, the three study fields of "Pharmacology and Pharmacy", "General and Internal Medicine", and "Public, Environmental and Occupational Health". The UK and United States were the two most active regions for GPRD studies. One-third of GPRD studies were internationally co-authored. CONCLUSIONS: A public electronic health database such as the GPRD will promote scientific production in many ways. Data owners of electronic health databases at a national level should consider how to reduce access barriers and make data more available for research.

  11. Trends in compulsory licensing of pharmaceuticals since the Doha Declaration: a database analysis.

    Science.gov (United States)

    Beall, Reed; Kuhn, Randall

    2012-01-01

    It is now a decade since the World Trade Organization (WTO) adopted the "Declaration on the TRIPS Agreement and Public Health" at its 4th Ministerial Conference in Doha. Many anticipated that these actions would lead nations to claim compulsory licenses (CLs) for pharmaceutical products with greater regularity. A CL is the use of a patented innovation that has been licensed by a state without the permission of the patent title holder. Skeptics doubted that many CLs would occur, given political pressure against CL activity and continued health system weakness in poor countries. The subsequent decade has seen little systematic assessment of the Doha Declaration's impact. We assembled a database of all episodes in which a CL was publically entertained or announced by a WTO member state since 1995. Broad searches of CL activity were conducted using media, academic, and legal databases, yielding 34 potential CL episodes in 26 countries. Country- and product-specific searches were used to verify government participation, resulting in a final database of 24 verified CLs in 17 nations. We coded CL episodes in terms of outcome, national income, and disease group over three distinct periods of CL activity. Most CL episodes occurred between 2003 and 2005, involved drugs for HIV/AIDS, and occurred in upper-middle-income countries (UMICs). Aside from HIV/AIDS, few CL episodes involved communicable disease, and none occurred in least-developed or low-income countries. Given skepticism about the Doha Declaration's likely impact, we note the relatively high occurrence of CLs, yet CL activity has diminished markedly since 2006. While UMICs have high CL activity and strong incentives to use CLs compared to other countries, we note considerable countervailing pressures against CL use even in UMICs. We conclude that there is a low probability of continued CL activity. We highlight the need for further systematic evaluation of global health governance actions.

  12. Trends in compulsory licensing of pharmaceuticals since the Doha Declaration: a database analysis.

    Directory of Open Access Journals (Sweden)

    Reed Beall

    2012-01-01

    Full Text Available BACKGROUND: It is now a decade since the World Trade Organization (WTO) adopted the "Declaration on the TRIPS Agreement and Public Health" at its 4th Ministerial Conference in Doha. Many anticipated that these actions would lead nations to claim compulsory licenses (CLs) for pharmaceutical products with greater regularity. A CL is the use of a patented innovation that has been licensed by a state without the permission of the patent title holder. Skeptics doubted that many CLs would occur, given political pressure against CL activity and continued health system weakness in poor countries. The subsequent decade has seen little systematic assessment of the Doha Declaration's impact. METHODS AND FINDINGS: We assembled a database of all episodes in which a CL was publically entertained or announced by a WTO member state since 1995. Broad searches of CL activity were conducted using media, academic, and legal databases, yielding 34 potential CL episodes in 26 countries. Country- and product-specific searches were used to verify government participation, resulting in a final database of 24 verified CLs in 17 nations. We coded CL episodes in terms of outcome, national income, and disease group over three distinct periods of CL activity. Most CL episodes occurred between 2003 and 2005, involved drugs for HIV/AIDS, and occurred in upper-middle-income countries (UMICs). Aside from HIV/AIDS, few CL episodes involved communicable disease, and none occurred in least-developed or low-income countries. CONCLUSIONS: Given skepticism about the Doha Declaration's likely impact, we note the relatively high occurrence of CLs, yet CL activity has diminished markedly since 2006. While UMICs have high CL activity and strong incentives to use CLs compared to other countries, we note considerable countervailing pressures against CL use even in UMICs. We conclude that there is a low probability of continued CL activity. We highlight the need for further systematic

  13. Nuclear power plant control room crew task analysis database: SEEK system. Users manual

    International Nuclear Information System (INIS)

    Burgy, D.; Schroeder, L.

    1984-05-01

    The Crew Task Analysis SEEK Users Manual was prepared for the Office of Nuclear Regulatory Research of the US Nuclear Regulatory Commission. It is designed for use with the existing computerized Control Room Crew Task Analysis Database. The SEEK system consists of a PR1ME computer with its associated peripherals and software augmented by General Physics Corporation SEEK database management software. The SEEK software programs provide the Crew Task Database user with rapid access to any number of records desired. The software uses English-like sentences to allow the user to construct logical sorts and outputs of the task data. Given the multiple-associative nature of the database, users can directly access the data at the plant, operating sequence, task or element level - or any combination of these levels. A complete description of the crew task data contained in the database is presented in NUREG/CR-3371, Task Analysis of Nuclear Power Plant Control Room Crews (Volumes 1 and 2)

  14. PseudoMLSA: a database for multigenic sequence analysis of Pseudomonas species

    Directory of Open Access Journals (Sweden)

    Lalucat Jorge

    2010-04-01

    Full Text Available Abstract Background The genus Pseudomonas comprises more than 100 species of environmental, clinical, agricultural, and biotechnological interest. Although the recommended method for discriminating bacterial species is DNA-DNA hybridisation, alternative techniques based on multigenic sequence analysis are becoming a common practice in bacterial species discrimination studies. Since there is no general criterion for determining which genes are more useful for species resolution, the number of strains and genes analysed is increasing continuously. As a result, sequences of different genes are dispersed throughout several databases. This sequence information needs to be collected in a common database, in order to be useful for future identification-based projects. Description The PseudoMLSA Database is a comprehensive database of multiple gene sequences from strains of Pseudomonas species. The core of the database is composed of selected gene sequences from all Pseudomonas type strains validly assigned to the genus through 2008. The database is intended to be useful for MultiLocus Sequence Analysis (MLSA) procedures, for the identification and characterisation of any Pseudomonas bacterial isolate. The sequences are available for download via a direct connection to the National Center for Biotechnology Information (NCBI). Additionally, the database includes an online BLAST interface for flexible nucleotide queries and similarity searches with the user's datasets, and provides a user-friendly output for easily parsing, navigating, and analysing BLAST results. Conclusions The PseudoMLSA database amasses strains and sequence information of validly described Pseudomonas species, and allows free querying of the database via a user-friendly, web-based interface available at http://www.uib.es/microbiologiaBD/Welcome.html. The web-based platform enables easy retrieval at the strain or gene sequence information level, including references to published peer

  15. Academic Impact of a Public Electronic Health Database: Bibliometric Analysis of Studies Using the General Practice Research Database

    Science.gov (United States)

    Chen, Yu-Chun; Wu, Jau-Ching; Haschler, Ingo; Majeed, Azeem; Chen, Tzeng-Ji; Wetter, Thomas

    2011-01-01

    Background Studies that use electronic health databases as research material are becoming popular, but the influence of a single electronic health database has not yet been well investigated. The United Kingdom's General Practice Research Database (GPRD) is one of the few electronic health databases publicly available to academic researchers. This study analyzed studies that used the GPRD to demonstrate the scientific production and academic impact of a single public health database. Methodology and Findings A total of 749 studies published between 1995 and 2009 with ‘General Practice Research Database’ as their topics, defined as GPRD studies, were extracted from Web of Science. By the end of 2009, the GPRD had attracted 1251 authors from 22 countries and been used extensively in 749 studies published in 193 journals across 58 study fields. Each GPRD study was cited 2.7 times by successive studies. Moreover, the total number of GPRD studies increased rapidly, and it is expected to reach 1500 by 2015, twice the number accumulated by the end of 2009. Since 17 of the most prolific authors (1.4% of all authors) contributed nearly half (47.9%) of GPRD studies, success in conducting GPRD studies may accumulate. The GPRD was used mainly in, but not limited to, the three study fields of “Pharmacology and Pharmacy”, “General and Internal Medicine”, and “Public, Environmental and Occupational Health”. The UK and United States were the two most active regions of GPRD studies. One-third of GPRD studies were internationally co-authored. Conclusions A public electronic health database such as the GPRD will promote scientific production in many ways. Data owners of electronic health databases at a national level should consider how to reduce access barriers and to make data more available for research. PMID:21731733

  16. Update History of This Database - PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history of this database: the English archive site is opened; 2012/08/08, PGDBj Registered plant list, Marker list, QTL list, Plant DB link & Genome analysis methods is opened.

  17. Structural Design of HRA Database using generic task for Quantitative Analysis of Human Performance

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Hwan; Kim, Yo Chan; Choi, Sun Yeong; Park, Jin Kyun; Jung Won Dea [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    This paper describes the design of a generic-task-based HRA database for the quantitative analysis of human performance, intended to estimate the number of task conductions. Estimating the total number of task conductions by direct counting is not easy to realize, and its data collection framework is difficult to maintain. To resolve this problem, this paper suggests an indirect method and a database structure based on generic tasks that makes it possible to estimate the total number of task conductions from the instructions in the operating procedures of nuclear power plants. To reduce human errors, all information on the errors made by operators in the power plant should therefore be systematically collected and examined. Korea Atomic Energy Research Institute (KAERI) is carrying out research to develop a data collection framework for a Human Reliability Analysis (HRA) database that could be employed as a technical basis to generate human error probabilities (HEPs) and performance shaping factors (PSFs). As a result of the study, the essential table schema was designed for the generic task database, which stores generic tasks, procedure lists, task tree structures, and other supporting tables. Estimation of the number of task conductions from the operating procedures, as required for HEP estimation, was enabled through the generic task database and framework. To verify the applicability of the framework, a case study of simulated experiments was performed and analyzed using graphical user interfaces developed in this study.
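    As a purely illustrative sketch of the indirect estimation idea (all procedure names, task names, and counts below are hypothetical, not taken from the KAERI database), the number of opportunities for a generic task can be estimated from how often the task appears as an instruction in each operating procedure and how often each procedure was executed; an HEP is then the observed error count divided by that estimate:

    # Hypothetical example of estimating HEPs from generic-task opportunity counts
    # derived indirectly from operating-procedure instructions (names and numbers
    # are invented for illustration only).

    # Instructions per execution of each procedure, by generic task
    instructions_per_procedure = {
        "open_valve":   {"EOP-01": 4, "AOP-07": 2},
        "verify_alarm": {"EOP-01": 6, "AOP-07": 1},
    }
    # How often each procedure was executed during the data collection period
    procedure_executions = {"EOP-01": 120, "AOP-07": 45}
    # Observed error counts per generic task from the HRA data collection
    observed_errors = {"open_valve": 2, "verify_alarm": 1}

    def human_error_probability(task):
        # Opportunities = sum over procedures of (instructions per run * number of runs)
        opportunities = sum(count * procedure_executions[proc]
                            for proc, count in instructions_per_procedure[task].items())
        return observed_errors[task] / opportunities

    for task in observed_errors:
        print(task, round(human_error_probability(task), 5))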

  18. Structural Design of HRA Database using generic task for Quantitative Analysis of Human Performance

    International Nuclear Information System (INIS)

    Kim, Seung Hwan; Kim, Yo Chan; Choi, Sun Yeong; Park, Jin Kyun; Jung Won Dea

    2016-01-01

    This paper describes the design of a generic-task-based HRA database for the quantitative analysis of human performance, intended to estimate the number of task conductions. Estimating the total number of task conductions by direct counting is not easy to realize, and its data collection framework is difficult to maintain. To resolve this problem, this paper suggests an indirect method and a database structure based on generic tasks that makes it possible to estimate the total number of task conductions from the instructions in the operating procedures of nuclear power plants. To reduce human errors, all information on the errors made by operators in the power plant should therefore be systematically collected and examined. Korea Atomic Energy Research Institute (KAERI) is carrying out research to develop a data collection framework for a Human Reliability Analysis (HRA) database that could be employed as a technical basis to generate human error probabilities (HEPs) and performance shaping factors (PSFs). As a result of the study, the essential table schema was designed for the generic task database, which stores generic tasks, procedure lists, task tree structures, and other supporting tables. Estimation of the number of task conductions from the operating procedures, as required for HEP estimation, was enabled through the generic task database and framework. To verify the applicability of the framework, a case study of simulated experiments was performed and analyzed using graphical user interfaces developed in this study.

  19. Development of a Reference Database for Ion Beam Analysis. Report of a Coordinated Research Project on Reference Database for Ion Beam Analysis

    International Nuclear Information System (INIS)

    2015-12-01

    Ion beam analysis techniques are non-destructive analytical techniques used to identify the composition and provide elemental depth profiles in surface layers of materials. The applications of such techniques are diverse and include environmental control, cultural heritage and conservation, and fusion technologies. Their reliability and accuracy depend strongly on our knowledge of the nuclear reaction cross sections, and this publication describes the coordinated effort to measure, compile and evaluate cross section data relevant to these techniques and make these data available to the user community through a comprehensive online database. It includes detailed assessments of experimental cross sections as well as attempts to benchmark these data against appropriate integral measurements.

  20. Social Gerontology--Integrative and Territorial Aspects: A Citation Analysis of Subject Scatter and Database Coverage

    Science.gov (United States)

    Lasda Bergman, Elaine M.

    2011-01-01

    To determine the mix of resources used in social gerontology research, a citation analysis was conducted. A representative sample of citations was selected from three prominent gerontology journals and information was added to determine subject scatter and database coverage for the cited materials. Results indicate that a significant portion of…

  1. Validation and application of a physics database for fast reactor fuel cycle analysis

    International Nuclear Information System (INIS)

    McKnight, R.D.; Stillman, J.A.; Toppel, B.J.; Khalil, H.S.

    1994-01-01

    An effort has been made to automate the execution of fast reactor fuel cycle analysis, using EBR-II as a demonstration vehicle, and to validate the analysis results for application to the IFR closed fuel cycle demonstration at EBR-II and its fuel cycle facility. This effort has included: (1) the application of the standard ANL depletion codes to perform core-follow analyses for an extensive series of EBR-II runs, (2) incorporation of the EBR-II data into a physics database, (3) development and verification of software to update, maintain and verify the database files, (4) development and validation of fuel cycle models and methodology, (5) development and verification of software which utilizes this physics database to automate the application of the ANL depletion codes, methods and models to perform the core-follow analysis, and (6) validation studies of the ANL depletion codes and of their application in support of anticipated near-term operations in EBR-II and the Fuel Cycle Facility. Results of the validation tests indicate the physics database and associated analysis codes and procedures are adequate to predict required quantities in support of early phases of FCF operations

  2. Construction and analysis of a microsatellite-based database of european wheat varieties

    NARCIS (Netherlands)

    Röder, M.S.; Wendehake, K.; Korzun, V.; Bredemeijer, G.; Laborie, D.; Bertrand, L.; Isaac, P.; Vosman, B.

    2002-01-01

    A database of 502 recent European wheat varieties, mainly of winter type, was constructed using 19 wheat microsatellites and one secalin-specific marker. All datapoints were generated in at least two laboratories using different techniques for fragment analysis. An overall level of >99.5% accuracy

  3. Integration of the ATLAS tag database with data management and analysis components

    International Nuclear Information System (INIS)

    Cranshaw, J; Malon, D; Doyle, A T; Kenyon, M J; McGlone, H; Nicholson, C

    2008-01-01

    The ATLAS Tag Database is an event-level metadata system, designed to allow efficient identification and selection of interesting events for user analysis. By making first-level cuts using queries on a relational database, the size of an analysis input sample could be greatly reduced and thus the time taken for the analysis reduced. Deployment of such a Tag database is underway, but to be most useful it needs to be integrated with the distributed data management (DDM) and distributed analysis (DA) components. This means addressing the issue that the DDM system at ATLAS groups files into datasets for scalability and usability, whereas the Tag Database points to events in files. It also means setting up a system which could prepare a list of input events and use both the DDM and DA systems to run a set of jobs. The ATLAS Tag Navigator Tool (TNT) has been developed to address these issues in an integrated way and provide a tool that the average physicist can use. Here, the current status of this work is presented and areas of future work are highlighted

  4. VATE: VAlidation of high TEchnology based on large database analysis by learning machine

    NARCIS (Netherlands)

    Meldolesi, E; Van Soest, J; Alitto, A R; Autorino, R; Dinapoli, N; Dekker, A; Gambacorta, M A; Gatta, R; Tagliaferri, L; Damiani, A; Valentini, V

    2014-01-01

    The interaction between the implementation of new technologies and different outcomes can allow a broad range of research to be expanded. The purpose of this paper is to introduce the VAlidation of high TEchnology based on large database analysis by learning machine (VATE) project that aims to combine

  5. Integration of the ATLAS tag database with data management and analysis components

    Energy Technology Data Exchange (ETDEWEB)

    Cranshaw, J; Malon, D [Argonne National Laboratory, Argonne, IL 60439 (United States); Doyle, A T; Kenyon, M J; McGlone, H; Nicholson, C [Department of Physics and Astronomy, University of Glasgow, Glasgow, G12 8QQ, Scotland (United Kingdom)], E-mail: c.nicholson@physics.gla.ac.uk

    2008-07-15

    The ATLAS Tag Database is an event-level metadata system, designed to allow efficient identification and selection of interesting events for user analysis. By making first-level cuts using queries on a relational database, the size of an analysis input sample could be greatly reduced and thus the time taken for the analysis reduced. Deployment of such a Tag database is underway, but to be most useful it needs to be integrated with the distributed data management (DDM) and distributed analysis (DA) components. This means addressing the issue that the DDM system at ATLAS groups files into datasets for scalability and usability, whereas the Tag Database points to events in files. It also means setting up a system which could prepare a list of input events and use both the DDM and DA systems to run a set of jobs. The ATLAS Tag Navigator Tool (TNT) has been developed to address these issues in an integrated way and provide a tool that the average physicist can use. Here, the current status of this work is presented and areas of future work are highlighted.

  6. Analysis of the NIST database towards the composition of vulnerabilities in attack scenarios

    NARCIS (Netherlands)

    Nunes Leal Franqueira, V.; van Keulen, Maurice

    The composition of vulnerabilities in attack scenarios has been traditionally performed based on detailed pre- and post-conditions. Although very precise, this approach is dependent on human analysis, is time consuming, and not at all scalable. We investigate the NIST National Vulnerability Database

  7. Analysis of Documents Published in Scopus Database on Foreign Language Learning through Mobile Learning: A Content Analysis

    Science.gov (United States)

    Uzunboylu, Huseyin; Genc, Zeynep

    2017-01-01

    The purpose of this study is to determine the recent trends in foreign language learning through mobile learning. The study was conducted using document analysis and content analysis, which are qualitative research methods. Through the search conducted on the Scopus database with the key words "mobile learning and foreign language…

  8. Failure and Maintenance Analysis Using Web-Based Reliability Database System

    International Nuclear Information System (INIS)

    Hwang, Seok Won; Kim, Myoung Su; Seong, Ki Yeoul; Na, Jang Hwan; Jerng, Dong Wook

    2007-01-01

    Korea Hydro and Nuclear Power Company has launched the development of a database system for PSA and Maintenance Rule implementation. It focuses on the easy processing of raw data into a credible and useful database for the risk-informed environment of nuclear power plant operation and maintenance. Even though KHNP had recently completed the PSA for all domestic NPPs as a requirement of the severe accident mitigation strategy, the component failure data were gathered only for quantification purposes for that project. As a result, the data were not sufficient for the Living PSA or other generic purposes. Another reason to build a real-time database is the newly adopted Maintenance Rule, which requires the utility to continuously monitor the plant risk based on its operation and maintenance performance. Furthermore, as one of the preconditions for Risk-Informed Regulation and Application, the nuclear regulatory agency of Korea requires the development and management of a domestic database system. KHNP has been accumulating operation and maintenance data in the Enterprise Resource Planning (ERP) system since it first opened in July 2003. However, a systematic review has not yet been performed to apply the component failure and maintenance histories to PSA and other reliability analyses. The data stored in PUMAS before the ERP system was introduced also need to be converted to, and managed within, the new database structure and methodology. This reliability database system is a web-based interface on a UNIX server with an Oracle relational database. It is designed to be applicable to all domestic NPPs with a common database structure and web interfaces, so additional program development will not be necessary for data acquisition and processing in the near future. Categorization standards for systems and components have been implemented to analyze all domestic NPPs. For example, SysCode (for a system code) and CpCode (for a component code) were newly
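    As a purely hypothetical illustration of such a common categorization structure (this is not KHNP's actual schema; all table names, codes, and records are invented), a minimal relational layout keyed by SysCode and CpCode could be sketched with Python's built-in sqlite3 module as follows:

    # Invented minimal schema illustrating plant-independent SysCode/CpCode categories.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE system_codes    (sys_code TEXT PRIMARY KEY, description TEXT);
    CREATE TABLE component_codes (cp_code  TEXT PRIMARY KEY, description TEXT);
    CREATE TABLE failure_records (
        record_id    INTEGER PRIMARY KEY,
        plant_id     TEXT NOT NULL,
        sys_code     TEXT NOT NULL REFERENCES system_codes(sys_code),
        cp_code      TEXT NOT NULL REFERENCES component_codes(cp_code),
        failure_date TEXT,
        failure_mode TEXT
    );
    """)
    conn.execute("INSERT INTO system_codes VALUES ('CVCS', 'Chemical and volume control system')")
    conn.execute("INSERT INTO component_codes VALUES ('MOV', 'Motor-operated valve')")
    conn.execute("INSERT INTO failure_records VALUES (1, 'PLANT-A', 'CVCS', 'MOV', '2006-03-14', 'fail to open')")

    # Failure counts grouped by the common system/component categories across plants
    for row in conn.execute("SELECT sys_code, cp_code, COUNT(*) FROM failure_records "
                            "GROUP BY sys_code, cp_code"):
        print(row)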

  9. A clinical analysis of 500 medico-legal claims evaluating the causes and assessing the potential benefit of alternative dispute resolution.

    Science.gov (United States)

    B-Lynch, C; Coker, A; Dua, J A

    1996-12-01

    1. To evaluate the common causes of medico-legal dispute in obstetrics and gynaecology. 2. To assess the potential benefit of early alternative dispute resolution. A prospective analysis of over 500 cases submitted by over 100 solicitors between 1984 and 1994 for medical expert opinion on potential medico-legal claims. Five hundred consecutive cases that met the inclusion criteria: 488 from the United Kingdom and 12 from abroad (Hong Kong, Republic of Ireland). The main principles underlying medico-legal disputes and causes of such claims. Analysis of 500 claims shows that 46% were misguided allegations, 19% incompetent care, 12% error of judgement, 9% lack of expertise, 7% failure of communication, 6% poor supervision and 1% inadequate staffing. Of the misguided allegations, 119/225 cases (59%) were obstetric and 111/275 cases (40%) were gynaecological. The most common cause of obstetric dispute was "cerebral palsy" (22%), while the commonest cause of gynaecological dispute was failed sterilisation (19%). Settled claims were under-reported by solicitors. Because of the high percentage (46%) of misguided allegations, an alternative course of dispute resolution must be a realistic way forward. This course of action, combined with improved communication, could result in a major reduction in the costs of potential medical litigation. Early alternative dispute resolution should be considered in an attempt to reduce the escalating quantum of damages and costs. We recommend recruiting independent, experienced and unbiased consultants in active practice within the appropriate specialty to review such cases at the level of hospital complaints management as an in-house review procedure, particularly for small and moderate-sized claims, as a means whereby doctors can retain control of medico-legal disputes, in contrast to control by the legal profession.

  10. Database Description - The Rice Growth Monitoring for The Phenotypic Functional Analysis | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database provider: Department of Biosciences, Faculty of Science and Engineering, Teikyo University (Tomoko Shinomura). Contact address: 1-1 Toyosatodai, Utsunomiya-shi, Tochigi 320-8551, Japan. Database classification: Plant database.

  11. Does an Otolaryngology-Specific Database Have Added Value? A Comparative Feasibility Analysis.

    Science.gov (United States)

    Bellmunt, Angela M; Roberts, Rhonda; Lee, Walter T; Schulz, Kris; Pynnonen, Melissa A; Crowson, Matthew G; Witsell, David; Parham, Kourosh; Langman, Alan; Vambutas, Andrea; Ryan, Sheila E; Shin, Jennifer J

    2016-07-01

    There are multiple nationally representative databases that support epidemiologic and outcomes research, and it is unknown whether an otolaryngology-specific resource would prove indispensable or superfluous. Therefore, our objective was to determine the feasibility of analyses in the National Ambulatory Medical Care Survey (NAMCS) and National Hospital Ambulatory Medical Care Survey (NHAMCS) databases as compared with the otolaryngology-specific Creating Healthcare Excellence through Education and Research (CHEER) database. Parallel analyses in 2 data sets. Ambulatory visits in the United States. To test a fixed hypothesis that could be directly compared between data sets, we focused on a condition with expected prevalence high enough to substantiate availability in both. This query also encompassed a broad span of diagnoses to sample the breadth of available information. Specifically, we compared an assessment of suspected risk factors for sensorineural hearing loss in subjects 0 to 21 years of age, according to a predetermined protocol. We also assessed the feasibility of 6 additional diagnostic queries among all age groups. In the NAMCS/NHAMCS data set, the number of measured observations was not sufficient to support reliable numeric conclusions (percentage standard error among risk factors: 38.6-92.1). Analysis of the CHEER database demonstrated that age, sex, meningitis, and cytomegalovirus were statistically significant factors associated with pediatric sensorineural hearing loss (P < .01). Among the 6 additional diagnostic queries assessed, NAMCS/NHAMCS usage was also infeasible; the CHEER database contained 1585 to 212,521 more observations per annum. An otolaryngology-specific database has added utility when compared with already available national ambulatory databases. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2016.

  12. Analysis of condensed matter physics records in databases. Science and technology indicators in condensed matter physics

    International Nuclear Information System (INIS)

    Hillebrand, C.D.

    1999-05-01

    An analysis of the literature on Condensed Matter Physics, with particular emphasis on High Temperature Superconductors, was performed on the contents of the bibliographic database International Nuclear Information System (INIS). Quantitative data were obtained on various characteristics of the relevant INIS records such as subject categories, language and country of publication, publication types, etc. The analysis opens up the possibility for further studies, e.g. on international research co-operation and on publication patterns. (author)

  13. GRAFLAB 2.3 for UNIX - A MATLAB database, plotting, and analysis tool: User's guide

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, W.N.

    1998-03-01

    This report is a user's manual for GRAFLAB, which is a new database, analysis, and plotting package that has been written entirely in the MATLAB programming language. GRAFLAB is currently used for data reduction, analysis, and archival. GRAFLAB was written to replace GRAFAID, which is a FORTRAN database, analysis, and plotting package that runs on VAX/VMS.

  14. IMPROVED SEARCH OF PRINCIPAL COMPONENT ANALYSIS DATABASES FOR SPECTRO-POLARIMETRIC INVERSION

    International Nuclear Information System (INIS)

    Casini, R.; Lites, B. W.; Ramos, A. Asensio; Ariste, A. López

    2013-01-01

    We describe a simple technique for the acceleration of spectro-polarimetric inversions based on principal component analysis (PCA) of Stokes profiles. This technique involves the indexing of the database models based on the sign of the projections (PCA coefficients) of the first few relevant orders of principal components of the four Stokes parameters. In this way, each model in the database can be attributed a distinctive binary number of 2^(4n) bits, where n is the number of PCA orders used for the indexing. Each of these binary numbers (indices) identifies a group of "compatible" models for the inversion of a given set of observed Stokes profiles sharing the same index. The complete set of the binary numbers so constructed evidently determines a partition of the database. The search of the database for the PCA inversion of spectro-polarimetric data can profit greatly from this indexing. In practical cases it becomes possible to approach the ideal acceleration factor of 2^(4n) as compared to the systematic search of a non-indexed database for a traditional PCA inversion. This indexing method relies on the existence of a physical meaning in the sign of the PCA coefficients of a model. For this reason, the presence of model ambiguities and of spectro-polarimetric noise in the observations limits in practice the number n of relevant PCA orders that can be used for the indexing.
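    A minimal sketch of this sign-based indexing follows (synthetic data; the array shapes, names, and random models are illustrative assumptions, not the authors' implementation). The signs of the 4n retained PCA coefficients are packed into a single integer key, so the database is partitioned into at most 2^(4n) buckets and only the bucket matching the observation's key needs to be searched in detail:

    # Illustrative sketch of sign-based indexing of a PCA model database.
    import numpy as np
    from collections import defaultdict

    rng = np.random.default_rng(0)
    n_models, n_orders = 10_000, 3        # n = 3 PCA orders kept per Stokes parameter
    # Synthetic PCA coefficients for the 4 Stokes parameters (I, Q, U, V) of each model
    coeffs = rng.standard_normal((n_models, 4, n_orders))

    def sign_index(c):
        """Pack the signs of the 4n PCA coefficients into one 4n-bit integer."""
        idx = 0
        for k, positive in enumerate(c.reshape(-1) >= 0):
            idx |= int(positive) << k
        return idx

    # Partition the database: models sharing an index are "compatible"
    partition = defaultdict(list)
    for i, c in enumerate(coeffs):
        partition[sign_index(c)].append(i)

    # For a given observation, only the models in its bucket are searched in detail
    observation = rng.standard_normal((4, n_orders))
    candidates = partition.get(sign_index(observation), [])
    print(len(candidates), "of", n_models, "models retained for the fine search")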

  15. Constructing Benchmark Databases and Protocols for Medical Image Analysis: Diabetic Retinopathy

    Directory of Open Access Journals (Sweden)

    Tomi Kauppi

    2013-01-01

    Full Text Available We address the performance evaluation practices for developing medical image analysis methods, in particular, how to establish and share databases of medical images with verified ground truth and solid evaluation protocols. Such databases support the development of better algorithms, execution of profound method comparisons, and, consequently, technology transfer from research laboratories to clinical practice. For this purpose, we propose a framework consisting of reusable methods and tools for the laborious task of constructing a benchmark database. We provide a software tool for medical image annotation, which helps to collect the class label, spatial span, and expert's confidence for lesions, and a method to appropriately combine the manual segmentations from multiple experts. The tool and all necessary functionality for method evaluation are provided as public software packages. As a case study, we utilized the framework and tools to establish the DiaRetDB1 V2.1 database for benchmarking diabetic retinopathy detection algorithms. The database contains a set of retinal images, ground truth based on information from multiple experts, and a baseline algorithm for the detection of retinopathy lesions.
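    As an assumed illustration only (not the combination method actually used for DiaRetDB1), manual lesion masks from several experts can be fused into a consensus mask by confidence-weighted voting, for example:

    # Hypothetical confidence-weighted fusion of binary lesion masks from several experts.
    import numpy as np

    def fuse_annotations(masks, confidences=None, threshold=0.5):
        """masks: list of HxW binary arrays; confidences: optional per-expert weights."""
        stacked = np.stack([np.asarray(m, dtype=float) for m in masks])  # (experts, H, W)
        if confidences is None:
            confidences = np.ones(len(masks))
        weights = np.asarray(confidences, dtype=float)
        weights = weights / weights.sum()
        support = np.tensordot(weights, stacked, axes=1)   # weighted vote map in [0, 1]
        return support >= threshold                        # consensus binary mask

    # Example: three 2x2 expert masks with confidences "low, high, high"
    experts = [np.array([[1, 0], [0, 0]]),
               np.array([[1, 1], [0, 0]]),
               np.array([[1, 1], [0, 1]])]
    print(fuse_annotations(experts, confidences=[0.5, 1.0, 1.0]).astype(int))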

  16. [A SAS macro program for batch processing of univariate Cox regression analysis for large databases].

    Science.gov (United States)

    Yang, Rendong; Xiong, Jie; Peng, Yangqin; Peng, Xiaoning; Zeng, Xiaomin

    2015-02-01

    To realize batch processing of univariate Cox regression analysis for large databases using a SAS macro program. We wrote a SAS macro program in SAS 9.2 that can filter, integrate, and export P values to Excel. The program was used to screen survival-correlated RNA molecules in ovarian cancer. The SAS macro program could complete the batch processing of univariate Cox regression analyses, and the selection and export of the results. The SAS macro program has potential applications in reducing the workload of statistical analysis and providing a basis for batch processing of univariate Cox regression analysis.
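    The abstract describes a SAS macro; purely as an illustration of the same batch idea in another environment (assuming the Python lifelines package and a data frame with 'time' and 'event' columns plus one column per candidate molecule; all names are assumptions), a minimal analogue might look like this:

    # Illustrative batch of univariate Cox regressions; not the authors' SAS macro.
    import pandas as pd
    from lifelines import CoxPHFitter

    def univariate_cox_pvalues(df, duration_col="time", event_col="event"):
        covariates = [c for c in df.columns if c not in (duration_col, event_col)]
        rows = []
        for cov in covariates:
            cph = CoxPHFitter()
            # Fit one univariate model per candidate covariate
            cph.fit(df[[cov, duration_col, event_col]],
                    duration_col=duration_col, event_col=event_col)
            rows.append({"covariate": cov, "p": float(cph.summary.loc[cov, "p"])})
        return pd.DataFrame(rows).sort_values("p")

    # results = univariate_cox_pvalues(expression_df)    # hypothetical input data frame
    # results.to_excel("univariate_cox_pvalues.xlsx")    # export, as in the abstract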

  17. Spatiotemporal analysis of changes in lode mining claims around the McDermitt Caldera, northern Nevada and southern Oregon

    Science.gov (United States)

    Coyan, Joshua; Zientek, Michael L.; Mihalasky, Mark J.

    2017-01-01

    Resource managers and agencies involved with planning for future federal land needs are required to complete an assessment of and forecast for future land use every ten years. Predicting mining activities on federal lands is difficult as current regulations do not require disclosure of exploration results. In these cases, historic mining claims may serve as a useful proxy for determining where mining-related activities may occur. We assess the utility of using a space–time cube (STC) and associated analyses to evaluate and characterize mining claim activities around the McDermitt Caldera in northern Nevada and southern Oregon. The most significant advantage of arranging the mining claim data into a STC is the ability to visualize and compare the data, which allows scientists to better understand patterns and results. Additional analyses of the STC (i.e., Trend, Emerging Hot Spot, Hot Spot, and Cluster and Outlier Analyses) provide extra insights into the data and may aid in predicting future mining claim activities.

  18. Analysis of 127 peripartum hypoxic brain injuries from closed claims registered by the Danish Patient Insurance Association

    DEFF Research Database (Denmark)

    Bock, J.; Christoffersen, J.K.; Hedegaard, M.

    2008-01-01

    The authors retrospectively investigated peripartum hypoxic brain injuries registered by the Danish Patient Insurance Association. RESULTS: From 1992 to 2004, 127 approved claims concerning peripartum hypoxic brain injuries were registered and subsequently analysed. Thirty-eight newborns died, and a majority

  19. Importance of databases of nucleic acids for bioinformatic analysis focused to genomics

    Science.gov (United States)

    Jimenez-Gutierrez, L. R.; Barrios-Hernández, C. J.; Pedraza-Ferreira, G. R.; Vera-Cala, L.; Martinez-Perez, F.

    2016-08-01

    Recently, bioinformatics has become a new field of science, indispensable for the analysis of the millions of nucleic acid sequences currently deposited in international databases (public or private); these databases contain information on genes, RNAs, ORFs, proteins, and intergenic regions, including entire genomes of some species. The analysis of this information requires computer programs, which have been renewed through the use of new mathematical methods and the introduction of artificial intelligence, in addition to the constant creation of supercomputing units capable of withstanding the heavy workload of sequence analysis. However, innovation is still needed in platforms that allow genomic analyses to be performed faster and more effectively, with a technological understanding of all biological processes.

  20. International nanotechnology development in 2003: Country, institution, and technology field analysis based on USPTO patent database

    International Nuclear Information System (INIS)

    Huang Zan; Chen Hsinchun; Chen Zhikai; Roco, Mihail C.

    2004-01-01

    Nanoscale science and engineering (NSE) have seen rapid growth and expansion in new areas in recent years. This paper provides an international patent analysis using the U.S. Patent and Trademark Office (USPTO) data searched by keywords of the entire text: title, abstract, claims, and specifications. A fraction of these patents fully satisfy the National Nanotechnology Initiative definition of nanotechnology (which requires exploiting specific phenomena and direct manipulation at the nanoscale), while others only make use of NSE tools and methods of investigation. In previous work we proposed an integrated patent analysis and visualization framework of patent content mapping for the NSE field and of knowledge flow pattern identification until 2002. In this paper, the results are updated for 2003, and the new trends are presented

  1. International nanotechnology development in 2003: Country, institution, and technology field analysis based on USPTO patent database

    Science.gov (United States)

    Huang, Zan; Chen, Hsinchun; Chen, Zhi-kai; Roco, Mihail C.

    2004-08-01

    Nanoscale science and engineering (NSE) have seen rapid growth and expansion in new areas in recent years. This paper provides an international patent analysis using the U.S. Patent and Trademark Office (USPTO) data searched by keywords of the entire text: title, abstract, claims, and specifications. A fraction of these patents fully satisfy the National Nanotechnology Initiative definition of nanotechnology (which requires exploiting specific phenomena and direct manipulation at the nanoscale), while others only make use of NSE tools and methods of investigation. In previous work we proposed an integrated patent analysis and visualization framework of patent content mapping for the NSE field and of knowledge flow pattern identification until 2002. In this paper, the results are updated for 2003, and the new trends are presented.

  2. Neutron cross-sections database for amino acids and proteins analysis

    Energy Technology Data Exchange (ETDEWEB)

    Voi, Dante L.; Ferreira, Francisco de O.; Nunes, Rogerio Chaffin, E-mail: dante@ien.gov.br, E-mail: fferreira@ien.gov.br, E-mail: Chaffin@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Rocha, Helio F. da, E-mail: hrocha@gbl.com.br [Universidade Federal do Rio de Janeiro (IPPMG/UFRJ), Rio de Janeiro, RJ (Brazil). Instituto de Pediatria

    2015-07-01

    Biological materials may be studied using neutrons as an unconventional analysis tool. Dynamics and structure data can be obtained for amino acids, proteins, and other cellular components by neutron cross-section determinations, especially for applications in nuclear purity and conformation analysis. The instrument used for this is the crystal spectrometer of the Instituto de Engenharia Nuclear (IEN-CNEN-RJ), the only one in Latin America that uses neutrons for this type of analysis; it is installed in one of the irradiation channels of the Argonauta reactor. The experimentally obtained values are compared with values calculated from literature data, with a rigorous analysis of the chemical composition, conformation, and molecular structure of the materials. A neutron cross-section database was constructed to assist in determining the molecular dynamics, structures, and formulae of biological materials. The database contains neutron cross-section values for all amino acids, chemical elements, molecular groups, and auxiliary radicals, as well as values of constants and parameters necessary for the analysis. An unprecedented analytical procedure was developed using the neutron cross-section parceling and grouping method for data manipulation. This database is the result of measurements obtained from twenty amino acids that were provided by different manufacturers and are used for oral administration to hospital patients for nutritional applications. A small data file was also constructed for compounds with different molecular groups including carbon, nitrogen, sulfur, and oxygen, all linked to hydrogen atoms. A review of the global and national scene in the acquisition of neutron cross-section data, the formation of libraries, and the application of neutrons for analyzing biological materials is presented. This database has further application in protein analysis, and the neutron cross-section of insulin was estimated. (author)

  3. Neutron cross-sections database for amino acids and proteins analysis

    International Nuclear Information System (INIS)

    Voi, Dante L.; Ferreira, Francisco de O.; Nunes, Rogerio Chaffin; Rocha, Helio F. da

    2015-01-01

    Biological materials may be studied using neutrons as an unconventional analysis tool. Dynamics and structure data can be obtained for amino acids, proteins, and other cellular components by neutron cross-section determinations, especially for applications in nuclear purity and conformation analysis. The instrument used for this is the crystal spectrometer of the Instituto de Engenharia Nuclear (IEN-CNEN-RJ), the only one in Latin America that uses neutrons for this type of analysis; it is installed in one of the irradiation channels of the Argonauta reactor. The experimentally obtained values are compared with values calculated from literature data, with a rigorous analysis of the chemical composition, conformation, and molecular structure of the materials. A neutron cross-section database was constructed to assist in determining the molecular dynamics, structures, and formulae of biological materials. The database contains neutron cross-section values for all amino acids, chemical elements, molecular groups, and auxiliary radicals, as well as values of constants and parameters necessary for the analysis. An unprecedented analytical procedure was developed using the neutron cross-section parceling and grouping method for data manipulation. This database is the result of measurements obtained from twenty amino acids that were provided by different manufacturers and are used for oral administration to hospital patients for nutritional applications. A small data file was also constructed for compounds with different molecular groups including carbon, nitrogen, sulfur, and oxygen, all linked to hydrogen atoms. A review of the global and national scene in the acquisition of neutron cross-section data, the formation of libraries, and the application of neutrons for analyzing biological materials is presented. This database has further application in protein analysis, and the neutron cross-section of insulin was estimated. (author)

  4. Distribution and characteristics of occupational injuries and diseases among farmers: a retrospective analysis of workers' compensation claims.

    Science.gov (United States)

    Karttunen, Janne P; Rautiainen, Risto H

    2013-08-01

    Research indicates occupational injuries and diseases are not evenly distributed among workers. We investigated the distribution and characteristics of compensated occupational injuries and diseases requiring medical care in the Finnish farming population. The study population consisted of 93,564 Finnish farmers, spouses, and salaried family members who were covered by the mandatory workers' compensation insurance in 2002. This population had a total of 133,207 occupational injuries and 9,148 occupational diseases over a 26-year period (1982-2008). Clustering of claims was observed. Nearly half (47.1%) of the population had no compensated claims while 52.9% had at least one; 50.9% of farmers had one or more injuries and 8.1% had one or more diseases. Ten percent of the population had half of injury cases, and 3% of the population had half of occupational disease cases. Claims frequently involved work tasks related to animal husbandry and repair and maintenance of farm machinery. Injury and disease characteristics (work activity, cause, ICD-10 code) differed between individuals with high and low personal claim rates. Injuries and diseases of the musculoskeletal system had a tendency to recur among those with a high claim rate. These outcomes were often related to strenuous working motions and postures in labor-intensive animal husbandry. Analyses of longitudinal insurance data contribute to a better understanding of the long-term risk of occupational injury and disease among farmers. We suggest that focusing on recurrent health outcomes and their causes among high-risk populations could help design more effective interventions in agriculture and other industries. Copyright © 2013 Wiley Periodicals, Inc.

  5. Technical note: Analysis of claims and disputes in contracts for oil and gas development projects in Iran with solutions

    Directory of Open Access Journals (Sweden)

    Fathollah Sajedi

    2017-08-01

    Full Text Available Contracts for oil and gas development projects are naturally complex; they are described by maps and technical specifications. To achieve the goals of a contract, construction must be carried out by a team comprising the owner, the consulting engineer, and the contractor. The unique aspects of each project, together with team working, give rise to disagreements. It should be noted that the majority of team members have not previously worked together. It cannot be expected that all project aspects will be foreseen during the design and tender document preparation process. However, in some cases inconsistencies will occur in the contract documents, and there may be disagreements over the interpretation of provisions of the contract. Every root of disagreement results in a claim and, finally, a dispute. Lack of foresight, ambiguous texts in some provisions of the contract, and lack of awareness of the project's conditions, obligations, and contractual rules make agreement on implementation problems complex and sometimes impossible. Therefore, claims turn into disputes, inflict financial losses on contractors and/or owners, and prevent projects from being completed. In Iran, few studies have been carried out on claims and disputes from different perspectives, especially with regard to their future, and hence this topic was studied in this research. First, the research history was reviewed and the causes of claims and disputes were identified across the different stages of oil project construction, from the initial phases to exploitation, and a questionnaire was then prepared using the comments of experts. Finally, the questionnaire was analysed with SPSS, and the confirmed factors in the creation of claims and disputes, and in their roots, were ranked.

  6. DianaHealth.com, an On-Line Database Containing Appraisals of the Clinical Value and Appropriateness of Healthcare Interventions: Database Development and Retrospective Analysis.

    Science.gov (United States)

    Bonfill, Xavier; Osorio, Dimelza; Solà, Ivan; Pijoan, Jose Ignacio; Balasso, Valentina; Quintana, Maria Jesús; Puig, Teresa; Bolibar, Ignasi; Urrútia, Gerard; Zamora, Javier; Emparanza, José Ignacio; Gómez de la Cámara, Agustín; Ferreira-González, Ignacio

    2016-01-01

    To describe the development of a novel on-line database aimed to serve as a source of information concerning healthcare interventions appraised for their clinical value and appropriateness by several initiatives worldwide, and to present a retrospective analysis of the appraisals already included in the database. Database development and a retrospective analysis. The database DianaHealth.com is already on-line and it is regularly updated, independent, open access and available in English and Spanish. Initiatives are identified in medical news, in article references, and by contacting experts in the field. We include appraisals in the form of clinical recommendations, expert analyses, conclusions from systematic reviews, and original research that label any health care intervention as low-value or inappropriate. We obtain the information necessary to classify the appraisals according to type of intervention, specialties involved, publication year, authoring initiative, and key words. The database is accessible through a search engine which retrieves a list of appraisals and a link to the website where they were published. DianaHealth.com also provides a brief description of the initiatives and a section where users can report new appraisals or suggest new initiatives. From January 2014 to July 2015, the on-line database included 2940 appraisals from 22 initiatives: eleven campaigns gathering clinical recommendations from scientific societies, five sets of conclusions from literature review, three sets of recommendations from guidelines, two collections of articles on low clinical value in medical journals, and an initiative of our own. We have developed an open access on-line database of appraisals about healthcare interventions considered of low clinical value or inappropriate. DianaHealth.com could help physicians and other stakeholders make better decisions concerning patient care and healthcare systems sustainability. Future efforts should be focused on

  7. DianaHealth.com, an On-Line Database Containing Appraisals of the Clinical Value and Appropriateness of Healthcare Interventions: Database Development and Retrospective Analysis.

    Directory of Open Access Journals (Sweden)

    Xavier Bonfill

    Full Text Available To describe the development of a novel on-line database aimed to serve as a source of information concerning healthcare interventions appraised for their clinical value and appropriateness by several initiatives worldwide, and to present a retrospective analysis of the appraisals already included in the database.Database development and a retrospective analysis. The database DianaHealth.com is already on-line and it is regularly updated, independent, open access and available in English and Spanish. Initiatives are identified in medical news, in article references, and by contacting experts in the field. We include appraisals in the form of clinical recommendations, expert analyses, conclusions from systematic reviews, and original research that label any health care intervention as low-value or inappropriate. We obtain the information necessary to classify the appraisals according to type of intervention, specialties involved, publication year, authoring initiative, and key words. The database is accessible through a search engine which retrieves a list of appraisals and a link to the website where they were published. DianaHealth.com also provides a brief description of the initiatives and a section where users can report new appraisals or suggest new initiatives. From January 2014 to July 2015, the on-line database included 2940 appraisals from 22 initiatives: eleven campaigns gathering clinical recommendations from scientific societies, five sets of conclusions from literature review, three sets of recommendations from guidelines, two collections of articles on low clinical value in medical journals, and an initiative of our own.We have developed an open access on-line database of appraisals about healthcare interventions considered of low clinical value or inappropriate. DianaHealth.com could help physicians and other stakeholders make better decisions concerning patient care and healthcare systems sustainability. Future efforts should be

  8. GMOMETHODS: the European Union database of reference methods for GMO analysis.

    Science.gov (United States)

    Bonfini, Laura; Van den Bulcke, Marc H; Mazzara, Marco; Ben, Enrico; Patak, Alexandre

    2012-01-01

    In order to provide reliable and harmonized information on methods for GMO (genetically modified organism) analysis we have published a database called "GMOMETHODS" that supplies information on PCR assays validated according to the principles and requirements of ISO 5725 and/or the International Union of Pure and Applied Chemistry protocol. In addition, the database contains methods that have been verified by the European Union Reference Laboratory for Genetically Modified Food and Feed in the context of compliance with an European Union legislative act. The web application provides search capabilities to retrieve primers and probes sequence information on the available methods. It further supplies core data required by analytical labs to carry out GM tests and comprises information on the applied reference material and plasmid standards. The GMOMETHODS database currently contains 118 different PCR methods allowing identification of 51 single GM events and 18 taxon-specific genes in a sample. It also provides screening assays for detection of eight different genetic elements commonly used for the development of GMOs. The application is referred to by the Biosafety Clearing House, a global mechanism set up by the Cartagena Protocol on Biosafety to facilitate the exchange of information on Living Modified Organisms. The publication of the GMOMETHODS database can be considered an important step toward worldwide standardization and harmonization in GMO analysis.

  9. 32 CFR 536.120 - Claims payable as maritime claims.

    Science.gov (United States)

    2010-07-01

    32 CFR 536.120 - Claims payable as maritime claims (National Defense; Claims Against the United States; Maritime Claims; effective 2010-07-01). A claim is cognizable under this subpart if it arises in or on a maritime location, involves some...

  10. Trends in Compulsory Licensing of Pharmaceuticals Since the Doha Declaration: A Database Analysis

    Science.gov (United States)

    Beall, Reed; Kuhn, Randall

    2012-01-01

    Background It is now a decade since the World Trade Organization (WTO) adopted the “Declaration on the TRIPS Agreement and Public Health” at its 4th Ministerial Conference in Doha. Many anticipated that these actions would lead nations to claim compulsory licenses (CLs) for pharmaceutical products with greater regularity. A CL is the use of a patented innovation that has been licensed by a state without the permission of the patent title holder. Skeptics doubted that many CLs would occur, given political pressure against CL activity and continued health system weakness in poor countries. The subsequent decade has seen little systematic assessment of the Doha Declaration's impact. Methods and Findings We assembled a database of all episodes in which a CL was publically entertained or announced by a WTO member state since 1995. Broad searches of CL activity were conducted using media, academic, and legal databases, yielding 34 potential CL episodes in 26 countries. Country- and product-specific searches were used to verify government participation, resulting in a final database of 24 verified CLs in 17 nations. We coded CL episodes in terms of outcome, national income, and disease group over three distinct periods of CL activity. Most CL episodes occurred between 2003 and 2005, involved drugs for HIV/AIDS, and occurred in upper-middle-income countries (UMICs). Aside from HIV/AIDS, few CL episodes involved communicable disease, and none occurred in least-developed or low-income countries. Conclusions Given skepticism about the Doha Declaration's likely impact, we note the relatively high occurrence of CLs, yet CL activity has diminished markedly since 2006. While UMICs have high CL activity and strong incentives to use CLs compared to other countries, we note considerable countervailing pressures against CL use even in UMICs. We conclude that there is a low probability of continued CL activity. We highlight the need for further systematic evaluation of global

  11. SyncClaimService

    Data.gov (United States)

    Department of Veterans Affairs — Provides various methods to sync Claim related data for NWQ processing. It includes web operations to get Claims, get Unique Contention Classifications, get Unique...

  12. IBO Claim Taking Project

    Data.gov (United States)

    Social Security Administration — IBO manually tracks all Canadian Claims and DSU claims via this report. It also provides a summary for each region and office of origin that the DSU works with. This...

  13. Workers Compensation Claim Data -

    Data.gov (United States)

    Department of Transportation — This data set contains DOT employee workers compensation claim data for current and past DOT employees. Types of data include claim data consisting of PII data (SSN,...

  14. Medicaid Drug Claims Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicaid Drug Claims Statistics CD is a useful tool that conveniently breaks up Medicaid claim counts and separates them by quarter and includes an annual count.

  15. Exploiting automatically generated databases of traffic signs and road markings for contextual co-occurrence analysis

    Science.gov (United States)

    Hazelhoff, Lykele; Creusen, Ivo M.; Woudsma, Thomas; de With, Peter H. N.

    2015-11-01

    Combined databases of road markings and traffic signs provide a complete and full description of the present traffic legislation and instructions. Such databases contribute to efficient signage maintenance, improve navigation, and benefit autonomous driving vehicles. A system is presented for the automated creation of such combined databases, which additionally investigates the benefit of this combination for automated contextual placement analysis. This analysis involves verification of the co-occurrence of traffic signs and road markings to retrieve a list of potentially incorrectly signaled (and thus potentially unsafe) road situations. This co-occurrence verification is specifically explored for both pedestrian crossings and yield situations. Evaluations on 420 km of road have shown that individual detection of traffic signs and road markings denoting these road situations can be performed with accuracies of 98% and 85%, respectively. Combining both approaches shows that over 95% of the pedestrian crossings and give-way situations can be identified. An exploration toward additional co-occurrence analysis of signs and markings shows that inconsistently signaled situations can successfully be extracted, such that specific safety actions can be directed toward cases lacking signs or markings, while most consistently signaled situations can be omitted from this analysis.
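    As a rough plausibility check (under an independence assumption that is not stated in the abstract), if a signaled situation is identified whenever either the traffic-sign detector (98%) or the road-marking detector (85%) finds it, the combined identification rate would be

    \[ 1 - (1 - 0.98)(1 - 0.85) = 1 - 0.003 = 0.997, \]

    which is consistent with the reported identification of over 95% of pedestrian crossings and give-way situations.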

  16. About DNA databasing and investigative genetic analysis of externally visible characteristics: A public survey.

    Science.gov (United States)

    Zieger, Martin; Utz, Silvia

    2015-07-01

    During the last decade, DNA profiling and the use of DNA databases have become two of the most employed instruments of police investigations. This very rapid establishment of forensic genetics is yet far from being complete. In the last few years novel types of analyses have been presented to describe phenotypically a possible perpetrator. We conducted the present study among German speaking Swiss residents for two main reasons: firstly, we aimed at getting an impression of the public awareness and acceptance of the Swiss DNA database and the perception of a hypothetical DNA database containing all Swiss residents. Secondly, we wanted to get a broader picture of how people that are not working in the field of forensic genetics think about legal permission to establish phenotypic descriptions of alleged criminals by genetic means. Even though a significant number of study participants did not even know about the existence of the Swiss DNA database, its acceptance appears to be very high. Generally our results suggest that the current forensic use of DNA profiling is considered highly trustworthy. However, the acceptance of a hypothetical universal database would be only as low as about 30% among the 284 respondents to our study, mostly because people are concerned about the security of their genetic data, their privacy or a possible risk of abuse of such a database. Concerning the genetic analysis of externally visible characteristics and biogeographical ancestry, we discover a high degree of acceptance. The acceptance decreases slightly when precise characteristics are presented to the participants in detail. About half of the respondents would be in favor of the moderate use of physical traits analyses only for serious crimes threatening life, health or sexual integrity. The possible risk of discrimination and reinforcement of racism, as discussed by scholars from anthropology, bioethics, law, philosophy and sociology, is mentioned less frequently by the study

  17. Screening of Gas-Cooled Reactor Thermal-Hydraulic and Safety Analysis Tools and Experimental Database

    International Nuclear Information System (INIS)

    Lee, Won Jae; Kim, Min Hwan; Lee, Seung Wook

    2007-08-01

    This report is a final report of I-NERI Project, 'Screening of Gas-cooled Reactor Thermal Hydraulic and Safety Analysis Tools and Experimental Database 'jointly carried out by KAERI, ANL and INL. In this study, we developed the basic technologies required to develop and validate the VHTR TH/safety analysis tools and evaluated the TH/safety database information. The research tasks consist of; 1) code qualification methodology (INL), 2) high-level PIRTs for major nucleus set of events (KAERI, ANL, INL), 3) initial scaling and scoping analysis (ANL, KAERI, INL), 4) filtering of TH/safety tools (KAERI, INL), 5) evaluation of TH/safety database information (KAERI, INL, ANL) and 6) key scoping analysis (KAERI). The code qualification methodology identifies the role of PIRTs in the R and D process and the bottom-up and top-down code validation methods. Since the design of VHTR is still evolving, we generated the high-level PIRTs referencing 600MWth block-type GT-MHR and 400MWth pebble-type PBMR. Nucleus set of events that represents the VHTR safety and operational transients consists of the enveloping scenarios of HPCC (high pressure conduction cooling: loss of primary flow), LPCC/Air-Ingress (low pressure conduction cooling: loss of coolant), LC (load changes: power maneuvering), ATWS (anticipated transients without scram: reactivity insertion), WS (water ingress: water-interfacing system break) and HU (hydrogen-side upset: loss of heat sink). The initial scaling analysis defines dimensionless parameters that need to be reflected in mixed convection modeling and the initial scoping analysis provided the reference system transients used in the PIRTs generation. For the PIRTs phenomena, we evaluated the modeling capability of the candidate TH/safety tools and derived a model improvement need. By surveying and evaluating the TH/safety database information, a tools V and V matrix has been developed. Through the key scoping analysis using available database, the modeling

  18. Screening of Gas-Cooled Reactor Thermal-Hydraulic and Safety Analysis Tools and Experimental Database

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Won Jae; Kim, Min Hwan; Lee, Seung Wook (and others)

    2007-08-15

    This report is a final report of I-NERI Project, 'Screening of Gas-cooled Reactor Thermal Hydraulic and Safety Analysis Tools and Experimental Database 'jointly carried out by KAERI, ANL and INL. In this study, we developed the basic technologies required to develop and validate the VHTR TH/safety analysis tools and evaluated the TH/safety database information. The research tasks consist of; 1) code qualification methodology (INL), 2) high-level PIRTs for major nucleus set of events (KAERI, ANL, INL), 3) initial scaling and scoping analysis (ANL, KAERI, INL), 4) filtering of TH/safety tools (KAERI, INL), 5) evaluation of TH/safety database information (KAERI, INL, ANL) and 6) key scoping analysis (KAERI). The code qualification methodology identifies the role of PIRTs in the R and D process and the bottom-up and top-down code validation methods. Since the design of VHTR is still evolving, we generated the high-level PIRTs referencing 600MWth block-type GT-MHR and 400MWth pebble-type PBMR. Nucleus set of events that represents the VHTR safety and operational transients consists of the enveloping scenarios of HPCC (high pressure conduction cooling: loss of primary flow), LPCC/Air-Ingress (low pressure conduction cooling: loss of coolant), LC (load changes: power maneuvering), ATWS (anticipated transients without scram: reactivity insertion), WS (water ingress: water-interfacing system break) and HU (hydrogen-side upset: loss of heat sink). The initial scaling analysis defines dimensionless parameters that need to be reflected in mixed convection modeling and the initial scoping analysis provided the reference system transients used in the PIRTs generation. For the PIRTs phenomena, we evaluated the modeling capability of the candidate TH/safety tools and derived a model improvement need. By surveying and evaluating the TH/safety database information, a tools V and V matrix has been developed. Through the key scoping analysis using available database, the

  19. Third research coordination meeting on reference database for neutron activation analysis. Summary report

    International Nuclear Information System (INIS)

    Kellett, M.A.

    2009-12-01

    The third meeting of the Co-ordinated Research Project on 'Reference Database for Neutron Activation Analysis' was held at the IAEA, Vienna, from 17-19 November 2008. A summary is given of the presentations made by participants, the reports on specific tasks, and the subsequent discussions. With the aim of finalising the work of this CRP and meeting its initial objectives, the outputs were discussed and detailed task assignments were agreed upon. (author)

  20. Marijuana use and inpatient outcomes among hospitalized patients: analysis of the nationwide inpatient sample database

    OpenAIRE

    Vin-Raviv, Neomi; Akinyemiju, Tomi; Meng, Qingrui; Sakhuja, Swati; Hayward, Reid

    2016-01-01

    Abstract The purpose of this paper is to examine the relationship between marijuana use and health outcomes among hospitalized patients, including those hospitalized with a diagnosis of cancer. A total of 387,608 current marijuana users were identified based on ICD-9 codes for marijuana use among hospitalized patients in the Nationwide Inpatient Sample database between 2007 and 2011. Logistic regression analysis was performed to determine the association between marijuana use and heart failur...

  1. Archives of Astronomical Spectral Observations and Atomic/Molecular Databases for their Analysis

    Directory of Open Access Journals (Sweden)

    Ryabchikova T.

    2015-12-01

    Full Text Available We present a review of open-source data for stellar spectroscopy investigations. It includes lists of the main archives of medium-to-high resolution spectroscopic observations, with brief characteristics of the archive data (spectral range, resolving power, flux units). We also review atomic and molecular databases that contain parameters of spectral lines, cross-sections and reaction rates needed for a detailed analysis of high resolution, high signal-to-noise ratio stellar spectra.

  2. High-Fidelity Aerothermal Engineering Analysis for Planetary Probes Using DOTNET Framework and OLAP Cubes Database

    Directory of Open Access Journals (Sweden)

    Prabhakar Subrahmanyam

    2009-01-01

    Full Text Available This publication presents the architecture, integration and implementation of various modules in the Sparta framework. Sparta is a trajectory engine that is hooked to an Online Analytical Processing (OLAP) database for multi-dimensional analysis capability. The OLAP database has a comprehensive list of atmospheric entry probes and their vehicle dimensions, trajectory data, aero-thermal data and material properties like Carbon, Silicon and Carbon-Phenolic based Ablators. An approach is presented for dynamic TPS design. OLAP has the capability to run several different trajectory conditions in one simulation, and the output is stored back into the database and can be queried for the appropriate trajectory type. An OLAP simulation can be set up by spawning individual threads to run for three types of trajectory: Nominal, Undershoot and Overshoot. The Sparta graphical user interface provides capabilities to choose from a list of flight vehicles or to enter trajectory and geometry information of a vehicle in design. The DOTNET framework acts as a middleware layer between the trajectory engine and the user interface and also between the web user interface and the OLAP database. Trajectory output can be obtained in TecPlot format, Excel output or in a KML (Keyhole Markup Language) format. The framework employs an API (application programming interface) to convert trajectory data into a formatted KML file that is used by Google Earth for simulating Earth-entry fly-by visualizations.

  3. Effect of rehabilitation on mortality of patients with Guillain-Barre Syndrome: a propensity-matched analysis using nationwide database.

    Science.gov (United States)

    Inokuchi, H; Yasunaga, H; Nakahara, Y; Horiguchi, H; Ogata, N; Fujitani, J; Matsuda, S; Fushimi, K; Haga, N

    2014-08-01

    Rehabilitation for patients with Guillain-Barre Syndrome (GBS) is recommended as it improves the outcome of neurological deficits. Few studies focused on the effect of rehabilitation on mortality of the patients. To investigate the effect of rehabilitation on hospital mortality of patients with GBS using the Japanese Diagnosis Procedure Combination (DPC) nationwide administrative claims database. A retrospective observational cohort study. Hospitals adopting the Japanese DPC system. Patients hospitalized with a diagnosis of GBS between July 2007 and October 2011. Data analyzed included sex, age, Barthel index at admission, use of ventilation, immune therapy, and rehabilitation during hospitalization, comorbidity, hospital volume, type of hospital, and in-hospital death. One-to-one propensity score-matching was used to compare hospital mortality rates within 30- and 90-days after admission in rehabilitation and non-rehabilitation groups. The adjusted odds ratios of rehabilitation to hospital mortality were also estimated. A total of 3835 patients were identified and analyzed. Patients with advancing age, lower Barthel index at admission, comorbidities, ventilation, or immune therapy were more likely to receive rehabilitation during hospitalization. Propensity-matched analysis of 926 pairs showed that the rehabilitation group had lower hospital mortality rates within both 30- and 90-days than the non-rehabilitation group. The adjusted odds ratios of rehabilitation to hospital mortality within 30- and 90-days were 0.14 and 0.23, respectively. After matching patients' background, rehabilitation was associated with lower hospital mortality of patients with GBS. Rehabilitation treatment is essential for patients with GBS to improve their survival.
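
    One-to-one propensity-score matching of the kind used in this study can be sketched in Python as below. The column names (rehabilitation, age, barthel_index, ventilation, comorbidity), the greedy nearest-neighbour matching and the 0.05 caliper are simplifying assumptions for illustration, not the authors' exact variables or procedure.

      import pandas as pd
      from sklearn.linear_model import LogisticRegression

      def match_one_to_one(df, covariates, treat_col="rehabilitation", caliper=0.05):
          """Greedy 1:1 nearest-neighbour matching on the estimated propensity score."""
          ps_model = LogisticRegression(max_iter=1000)
          ps_model.fit(df[covariates], df[treat_col])
          df = df.assign(pscore=ps_model.predict_proba(df[covariates])[:, 1])

          treated = df[df[treat_col] == 1]
          controls = df[df[treat_col] == 0].copy()
          pairs = []
          for idx, row in treated.iterrows():
              gaps = (controls["pscore"] - row["pscore"]).abs()
              if len(gaps) and gaps.min() <= caliper:
                  best = gaps.idxmin()
                  pairs.append((idx, best))
                  controls = controls.drop(best)  # match without replacement
          return pairs

      # Hypothetical usage: compare 30-day mortality within the matched pairs afterwards.
      # pairs = match_one_to_one(claims_df, ["age", "barthel_index", "ventilation", "comorbidity"])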

  4. A BRDF-BPDF database for the analysis of Earth target reflectances

    Science.gov (United States)

    Breon, Francois-Marie; Maignan, Fabienne

    2017-01-01

    Land surface reflectance is not isotropic. It varies with the observation geometry that is defined by the sun and view zenith angles and the relative azimuth. In addition, the reflectance is linearly polarized. The reflectance anisotropy is quantified by the bidirectional reflectance distribution function (BRDF), while its polarization properties are defined by the bidirectional polarization distribution function (BPDF). The POLDER radiometer that flew onboard the PARASOL microsatellite remains the only space instrument that measured numerous samples of the BRDF and BPDF of Earth targets. Here, we describe a database of representative BRDFs and BPDFs derived from the POLDER measurements. From the huge number of data acquired by the spaceborne instrument over a period of 7 years, we selected a set of targets with high-quality observations. The selection aimed for a large number of observations, free of significant cloud or aerosol contamination, acquired in diverse observation geometries with a focus on the backscatter direction that shows the specific hot spot signature. The targets are sorted according to the 16-class International Geosphere-Biosphere Programme (IGBP) land cover classification system, and the target selection aims at a spatial representativeness within the class. The database thus provides a set of high-quality BRDF and BPDF samples that can be used to assess the typical variability of natural surface reflectances or to evaluate models. It is available freely from the PANGAEA website (doi:10.1594/PANGAEA.864090). In addition to the database, we provide a visualization and analysis tool based on the Interactive Data Language (IDL). It allows an interactive analysis of the measurements and a comparison against various BRDF and BPDF analytical models. The present paper describes the input data, the selection principles, the database format, and the analysis tool

  5. Software and Database Usage on Metabolomic Studies: Using XCMS on LC-MS Data Analysis

    Directory of Open Access Journals (Sweden)

    Mustafa Celebier

    2014-04-01

    Full Text Available The metabolome is the complete set of small-molecule metabolites to be found in a cell or a single organism. Metabolomics is the scientific study that determines and identifies the chemicals in the metabolome with advanced analytical techniques. Nowadays, elucidating the molecular mechanism of any disease with genome analysis and proteome analysis alone is not sufficient; instead, a holistic assessment that includes metabolomic studies provides rational and accurate results. Metabolite levels in an organism are associated with cellular functions. Thus, determination of metabolite amounts identifies the phenotype of a cell or tissue related to genetic and other variations. Although the analysis of metabolites for medical diagnosis and therapy has been performed for a long time, studies to improve the analysis methods for metabolite profiling have recently increased. The applications of metabolomics include the identification of biomarkers, enzyme-substrate interactions, drug-activity studies, metabolic pathway analysis and other studies related to systems biology. Preprocessing and computation of the data obtained from LC-MS, GC-MS, CE-MS and NMR for metabolite profiling help to avoid time-consuming manual data analysis and the random errors it can introduce. In addition, such preprocessing allows the identification of low-abundance metabolites that cannot be analyzed by manual processing. Therefore, the use of software and databases for this purpose cannot be ignored. In this study, the software and databases used in metabolomics are briefly presented and their capabilities for metabolite profiling are evaluated. In particular, the performance of XCMS, one of the most popular software packages, in the evaluation of LC-MS results for metabolomics is reviewed. In the near future, metabolomics with software and database support is estimated to be a routine

  6. Analysis of newly established EST databases reveals similarities between heart regeneration in newt and fish

    Directory of Open Access Journals (Sweden)

    Weis Patrick

    2010-01-01

    Full Text Available Abstract Background The newt Notophthalmus viridescens possesses the remarkable ability to respond to cardiac damage by formation of new myocardial tissue. Surprisingly little is known about changes in gene activities that occur during the course of regeneration. To begin to decipher the molecular processes, that underlie restoration of functional cardiac tissue, we generated an EST database from regenerating newt hearts and compared the transcriptional profile of selected candidates with genes deregulated during zebrafish heart regeneration. Results A cDNA library of 100,000 cDNA clones was generated from newt hearts 14 days after ventricular injury. Sequencing of 11520 cDNA clones resulted in 2894 assembled contigs. BLAST searches revealed 1695 sequences with potential homology to sequences from the NCBI database. BLAST searches to TrEMBL and Swiss-Prot databases assigned 1116 proteins to Gene Ontology terms. We also identified a relatively large set of 174 ORFs, which are likely to be unique for urodele amphibians. Expression analysis of newt-zebrafish homologues confirmed the deregulation of selected genes during heart regeneration. Sequences, BLAST results and GO annotations were visualized in a relational web based database followed by grouping of identified proteins into clusters of GO Terms. Comparison of data from regenerating zebrafish hearts identified biological processes, which were uniformly overrepresented during cardiac regeneration in newt and zebrafish. Conclusion We concluded that heart regeneration in newts and zebrafish led to the activation of similar sets of genes, which suggests that heart regeneration in both species might follow similar principles. The design of the newly established newt EST database allows identification of molecular pathways important for heart regeneration.

  7. GenoMycDB: a database for comparative analysis of mycobacterial genes and genomes.

    Science.gov (United States)

    Catanho, Marcos; Mascarenhas, Daniel; Degrave, Wim; Miranda, Antonio Basílio de

    2006-03-31

    Several databases and computational tools have been created with the aim of organizing, integrating and analyzing the wealth of information generated by large-scale sequencing projects of mycobacterial genomes and those of other organisms. However, with very few exceptions, these databases and tools do not allow for massive and/or dynamic comparison of these data. GenoMycDB (http://www.dbbm.fiocruz.br/GenoMycDB) is a relational database built for large-scale comparative analyses of completely sequenced mycobacterial genomes, based on their predicted protein content. Its central structure is composed of the results obtained after pair-wise sequence alignments among all the predicted proteins coded by the genomes of six mycobacteria: Mycobacterium tuberculosis (strains H37Rv and CDC1551), M. bovis AF2122/97, M. avium subsp. paratuberculosis K10, M. leprae TN, and M. smegmatis MC2 155. The database stores the computed similarity parameters of every aligned pair, providing for each protein sequence the predicted subcellular localization, the assigned cluster of orthologous groups, the features of the corresponding gene, and links to several important databases. Tables containing pairs or groups of potential homologs between selected species/strains can be produced dynamically by user-defined criteria, based on one or multiple sequence similarity parameters. In addition, searches can be restricted according to the predicted subcellular localization of the protein, the DNA strand of the corresponding gene and/or the description of the protein. Massive data search and/or retrieval are available, and different ways of exporting the result are offered. GenoMycDB provides an on-line resource for the functional classification of mycobacterial proteins as well as for the analysis of genome structure, organization, and evolution.
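
    The user-defined homolog queries described above can be pictured with a greatly simplified relational schema in Python/SQLite; the table and column names (alignments, identity, coverage, e_value) and the thresholds are assumptions for illustration and do not reflect GenoMycDB's actual schema.

      import sqlite3

      conn = sqlite3.connect("genomyc_demo.db")  # hypothetical local database
      conn.execute("""CREATE TABLE IF NOT EXISTS alignments (
                          query_protein TEXT, subject_protein TEXT,
                          query_genome TEXT, subject_genome TEXT,
                          identity REAL, coverage REAL, e_value REAL)""")

      # Retrieve putative homolog pairs between two genomes using user-defined similarity criteria.
      rows = conn.execute("""SELECT query_protein, subject_protein, identity, e_value
                             FROM alignments
                             WHERE query_genome = ? AND subject_genome = ?
                               AND identity >= ? AND coverage >= ? AND e_value <= ?
                             ORDER BY e_value""",
                          ("M_tuberculosis_H37Rv", "M_leprae_TN", 40.0, 70.0, 1e-10)).fetchall()
      for row in rows:
          print(row)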

  8. BIGNASim: a NoSQL database structure and analysis portal for nucleic acids simulation data

    Science.gov (United States)

    Hospital, Adam; Andrio, Pau; Cugnasco, Cesare; Codo, Laia; Becerra, Yolanda; Dans, Pablo D.; Battistini, Federica; Torres, Jordi; Goñi, Ramón; Orozco, Modesto; Gelpí, Josep Ll.

    2016-01-01

    Molecular dynamics simulation (MD) is, just behind genomics, the bioinformatics tool that generates the largest amounts of data, and that is using the largest amount of CPU time in supercomputing centres. MD trajectories are obtained after months of calculations, analysed in situ, and in practice forgotten. Several projects to generate stable trajectory databases have been developed for proteins, but no equivalence exists in the nucleic acids world. We present here a novel database system to store MD trajectories and analyses of nucleic acids. The initial data set available consists mainly of the benchmark of the new molecular dynamics force-field, parmBSC1. It contains 156 simulations, with over 120 μs of total simulation time. A deposition protocol is available to accept the submission of new trajectory data. The database is based on the combination of two NoSQL engines, Cassandra for storing trajectories and MongoDB to store analysis results and simulation metadata. The analyses available include backbone geometries, helical analysis, NMR observables and a variety of mechanical analyses. Individual trajectories and combined meta-trajectories can be downloaded from the portal. The system is accessible through http://mmb.irbbarcelona.org/BIGNASim/. Supplementary Material is also available on-line at http://mmb.irbbarcelona.org/BIGNASim/SuppMaterial/. PMID:26612862
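
    The two-engine storage design described in the abstract can be sketched roughly as below in Python, with bulky trajectory frames in Cassandra and analysis results plus metadata in MongoDB. The keyspace, table, collection and field names are invented for illustration and are not BIGNASim's real schema.

      from pymongo import MongoClient          # metadata and analysis results
      from cassandra.cluster import Cluster    # bulky per-frame trajectory data

      # Store simulation metadata and an analysis summary in MongoDB.
      mongo = MongoClient("mongodb://localhost:27017")
      mongo["nadb"]["simulations"].insert_one({"sim_id": "parmbsc1_demo", "force_field": "parmBSC1",
                                               "length_ns": 1000, "helical_rise_mean": 3.32})

      # Store per-frame coordinates in Cassandra, keyed by simulation and frame number.
      session = Cluster(["127.0.0.1"]).connect()
      session.execute("""CREATE KEYSPACE IF NOT EXISTS nadb
                         WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}""")
      session.set_keyspace("nadb")
      session.execute("""CREATE TABLE IF NOT EXISTS frames (
                             sim_id text, frame int, coords blob,
                             PRIMARY KEY (sim_id, frame))""")
      session.execute("INSERT INTO frames (sim_id, frame, coords) VALUES (%s, %s, %s)",
                      ("parmbsc1_demo", 0, b"\x00" * 24))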

  9. The phytophthora genome initiative database: informatics and analysis for distributed pathogenomic research.

    Science.gov (United States)

    Waugh, M; Hraber, P; Weller, J; Wu, Y; Chen, G; Inman, J; Kiphart, D; Sobral, B

    2000-01-01

    The Phytophthora Genome Initiative (PGI) is a distributed collaboration to study the genome and evolution of a particularly destructive group of plant pathogenic oomycetes, with the goal of understanding the mechanisms of infection and resistance. NCGR provides informatics support for the collaboration as well as a centralized data repository. In the pilot phase of the project, several investigators prepared Phytophthora infestans and Phytophthora sojae EST and Phytophthora sojae BAC libraries and sent them to another laboratory for sequencing. Data from sequencing reactions were transferred to NCGR for analysis and curation. An analysis pipeline transforms raw data by performing simple analyses (i.e., vector removal and similarity searching) that are stored and can be retrieved by investigators using a web browser. Here we describe the database and access tools, provide an overview of the data therein and outline future plans. This resource has provided a unique opportunity for the distributed, collaborative study of a genus from which relatively little sequence data are available. Results may lead to insight into how better to control these pathogens. The homepage of PGI can be accessed at http://www.ncgr.org/pgi, with database access through the database access hyperlink.

  10. BIGNASim: a NoSQL database structure and analysis portal for nucleic acids simulation data.

    Science.gov (United States)

    Hospital, Adam; Andrio, Pau; Cugnasco, Cesare; Codo, Laia; Becerra, Yolanda; Dans, Pablo D; Battistini, Federica; Torres, Jordi; Goñi, Ramón; Orozco, Modesto; Gelpí, Josep Ll

    2016-01-04

    Molecular dynamics simulation (MD) is, just behind genomics, the bioinformatics tool that generates the largest amounts of data, and that is using the largest amount of CPU time in supercomputing centres. MD trajectories are obtained after months of calculations, analysed in situ, and in practice forgotten. Several projects to generate stable trajectory databases have been developed for proteins, but no equivalence exists in the nucleic acids world. We present here a novel database system to store MD trajectories and analyses of nucleic acids. The initial data set available consists mainly of the benchmark of the new molecular dynamics force-field, parmBSC1. It contains 156 simulations, with over 120 μs of total simulation time. A deposition protocol is available to accept the submission of new trajectory data. The database is based on the combination of two NoSQL engines, Cassandra for storing trajectories and MongoDB to store analysis results and simulation metadata. The analyses available include backbone geometries, helical analysis, NMR observables and a variety of mechanical analyses. Individual trajectories and combined meta-trajectories can be downloaded from the portal. The system is accessible through http://mmb.irbbarcelona.org/BIGNASim/. Supplementary Material is also available on-line at http://mmb.irbbarcelona.org/BIGNASim/SuppMaterial/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. Grid Databases for Shared Image Analysis in the MammoGrid Project

    CERN Document Server

    Amendolia, S R; Hauer, T; Manset, D; McClatchey, R; Odeh, M; Reading, T; Rogulin, D; Schottlander, D; Solomonides, T

    2004-01-01

    The MammoGrid project aims to prove that Grid infrastructures can be used for collaborative clinical analysis of database-resident but geographically distributed medical images. This requires: a) the provision of a clinician-facing front-end workstation and b) the ability to service real-world clinician queries across a distributed and federated database. The MammoGrid project will prove the viability of the Grid by harnessing its power to enable radiologists from geographically dispersed hospitals to share standardized mammograms, to compare diagnoses (with and without computer aided detection of tumours) and to perform sophisticated epidemiological studies across national boundaries. This paper outlines the approach taken in MammoGrid to seamlessly connect radiologist workstations across a Grid using an "information infrastructure" and a DICOM-compliant object model residing in multiple distributed data stores in Italy and the UK

  12. A reference methylome database and analysis pipeline to facilitate integrative and comparative epigenomics.

    Directory of Open Access Journals (Sweden)

    Qiang Song

    Full Text Available DNA methylation is implicated in a surprising diversity of regulatory, evolutionary processes and diseases in eukaryotes. The introduction of whole-genome bisulfite sequencing has enabled the study of DNA methylation at a single-base resolution, revealing many new aspects of DNA methylation and highlighting the usefulness of methylome data in understanding a variety of genomic phenomena. As the number of publicly available whole-genome bisulfite sequencing studies reaches into the hundreds, reliable and convenient tools for comparing and analyzing methylomes become increasingly important. We present MethPipe, a pipeline for both low and high-level methylome analysis, and MethBase, an accompanying database of annotated methylomes from the public domain. Together these resources enable researchers to extract interesting features from methylomes and compare them with those identified in public methylomes in our database.

  13. Analysis of a global energy confinement database for JET ohmic plasmas

    International Nuclear Information System (INIS)

    Bracco, G.; Thomsen, K.

    1997-01-01

    A database containing global energy confinement data for JET ohmic plasmas in the campaigns from 1984 to 1992 has been established. An analysis is presented of this database and the results are compared with data from other tokamaks, such as the Axially Symmetric Divertor Experiment (ASDEX), Frascati Tokamak Upgrade (FTU) and Tore Supra. The trends of JET ohmic confinement appear to be similar to those observed on other tokamaks: a linear dependence of the global energy confinement time on density is observed up to a density value where a saturation is attained; this density value defines the border between the linear and the saturated ohmic confinement regimes; this border is shifted towards higher density values if the q value of the discharge is decreased; the global confinement time in the saturated ohmic regime increases less than linearly with the value of the magnetic field. (author). 20 refs, 13 figs, 4 tabs
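
    The linear-then-saturated density dependence of the ohmic confinement time described above can be captured with a simple two-parameter break-point fit; the synthetic data points and the model form in this Python sketch are illustrative only and are not taken from the JET database.

      import numpy as np
      from scipy.optimize import curve_fit

      def tau_model(n_e, slope, n_sat):
          """Confinement time: linear in density up to n_sat, constant above (illustrative model)."""
          return slope * np.minimum(n_e, n_sat)

      # Synthetic line-averaged density (1e19 m^-3) and confinement time (s), saturating near n_e ~ 3.
      n_e = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5])
      tau = np.array([0.11, 0.21, 0.30, 0.41, 0.50, 0.59, 0.61, 0.60, 0.62])

      params, _ = curve_fit(tau_model, n_e, tau, p0=[0.2, 3.0])
      print(f"slope = {params[0]:.3f} s per 1e19 m^-3, saturation density = {params[1]:.2f} x 1e19 m^-3")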

  14. Analysis of high accuracy, quantitative proteomics data in the MaxQB database.

    Science.gov (United States)

    Schaab, Christoph; Geiger, Tamar; Stoehr, Gabriele; Cox, Juergen; Mann, Matthias

    2012-03-01

    MS-based proteomics generates rapidly increasing amounts of precise and quantitative information. Analysis of individual proteomic experiments has made great strides, but the crucial ability to compare and store information across different proteome measurements still presents many challenges. For example, it has been difficult to avoid contamination of databases with low quality peptide identifications, to control for the inflation in false positive identifications when combining data sets, and to integrate quantitative data. Although, for example, the contamination with low quality identifications has been addressed by joint analysis of deposited raw data in some public repositories, we reasoned that there should be a role for a database specifically designed for high resolution and quantitative data. Here we describe a novel database termed MaxQB that stores and displays collections of large proteomics projects and allows joint analysis and comparison. We demonstrate the analysis tools of MaxQB using proteome data of 11 different human cell lines and 28 mouse tissues. The database-wide false discovery rate is controlled by adjusting the project specific cutoff scores for the combined data sets. The 11 cell line proteomes together identify proteins expressed from more than half of all human genes. For each protein of interest, expression levels estimated by label-free quantification can be visualized across the cell lines. Similarly, the expression rank order and estimated amount of each protein within each proteome are plotted. We used MaxQB to calculate the signal reproducibility of the detected peptides for the same proteins across different proteomes. Spearman rank correlation between peptide intensity and detection probability of identified proteins was greater than 0.8 for 64% of the proteome, whereas a minority of proteins have negative correlation. This information can be used to pinpoint false protein identifications, independently of peptide database
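
    The reproducibility measure mentioned at the end of the abstract, a Spearman rank correlation between peptide intensity and detection probability, can be computed generically as in the Python sketch below; the arrays are hypothetical stand-ins for the per-protein peptide data in MaxQB.

      import numpy as np
      from scipy.stats import spearmanr

      # Hypothetical peptides of one protein: summed intensity and the fraction of runs
      # in which each peptide was detected (a simple stand-in for detection probability).
      intensity = np.array([2.1e7, 8.5e6, 4.0e6, 1.2e6, 6.3e5])
      detection_probability = np.array([1.00, 0.95, 0.70, 0.40, 0.25])

      rho, p_value = spearmanr(intensity, detection_probability)
      print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
      # Proteins whose peptides show strongly negative rho would be candidates for
      # false identifications, as discussed in the abstract.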

  15. An integrated data-analysis and database system for AMS 14C

    Energy Technology Data Exchange (ETDEWEB)

    Kjeldsen, Henrik, E-mail: kjeldsen@phys.au.d [AMS 14C Dating Centre, Department of Physics and Astronomy, Aarhus University, Aarhus (Denmark); Olsen, Jesper [Department of Earth Sciences, Aarhus University, Aarhus (Denmark); Heinemeier, Jan [AMS 14C Dating Centre, Department of Physics and Astronomy, Aarhus University, Aarhus (Denmark)

    2010-04-15

    AMSdata is the name of a combined database and data-analysis system for AMS 14C and stable-isotope work that has been developed at Aarhus University. The system (1) contains routines for data analysis of AMS and MS data, (2) allows a flexible and accurate description of sample extraction and pretreatment, also when samples are split into several fractions, and (3) keeps track of all measured, calculated and attributed data. The structure of the database is flexible and allows an unlimited number of measurement and pretreatment procedures. The AMS 14C data analysis routine is fairly advanced and flexible, and it can be easily optimized for different kinds of measuring processes. Technically, the system is based on a Microsoft SQL server and includes stored SQL procedures for the data analysis. Microsoft Office Access is used for the (graphical) user interface, and in addition Excel, Word and Origin are exploited for input and output of data, e.g. for plotting data during data analysis.
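
    For context, the central quantity that an AMS 14C data-analysis routine ultimately delivers is a conventional radiocarbon age derived from the normalized fraction modern. The Python sketch below shows that standard relation (Libby mean life of 8033 years) with simple error propagation; it is a generic textbook calculation, not AMSdata's actual code.

      import math

      def conventional_radiocarbon_age(fraction_modern, sigma_fraction=None):
          """Conventional 14C age (years BP) from the delta13C-normalized fraction modern."""
          age = -8033.0 * math.log(fraction_modern)
          if sigma_fraction is None:
              return age
          sigma_age = 8033.0 * sigma_fraction / fraction_modern
          return age, sigma_age

      print(conventional_radiocarbon_age(0.7500, 0.0030))  # roughly 2310 +/- 32 years BP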

  16. An integrated data-analysis and database system for AMS 14C

    International Nuclear Information System (INIS)

    Kjeldsen, Henrik; Olsen, Jesper; Heinemeier, Jan

    2010-01-01

    AMSdata is the name of a combined database and data-analysis system for AMS 14C and stable-isotope work that has been developed at Aarhus University. The system (1) contains routines for data analysis of AMS and MS data, (2) allows a flexible and accurate description of sample extraction and pretreatment, also when samples are split into several fractions, and (3) keeps track of all measured, calculated and attributed data. The structure of the database is flexible and allows an unlimited number of measurement and pretreatment procedures. The AMS 14C data analysis routine is fairly advanced and flexible, and it can be easily optimized for different kinds of measuring processes. Technically, the system is based on a Microsoft SQL server and includes stored SQL procedures for the data analysis. Microsoft Office Access is used for the (graphical) user interface, and in addition Excel, Word and Origin are exploited for input and output of data, e.g. for plotting data during data analysis.

  17. Construction of an ortholog database using the semantic web technology for integrative analysis of genomic data.

    Science.gov (United States)

    Chiba, Hirokazu; Nishide, Hiroyo; Uchiyama, Ikuo

    2015-01-01

    Recently, various types of biological data, including genomic sequences, have been rapidly accumulating. To discover biological knowledge from such growing heterogeneous data, a flexible framework for data integration is necessary. Ortholog information is a central resource for interlinking corresponding genes among different organisms, and the Semantic Web provides a key technology for the flexible integration of heterogeneous data. We have constructed an ortholog database using the Semantic Web technology, aiming at the integration of numerous genomic data and various types of biological information. To formalize the structure of the ortholog information in the Semantic Web, we have constructed the Ortholog Ontology (OrthO). While the OrthO is a compact ontology for general use, it is designed to be extended to the description of database-specific concepts. On the basis of OrthO, we described the ortholog information from our Microbial Genome Database for Comparative Analysis (MBGD) in the form of Resource Description Framework (RDF) and made it available through the SPARQL endpoint, which accepts arbitrary queries specified by users. In this framework based on the OrthO, the biological data of different organisms can be integrated using the ortholog information as a hub. Besides, the ortholog information from different data sources can be compared with each other using the OrthO as a shared ontology. Here we show some examples demonstrating that the ortholog information described in RDF can be used to link various biological data such as taxonomy information and Gene Ontology. Thus, the ortholog database using the Semantic Web technology can contribute to biological knowledge discovery through integrative data analysis.
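
    How such a SPARQL endpoint might be queried from a Python script is sketched below; the endpoint URL, prefix and property names are placeholders chosen for illustration and are not the actual OrthO/MBGD vocabulary.

      from SPARQLWrapper import SPARQLWrapper, JSON

      # Placeholder endpoint and vocabulary; substitute the real endpoint and OrthO terms.
      endpoint = SPARQLWrapper("http://example.org/sparql")
      endpoint.setQuery("""
          PREFIX orth: <http://example.org/ortho#>
          SELECT ?gene ?group ?organism
          WHERE {
              ?group orth:member ?gene .
              ?gene orth:organism ?organism .
          }
          LIMIT 10
      """)
      endpoint.setReturnFormat(JSON)
      results = endpoint.query().convert()
      for binding in results["results"]["bindings"]:
          print(binding["gene"]["value"], binding["organism"]["value"])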

  18. The mining of toxin-like polypeptides from EST database by single residue distribution analysis.

    Science.gov (United States)

    Kozlov, Sergey; Grishin, Eugene

    2011-01-31

    Novel high throughput sequencing technologies require permanent development of bioinformatics data processing methods. Among them, rapid and reliable identification of encoded proteins plays a pivotal role. To search for particular protein families, the amino acid sequence motifs suitable for selective screening of nucleotide sequence databases may be used. In this work, we suggest a novel method for simplified representation of protein amino acid sequences named Single Residue Distribution Analysis, which is applicable both for homology search and database screening. Using the procedure developed, a search for amino acid sequence motifs in sea anemone polypeptides was performed, and 14 different motifs with broad and low specificity were discriminated. The adequacy of motifs for mining toxin-like sequences was confirmed by their ability to identify 100% toxin-like anemone polypeptides in the reference polypeptide database. The employment of novel motifs for the search of polypeptide toxins in Anemonia viridis EST dataset allowed us to identify 89 putative toxin precursors. The translated and modified ESTs were scanned using a special algorithm. In addition to direct comparison with the motifs developed, the putative signal peptides were predicted and homology with known structures was examined. The suggested method may be used to retrieve structures of interest from the EST databases using simple amino acid sequence motifs as templates. The efficiency of the procedure for directed search of polypeptides is higher than that of most currently used methods. Analysis of 39939 ESTs of sea anemone Anemonia viridis resulted in identification of five protein precursors of earlier described toxins, discovery of 43 novel polypeptide toxins, and prediction of 39 putative polypeptide toxin sequences. In addition, two precursors of novel peptides presumably displaying neuronal function were disclosed.
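
    As a generic illustration of motif-based screening of translated ESTs (not the authors' Single Residue Distribution Analysis itself), a regular-expression scan such as the Python sketch below can shortlist candidate toxin-like sequences; the cysteine-spacing motif and the sequences are made up for the example.

      import re

      # Made-up cysteine-spacing motif written as a regular expression; real motifs
      # would come from an analysis like the one described in the abstract.
      MOTIF = re.compile(r"C.{2,6}C.{3,8}C.{2,6}C")

      translated_ests = {
          "EST_000123": "MKTLLVLALVAVALACSDGCKQHGDCCSGLECTFK",
          "EST_000124": "MSSPQQLRTAAGGAVVAAPKDEAAQLLKK",
      }

      for est_id, protein in translated_ests.items():
          hit = MOTIF.search(protein)
          if hit:
              print(f"{est_id}: candidate toxin-like motif at {hit.start()}-{hit.end()} ({hit.group()})")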

  19. The mining of toxin-like polypeptides from EST database by single residue distribution analysis

    Directory of Open Access Journals (Sweden)

    Grishin Eugene

    2011-01-01

    Full Text Available Abstract Background Novel high throughput sequencing technologies require permanent development of bioinformatics data processing methods. Among them, rapid and reliable identification of encoded proteins plays a pivotal role. To search for particular protein families, the amino acid sequence motifs suitable for selective screening of nucleotide sequence databases may be used. In this work, we suggest a novel method for simplified representation of protein amino acid sequences named Single Residue Distribution Analysis, which is applicable both for homology search and database screening. Results Using the procedure developed, a search for amino acid sequence motifs in sea anemone polypeptides was performed, and 14 different motifs with broad and low specificity were discriminated. The adequacy of motifs for mining toxin-like sequences was confirmed by their ability to identify 100% toxin-like anemone polypeptides in the reference polypeptide database. The employment of novel motifs for the search of polypeptide toxins in Anemonia viridis EST dataset allowed us to identify 89 putative toxin precursors. The translated and modified ESTs were scanned using a special algorithm. In addition to direct comparison with the motifs developed, the putative signal peptides were predicted and homology with known structures was examined. Conclusions The suggested method may be used to retrieve structures of interest from the EST databases using simple amino acid sequence motifs as templates. The efficiency of the procedure for directed search of polypeptides is higher than that of most currently used methods. Analysis of 39939 ESTs of sea anemone Anemonia viridis resulted in identification of five protein precursors of earlier described toxins, discovery of 43 novel polypeptide toxins, and prediction of 39 putative polypeptide toxin sequences. In addition, two precursors of novel peptides presumably displaying neuronal function were disclosed.

  20. Secure and robust cloud computing for high-throughput forensic microsatellite sequence analysis and databasing.

    Science.gov (United States)

    Bailey, Sarah F; Scheible, Melissa K; Williams, Christopher; Silva, Deborah S B S; Hoggan, Marina; Eichman, Christopher; Faith, Seth A

    2017-11-01

    Next-generation Sequencing (NGS) is a rapidly evolving technology with demonstrated benefits for forensic genetic applications, and the strategies to analyze and manage the massive NGS datasets are currently in development. Here, the computing, data storage, connectivity, and security resources of the Cloud were evaluated as a model for forensic laboratory systems that produce NGS data. A complete front-to-end Cloud system was developed to upload, process, and interpret raw NGS data using a web browser dashboard. The system was extensible, demonstrating analysis capabilities of autosomal and Y-STRs from a variety of NGS instrumentation (Illumina MiniSeq and MiSeq, and Oxford Nanopore MinION). NGS data for STRs were concordant with standard reference materials previously characterized with capillary electrophoresis and Sanger sequencing. The computing power of the Cloud was implemented with on-demand auto-scaling to allow multiple file analysis in tandem. The system was designed to store resulting data in a relational database, amenable to downstream sample interpretations and databasing applications following the most recent guidelines in nomenclature for sequenced alleles. Lastly, a multi-layered Cloud security architecture was tested and showed that industry standards for securing data and computing resources were readily applied to the NGS system without disadvantageous effects for bioinformatic analysis, connectivity or data storage/retrieval. The results of this study demonstrate the feasibility of using Cloud-based systems for secured NGS data analysis, storage, databasing, and multi-user distributed connectivity. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Improving the analysis, storage and sharing of neuroimaging data using relational databases and distributed computing.

    Science.gov (United States)

    Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L

    2008-01-15

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.

  2. Towards cloud-centric distributed database evaluation

    OpenAIRE

    Seybold, Daniel

    2016-01-01

    The area of cloud computing has also pushed the evolution of distributed databases, resulting in a variety of distributed database systems, which can be classified into relational databases, NoSQL and NewSQL database systems. In general, all representatives of these database system classes claim to provide elasticity and "unlimited" horizontal scalability. As these characteristics comply with the cloud, distributed databases seem to be a perfect match for Database-as-a-Service systems (DBaaS).

  3. Towards Cloud-centric Distributed Database Evaluation

    OpenAIRE

    Seybold, Daniel

    2016-01-01

    The area of cloud computing has also pushed the evolution of distributed databases, resulting in a variety of distributed database systems, which can be classified into relational databases, NoSQL and NewSQL database systems. In general, all representatives of these database system classes claim to provide elasticity and "unlimited" horizontal scalability. As these characteristics comply with the cloud, distributed databases seem to be a perfect match for Database-as-a-Service systems (DBaaS).

  4. Regular Benzodiazepine and Z-Substance Use and Risk of Dementia: An Analysis of German Claims Data.

    Science.gov (United States)

    Gomm, Willy; von Holt, Klaus; Thomé, Friederike; Broich, Karl; Maier, Wolfgang; Weckbecker, Klaus; Fink, Anne; Doblhammer, Gabriele; Haenisch, Britta

    2016-09-06

    While acute detrimental effects of benzodiazepine (BDZ) and of BDZ and related z-substance (BDZR) use on cognition and memory are known, the association between BDZR use and the risk of dementia in the elderly is controversially discussed. Previous studies on cohort or claims data mostly show an increased risk for dementia with the use of BDZs or BDZRs. For Germany, analyses on large population-based data sets are missing. To evaluate the association between regular BDZR use and any incident dementia in a large German claims data set. Using longitudinal German public health insurance data from 2004 to 2011, we analyzed the association between regular BDZR use (versus no BDZR use) and incident dementia in a case-control design. We examined patient samples aged ≥60 years that were free of dementia at baseline. To address potential protopathic bias we introduced a lag time between BDZR prescription and dementia diagnosis. Odds ratios were calculated applying conditional logistic regression, adjusted for potential confounding factors such as comorbidities and polypharmacy. The regular use of BDZRs was associated with a significantly increased risk of incident dementia for patients aged ≥60 years (adjusted odds ratio [OR] 1.21, 95% confidence interval [CI] 1.13-1.29). The association was slightly stronger for long-acting substances than for short-acting ones. A trend for increased risk for dementia with higher exposure was observed. The restricted use of BDZRs may contribute to dementia prevention in the elderly.
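
    The matched case-control design described here is typically analyzed with conditional logistic regression; a minimal Python sketch using statsmodels is shown below, with fabricated data and hypothetical column names for the matched-set identifier, exposure and a confounder. It is not the authors' code or data.

      import numpy as np
      import pandas as pd
      from statsmodels.discrete.conditional_models import ConditionalLogit

      # Fabricated matched case-control data: one row per person,
      # 'matched_set' links each dementia case to its matched control.
      df = pd.DataFrame({
          "matched_set":  [1, 1, 2, 2, 3, 3, 4, 4],
          "dementia":     [1, 0, 1, 0, 1, 0, 1, 0],
          "regular_bdzr": [1, 0, 0, 1, 1, 0, 1, 0],
          "polypharmacy": [1, 0, 1, 1, 0, 1, 1, 0],
      })

      model = ConditionalLogit(df["dementia"],
                               df[["regular_bdzr", "polypharmacy"]],
                               groups=df["matched_set"])
      result = model.fit()
      print(np.exp(result.params))  # adjusted odds ratios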

  5. Predictors of latent tuberculosis infection treatment completion in the US private sector: an analysis of administrative claims data.

    Science.gov (United States)

    Stockbridge, Erica L; Miller, Thaddeus L; Carlson, Erin K; Ho, Christine

    2018-05-29

    Factors that affect latent tuberculosis infection (LTBI) treatment completion in the US have not been well studied beyond public health settings. This gap was highlighted by recent health insurance-related regulatory changes that are likely to increase LTBI treatment by private sector healthcare providers. We analyzed LTBI treatment completion in the private healthcare setting to facilitate planning around this important opportunity for tuberculosis (TB) control in the US. We analyzed a national sample of commercial insurance medical and pharmacy claims data for people ages 0 to 64 years who initiated daily dose isoniazid treatment between July 2011 and March 2014 and who had complete data. All individuals resided in the US. Factors associated with treatment completion were examined using multivariable generalized ordered logit models and bivariate Kruskal-Wallis tests or Spearman correlations. We identified 1072 individuals with complete data who initiated isoniazid LTBI treatment. Treatment completion was significantly associated with less restrictive health insurance and age. Private sector healthcare claims data provide insights into LTBI treatment completion patterns and patient/provider behaviors. Such information is critical to understanding the opportunities and limitations of private healthcare in the US to support treatment completion as this sector's role in protecting against and eliminating TB grows.

  6. Laser Therapy and Dementia: A Database Analysis and Future Aspects on LED-Based Systems

    Directory of Open Access Journals (Sweden)

    Daniela Litscher

    2014-01-01

    Full Text Available Mainly because of the movement in the age pyramid, one can assume that the incidence of Alzheimer’s disease or dementia in general will increase in the coming decades. This paper employs a database analysis to examine the profile of publication activity related to this topic. Two databases were searched: PubMed and Cochrane Library. About 600 papers related to the research area "dementia and laser" and about 450 papers related to the search terms "Alzheimer and laser" were found in these two most commonly used databases. Ten plus one papers are described in detail and are discussed in the context of the laser research performed at the Medical University of Graz. First results concerning the measurement of the transmission factor (TF) through the human skull of a new LED (light emitting diode) based system are presented (TF = 0.0434 ± 0.0104 (SD)). The measurements show that this LED system (using the QIT (quantum optical induced transparency) effect) might be used in the treatment of dementia.

  7. On the advancement of highly cited research in China: An analysis of the Highly Cited database.

    Science.gov (United States)

    Li, John Tianci

    2018-01-01

    This study investigates the progress of highly cited research in China from 2001 to 2016 through the analysis of the Highly Cited database. The Highly Cited database, compiled by Clarivate Analytics, is comprised of the world's most influential researchers in the 22 Essential Science Indicator fields as catalogued by the Web of Science. The database is considered an international standard for the measurement of national and institutional highly cited research output. Overall, we found a consistent and substantial increase in Highly Cited Researchers from China during the timespan. The Chinese institutions with the most Highly Cited Researchers (the Chinese Academy of Sciences, Tsinghua University, Peking University, Zhejiang University, the University of Science and Technology of China, and BGI Shenzhen) are all top ten universities or primary government research institutions. Further evaluation of separate fields of research and government funding data from the National Natural Science Foundation of China revealed disproportionate growth efficiencies among the separate divisions of the National Natural Science Foundation. The most development occurred in the fields of Chemistry, Materials Sciences, and Engineering, whereas the least development occurred in Economics and Business, Health Sciences, and Life Sciences.

  8. Clinical characteristics and outcomes of myxedema coma: Analysis of a national inpatient database in Japan

    Directory of Open Access Journals (Sweden)

    Yosuke Ono

    2017-04-01

    Full Text Available Background: Myxedema coma is a life-threatening and emergency presentation of hypothyroidism. However, the clinical features and outcomes of this condition have been poorly defined because of its rarity. Methods: We conducted a retrospective observational study of patients diagnosed with myxedema coma from July 2010 through March 2013 using a national inpatient database in Japan. We investigated characteristics, comorbidities, treatments, and in-hospital mortality of patients with myxedema coma. Results: We identified 149 patients diagnosed with myxedema coma out of approximately 19 million inpatients in the database. The mean (standard deviation) age was 77 (12) years, and two-thirds of the patients were female. The overall proportion of in-hospital mortality among cases was 29.5%. The number of patients was highest in the winter season. Patients treated with steroids, catecholamines, or mechanical ventilation showed higher in-hospital mortality than those without. Variations in type and dosage of thyroid hormone replacement were not associated with in-hospital mortality. The most common comorbidity was cardiovascular diseases (40.3%). The estimated incidence of myxedema coma was 1.08 per million people per year in Japan. Multivariable logistic regression analysis revealed that higher age and use of catecholamines (with or without steroids) were significantly associated with higher in-hospital mortality. Conclusions: The present study identified the clinical characteristics and outcomes of patients with myxedema coma using a large-scale database. Myxedema coma mortality was independently associated with age and severe conditions requiring treatment with catecholamines.

  9. Analysis and databasing software for integrated tomographic gamma scanner (TGS) and passive-active neutron (PAN) assay systems

    International Nuclear Information System (INIS)

    Estep, R.J.; Melton, S.G.; Buenafe, C.

    2000-01-01

    The CTEN-FIT program, written for Windows 9x/NT in C++, performs databasing and analysis of combined thermal/epithermal neutron (CTEN) passive and active neutron assay data and integrates that with isotopics results and gamma-ray data from methods such as tomographic gamma scanning (TGS). The binary database is reflected in a companion Excel database that allows extensive customization via Visual Basic for Applications macros. Automated analysis options make the analysis of the data transparent to the assay system operator. Various record browsers and information displays simplify record keeping tasks.

  10. LCI Databases Sensitivity Analysis of the Environmental Impact of the Injection Molding Process

    Directory of Open Access Journals (Sweden)

    Ana Elduque

    2015-03-01

    Full Text Available During the last decades, society’s concern for the environment has increased. Specific tools like the Life Cycle Assessment (LCA), and software and databases to apply this method, have been developed to calculate the environmental burden of products or processes. Calculating the environmental impact of plastic products is relevant as the global plastics production rose to 288 million tons in 2012. Among the different ways of processing plastics, the injection molding process is one of the most used in the industry worldwide. In this paper, a sensitivity analysis of the environmental impact of the injection molding process has been carried out. In order to perform this study, the EcoInvent database inventory for injection molding, and the data from which this database is created, have been studied. Generally, when an LCA of a product is carried out, databases such as EcoInvent, where materials, processes and transports are characterized providing average values, are used to quantify the environmental impact. This approach can be good enough in some cases, but in order to assess a specific production process, like injection molding, a further level of detail is needed. This study shows how the final results of environmental impact differ for injection molding when using PVC, PP or PET data. This aspect suggests the necessity of studying this process in a more precise way to correctly evaluate its environmental burden. This also allows us to identify priority areas and thereby actions to develop a more sustainable way of manufacturing plastics.

  11. Soil Carbon Variability and Change Detection in the Forest Inventory Analysis Database of the United States

    Science.gov (United States)

    Wu, A. M.; Nater, E. A.; Dalzell, B. J.; Perry, C. H.

    2014-12-01

    The USDA Forest Service's Forest Inventory Analysis (FIA) program is a national effort assessing current forest resources to ensure sustainable management practices, to assist planning activities, and to report critical status and trends. For example, estimates of carbon stocks and stock change in FIA are reported as the official United States submission to the United Nations Framework Convention on Climate Change. While the main effort in FIA has been focused on aboveground biomass, soil is a critical component of this system. FIA sampled forest soils in the early 2000s and has remeasurement now underway. However, soil sampling is repeated on a 10-year interval (or longer), and it is uncertain what magnitude of changes in soil organic carbon (SOC) may be detectable with the current sampling protocol. We aim to identify the sensitivity and variability of SOC in the FIA database, and to determine the amount of SOC change that can be detected with the current sampling scheme. For this analysis, we attempt to answer the following questions: 1) What is the sensitivity (power) of SOC data in the current FIA database? 2) How does the minimum detectable change in forest SOC respond to changes in sampling intervals and/or sample point density? Soil samples in the FIA database represent 0-10 cm and 10-20 cm depth increments with a 10-year sampling interval. We are investigating the variability of SOC and its change over time for composite soil data in each FIA region (Pacific Northwest, Interior West, Northern, and Southern). To guide future sampling efforts, we are employing statistical power analysis to examine the minimum detectable change in SOC storage. We are also investigating the sensitivity of SOC storage changes under various scenarios of sample size and/or sample frequency. This research will inform the design of future FIA soil sampling schemes and improve the information available to international policy makers, university and industry partners, and the public.
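
    A back-of-the-envelope version of the minimum-detectable-change question posed above can be framed as a paired t-test power calculation, as in the Python sketch below; the number of remeasured plots and the standard deviation of the plot-level SOC change are placeholder values, not FIA estimates.

      from statsmodels.stats.power import TTestPower

      # Placeholder inputs: remeasured plots and the standard deviation of the
      # plot-level change in SOC stock (Mg C per ha) between inventories.
      n_plots = 400
      sd_change = 12.0
      alpha, power = 0.05, 0.80

      # Solve for the standardized effect size detectable with this design,
      # then convert it back to original units.
      detectable_d = TTestPower().solve_power(effect_size=None, nobs=n_plots,
                                              alpha=alpha, power=power)
      print(f"Minimum detectable change ~ {detectable_d * sd_change:.2f} Mg C per ha")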

  12. Meta-analysis of pulsed-field gel electrophoresis fingerprints based on a constructed Salmonella database.

    Directory of Open Access Journals (Sweden)

    Wen Zou

    Full Text Available A database was constructed consisting of 45,923 Salmonella pulsed-field gel electrophoresis (PFGE) patterns. The patterns, randomly selected from all submissions to CDC PulseNet during 2005 to 2010, included the 20 most frequent serotypes and 12 less frequent serotypes. Meta-analysis was applied to all of the PFGE patterns in the database. In the range of 20 to 1100 kb, serotype Enteritidis averaged the fewest bands at 12 and Paratyphi A the most with 19, with most serotypes in the 13-15 range among the 32 serotypes. The 10 most frequent bands for each of the 32 serotypes were sorted and distinguished, and the results were in concordance with those from distance matrix and two-way hierarchical cluster analyses of the patterns in the database. The hierarchical cluster analysis divided the 32 serotypes into three major groups according to dissimilarity measures, and revealed for the first time the similarities of the PFGE patterns of serotype Saintpaul to those of serotypes Typhimurium, Typhimurium var. 5-, and I 4,[5],12:i:-; of serotype Hadar to serotype Infantis; and of serotype Muenchen to serotype Newport. The results of the meta-analysis indicated that the pattern similarities/dissimilarities determined the serotype discrimination of the PFGE method, and that the possible PFGE markers may have utility for serotype identification. The presence of distinct, serotype specific patterns may provide useful information to aid in determining the distribution of serotypes in the population and potentially reduce the need for laborious analyses, such as traditional serotyping.
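
    The two-way hierarchical clustering applied to the PFGE patterns can be sketched compactly in Python on a band-presence matrix; the 0/1 matrix below is fabricated, and the Dice distance and average linkage are assumptions rather than the paper's exact settings.

      import numpy as np
      from scipy.cluster.hierarchy import dendrogram, linkage
      from scipy.spatial.distance import pdist

      # Fabricated band-presence matrix: rows are serotypes, columns are band-size classes (1 = band present).
      serotypes = ["Enteritidis", "Typhimurium", "Saintpaul", "Newport"]
      bands = np.array([[1, 0, 1, 1, 0, 0, 1],
                        [1, 1, 0, 1, 1, 0, 0],
                        [1, 1, 0, 1, 1, 0, 1],
                        [0, 1, 1, 0, 1, 1, 0]])

      # Dice dissimilarity is a common choice for comparing binary fingerprint profiles.
      distances = pdist(bands, metric="dice")
      tree = linkage(distances, method="average")
      dendrogram(tree, labels=serotypes, no_plot=True)  # set no_plot=False to draw with matplotlib
      print(tree)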

  13. Data-base system for northern Midwest regional aquifer-system analysis

    Science.gov (United States)

    Kontis, A.L.; Mandle, Richard J.

    1980-01-01

    The U.S. Geological Survey is conducting a study of the Cambrian and Ordovician aquifer system of the northern Midwest as part of a national series of Regional Aquifer-Systems Analysis (RASA). An integral part of this study will be a simulation of the ground-water flow regime using the Geological Survey's three-dimensional finite-difference model. The first step in the modeling effort is the design and development of a systematic set of processes to facilitate the collection, evaluation, manipulation, and use of large quantities of information. A computerized data-base system to accomplish these goals has been completed for the northern Midwest RASA.

  14. The Government Finance Database: A Common Resource for Quantitative Research in Public Financial Analysis.

    Science.gov (United States)

    Pierson, Kawika; Hand, Michael L; Thompson, Fred

    2015-01-01

    Quantitative public financial management research focused on local governments is limited by the absence of a common database for empirical analysis. While the U.S. Census Bureau distributes government finance data that some scholars have utilized, the arduous process of collecting, interpreting, and organizing the data has made its adoption prohibitively costly and inconsistent. In this article we offer a single, coherent resource that contains all of the government financial data from 1967 to 2012, uses easy-to-understand natural-language variable names, and will be extended when new data become available.

  15. RETINOBASE: a web database, data mining and analysis platform for gene expression data on retina

    Directory of Open Access Journals (Sweden)

    Léveillard Thierry

    2008-05-01

    Full Text Available Abstract Background The retina is a multi-layered sensory tissue that lines the back of the eye and acts at the interface of input light and visual perception. Its main function is to capture photons and convert them into electrical impulses that travel along the optic nerve to the brain where they are turned into images. It consists of neurons, nourishing blood vessels and different cell types, of which neural cells predominate. Defects in any of these cells can lead to a variety of retinal diseases, including age-related macular degeneration, retinitis pigmentosa, Leber congenital amaurosis and glaucoma. Recent progress in genomics and microarray technology provides extensive opportunities to examine alterations in retinal gene expression profiles during development and diseases. However, there is no specific database that deals with retinal gene expression profiling. In this context we have built RETINOBASE, a dedicated microarray database for retina. Description RETINOBASE is a microarray relational database, analysis and visualization system that allows simple yet powerful queries to retrieve information about gene expression in retina. It provides access to gene expression meta-data and offers significant insights into gene networks in retina, resulting in better hypothesis framing for biological problems that can subsequently be tested in the laboratory. Public and proprietary data are automatically analyzed with 3 distinct methods (RMA, dChip and MAS5), then clustered using 2 different K-means methods and 1 mixture-model method. Thus, RETINOBASE provides a framework to compare these methods and to optimize the retinal data analysis. RETINOBASE has three different modules, "Gene Information", "Raw Data System Analysis" and "Fold change system Analysis", that are interconnected in a relational schema, allowing efficient retrieval and cross comparison of data. Currently, RETINOBASE contains datasets from 28 different microarray experiments performed

  16. TranslatomeDB: a comprehensive database and cloud-based analysis platform for translatome sequencing data.

    Science.gov (United States)

    Liu, Wanting; Xiang, Lunping; Zheng, Tingkai; Jin, Jingjie; Zhang, Gong

    2018-01-04

    Translation is a key regulatory step, linking transcriptome and proteome. Two major methods of translatome investigations are RNC-seq (sequencing of translating mRNA) and Ribo-seq (ribosome profiling). To facilitate the investigation of translation, we built a comprehensive database TranslatomeDB (http://www.translatomedb.net/) which provides collection and integrated analysis of published and user-generated translatome sequencing data. The current version includes 2453 Ribo-seq, 10 RNC-seq and their 1394 corresponding mRNA-seq datasets in 13 species. The database emphasizes the analysis functions in addition to the dataset collections. Differential gene expression (DGE) analysis can be performed between any two datasets of the same species and type, both on transcriptome and translatome levels. The translation indices (translation ratio, elongation velocity index and translational efficiency) can be calculated to quantitatively evaluate translational initiation efficiency and elongation velocity. All datasets were analyzed using a unified, robust, accurate and experimentally-verifiable pipeline based on the FANSe3 mapping algorithm and edgeR for DGE analyses. TranslatomeDB also allows users to upload their own datasets and utilize the identical unified pipeline to analyze their data. We believe that our TranslatomeDB is a comprehensive platform and knowledgebase on translatome and proteome research, freeing biologists from the complex searching, analysis and comparison of huge sequencing datasets without the need for local computational power. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
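
    The translational efficiency index mentioned above is commonly computed as the ratio of translatome to transcriptome abundance; the sketch below uses that common definition with hypothetical column names and may differ in detail from the pipeline used by TranslatomeDB.

```python
# Sketch of a per-gene translational efficiency (TE) calculation using the
# common translatome/transcriptome abundance ratio; column names are
# hypothetical and the exact TranslatomeDB formulas may differ.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "gene":      ["G1", "G2", "G3"],
    "ribo_rpkm": [120.0, 15.0, 0.5],  # Ribo-seq / RNC-seq abundance
    "mrna_rpkm": [60.0, 30.0, 5.0],   # matched mRNA-seq abundance
})

df["TE"] = df["ribo_rpkm"] / df["mrna_rpkm"]  # translational efficiency
df["log2_TE"] = np.log2(df["TE"])
print(df)
```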

  17. International patent analysis of water source heat pump based on orbit database

    Science.gov (United States)

    Li, Na

    2018-02-01

    Using the Orbit database, this paper analysed international patents in the water source heat pump (WSHP) industry with patent analysis methods such as analysis of publication trends, geographical distribution, technology leaders and top assignees. It is found that the beginning of the 21st century was a period of rapid growth in WSHP patent applications. Germany and the United States carried out research and development of WSHP early on, but Japan and China have now become important countries for patent applications. China has been developing rapidly in recent years, but its patents are concentrated in universities and urgently need to be transferred to industry. Through an objective analysis, this paper aims to provide appropriate decision references for the development of the domestic WSHP industry.

  18. Direct-to-consumer advertising for bleeding disorders: a content analysis and expert evaluation of advertising claims

    Science.gov (United States)

    Abel, G. A.; Neufeld, E. J.; Sorel, M; Weeks, J. C.

    2009-01-01

    OBJECTIVE In the United States, the Food and Drug Administration (FDA) requires that all direct-to-consumer advertising (DTCA) contain both an accurate statement of a medication’s effects (“truth”) and an even-handed discussion of its benefits and risks/adverse effects (“fair balance”). DTCA for medications to treat rare diseases such as bleeding disorders is unlikely to be given high priority for FDA review. METHODS We reviewed all DTCA for bleeding disorder products appearing in the patient-directed magazine HemeAware from January, 2004 to June, 2006. We categorized the information presented in each advertisement as benefit, risk/adverse effect, or neither, and assessed the amount of text and type size devoted to each. We also assessed the readability of each type of text using the Flesch Reading Ease Score (FRES, where a score of ≥ 65 is considered of average readability), and assessed the accuracy of the advertising claims utilizing a panel of five bleeding disorder experts. RESULTS A total of 39 unique advertisements for 12 products were found. On average, approximately twice the amount of text was devoted to benefits as compared to risks/adverse effects, and the latter was more difficult to read (FRES of 20.45 for risks/adverse effects versus 32.08 for benefits; difference of 11.56 [95% CI: 4.52, 18.60]). Only about two-thirds of the advertising claims were considered by a majority of the experts to be based on at least low-quality evidence. CONCLUSION As measured by our methods, print DTCA for bleeding disorders may not reach the FDA’s standards of truth and fair balance. PMID:18647231

  19. [Prescribing valproate to girls and women of childbearing age in Germany : Analysis of trends based on claims data].

    Science.gov (United States)

    Wentzell, Nadine; Haug, Ulrike; Schink, Tania; Engel, Susanne; Liebentraut, Judith; Linder, Roland; Onken, Marlies; Schaefer, Christof; Dathe, Katarina

    2018-06-19

    Measures to raise awareness of the teratogenic potential of valproate and restrict its use in girls/women of childbearing age have been intensified. For Germany, the impact of these measures on valproate prescription rates remains unknown. Trends in prescribing valproate, the underlying treatment indication, and the specialty of the prescribing physician are analyzed. With claims data from several statutory health insurance providers from 2004 to 2016 (approximately 3.5 million insured persons per year) considering treatment indication and medical specialties of prescribing physicians, we assessed the rate of girls/women (12 to 50 years) with at least one valproate dispensation per year. The age-standardized rate of girls/women with at least one valproate dispensation declined by 28% between 2004 and 2016 (2.91/1000 vs. 2.09/1000). For 2015, the indications were epilepsy (66.9%), bipolar disorder (13.6%), migraine/headache (5.6%), schizoaffective disorder (4.3%), and other mental disorders (8.9%). Among epilepsy patients, the proportion treated with valproate declined from 26.2 to 16.8%, but changed little in patients with bipolar disorder (9.3% vs. 8.0%). A total of 46.3% of valproate dispensations were issued by neurologists or psychiatrists and 29.6% by general practitioners, internal medicine specialists, or family doctors. Based on German claims data, a decline of valproate dispensations was shown for epilepsy patients of childbearing age, while the proportion in other indications has hardly changed since 2004.

  20. Direct-to-consumer advertising for bleeding disorders: a content analysis and expert evaluation of advertising claims.

    Science.gov (United States)

    Abel, G A; Neufeld, E J; Sorel, M; Weeks, J C

    2008-10-01

    In the United States, the Food and Drug Administration (FDA) requires that all direct-to-consumer advertising (DTCA) contain both an accurate statement of a medication's effects ('truth') and an even-handed discussion of its benefits and risks/adverse effects ('fair balance'). DTCA for medications to treat rare diseases such as bleeding disorders is unlikely to be given high priority for FDA review. We reviewed all DTCA for bleeding disorder products appearing in the patient-directed magazine HemeAware from January 2004 to June 2006. We categorized the information presented in each advertisement as benefit, risk/adverse effect, or neither, and assessed the amount of text and type size devoted to each. We also assessed the readability of each type of text using the Flesch Reading Ease Score (FRES, where a score of ≥65 is considered of average readability), and assessed the accuracy of the advertising claims utilizing a panel of five bleeding disorder experts. A total of 39 unique advertisements for 12 products were found. On average, approximately twice the amount of text was devoted to benefits as compared with risks/adverse effects, and the latter was more difficult to read [FRES of 32.0 for benefits vs. 20.5 for risks/adverse effects, a difference of 11.5 (95% CI: 4.5-18.5)]. Only about two-thirds of the advertising claims were considered by a majority of the experts to be based on at least low-quality evidence. As measured by our methods, print DTCA for bleeding disorders may not reach the FDA's standards of truth and fair balance.
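
    The Flesch Reading Ease Score used in the two records above follows the standard formula FRES = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words); the sketch below applies it with a crude syllable heuristic to an invented snippet of advertising text.

```python
# Flesch Reading Ease Score:
#   FRES = 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
# The syllable counter is a crude vowel-group heuristic; the sample text is invented.
import re

def count_syllables(word: str) -> int:
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

sample = ("Ask your doctor about serious side effects, including allergic "
          "reactions and inhibitor formation, before starting treatment.")
print(round(flesch_reading_ease(sample), 1))
```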

  1. A Comprehensive Database and Analysis Framework To Incorporate Multiscale Data Types and Enable Integrated Analysis of Bioactive Polyphenols.

    Science.gov (United States)

    Ho, Lap; Cheng, Haoxiang; Wang, Jun; Simon, James E; Wu, Qingli; Zhao, Danyue; Carry, Eileen; Ferruzzi, Mario G; Faith, Jeremiah; Valcarcel, Breanna; Hao, Ke; Pasinetti, Giulio M

    2018-03-05

    The development of a given botanical preparation for eventual clinical application requires extensive, detailed characterizations of the chemical composition, as well as the biological availability, biological activity, and safety profiles of the botanical. These issues are typically addressed using diverse experimental protocols and model systems. Based on this consideration, in this study we established a comprehensive database and analysis framework for the collection, collation, and integrative analysis of diverse, multiscale data sets. Using this framework, we conducted an integrative analysis of heterogeneous data from in vivo and in vitro investigation of a complex bioactive dietary polyphenol-rich preparation (BDPP) and built an integrated network linking data sets generated from this multitude of diverse experimental paradigms. We established a comprehensive database and analysis framework as well as a systematic and logical means to catalogue and collate the diverse array of information gathered, which is securely stored and added to in a standardized manner to enable fast query. We demonstrated the utility of the database in (1) a statistical ranking scheme to prioritize response to treatments and (2) in-depth reconstruction of functionality studies. By examination of these data sets, the system allows analytical querying of heterogeneous data and access to information related to interactions, mechanisms of action, functions, etc., which ultimately provide a global overview of complex biological responses. Collectively, we present an integrative analysis framework that leads to novel insights on the biological activities of a complex botanical such as BDPP that is based on data-driven characterizations of interactions between BDPP-derived phenolic metabolites and their mechanisms of action, as well as synergism and/or potential cancellation of biological functions. Our integrative analytical approach provides novel means for a systematic integrative

  2. Knowledge Representation and Inference for Analysis and Design of Database and Tabular Rule-Based Systems

    Directory of Open Access Journals (Sweden)

    Antoni Ligeza

    2001-01-01

    Full Text Available Rule-based systems constitute a powerful tool for the specification of knowledge in the design and implementation of knowledge-based systems. They also provide a universal programming paradigm for domains such as intelligent control, decision support, situation classification and operational knowledge encoding. In order to assure safe and reliable performance, such systems should satisfy certain formal requirements, including completeness and consistency. This paper addresses the issue of analysis and verification of selected properties of a class of such systems in a systematic way. A uniform, tabular scheme of single-level rule-based systems is considered. Such systems can be applied as a generalized form of databases for the specification of data patterns (unconditional knowledge), or can be used for defining attributive decision tables (conditional knowledge in the form of rules). They can also serve as lower-level components of hierarchical multi-level control and decision support knowledge-based systems. An algebraic knowledge representation paradigm using an extended tabular representation, similar to relational database tables, is presented, and algebraic bases for system analysis, verification and design support are outlined.

  3. Bibliometric analysis of theses and dissertations on prematurity in the Capes database.

    Science.gov (United States)

    Pizzani, Luciana; Lopes, Juliana de Fátima; Manzini, Mariana Gurian; Martinez, Claudia Maria Simões

    2012-01-01

    To perform a bibliometric analysis of theses and dissertations on prematurity in the Capes database from 1987 to 2009. This is a descriptive study that used the bibliometric approach to produce indicators of scientific production. Operationally, the methodology was developed in four steps: 1) construction of the theoretical framework; 2) data collection sourced from the abstracts of theses and dissertations available in the Capes Thesis Database which addressed the issue of prematurity in the period 1987 to 2009; 3) organization, processing and construction of bibliometric indicators; 4) analysis and interpretation of results. The scientific literature on prematurity increased during the period 1987 to 2009; production consisted mostly of dissertations; the most prominent institution was the Universidade de São Paulo. The studies are directed toward low birth weight and very low birth weight preterm newborns, encompassing the social, biological and multifactorial causes of prematurity. There is a qualified, diverse and substantial scientific literature on prematurity developed in various graduate programs of higher education institutions in Brazil.

  4. Factors Affecting Adjuvant Therapy in Stage III Pancreatic Cancer—Analysis of the National Cancer Database

    Directory of Open Access Journals (Sweden)

    Mridula Krishnan

    2017-08-01

    Full Text Available Background: Adjuvant therapy after curative resection is associated with survival benefit in stage III pancreatic cancer. We analyzed the factors affecting the outcome of adjuvant therapy in stage III pancreatic cancer and compared overall survival with different modalities of adjuvant treatment. Methods: This is a retrospective study of patients with stage III pancreatic cancer listed in the National Cancer Database (NCDB) who were diagnosed between 2004 and 2012. Patients were stratified based on the adjuvant therapy they received. Unadjusted Kaplan-Meier and multivariable Cox regression analyses were performed. Results: We analyzed a cohort that included 1731 patients who were recipients of adjuvant therapy for stage III pancreatic cancer within the limits of our database. Patients who received adjuvant chemoradiation had the longest postdiagnosis survival time, followed by patients who received adjuvant chemotherapy, and finally patients who received no adjuvant therapy. On multivariate analysis, advancing age and Medicaid insurance were associated with worse survival, whereas Spanish origin and a lower Charlson comorbidity score were associated with better survival. Conclusions: Our study is the largest study using the NCDB addressing the effects of adjuvant therapy specifically in stage III pancreatic cancer. Within the limits of our study, survival benefit with adjuvant therapy was more apparent with longer duration from date of diagnosis.
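
    A minimal sketch of the survival methods named above (unadjusted Kaplan-Meier plus a multivariable Cox model) is given below using the lifelines package; the column names and toy data are hypothetical and are not NCDB fields.

```python
# Sketch of the survival methods named above using lifelines; the column
# names and the toy data are hypothetical, not NCDB fields.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "months":   [10, 24, 7, 30, 15, 9, 18, 12],
    "died":     [1, 0, 1, 0, 1, 1, 0, 1],
    "chemorad": [1, 1, 0, 1, 0, 0, 0, 1],   # adjuvant chemoradiation vs. not
    "age":      [62, 55, 71, 58, 66, 74, 80, 69],
    "charlson": [0, 1, 2, 0, 1, 3, 2, 1],
})

kmf = KaplanMeierFitter()                       # unadjusted Kaplan-Meier
kmf.fit(durations=df["months"], event_observed=df["died"])
print(kmf.median_survival_time_)

cph = CoxPHFitter()                             # multivariable Cox regression
cph.fit(df, duration_col="months", event_col="died")
cph.print_summary()
```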

  5. Database of prompt gamma rays from slow neutron capture for elemental analysis

    International Nuclear Information System (INIS)

    Firestone, R.B.; Choi, H.D.; Lindstrom, R.M.; Molnar, G.L.; Mughabghab, S.F.; Paviotti-Corcuera, R.; Revay, Zs; Trkov, A.; Zhou, C.M.; Zerkin, V.

    2004-01-01

    The increasing importance of Prompt Gamma-ray Activation Analysis (PGAA) in a broad range of applications is evident, and has been emphasized at many meetings related to this topic (e.g., Technical Consultants' Meeting, Use of neutron beams for low- and medium-flux research reactors: radiography and materials characterizations, IAEA Vienna, 4-7 May 1993, IAEA-TECDOC-837, 1993). Furthermore, an Advisory Group Meeting (AGM) for the Coordination of the Nuclear Structure and Decay Data Evaluators Network has stated that there is a need for a complete and consistent library of cold- and thermal neutron capture gamma ray and cross-section data (AGM held at Budapest, 14-18 October 1996, INDC(NDS)-363); this AGM also recommended the organization of an IAEA CRP on the subject. The International Nuclear Data Committee (INDC) is the primary advisory body to the IAEA Nuclear Data Section on their nuclear data programs. At a biennial meeting in 1997, the INDC strongly recommended that the Nuclear Data Section support new measurements and update the database on Neutron-induced Prompt Gamma-ray Activation Analysis (21st INDC meeting, INDC/P(97)-20). As a consequence of the various recommendations, a CRP on "Development of a Database for Prompt Gamma-ray Neutron Activation Analysis (PGAA)" was initiated in 1999. Prior to this project, several consultants had defined the scope, objectives and tasks, as approved subsequently by the IAEA. Each CRP participant assumed responsibility for the execution of specific tasks. The results of their and other research work were discussed and approved by the participants in research co-ordination meetings (see Summary reports: INDC(NDS)-411, 2000; INDC(NDS)-424, 2001; and INDC(NDS)-443, 200). PGAA is a non-destructive radioanalytical method, capable of rapid or simultaneous "in-situ" multi-element analyses across the entire Periodic Table, from hydrogen to uranium. However, inaccurate and incomplete data were a significant hindrance in the

  6. Formalizing Probabilistic Safety Claims

    Science.gov (United States)

    Herencia-Zapana, Heber; Hagen, George E.; Narkawicz, Anthony J.

    2011-01-01

    A safety claim for a system is a statement that the system, which is subject to hazardous conditions, satisfies a given set of properties. Following work by John Rushby and Bev Littlewood, this paper presents a mathematical framework that can be used to state and formally prove probabilistic safety claims. It also enables hazardous conditions, their uncertainties, and their interactions to be integrated into the safety claim. This framework provides a formal description of the probabilistic composition of an arbitrary number of hazardous conditions and their effects on system behavior. An example is given of a probabilistic safety claim for a conflict detection algorithm for aircraft in a 2D airspace. The motivation for developing this mathematical framework is that it can be used in an automated theorem prover to formally verify safety claims.

  7. Individual-based versus aggregate meta-analysis in multi-database studies of pregnancy outcomes

    DEFF Research Database (Denmark)

    Selmer, Randi; Haglund, Bengt; Furu, Kari

    2016-01-01

    Purpose: Compare analyses of a pooled data set on the individual level with aggregate meta-analysis in a multi-database study. Methods: We reanalysed data on 2.3 million births in a Nordic register based cohort study. We compared estimated odds ratios (OR) for the effect of selective serotonin reuptake inhibitors (SSRI) and venlafaxine use in pregnancy on any cardiovascular birth defect and the rare outcome right ventricular outflow tract obstructions (RVOTO). Common covariates included maternal age, calendar year, birth order, maternal diabetes, and co-medication. Additional covariates were … covariates in the pooled data set, and 1.53 (1.19–1.96) after country-optimized adjustment. Country-specific adjusted analyses at the substance level were not possible for RVOTO. Conclusion: Results of fixed effects meta-analysis and individual-based analyses of a pooled dataset were similar in this study …

  8. Discerning molecular interactions: A comprehensive review on biomolecular interaction databases and network analysis tools.

    Science.gov (United States)

    Miryala, Sravan Kumar; Anbarasu, Anand; Ramaiah, Sudha

    2018-02-05

    Computational analysis of biomolecular interaction networks is gaining importance as a way to understand the functions of novel genes/proteins. Gene interaction (GI) network analysis and protein-protein interaction (PPI) network analysis play a major role in predicting the functionality of interacting genes or proteins and give insight into the functional relationships and evolutionary conservation of interactions among genes. An interaction network is a graphical representation of the gene/protein interactome, where each gene/protein is a node and each interaction between genes/proteins is an edge. In this review, we discuss the popular open-source databases that serve as data repositories for searching and collecting protein/gene interaction data, as well as tools available for interaction network generation, visualization and network analysis. Various network analysis approaches, such as topological and clustering approaches for studying network properties, and functional enrichment servers that illustrate the functions and pathways of genes and proteins, are also discussed. Hence, the distinctive aim of this review is not only to provide an overview of tools and web servers for gene and protein-protein interaction (PPI) network analysis but also to show how useful and meaningful information can be extracted from interaction networks. Copyright © 2017 Elsevier B.V. All rights reserved.
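
    As a small illustration of the node-and-edge representation and the topological measures discussed, the sketch below builds a toy PPI network with networkx; the edge list is invented.

```python
# Toy PPI network illustrating the node/edge representation and a few
# topological measures; the edge list is invented.
import networkx as nx

edges = [("TP53", "MDM2"), ("TP53", "BRCA1"), ("BRCA1", "RAD51"),
         ("MDM2", "UBE2D1"), ("RAD51", "PALB2"), ("BRCA1", "PALB2")]

g = nx.Graph()
g.add_edges_from(edges)

print(nx.degree_centrality(g))           # hub-like proteins
print(nx.clustering(g))                  # local clustering coefficients
print(list(nx.connected_components(g)))  # network components
```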

  9. Practice databases and their uses in clinical research.

    Science.gov (United States)

    Tierney, W M; McDonald, C J

    1991-04-01

    A few large clinical information databases have been established within larger medical information systems. Although they are smaller than claims databases, these clinical databases offer several advantages: accurate and timely data, rich clinical detail, and continuous parameters (for example, vital signs and laboratory results). However, the nature of the data varies considerably, which affects the kinds of secondary analyses that can be performed. These databases have been used to investigate clinical epidemiology, risk assessment, post-marketing surveillance of drugs, practice variation, resource use, quality assurance, and decision analysis. In addition, practice databases can be used to identify subjects for prospective studies. Further methodologic developments are necessary to deal with the prevalent problems of missing data and various forms of bias if such databases are to grow and contribute valuable clinical information.

  10. Variations in Patterns of Utilization and Charges for the Care of Headache in North Carolina, 2000-2009: A Statewide Claims' Data Analysis.

    Science.gov (United States)

    Hurwitz, Eric L; Vassilaki, Maria; Li, Dongmei; Schneider, Michael J; Stevans, Joel M; Phillips, Reed B; Phelan, Shawn P; Lewis, Eugene A; Armstrong, Richard C

    2016-05-01

    The purpose of the study was to compare patterns of utilization and charges generated by medical doctors (MDs), doctors of chiropractic (DCs), and physical therapists (PTs) for the treatment of headache in North Carolina. Retrospective analysis of claims data from the North Carolina State Health Plan for Teachers and State Employees from 2000 to 2009. Data were extracted from Blue Cross Blue Shield of North Carolina for the North Carolina State Health Plan using International Classification of Diseases, Ninth Revision, diagnostic codes for headache. The claims were separated by individual provider type, combination of provider types, and referral patterns. The majority of patients and claims were in the MD-only or MD plus referral patterns. Chiropractic patterns represented less than 10% of patients. Care patterns with single-provider types and no referrals incurred the least charges on average for headache. When care did not include referral providers or services, MD with DC care was generally less expensive than MD care with PT. However, when combined with referral care, MD care with PT was generally less expensive. Compared with MD-only care, risk-adjusted charges (available 2006-2009) for patients in the middle risk quintile were significantly less for DC-only care. Utilization and expenditures for headache treatment increased from 2000 to 2009 across all provider groups. MD care represented the majority of total allowed charges in this study. MD care and DC care, alone or in combination, were overall the least expensive patterns of headache care. Risk-adjusted charges were significantly less for DC-only care. Copyright © 2016 National University of Health Sciences. Published by Elsevier Inc. All rights reserved.

  11. Clinical characteristics and outcomes of myxedema coma: Analysis of a national inpatient database in Japan.

    Science.gov (United States)

    Ono, Yosuke; Ono, Sachiko; Yasunaga, Hideo; Matsui, Hiroki; Fushimi, Kiyohide; Tanaka, Yuji

    2017-03-01

    Myxedema coma is a life-threatening and emergency presentation of hypothyroidism. However, the clinical features and outcomes of this condition have been poorly defined because of its rarity. We conducted a retrospective observational study of patients diagnosed with myxedema coma from July 2010 through March 2013 using a national inpatient database in Japan. We investigated characteristics, comorbidities, treatments, and in-hospital mortality of patients with myxedema coma. We identified 149 patients diagnosed with myxedema coma out of approximately 19 million inpatients in the database. The mean (standard deviation) age was 77 (12) years, and two-thirds of the patients were female. The overall proportion of in-hospital mortality among cases was 29.5%. The number of patients was highest in the winter season. Patients treated with steroids, catecholamines, or mechanical ventilation showed higher in-hospital mortality than those without. Variations in type and dosage of thyroid hormone replacement were not associated with in-hospital mortality. The most common comorbidity was cardiovascular diseases (40.3%). The estimated incidence of myxedema coma was 1.08 per million people per year in Japan. Multivariable logistic regression analysis revealed that higher age and use of catecholamines (with or without steroids) were significantly associated with higher in-hospital mortality. The present study identified the clinical characteristics and outcomes of patients with myxedema coma using a large-scale database. Myxedema coma mortality was independently associated with age and severe conditions requiring treatment with catecholamines. Copyright © 2016 The Authors. Production and hosting by Elsevier B.V. All rights reserved.

  12. Experiment Databases

    Science.gov (United States)

    Vanschoren, Joaquin; Blockeel, Hendrik

    Next to running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queriable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.

  13. Dynameomics: a multi-dimensional analysis-optimized database for dynamic protein data.

    Science.gov (United States)

    Kehl, Catherine; Simms, Andrew M; Toofanny, Rudesh D; Daggett, Valerie

    2008-06-01

    The Dynameomics project is our effort to characterize the native-state dynamics and folding/unfolding pathways of representatives of all known protein folds by way of molecular dynamics simulations, as described by Beck et al. (in Protein Eng. Des. Select., the first paper in this series). The data produced by these simulations are highly multidimensional in structure and multi-terabytes in size. Both of these features present significant challenges for storage, retrieval and analysis. For optimal data modeling and flexibility, we needed a platform that supported both multidimensional indices and hierarchical relationships between related types of data and that could be integrated within our data warehouse, as described in the accompanying paper directly preceding this one. For these reasons, we have chosen On-line Analytical Processing (OLAP), a multi-dimensional analysis-optimized database, as an analytical platform for these data. OLAP is a mature technology in the financial sector, but it has not been used extensively for scientific analysis. Our project is, furthermore, unusual for its focus on the multidimensional and analytical capabilities of OLAP rather than on its aggregation capacities. The dimensional data model and hierarchies are very flexible. The query language is concise for complex analysis and rapid data retrieval. OLAP shows great promise for dynamic protein analysis in bioengineering and biomedical applications. In addition, OLAP may have similar potential for other scientific and engineering applications involving large and complex datasets.
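
    As a rough stand-in for the kind of dimensional roll-up an OLAP cube supports, the sketch below uses a pandas pivot table over an invented fold/protein/time hierarchy; it is not the Dynameomics OLAP system itself.

```python
# Rough stand-in (pandas, not an OLAP engine) for the kind of dimensional
# roll-up described; the fold/protein/time hierarchy and values are invented.
import pandas as pd

frames = pd.DataFrame({
    "fold":    ["all-alpha", "all-alpha", "all-beta", "all-beta"],
    "protein": ["prot_a", "prot_a", "prot_b", "prot_b"],
    "time_ns": [10, 20, 10, 20],
    "rmsd":    [1.2, 1.8, 0.9, 1.4],
})

# Aggregate a simulation property along the fold -> protein hierarchy,
# sliced by simulation time
cube = frames.pivot_table(index=["fold", "protein"], columns="time_ns",
                          values="rmsd", aggfunc="mean")
print(cube)
```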

  14. The Incidence of Postoperative Pneumonia in Various Surgical Subspecialties: A Dual Database Analysis.

    Science.gov (United States)

    Chughtai, Morad; Gwam, Chukwuweike U; Khlopas, Anton; Newman, Jared M; Curtis, Gannon L; Torres, Pedro A; Khan, Rafay; Mont, Michael A

    2017-07-25

    Pneumonia is the third most common postoperative complication. However, its epidemiology varies widely and is often difficult to assess. For a better understanding, we utilized two national databases to determine the incidence of postoperative pneumonia after various surgical procedures. Specifically, we used the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) and the Nationwide Inpatient Sample (NIS) to determine the incidence and yearly trends of postoperative pneumonia following orthopaedic, urologic, otorhinolaryngologic, cardiothoracic, neurosurgery, and general surgeries. The NIS and NSQIP databases from 2009-2013 were utilized. The Clinical Classification Software (CCS) for International Classification of Diseases, 9th edition (ICD-9) codes provided by the NIS database was used to identify all surgical subspecialty procedures. The incidence of postoperative pneumonia was identified as the total number of cases under each identifying CCS code that also had ICD-9 codes for postoperative pneumonia. In the NSQIP database, the surgical subspecialties were selected using the following identifying string variables provided by NSQIP: 1) "Orthopedics", 2) "Otolaryngology (ENT)", 3) "Urology", 4) "Neurosurgery", 5) "General Surgery", and 6) "Cardiac Surgery" and "Thoracic Surgery". Cardiac and thoracic surgery was merged to create the variable "Cardiothoracic Surgery". Postoperative pneumonia cases were extracted utilizing the available NSQIP nominal variables. All variables were used to isolate the incidences of postoperative pneumonia stratified by surgical specialty. A subsequent trend analysis was conducted to assess the associations between operative year and incidence of postoperative pneumonia. For all NIS surgeries, the incidence of postoperative pneumonia was 0.97% between 2009 and 2013. The incidence was highest among patients who underwent cardiothoracic surgery (3.3%) and urologic surgery (1.73%). Patients who

  15. The Method of Analysis Derived Coefficients of Database as a New Method of Historical Research (for Example, a Database of Ballistic Parameters of Naval Artillery

    Directory of Open Access Journals (Sweden)

    Nicholas W. Mitiukov

    2015-12-01

    Full Text Available In this paper a new method of historical research is proposed, based on the analysis of derived coefficients of a database (for example, the form factor in a database of ballistic data). This method is much better protected against subjectivism and direct falsification than the analysis of numerical series obtained directly from a source, since any intentional or unintentional distortion of the raw data produces a significant contrast with the averaged sample values of the derived coefficients. Application of this method to a ballistic database of naval artillery revealed facts that force a new look at some historical events: the data on German naval artillery before World War I were probably overstated to disinform the Entente opponents; during the First World War, Spain apparently held secret talks with the firm Bofors that ended in the purchase of Swedish shells; and the first Russian naval rifled guns were evidently based on the Blackly design, not Krupp as traditionally considered.

  16. Analysis, Design and Implementation of a Web Database With Oracle 8I

    National Research Council Canada - National Science Library

    Demiryurek, Ugur

    2001-01-01

    ....O served as the OS environment. From the technical aspect, Database Management Systems, Web-Database Architectures, Server Extension Programs, Oracle8i as well as several other software and hardware...

  17. Web application and database modeling of traffic impact analysis using Google Maps

    Science.gov (United States)

    Yulianto, Budi; Setiono

    2017-06-01

    Traffic impact analysis (TIA) is a traffic study that aims to identify the impact of traffic generated by development or a change in land use. In addition to identifying the traffic impact, TIA also includes mitigation measures to minimize the resulting impact. TIA has become increasingly important since it was defined in legislation as one of the requirements for a building permit application. The legislation has encouraged a number of TIA studies in various cities in Indonesia, including Surakarta. For that reason, it is necessary to study the development of TIA by adopting the concept of Transportation Impact Control (TIC) in the implementation of the TIA standard document and multimodal modeling. This includes standardization of TIA technical guidelines, a database, and inspection through TIA checklists, monitoring and evaluation. The research was undertaken by collecting historical data on junctions, modeling the data as a relational database, and building a user interface for CRUD (Create, Read, Update and Delete) operations on the TIA data as a web application using Google Maps libraries. The result is a system that provides information to help improve and maintain existing TIA documents in a way that is more transparent, reliable and credible.
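
    A minimal sketch of the kind of CRUD service over a relational junction table described above is given below; the Flask + SQLite choice, the schema and the field names are assumptions for illustration only, not the system built in the study.

```python
# Sketch of a CRUD service over a relational junction table; the Flask + SQLite
# choice, the schema and the field names are assumptions for illustration only.
import sqlite3
from flask import Flask, jsonify, request

app = Flask(__name__)
DB = "tia.db"

def db():
    conn = sqlite3.connect(DB)
    conn.execute("""CREATE TABLE IF NOT EXISTS junction (
                        id INTEGER PRIMARY KEY,
                        name TEXT, lat REAL, lng REAL, peak_volume INTEGER)""")
    return conn

@app.route("/junctions", methods=["GET"])
def list_junctions():
    rows = db().execute(
        "SELECT id, name, lat, lng, peak_volume FROM junction").fetchall()
    return jsonify(rows)  # lat/lng pairs can be plotted with the Google Maps JS API

@app.route("/junctions", methods=["POST"])
def create_junction():
    j = request.get_json()
    conn = db()
    conn.execute("INSERT INTO junction (name, lat, lng, peak_volume) VALUES (?, ?, ?, ?)",
                 (j["name"], j["lat"], j["lng"], j["peak_volume"]))
    conn.commit()
    return "", 201

if __name__ == "__main__":
    app.run(debug=True)
```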

  18. Analysis of TRMM-LIS Lightning and Related Microphysics Using a Cell-Scale Database

    Science.gov (United States)

    Leroy, Anita; Petersen, Walter A.

    2010-01-01

    Previous studies of tropical lightning activity using Tropical Rainfall Measurement Mission (TRMM) Lightning Imaging Sensor (LIS) data performed analyses of lightning behavior over mesoscale "feature" scales or over uniform grids. In order to study lightning and the governing ice microphysics intrinsic to thunderstorms at a more process-specific scale (i.e., the scale over which electrification processes and lightning occur in a "unit" thunderstorm), a new convective cell-scale database was developed by analyzing and refining the University of Utah's Precipitation Features database and retaining precipitation data parameters computed from the TRMM precipitation radar (PR), microwave imager (TMI) and LIS instruments. The resulting database was used to conduct a limited four-year study of tropical continental convection occurring over the Amazon Basin, Congo, Maritime Continent and the western Pacific Ocean. The analysis reveals expected strong correlations between lightning flash counts per cell and ice proxies, such as ice water path, minimum and average 85 GHz brightness temperatures, and 18 dBZ echo top heights above the freezing level in all regimes, as well as regime-specific relationships between lightning flash counts and PR-derived surface rainfall rates. Additionally, radar CFADs were used to partition the 3D structure of cells in each regime at different flash counts. The resulting cell-scale analyses are compared to previous mesoscale feature and gridded studies wherever possible.

  19. Detailed tail proteomic analysis of axolotl (Ambystoma mexicanum) using an mRNA-seq reference database.

    Science.gov (United States)

    Demircan, Turan; Keskin, Ilknur; Dumlu, Seda Nilgün; Aytürk, Nilüfer; Avşaroğlu, Mahmut Erhan; Akgün, Emel; Öztürk, Gürkan; Baykal, Ahmet Tarık

    2017-01-01

    Salamander axolotl has been emerging as an important model for stem cell research due to its powerful regenerative capacity. Several advantages, such as the high capability of advanced tissue, organ, and appendages regeneration, promote axolotl as an ideal model system to extend our current understanding on the mechanisms of regeneration. Acknowledging the common molecular pathways between amphibians and mammals, there is a great potential to translate the messages from axolotl research to mammalian studies. However, the utilization of axolotl is hindered due to the lack of reference databases of genomic, transcriptomic, and proteomic data. Here, we introduce the proteome analysis of the axolotl tail section searched against an mRNA-seq database. We translated axolotl mRNA sequences to protein sequences and annotated these to process the LC-MS/MS data and identified 1001 nonredundant proteins. Functional classification of identified proteins was performed by gene ontology searches. The presence of some of the identified proteins was validated by in situ antibody labeling. Furthermore, we have analyzed the proteome expressional changes postamputation at three time points to evaluate the underlying mechanisms of the regeneration process. Taken together, this work expands the proteomics data of axolotl to contribute to its establishment as a fully utilized model. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Design and analysis of stochastic DSS query optimizers in a distributed database system

    Directory of Open Access Journals (Sweden)

    Manik Sharma

    2016-07-01

    Full Text Available Query optimization is a stimulating task of any database system. A number of heuristics have been applied in recent times that propose new algorithms for substantially improving the performance of a query. The hunt for a better solution still continues. The continual developments in the field of Decision Support System (DSS) databases are producing data at an exceptional rate. The massive volume of DSS data is consequential only when it can be accessed and analyzed by the researchers who need it. Here, an innovative stochastic framework of a DSS query optimizer is proposed to further optimize the design of existing genetic query optimization approaches. The results of the Entropy Based Restricted Stochastic Query Optimizer (ERSQO) are compared with the results of the Exhaustive Enumeration Query Optimizer (EAQO), the Simple Genetic Query Optimizer (SGQO), the Novel Genetic Query Optimizer (NGQO) and the Restricted Stochastic Query Optimizer (RSQO). In terms of Total Costs, EAQO outperforms SGQO, NGQO, RSQO and ERSQO. However, the stochastic approaches dominate in terms of runtime. The Total Costs produced by ERSQO are better than those of SGQO, NGQO and RSQO by 12%, 8% and 5% respectively. Moreover, the effect of replicating data on the Total Costs of DSS queries is also examined. In addition, the statistical analysis revealed a 2-tailed significant correlation between the number of join operations and the Total Costs of distributed DSS queries. Finally, in regard to the consistency of the stochastic query optimizers, the results of SGQO, NGQO, RSQO and ERSQO are 96.2%, 97.2%, 97.45% and 97.8% consistent respectively.
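
    As a toy illustration of the genetic style of query optimization compared above, the sketch below evolves a join order under an invented cost model; it is not an implementation of SGQO, NGQO, RSQO or ERSQO.

```python
# Toy genetic search over join orders with an invented cost model (sum of
# estimated intermediate result sizes); not an implementation of ERSQO itself.
import random

CARD = {"A": 1000, "B": 5000, "C": 200, "D": 800}   # relation cardinalities
SEL = 0.001                                          # assumed join selectivity

def cost(order):
    size, total = CARD[order[0]], 0
    for rel in order[1:]:
        size = size * CARD[rel] * SEL   # estimated size of next intermediate
        total += size
    return total

def mutate(order):
    i, j = random.sample(range(len(order)), 2)
    child = list(order)
    child[i], child[j] = child[j], child[i]   # swap two relations
    return tuple(child)

def genetic_join_order(generations=50, pop_size=20):
    pop = [tuple(random.sample(list(CARD), len(CARD))) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]                       # selection
        pop = survivors + [mutate(random.choice(survivors))    # mutation
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=cost)

best = genetic_join_order()
print(best, cost(best))
```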

  1. Respiratory infections research in afghanistan: bibliometric analysis with the database pubmed

    International Nuclear Information System (INIS)

    Pilsezek, F.H.

    2015-01-01

    Infectious diseases research in a low-income country like Afghanistan is important. Methods: In this study the internet-based database PubMed was used for bibliometric analysis of infectious diseases research activity. Research publication entries in PubMed were analysed according to number of publications, topic, publication type, and country of investigators. Results: Between 2002-2011, 226 (77.7%) publications with the following research topics were identified: respiratory infections 3 (1.3%); parasites 8 (3.5%); diarrhoea 10 (4.4%); tuberculosis 10 (4.4%); human immunodeficiency virus (HIV) 11 (4.9%); multi-drug resistant bacteria (MDR) 18 (8.0%); polio 31 (13.7%); leishmania 31 (13.7%); malaria 46 (20.4%). From 2002-2011, 11 (4.9%) publications were basic science laboratory-based research studies. Between 2002-2011, 8 (3.5%) publications from Afghan institutions were identified. Conclusion: The internet-based database PubMed can be consulted to collect data for guidance of infectious diseases research activity of low-income countries. The presented data suggest that infectious diseases research in Afghanistan is limited for respiratory infections research, has few studies conducted by Afghan institutions, and limited laboratory-based research contributions. (author)

  2. RESPIRATORY INFECTIONS RESEARCH IN AFGHANISTAN: BIBLIOMETRIC ANALYSIS WITH THE DATABASE PUBMED.

    Science.gov (United States)

    Pilsczek, Florian H

    2015-01-01

    Infectious diseases research in a low-income country like Afghanistan is important. In this study an internet-based database Pubmed was used for bibliometric analysis of infectious diseases research activity. Research publications entries in PubMed were analysed according to number of publications, topic, publication type, and country of investigators. Between 2002-2011, 226 (77.7%) publications with the following research topics were identified: respiratory infections 3 (1.3%); parasites 8 (3.5%); diarrhoea 10 (4.4%); tuberculosis 10 (4.4%); human immunodeficiency virus (HIV) 11 (4.9%); multi-drug resistant bacteria (MDR) 18 (8.0%); polio 31 (13.7%); leishmania 31 (13.7%); malaria 46 (20.4%). From 2002-2011, 11 (4.9%) publications were basic science laboratory-based research studies. Between 2002-2011, 8 (3.5%) publications from Afghan institutions were identified. In conclusion, the internet-based database Pubmed can be consulted to collect data for guidance of infectious diseases research activity of low-income countries. The presented data suggest that infectious diseases research in Afghanistan is limited for respiratory infections research, has few studies conducted by Afghan institutions, and limited laboratory-based research contributions.

  3. Secondary analysis of a marketing research database reveals patterns in dairy product purchases over time.

    Science.gov (United States)

    Van Wave, Timothy W; Decker, Michael

    2003-04-01

    Development of a method using marketing research data to assess food purchase behavior and consequent nutrient availability for purposes of nutrition surveillance, evaluation of intervention effects, and epidemiologic studies of diet-health relationships. Data collected on household food purchases accrued over a 13-week period were selected by using Universal Product Code numbers and household characteristics from a marketing research database. Universal Product Code numbers for 39,408 dairy product purchases were linked to a standard reference for food composition to estimate the nutrient content of foods purchased over time. The sample comprised two thousand one hundred sixty-one households located in Victoria, Texas, and surrounding communities who were active members of a frequent shopper program. Demographic characteristics of sample households and the nutrient content of their dairy product purchases were analyzed using frequency distribution, cross tabulation, analysis of variance, and t test procedures. A method using marketing research data was successfully applied to estimate household purchases of specific foods and their nutrient content from a marketing database containing hundreds of thousands of records. Differences in dairy product purchases and their concomitant nutrients between Hispanic and non-Hispanic households were significant (P<.01 and P<.001, respectively) and sustained over time. Purchase records from large, nationally representative panels of shoppers, such as those maintained by major market research companies, might be used to accomplish detailed longitudinal epidemiologic studies or surveillance of national food- and nutrient-purchasing patterns within and between countries and segments of their respective populations.
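
    The UPC-based linkage described above can be sketched as a simple join and aggregation; the column names and nutrient values below are hypothetical, not the marketing research data themselves.

```python
# Sketch of UPC-based linkage of purchase records to a nutrient reference and
# aggregation per household; column names and values are hypothetical.
import pandas as pd

purchases = pd.DataFrame({
    "household_id": [101, 101, 202],
    "upc":          ["070470000011", "070470000028", "070470000011"],
    "quantity":     [2, 1, 3],
})
nutrients = pd.DataFrame({
    "upc":        ["070470000011", "070470000028"],
    "calcium_mg": [300, 450],
    "kcal":       [120, 150],
})

linked = purchases.merge(nutrients, on="upc", how="left")
linked[["calcium_mg", "kcal"]] = linked[["calcium_mg", "kcal"]].multiply(
    linked["quantity"], axis=0)                      # scale by units purchased
per_household = linked.groupby("household_id")[["calcium_mg", "kcal"]].sum()
print(per_household)
```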

  4. Analysis of large brain MRI databases for investigating the relationships between brain, cognitive, and genetic polymorphisms

    International Nuclear Information System (INIS)

    Mazoyer, B.

    2006-01-01

    A major challenge for the years to come is the understanding of brain-behaviour relationships, and in particular the investigation and quantification of the impact of genetic polymorphisms on these relationships. In this framework, a promising experimental approach, which we will refer to as neuro-epidemiologic imaging, consists in acquiring multimodal data (brain images, psychometric and sociological data, genotypes) in large cohorts of subjects (several hundreds or thousands). Processing of such large databases requires in the first place the conception and implementation of automated 'pipelines', including image registration, spatial normalisation, tissue segmentation, and multivariate statistical analysis. Given the number of images and data to be processed, such pipelines must be both fully automated and robust enough to handle multi-center MRI data, e.g. data with inhomogeneous characteristics in terms of resolution and contrast. This approach is illustrated using two databases collected in aged healthy subjects, searching for the impact of genetic and environmental factors on two markers of brain aging, namely white matter hyper-signals and grey matter atrophy. (author)

  5. Global volcanic earthquake swarm database and preliminary analysis of volcanic earthquake swarm duration

    Directory of Open Access Journals (Sweden)

    S. R. McNutt

    1996-06-01

    Full Text Available Global data from 1979 to 1989 pertaining to volcanic earthquake swarms have been compiled into a custom-designed relational database. The database is composed of three sections: 1) a section containing general information on volcanoes, 2) a section containing earthquake swarm data (such as dates of swarm occurrence and durations), and 3) a section containing eruption information. The most abundant and reliable parameter, duration of volcanic earthquake swarms, was chosen for preliminary analysis. The distribution of all swarm durations was found to have a geometric mean of 5.5 days. Precursory swarms were then separated from those not associated with eruptions. The geometric mean precursory swarm duration was 8 days whereas the geometric mean duration of swarms not associated with eruptive activity was 3.5 days. Two groups of precursory swarms are apparent when duration is compared with the eruption repose time. Swarms with durations shorter than 4 months showed no clear relationship with the eruption repose time. However, the second group, lasting longer than 4 months, showed a significant positive correlation with the log10 of the eruption repose period. The two groups suggest that different suites of physical processes are involved in the generation of volcanic earthquake swarms.
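
    The geometric-mean summary used above is straightforward to reproduce; the sketch below applies it to invented duration data for precursory and non-precursory swarms.

```python
# Geometric-mean duration summary applied to invented swarm durations (days)
from scipy.stats import gmean

precursory = [2, 15, 30, 8, 240, 5]
non_precursory = [1, 3, 7, 2, 12, 4]

print("precursory swarms:", round(gmean(precursory), 1), "days")
print("non-precursory swarms:", round(gmean(non_precursory), 1), "days")
```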

  6. OECD Structural Analysis Databases: Sectoral Principles in the Study of Markets for Goods and Services

    Directory of Open Access Journals (Sweden)

    Marina D. Simonova

    2015-01-01

    Full Text Available This study focuses on the characteristics of the OECD structural business statistics database for the analysis of markets for goods and services and of macroeconomic trends. The system of structural statistics indicators is presented in OECD publications and through on-line access for a wide range of users. The data collected by the OECD are based on submissions from the national statistical offices of member countries, Russia and the BRICS. Data on the development of economic sectors are compiled according to the methodologies of individual countries and to regional and international standards: annual national accounts, annual industry and business surveys, short-term indicator methodology, and statistics of international trade in goods. Data are aggregated on the basis of composite indicators drawn from enterprise questionnaires and business surveys. The structural statistics information system, which is openly available and continuously updated, has certain features. It is composed of several subsystems: Structural Statistics on Industry and Services, EU entrepreneurship statistics, Indicators of Industry and Services, and International Trade in Commodities Statistics. The grouping of industries is based on the International Standard Industrial Classification of All Economic Activities (ISIC). The classification of foreign trade flows follows the Harmonized System of description and coding of goods. The structural statistics databases group industries into four classes according to technology intensity. The paper discusses the main reasons for the non-comparability of data across the subsystems in certain time intervals.

  7. Estimation of Missed Statin Prescription Use in an Administrative Claims Dataset.

    Science.gov (United States)

    Wade, Rolin L; Patel, Jeetvan G; Hill, Jerrold W; De, Ajita P; Harrison, David J

    2017-09-01

    Nonadherence to statin medications is associated with increased risk of cardiovascular disease and poses a challenge to lipid management in patients who are at risk for atherosclerotic cardiovascular disease. Numerous studies have examined statin adherence based on administrative claims data; however, these data may underestimate statin use in patients who participate in generic drug discount programs or who have alternative coverage. To estimate the proportion of patients with missing statin claims in a claims database and determine how missing claims affect commonly used utilization metrics. This retrospective cohort study used pharmacy data from the PharMetrics Plus (P+) claims dataset linked to the IMS longitudinal pharmacy point-of-sale prescription database (LRx) from January 1, 2012, through December 31, 2014. Eligible patients were represented in the P+ and LRx datasets, had ≥1 claim for a statin (index claim) in either database, and had ≥ 24 months of continuous enrollment in P+. Patients were linked between P+ and LRx using a deterministic method. Duplicate claims between LRx and P+ were removed to produce a new dataset comprised of P+ claims augmented with LRx claims. Statin use was then compared between P+ and the augmented P+ dataset. Utilization metrics that were evaluated included percentage of patients with ≥ 1 missing statin claim over 12 months in P+; the number of patients misclassified as new users in P+; the number of patients misclassified as nonstatin users in P+; the change in 12-month medication possession ratio (MPR) and proportion of days covered (PDC) in P+; the comparison between P+ and LRx of classifications of statin treatment patterns (statin intensity and patients with treatment modifications); and the payment status for missing statin claims. Data from 965,785 patients with statin claims in P+ were analyzed (mean age 56.6 years; 57% male). In P+, 20.1% had ≥ 1 missing statin claim post-index; 13.7% were misclassified as
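
    The adherence metrics discussed in this record (medication possession ratio and proportion of days covered) can be sketched as below from dispensing records; the fill dates and days supplied are hypothetical, and real claims analyses require additional handling of overlaps, switching and enrollment gaps.

```python
# Sketch of MPR and PDC computed from dispensing records; fill dates and days
# supplied are hypothetical, and real claims need extra handling of overlaps,
# switching and enrollment gaps.
from datetime import date, timedelta

period_start, period_days = date(2013, 1, 1), 365
fills = [(date(2013, 1, 1), 90), (date(2013, 3, 15), 90),
         (date(2013, 8, 1), 90), (date(2013, 11, 10), 30)]  # (fill date, days supply)

# MPR: total days supplied divided by days in the observation period
mpr = sum(days for _, days in fills) / period_days

# PDC: distinct calendar days covered by at least one fill, divided by period days
covered = set()
for start, days in fills:
    for k in range(days):
        d = start + timedelta(days=k)
        if 0 <= (d - period_start).days < period_days:
            covered.add(d)
pdc = len(covered) / period_days

print(f"MPR = {mpr:.2f}, PDC = {pdc:.2f}")
```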

  8. Oncology patient-reported claims: maximising the chance for success.

    Science.gov (United States)

    Kitchen, H; Rofail, D; Caron, M; Emery, M-P

    2011-01-01

    To review Patient Reported Outcome (PRO) labelling claims achieved in oncology in Europe and in the United States and consider the benefits, and challenges faced. PROLabels database was searched to identify oncology products with PRO labelling approved in Europe since 1995 or in the United States since 1998. The US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) websites and guidance documents were reviewed. PUBMED was searched for articles on PRO claims in oncology. Among all oncology products approved, 22 were identified with PRO claims; 10 in the United States, 7 in Europe, and 5 in both. The language used in the labelling was limited to benefit (e.g. "…resulted in symptom benefits by significantly prolonging time to deterioration in cough, dyspnoea, and pain, versus placebo") and equivalence (e.g. "no statistical differences were observed between treatment groups for global QoL"). Seven products used a validated HRQoL tool; two used symptom tools; two used both; seven used single-item symptom measures (one was unknown). The following emerged as likely reasons for success: ensuring systematic PRO data collection; clear rationale for pre-specified endpoints; adequately powered trials to detect differences and clinically significant changes; adjusting for multiplicity; developing an a priori statistical analysis plan including primary and subgroup analyses, dealing with missing data, pooling multiple-site data; establishing clinical versus statistical significance; interpreting failure to detect change. End-stage patient drop-out rates and cessation of trials due to exceptional therapeutic benefit pose significant challenges to demonstrating treatment PRO improvement. PRO labelling claims demonstrate treatment impact and the trade-off between efficacy and side effects ultimately facilitating product differentiation. Reliable and valid instruments specific to the desired language, claim, and target population are required. Practical

  9. A logistic regression model for Ghana National Health Insurance claims

    Directory of Open Access Journals (Sweden)

    Samuel Antwi

    2013-07-01

    Full Text Available In August 2003, the Ghanaian Government made history by implementing the first National Health Insurance System (NHIS) in Sub-Saharan Africa. Within three years, over half of the country's population had voluntarily enrolled in the National Health Insurance Scheme. The study's objectives included: (1) estimating the risk factors that influence Ghana National Health Insurance claims, and (2) estimating the magnitude of each of these risk factors in relation to claims. Data were collected from policyholders of the Ghana National Health Insurance Scheme using the National Health Insurance database and the patients' attendance register of the Koforidua Regional Hospital, from 1st January to 31st December 2011. Quantitative analysis was done using generalized linear regression (GLR) models. The results indicate that risk factors such as sex, age, marital status, distance and length of stay at the hospital were important predictors of health insurance claims, whereas health status, billed charges and income level were not good predictors of national health insurance claims. The outcome of the study shows that sex, age, marital status, distance and length of stay at the hospital are statistically significant in the determination of Ghana National Health Insurance premiums, since they considerably influence claims. We recommend, among other things, that the National Health Insurance Authority facilitate the institutionalization of continuous collection of appropriate data to help in the determination of future premiums.
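
    As a rough illustration of the kind of model described in this record, the sketch below fits a logistic regression of a binary claim indicator on demographic predictors with statsmodels. The variable names and the synthetic data are assumptions made for the example; they do not reproduce the NHIS dataset or the paper's exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic policyholder records standing in for the NHIS claims data.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "claimed":        rng.integers(0, 2, n),          # 1 = submitted a claim
    "sex":            rng.choice(["male", "female"], n),
    "age":            rng.integers(18, 80, n),
    "married":        rng.integers(0, 2, n),
    "distance_km":    rng.uniform(0.5, 40.0, n),
    "length_of_stay": rng.integers(0, 15, n),
})

# Logistic regression: log-odds of a claim as a linear function of the predictors.
model = smf.logit(
    "claimed ~ C(sex) + age + married + distance_km + length_of_stay", data=df
).fit(disp=False)

print(model.summary())          # coefficients, standard errors, p-values
print(np.exp(model.params))     # odds ratios for each predictor
```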

  10. The Princeton Protein Orthology Database (P-POD): a comparative genomics analysis tool for biologists.

    OpenAIRE

    Sven Heinicke; Michael S Livstone; Charles Lu; Rose Oughtred; Fan Kang; Samuel V Angiuoli; Owen White; David Botstein; Kara Dolinski

    2007-01-01

    Many biological databases that provide comparative genomics information and tools are now available on the internet. While certainly quite useful, to our knowledge none of the existing databases combine results from multiple comparative genomics methods with manually curated information from the literature. Here we describe the Princeton Protein Orthology Database (P-POD, http://ortholog.princeton.edu), a user-friendly database system that allows users to find and visualize the phylogenetic r...

  11. Health care resource use and costs associated with possible side effects of high oral corticosteroid use in asthma: a claims-based analysis.

    Science.gov (United States)

    Luskin, Allan T; Antonova, Evgeniya N; Broder, Michael S; Chang, Eunice Y; Omachi, Theodore A; Ledford, Dennis K

    2016-01-01

    The objective of this study was to estimate the prevalence of possible oral corticosteroid (OCS)-related side effects and the associated health care resource use and costs in patients with asthma. This was a cross-sectional, matched-cohort, retrospective study using a commercial claims database. Adults with asthma diagnosis codes and evidence of asthma medication use were studied. Patients with high OCS use (≥30 days of OCS annually) were divided into those who did versus those who did not experience possible OCS-related side effects. Their health care resource use and costs were compared using linear regression or negative binomial regression models, adjusting for age, sex, geographic region, Charlson Comorbidity Index score, and chronic obstructive pulmonary disease status. After adjustment, high OCS users with possible side effects had more office visits (23.0 vs 19.6) than those without possible side effects. Emergency department visits were similar between the groups. High OCS users with possible side effects had higher adjusted total annual mean health care costs ($25,168) than those without such side effects ($21,882; P = 0.009). Among high OCS users, patients with possible OCS-related side effects are more likely to use health care services than those without such side effects. Although OCS may help control asthma and manage exacerbations, OCS side effects may result in additional health care resource use and costs, highlighting the need for OCS-sparing asthma therapies.
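
    The adjusted comparison of visit counts described above is typically done with a count-data model. Below is a minimal sketch of a negative binomial regression on synthetic data with statsmodels; the covariate names and values are assumptions for illustration and do not reproduce the study's claims data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for the matched cohorts: annual office-visit counts for high OCS
# users with and without possible side effects, plus the adjustment covariates.
rng = np.random.default_rng(1)
n = 800
df = pd.DataFrame({
    "office_visits": rng.poisson(20, n),
    "side_effects":  rng.integers(0, 2, n),   # 1 = possible OCS-related side effect
    "age":           rng.integers(18, 85, n),
    "female":        rng.integers(0, 2, n),
    "cci":           rng.integers(0, 6, n),   # Charlson Comorbidity Index score
    "copd":          rng.integers(0, 2, n),
})

# Negative binomial regression of visit counts on group membership, adjusting for covariates.
nb = smf.glm(
    "office_visits ~ side_effects + age + female + cci + copd",
    data=df,
    family=sm.families.NegativeBinomial(),
).fit()

print(nb.summary())
print(np.exp(nb.params["side_effects"]))  # adjusted rate ratio for the side-effect group
```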

  12. A New Global Coastal Database for Impact and Vulnerability Analysis to Sea-Level Rise

    NARCIS (Netherlands)

    Vafeidis, A.T.; Nicholls, R.J.; McFadden, L.; Tol, R.S.J.; Hinkel, J.; Spencer, T.; Grashoff, P.S.; Boot, G.; Klein, R.J.T.

    2008-01-01

    A new global coastal database has been developed within the context of the DINAS-COAST project. The database covers the world's coasts, excluding Antarctica, and includes information on more than 80 physical, ecological, and socioeconomic parameters of the coastal zone. The database provides the

  13. Logical database design principles

    CERN Document Server

    Garmany, John; Clark, Terry

    2005-01-01

    INTRODUCTION TO LOGICAL DATABASE DESIGN: Understanding a Database; Database Architectures; Relational Databases; Creating the Database; System Development Life Cycle (SDLC); Systems Planning: Assessment and Feasibility; System Analysis: Requirements; System Analysis: Requirements Checklist; Models Tracking and Schedules; Design Modeling; Functional Decomposition Diagram; Data Flow Diagrams; Data Dictionary; Logical Structures and Decision Trees; System Design: Logical. SYSTEM DESIGN AND IMPLEMENTATION: The ER Approach; Entities and Entity Types; Attribute Domains; Attributes; Set-Valued Attributes; Weak Entities; Constraint

  14. Criteria and algorithms for constructing reliable databases for statistical analysis of disruptions at ASDEX Upgrade

    International Nuclear Information System (INIS)

    Cannas, B.; Fanni, A.; Pautasso, G.; Sias, G.; Sonato, P.

    2009-01-01

    The present understanding of disruption physics has not gone so far as to provide a mathematical model describing the onset of this instability. A disruption prediction system, based on statistical analysis of the diagnostic signals recorded during the experiments, would allow the probability of a disruption to be estimated. A crucial point for a good design of such a prediction system is the appropriateness of the data set. This paper reports the details of the database built to train a disruption predictor based on neural networks for ASDEX Upgrade. The criteria for pulse selection, the analyses performed on the plasma parameters, and the pre-processing algorithms implemented are described. As an example of application, a short description of the disruption predictor is reported.

  15. Building a fingerprint database for modern art materials: PIXE analysis of commercial painting and drawing media

    Energy Technology Data Exchange (ETDEWEB)

    Zucchiatti, A., E-mail: alessandro.zucchiatti@uam.es [Centro de Micro Análisis de Materiales, Universidad Autónoma de Madrid, C/ Faraday 3, Madrid (Spain); Climent-Font, A. [Centro de Micro Análisis de Materiales, Universidad Autónoma de Madrid, C/ Faraday 3, Madrid (Spain); Departamento de Física Aplicada, Universidad Autónoma de Madrid, Avda. Francisco Tomás y Valiente 7, 28049 Madrid (Spain); Gómez-Tejedor, J. García [Museo Nacional Centro de Arte Reina Sofia, Departamento de Restauración, Calle Santa Isabel, 52, 28012 Madrid (Spain); Martina, S. [Centro de Micro Análisis de Materiales, Universidad Autónoma de Madrid, C/ Faraday 3, Madrid (Spain); Universitá degli Studi di Torino, Turin (Italy); Muro García, C.; Gimeno, E.; Hernández, P.; Canelo, N. [Museo Nacional Centro de Arte Reina Sofia, Departamento de Restauración, Calle Santa Isabel, 52, 28012 Madrid (Spain)

    2015-11-15

    We have examined by PIXE (and by RBS in parallel) about 180 samples of commercial painting and drawing media including pencils, pastels, waxes, inks, paints and paper. Given the high PIXE sensitivity we produced X-ray spectra at low collected charges and currents, operating in good conservation conditions. For drawing media containing inorganic components or a unique marker element, we have defined colouring agent fingerprints which correspond, when applicable, to the composition declared by the manufacturer. For thin layers, the ratios of areal densities of elements are close to those expected given the declared composition, which is promising from the perspective of compiling the database. The quantitative PIXE and RBS analysis of part of the set of samples is provided.

  16. Covariant Evolutionary Event Analysis for Base Interaction Prediction Using a Relational Database Management System for RNA.

    Science.gov (United States)

    Xu, Weijia; Ozer, Stuart; Gutell, Robin R

    2009-01-01

    With increasingly large numbers of properly aligned sequences, comparative sequence analysis can accurately identify not only common structures formed by standard base pairing but also new types of structural elements and constraints. However, traditional methods are too computationally expensive to perform well on large-scale alignments and are less effective with sequences from diverse phylogenetic classifications. We propose a new approach that utilizes coevolution rates among pairs of nucleotide positions, using the phylogenetic and evolutionary relationships of the organisms of the aligned sequences. With a novel data schema to manage the relevant information within a relational database, our method, implemented with Microsoft SQL Server 2005, showed 90% sensitivity in identifying base pair interactions among 16S ribosomal RNA sequences from Bacteria, at a scale 40 times larger, and with 50% better sensitivity, than a previous study. The results also indicated covariation signals for a few sets of cross-strand base stacking pairs in secondary structure helices, and other subtle constraints in the RNA structure.
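
    The covariation signal discussed above is often quantified with measures such as mutual information between alignment columns. The sketch below computes pairwise mutual information for a toy RNA alignment; it is a generic illustration of column covariation, not the phylogeny-weighted statistic the authors implemented in their relational database.

```python
from collections import Counter
from itertools import combinations
from math import log2

# Toy RNA alignment (rows = sequences, columns = positions).
alignment = [
    "GCAUGC",
    "GCAAGC",
    "GCGUGC",
    "ACAUGU",
    "ACGAGU",
]

def mutual_information(col_i, col_j):
    """Mutual information (bits) between two alignment columns."""
    n = len(col_i)
    pi = Counter(col_i)
    pj = Counter(col_j)
    pij = Counter(zip(col_i, col_j))
    mi = 0.0
    for (a, b), c in pij.items():
        p_ab = c / n
        mi += p_ab * log2(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

columns = list(zip(*alignment))
scores = {
    (i, j): mutual_information(columns[i], columns[j])
    for i, j in combinations(range(len(columns)), 2)
}

# Highest-scoring column pairs are candidate covarying (possibly base-paired) positions.
for (i, j), mi in sorted(scores.items(), key=lambda kv: -kv[1])[:3]:
    print(f"columns {i} and {j}: MI = {mi:.3f}")
```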

  17. [Healthier after Psychotherapy? Analysis of Claims Data (Lower Saxony, Germany) on Sickness Absence Duration before and after Outpatient Psychotherapy].

    Science.gov (United States)

    Epping, Jelena; de Zwaan, Martina; Geyer, Siegfried

    2017-11-17

    Introduction: In employed populations, sickness absence can be used as a good indicator of health status. The present study examined how periods of sickness absence developed within one year before and one year after psychotherapy, comparing three types of psychotherapy (behavior therapy, psychodynamic psychotherapy, and psychoanalysis), all fully covered by statutory health insurance. Methods and data: The analyses were performed with pseudonymized claims data from AOK Niedersachsen, a statutory health insurance fund (N=2,900,065 insured). Certified sickness absences before and after psychotherapy were examined for 9,916 patients. Matched controls were used to compare the lengths of sickness absences, and analyses were performed separately for women and men. Results: Within one year before starting psychotherapy, patients had longer sickness absences than controls on average. The length of sickness absence fell from 20 days (median) within one year before psychotherapy to 12 days (median) within one year after. The differences between types of psychotherapy were considerable. Discussion: Differences in sickness absences may in part be explained by socio-demographic differences; patients who underwent psychoanalysis were younger and had higher educational levels. However, it remains unclear why the differences in sickness absence periods were so large, and it has to be discussed whether self-selection of patients with better health into psychoanalysis occurred. Conclusions: Patients undergoing psychoanalysis differ from patients who underwent other types of psychotherapy in their duration of sickness absence as well as in socio-demographic profile. Thus, due to differences in patient composition, future research in psychotherapy will have to differentiate by type of psychotherapy. © Georg Thieme Verlag KG Stuttgart · New York.

  18. Large Survey Database: A Distributed Framework for Storage and Analysis of Large Datasets

    Science.gov (United States)

    Juric, Mario

    2011-01-01

    The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than 10^2 nodes and can be made to function in "shared nothing" architectures. An LSD database consists of a set of vertically and horizontally partitioned tables, physically stored as compressed HDF5 files. Vertically, we partition the tables into groups of related columns ('column groups'), storing together logically related data (e.g., astrometry, photometry). Horizontally, the tables are partitioned into partially overlapping 'cells' by position in space (lon, lat) and time (t). This organization allows for fast lookups based on spatial and temporal coordinates, as well as data and task distribution. The design was inspired by the success of Google BigTable (Chang et al., 2006). Our programming model is a pipelined extension of MapReduce (Dean and Ghemawat, 2004). An SQL-like query language is used to access data. For complex tasks, map-reduce 'kernels' that operate on query results on a per-cell basis can be written, with the framework taking care of scheduling and execution. The combination leverages users' familiarity with SQL, while offering a fully distributed computing environment. LSD adds little overhead compared to direct Python file I/O. In tests, we swept through 1.1 Grows of PanSTARRS+SDSS data (220 GB) in less than 15 minutes on a dual-CPU machine. In a cluster environment, we achieved bandwidths of 17 Gbit/s (I/O limited). Based on current experience, we believe LSD should scale to be useful for analysis and storage of LSST-scale datasets. It can be downloaded from http://mwscience.net/lsd.
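
    The per-cell map-reduce pattern described above can be illustrated without LSD itself. The sketch below partitions a toy catalog into spatial cells and runs a user-defined kernel over the cells in parallel; it is a generic illustration of the programming model, not LSD's actual API.

```python
from collections import defaultdict
from multiprocessing import Pool

# Toy catalog rows: (lon, lat, magnitude).
catalog = [(12.3, -4.1, 18.2), (12.9, -4.5, 19.0), (250.2, 33.7, 17.5),
           (250.8, 33.1, 16.9), (13.1, -4.0, 20.1)]

CELL_SIZE_DEG = 5.0  # illustrative cell size

def cell_of(row):
    lon, lat, _ = row
    return (int(lon // CELL_SIZE_DEG), int(lat // CELL_SIZE_DEG))

def kernel(cell_rows):
    """Map step: per-cell statistics (here, row count and mean magnitude)."""
    cell, rows = cell_rows
    mags = [m for _, _, m in rows]
    return cell, (len(rows), sum(mags) / len(mags))

if __name__ == "__main__":
    # Horizontal partitioning: group rows by spatial cell.
    cells = defaultdict(list)
    for row in catalog:
        cells[cell_of(row)].append(row)

    # Run the kernel over cells in parallel, then reduce the per-cell results.
    with Pool() as pool:
        results = pool.map(kernel, cells.items())

    for cell, (count, mean_mag) in results:
        print(f"cell {cell}: {count} rows, mean mag {mean_mag:.2f}")
```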

  19. Analysis of factors associated with hiccups based on the Japanese Adverse Drug Event Report database.

    Science.gov (United States)

    Hosoya, Ryuichiro; Uesawa, Yoshihiro; Ishii-Nozawa, Reiko; Kagaya, Hajime

    2017-01-01

    Hiccups are occasionally experienced by most individuals. Although hiccups are not life-threatening, they may lead to a decline in quality of life. Previous studies showed that hiccups may occur as an adverse effect of certain medicines during chemotherapy. Furthermore, a male dominance in hiccups has been reported. However, due to the limited number of studies conducted on this phenomenon, debate still surrounds the few factors influencing hiccups. The present study aimed to investigate the influence of medicines and patient characteristics on hiccups using a large-sized adverse drug event report database and, specifically, the Japanese Adverse Drug Event Report (JADER) database. Cases of adverse effects associated with medications were extracted from JADER, and Fisher's exact test was performed to assess the presence or absence of hiccups for each medication. In a multivariate analysis, we conducted a multiple logistic regression analysis using medication and patient characteristic variables exhibiting significance. We also examined the role of dexamethasone in inducing hiccups during chemotherapy. Medicines associated with hiccups included dexamethasone, levofolinate, fluorouracil, oxaliplatin, carboplatin, and irinotecan. Patient characteristics associated with hiccups included a male gender and greater height. The combination of anti-cancer agent and dexamethasone use was noted in more than 95% of patients in the dexamethasone-use group. Hiccups also occurred in patients in the anti-cancer agent-use group who did not use dexamethasone. Most of the medications that induce hiccups are used in chemotherapy. The results of the present study suggest that it is possible to predict a high risk of hiccups using patient characteristics. We confirmed that dexamethasone was the drug that has the strongest influence on the induction of hiccups. However, the influence of anti-cancer agents on the induction of hiccups cannot be denied. We consider the results of the present
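
    The screening step described in this record amounts to a 2x2 disproportionality test for each drug. A minimal sketch of such a test is shown below; the report counts and the framing of the table are illustrative assumptions, not values from JADER.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 contingency table of spontaneous reports:
#                       hiccups reported    no hiccups reported
# drug of interest             a                     b
# all other drugs              c                     d
a, b = 40, 960        # reports that mention the drug of interest
c, d = 200, 98800     # reports for all other drugs

table = [[a, b], [c, d]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")

# Reporting odds ratio and Fisher p-value: a large OR with a small p-value flags
# the drug as disproportionately associated with hiccup reports.
print(f"reporting odds ratio = {odds_ratio:.1f}, p = {p_value:.3g}")
```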

  20. Pediatric reduction mammaplasty: A retrospective analysis of the Kids' Inpatient Database (KID).

    Science.gov (United States)

    Soleimani, Tahereh; Evans, Tyler A; Sood, Rajiv; Hadad, Ivan; Socas, Juan; Flores, Roberto L; Tholpady, Sunil S

    2015-09-01

    Pediatric breast reduction mammaplasty is a procedure commonly performed in children suffering from excess breast tissue, back pain, and social anxiety. Minimal information exists regarding demographics, epidemiology, and complications in adolescents. As health care reform progresses, investigating the socioeconomic and patient-related factors affecting cost and operative outcomes is essential. The Kids' Inpatient Database (KID) was used from 2000 to 2009. Patients aged 20 years and younger with an International Classification of Diseases, 9th Revision code for macromastia and a procedure code for reduction mammaplasty were included. Demographic data, including age, sex, payer mix, and location, were collected. Significant independent variables associated with complications and duration of stay were identified with bivariate and multiple regression analysis. A total of 1,345 patients between the ages of 12 and 20 were evaluated. The majority of patients were white (64%), from a zip code with the greatest income (36%), and had private insurance (75%). Overall comorbidity and complication rates were 30% and 3.2%, respectively. Duration of stay was associated with race, income quartile, insurance type, having complications, and hospital type. African-American race, Medicaid, lower income, and private investor-owned hospitals were predictive of greater hospital charges. In this large retrospective database analysis, pediatric reduction mammaplasty had a relatively low early complication rate and short duration of stay. Discrepancies in complications, total charges, and duration of stay were associated with race, location, and socioeconomic status. Although demonstrably safe, this is the first study demonstrating the negative effect of race and socioeconomic status on a completely elective procedure involving children. These results demonstrate the intricate association between socioeconomic and patient-related factors influencing overall outcomes in the pediatric population. Copyright © 2015

  1. SSA Disability Claim Data

    Data.gov (United States)

    Social Security Administration — The dataset includes fiscal year data for initial claims for SSA disability benefits that were referred to a state agency for a disability determination. Specific...

  2. Development of a database for prompt γ-ray neutron activation analysis. Summary report of the second research coordination meeting

    International Nuclear Information System (INIS)

    Lone, M.A.; Mughabghab, S.F.; Paviotti-Corcuera, R.

    2001-06-01

    This report summarizes the presentations, recommendations and conclusions of the Second Research Co-ordination Meeting on Development of a Database for Prompt γ-ray Neutron Activation Analysis. The purpose of this meeting was to review the results achieved in the development of the database and to discuss further development and planning of the products of this CRP. Actions to be taken were agreed upon with the aim of completing the project by the end of 2002. (author)

  3. Database of prompt gamma rays from slow neutron capture forelemental analysis

    Energy Technology Data Exchange (ETDEWEB)

    Firestone, R.B.; Choi, H.D.; Lindstrom, R.M.; Molnar, G.L.; Mughabghab, S.F.; Paviotti-Corcuera, R.; Revay, Zs; Trkov, A.; Zhou,C.M.; Zerkin, V.

    2004-12-31

    The increasing importance of Prompt Gamma-ray Activation Analysis (PGAA) in a broad range of applications is evident, and has been emphasized at many meetings related to this topic (e.g., Technical Consultants' Meeting, Use of neutron beams for low- and medium-flux research reactors: radiography and materials characterizations, IAEA Vienna, 4-7 May 1993, IAEA-TECDOC-837, 1993). Furthermore, an Advisory Group Meeting (AGM) for the Coordination of the Nuclear Structure and Decay Data Evaluators Network has stated that there is a need for a complete and consistent library of cold- and thermal-neutron capture gamma-ray and cross-section data (AGM held at Budapest, 14-18 October 1996, INDC(NDS)-363); this AGM also recommended the organization of an IAEA CRP on the subject. The International Nuclear Data Committee (INDC) is the primary advisory body to the IAEA Nuclear Data Section on their nuclear data programmes. At a biennial meeting in 1997, the INDC strongly recommended that the Nuclear Data Section support new measurements and update the database on Neutron-induced Prompt Gamma-ray Activation Analysis (21st INDC meeting, INDC/P(97)-20). As a consequence of the various recommendations, a CRP on "Development of a Database for Prompt Gamma-ray Neutron Activation Analysis (PGAA)" was initiated in 1999. Prior to this project, several consultants had defined the scope, objectives and tasks, as approved subsequently by the IAEA. Each CRP participant assumed responsibility for the execution of specific tasks. The results of their and other research work were discussed and approved by the participants in research co-ordination meetings (see Summary reports: INDC(NDS)-411, 2000; INDC(NDS)-424, 2001; and INDC(NDS)-443, 200). PGAA is a non-destructive radioanalytical method, capable of rapid or simultaneous "in-situ" multi-element analyses across the entire Periodic Table, from hydrogen to uranium. However, inaccurate and incomplete data were a significant hindrance in the qualitative and quantitative

  4. Genome cluster database. A sequence family analysis platform for Arabidopsis and rice.

    Science.gov (United States)

    Horan, Kevin; Lauricha, Josh; Bailey-Serres, Julia; Raikhel, Natasha; Girke, Thomas

    2005-05-01

    The genome-wide protein sequences from Arabidopsis (Arabidopsis thaliana) and rice (Oryza sativa) spp. japonica were clustered into families using sequence similarity and domain-based clustering. The two fundamentally different methods resulted in separate cluster sets with complementary properties that compensate for each other's limitations in accurate family analysis. Functional names for the identified families were assigned with an efficient computational approach that uses the description of the most common molecular function gene ontology node within each cluster. Subsequently, multiple alignments and phylogenetic trees were calculated for the assembled families. All clustering results and their underlying sequences were organized in the Web-accessible Genome Cluster Database (http://bioinfo.ucr.edu/projects/GCD) with rich interactive and user-friendly sequence family mining tools to facilitate the analysis of any given family of interest for the plant science community. An automated clustering pipeline ensures current information for future updates in the annotations of the two genomes and clustering improvements. The analysis allowed the first systematic identification of family and singlet proteins present in both organisms as well as those restricted to one of them. In addition, the established Web resources for mining these data provide a road map for future studies of the composition and structure of protein families between the two species.

  5. Pediatric radiology malpractice claims - characteristics and comparison to adult radiology claims

    International Nuclear Information System (INIS)

    Breen, Micheal A.; Taylor, George A.; Dwyer, Kathy; Yu-Moe, Winnie

    2017-01-01

    Medical malpractice is the primary method by which people who believe they have suffered an injury in the course of medical care seek compensation in the United States and Canada. An increasing body of research demonstrates that failure to correctly diagnose is the most common allegation made in malpractice claims against radiologists. Since the 1994 survey by the Society of Chairmen of Radiology in Children's Hospitals (SCORCH), no other published studies have specifically examined the frequency or clinical context of malpractice claims against pediatric radiologists or arising from pediatric imaging interpretation. We hypothesize that the frequency, character and outcome of malpractice claims made against pediatric radiologists differ from those seen in general radiology practice. We searched the Controlled Risk Insurance Co. (CRICO) Strategies' Comparative Benchmarking System (CBS), a private repository of approximately 350,000 open and closed medical malpractice claims in the United States, for claims related to pediatric radiology. We further queried these cases for the major allegation, the clinical environment in which the claim arose, the clinical severity of the alleged injury, indemnity paid (if payment was made), primary imaging modality involved (if applicable) and primary International Classification of Diseases, 9th revision (ICD-9) diagnosis underlying the claim. There were a total of 27,056 fully coded claims of medical malpractice in the CBS database in the 5-year period between Jan. 1, 2010, and Dec. 31, 2014. Of these, 1,472 cases (5.4%) involved patients younger than 18 years. Radiology was the primary service responsible for 71/1,472 (4.8%) pediatric cases. There were statistically significant differences in average payout for pediatric radiology claims ($314,671) compared to adult radiology claims ($174,033). The allegations were primarily diagnosis-related in 70% of pediatric radiology claims. The most common imaging modality implicated in

  6. Pediatric radiology malpractice claims - characteristics and comparison to adult radiology claims

    Energy Technology Data Exchange (ETDEWEB)

    Breen, Micheal A.; Taylor, George A. [Boston Children' s Hospital, Department of Radiology, Boston, MA (United States); Dwyer, Kathy; Yu-Moe, Winnie [CRICO Risk Management Foundation, Boston, MA (United States)

    2017-06-15

    Medical malpractice is the primary method by which people who believe they have suffered an injury in the course of medical care seek compensation in the United States and Canada. An increasing body of research demonstrates that failure to correctly diagnose is the most common allegation made in malpractice claims against radiologists. Since the 1994 survey by the Society of Chairmen of Radiology in Children's Hospitals (SCORCH), no other published studies have specifically examined the frequency or clinical context of malpractice claims against pediatric radiologists or arising from pediatric imaging interpretation. We hypothesize that the frequency, character and outcome of malpractice claims made against pediatric radiologists differ from those seen in general radiology practice. We searched the Controlled Risk Insurance Co. (CRICO) Strategies' Comparative Benchmarking System (CBS), a private repository of approximately 350,000 open and closed medical malpractice claims in the United States, for claims related to pediatric radiology. We further queried these cases for the major allegation, the clinical environment in which the claim arose, the clinical severity of the alleged injury, indemnity paid (if payment was made), primary imaging modality involved (if applicable) and primary International Classification of Diseases, 9th revision (ICD-9) diagnosis underlying the claim. There were a total of 27,056 fully coded claims of medical malpractice in the CBS database in the 5-year period between Jan. 1, 2010, and Dec. 31, 2014. Of these, 1,472 cases (5.4%) involved patients younger than 18 years. Radiology was the primary service responsible for 71/1,472 (4.8%) pediatric cases. There were statistically significant differences in average payout for pediatric radiology claims ($314,671) compared to adult radiology claims ($174,033). The allegations were primarily diagnosis-related in 70% of pediatric radiology claims. The most common imaging modality

  7. Pediatric radiology malpractice claims - characteristics and comparison to adult radiology claims.

    Science.gov (United States)

    Breen, Micheál A; Dwyer, Kathy; Yu-Moe, Winnie; Taylor, George A

    2017-06-01

    Medical malpractice is the primary method by which people who believe they have suffered an injury in the course of medical care seek compensation in the United States and Canada. An increasing body of research demonstrates that failure to correctly diagnose is the most common allegation made in malpractice claims against radiologists. Since the 1994 survey by the Society of Chairmen of Radiology in Children's Hospitals (SCORCH), no other published studies have specifically examined the frequency or clinical context of malpractice claims against pediatric radiologists or arising from pediatric imaging interpretation. We hypothesize that the frequency, character and outcome of malpractice claims made against pediatric radiologists differ from those seen in general radiology practice. We searched the Controlled Risk Insurance Co. (CRICO) Strategies' Comparative Benchmarking System (CBS), a private repository of approximately 350,000 open and closed medical malpractice claims in the United States, for claims related to pediatric radiology. We further queried these cases for the major allegation, the clinical environment in which the claim arose, the clinical severity of the alleged injury, indemnity paid (if payment was made), primary imaging modality involved (if applicable) and primary International Classification of Diseases, 9th revision (ICD-9) diagnosis underlying the claim. There were a total of 27,056 fully coded claims of medical malpractice in the CBS database in the 5-year period between Jan. 1, 2010, and Dec. 31, 2014. Of these, 1,472 cases (5.4%) involved patients younger than 18 years. Radiology was the primary service responsible for 71/1,472 (4.8%) pediatric cases. There were statistically significant differences in average payout for pediatric radiology claims ($314,671) compared to adult radiology claims ($174,033). The allegations were primarily diagnosis-related in 70% of pediatric radiology claims. The most common imaging modality implicated in

  8. Metagenomic Taxonomy-Guided Database-Searching Strategy for Improving Metaproteomic Analysis.

    Science.gov (United States)

    Xiao, Jinqiu; Tanca, Alessandro; Jia, Ben; Yang, Runqing; Wang, Bo; Zhang, Yu; Li, Jing

    2018-04-06

    Metaproteomics provides a direct measure of the functional information by investigating all proteins expressed by a microbiota. However, due to the complexity and heterogeneity of microbial communities, it is very hard to construct a sequence database suitable for a metaproteomic study. Using a public database, researchers might not be able to identify proteins from poorly characterized microbial species, while a sequencing-based metagenomic database may not provide adequate coverage for all potentially expressed protein sequences. To address this challenge, we propose a metagenomic taxonomy-guided database-search strategy (MT), in which a merged database is employed, consisting of both taxonomy-guided reference protein sequences from public databases and proteins from metagenome assembly. By applying our MT strategy to a mock microbial mixture, about two times as many peptides were detected as with the metagenomic database only. According to the evaluation of the reliability of taxonomic attribution, the rate of misassignments was comparable to that obtained using an a priori matched database. We also evaluated the MT strategy with a human gut microbial sample, and we found 1.7 times as many peptides as using a standard metagenomic database. In conclusion, our MT strategy allows the construction of databases able to provide high sensitivity and precision in peptide identification in metaproteomic studies, enabling the detection of proteins from poorly characterized species within the microbiota.
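
    The merge step described above can be pictured as concatenating a taxonomy-guided reference protein set with the metagenome-assembly proteins and removing duplicate sequences. The sketch below is a deliberately simplified, self-contained illustration of that idea with toy FASTA content; the file names and sequences are made up, and the real pipeline involves curated reference selection rather than plain deduplication.

```python
from pathlib import Path

def read_fasta(path):
    """Yield (header, sequence) pairs from a FASTA file."""
    header, seq = None, []
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(seq)
            header, seq = line[1:], []
        elif line:
            seq.append(line)
    if header is not None:
        yield header, "".join(seq)

def merge_databases(paths, out_path):
    """Write a merged FASTA keeping one entry per unique protein sequence."""
    seen, kept = set(), 0
    with open(out_path, "w") as out:
        for path in paths:
            for header, seq in read_fasta(path):
                if seq not in seen:
                    seen.add(seq)
                    out.write(f">{header}\n{seq}\n")
                    kept += 1
    return kept

# Toy stand-ins for the two sources: taxonomy-guided reference proteins and
# proteins predicted from the metagenome assembly (one sequence is shared).
Path("reference.faa").write_text(">refA\nMKTAYIAKQR\n>refB\nMSDNELQQAK\n")
Path("assembly.faa").write_text(">contig1_orf3\nMKTAYIAKQR\n>contig2_orf1\nMTEYKLVVVG\n")

n = merge_databases(["reference.faa", "assembly.faa"], "merged_search_database.faa")
print(f"{n} unique protein sequences written")  # expected: 3
```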

  9. Causes of death in long-term lung cancer survivors: a SEER database analysis.

    Science.gov (United States)

    Abdel-Rahman, Omar

    2017-07-01

    Long-term (>5 years) lung cancer survivors represent a small but distinct subgroup of lung cancer patients, and information about the causes of death of this subgroup is scarce. The Surveillance, Epidemiology and End Results (SEER) database (1988-2008) was utilized to determine the causes of death of long-term survivors of lung cancer. Survival analysis was conducted using Kaplan-Meier analysis, and multivariate analysis was conducted using a Cox proportional hazard model. Clinicopathological characteristics and survival outcomes were assessed for the whole cohort. A total of 78,701 lung cancer patients with >5 years survival were identified. This cohort included 54,488 patients surviving 5-10 years and 24,213 patients surviving >10 years. Among patients surviving 5-10 years, 21.8% were dead because of primary lung cancer, 10.2% because of other cancers, 6.8% because of cardiac disease and 5.3% because of non-malignant pulmonary disease. Among patients surviving >10 years, 12% were dead because of primary lung cancer, 6% because of other cancers, 6.9% because of cardiac disease and 5.6% because of non-malignant pulmonary disease. On multivariate analysis, factors associated with longer cardiac-disease-specific survival included younger age at diagnosis. Death from primary lung cancer remains a significant cause of death even 20 years after the diagnosis of lung cancer. Moreover, cardiac as well as non-malignant pulmonary causes contribute a considerable proportion of deaths in long-term lung cancer survivors.
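
    SEER-style cohort analyses of this kind usually pair Kaplan-Meier estimates with a Cox proportional hazards model. Below is a minimal sketch using the lifelines package on synthetic follow-up data; the variable names and values are assumptions for illustration and do not correspond to SEER fields or the paper's results.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Synthetic stand-in for a survivor cohort: follow-up time (months), death indicator,
# and a few covariates. A real SEER analysis would use the registry's own fields.
rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "months":   rng.exponential(60, n).round(1),
    "died":     rng.integers(0, 2, n),
    "age_dx":   rng.integers(40, 85, n),
    "male":     rng.integers(0, 2, n),
    "stage_ii": rng.integers(0, 2, n),
})

# Kaplan-Meier estimate of the survival function for the whole cohort.
km = KaplanMeierFitter()
km.fit(df["months"], event_observed=df["died"])
print(km.median_survival_time_)

# Cox proportional hazards model relating covariates to the hazard of death.
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
cph.print_summary()
```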

  10. Health Claims Data Warehouse (HCDW)

    Data.gov (United States)

    Office of Personnel Management — The Health Claims Data Warehouse (HCDW) will receive and analyze health claims data to support management and administrative purposes. The Federal Employee Health...

  11. Analysis of Handling Processes of Record Versions in NoSQL Databases

    Directory of Open Access Journals (Sweden)

    Yu. A. Grigorev

    2015-01-01

    Full Text Available This article investigates the processes of handling record versions in NoSQL databases. The goal of this work is to develop a model that enables users to handle record versions while working with a record simultaneously. The model allows us to estimate both the distribution of the time users spend handling record versions and the distribution of the number of record versions. With eventual consistency (W=R=1), several users may update the same record simultaneously. In this case, several versions of records with the same key will be stored in the database. When reading, a user obtains all versions, handles them, and saves a new version, while the older versions are deleted. According to the model, the user's time for handling record versions consists of two parts: a random handling time for each version and a random deliberation time for handling the result. Record saving time and record deleting time are much smaller than handling time, so they are ignored in the model. The paper offers two variants of the model. In the first variant, a client's handling time for one record is calculated as the sum of random handling times of the individual versions, based on the count of record versions. This variant explicitly ignores the fact that the handling time of record versions may depend on the number of updates performed by other users between sequential updates of the record by the current client. The second variant takes this feature into consideration. The developed models were implemented in the GPSS environment. Model experiments with different numbers of clients and different ratios between the handling time of one record and the deliberation time were conducted. The analysis showed that, despite the resemblance of the two model variants, the difference in how the average number of record versions and the average handling time change is significant. In the second variant, the dependences of the average count of record versions in the database and

  12. Detecting medication errors in the New Zealand pharmacovigilance database: a retrospective analysis.

    Science.gov (United States)

    Kunac, Desireé L; Tatley, Michael V

    2011-01-01

    Despite the traditional focus being adverse drug reactions (ADRs), pharmacovigilance centres have recently been identified as a potentially rich and important source of medication error data. To identify medication errors in the New Zealand Pharmacovigilance database (Centre for Adverse Reactions Monitoring [CARM]), and to describe the frequency and characteristics of these events. A retrospective analysis of the CARM pharmacovigilance database operated by the New Zealand Pharmacovigilance Centre was undertaken for the year 1 January-31 December 2007. All reports, excluding those relating to vaccines, clinical trials and pharmaceutical company reports, underwent a preventability assessment using predetermined criteria. Those events deemed preventable were subsequently classified to identify the degree of patient harm, type of error, stage of medication use process where the error occurred and origin of the error. A total of 1412 reports met the inclusion criteria and were reviewed, of which 4.3% (61/1412) were deemed preventable. Not all errors resulted in patient harm: 29.5% (18/61) were 'no harm' errors but 65.5% (40/61) of errors were deemed to have been associated with some degree of patient harm (preventable adverse drug events [ADEs]). For 5.0% (3/61) of events, the degree of patient harm was unable to be determined as the patient outcome was unknown. The majority of preventable ADEs (62.5% [25/40]) occurred in adults aged 65 years and older. The medication classes most involved in preventable ADEs were antibacterials for systemic use and anti-inflammatory agents, with gastrointestinal and respiratory system disorders the most common adverse events reported. For both preventable ADEs and 'no harm' events, most errors were incorrect dose and drug therapy monitoring problems consisting of failures in detection of significant drug interactions, past allergies or lack of necessary clinical monitoring. Preventable events were mostly related to the prescribing and

  13. International Reactor Physics Handbook Database and Analysis Tool (IDAT) - IDAT user manual

    International Nuclear Information System (INIS)

    2013-01-01

    The IRPhEP Database and Analysis Tool (IDAT) was first released in 2013 and is included on the DVD. This database and its corresponding user interface allow easy access to handbook information. Selected information from each configuration was entered into IDAT, such as the measurements performed, benchmark values, calculated values and materials specifications of the benchmark. In many cases this is supplemented with calculated data such as neutron balance data, spectra data, k-eff nuclear data sensitivities, and spatial reaction rate plots. IDAT accomplishes two main objectives: 1. Allow users to search the handbook for experimental configurations that satisfy their input criteria. 2. Allow users to trend results and identify suitable benchmark experiments for their application. IDAT provides the user with access to several categories of calculated data, including: - 1-group neutron balance data for each configuration with individual isotope contributions in the reactor system. - Flux and other reaction rate spectra in a 299-group energy scheme. Plotting capabilities were implemented in IDAT allowing the user to compare the spectra of selected configurations in the original fine energy structure or on any user-defined broader energy structure. - Sensitivity coefficients (percent changes of k-effective due to an elementary change of basic nuclear data) for the major nuclides and nuclear processes in a 238-group energy structure. IDAT is actively being developed. Those approved to access the online version of the handbook will also have access to an online version of IDAT. As May 2013 marks the first release, IDAT may contain data entry errors and omissions. The handbook remains the primary source of reactor physics benchmark data. A copy of the IDAT user's manual is attached to this document. A copy of the IRPhE Handbook can be obtained on request at http://www.oecd-nea.org/science/wprs/irphe/irphe-handbook/form.html

  14. 'Isotopo' a database application for facile analysis and management of mass isotopomer data.

    Science.gov (United States)

    Ahmed, Zeeshan; Zeeshan, Saman; Huber, Claudia; Hensel, Michael; Schomburg, Dietmar; Münch, Richard; Eylert, Eva; Eisenreich, Wolfgang; Dandekar, Thomas

    2014-01-01

    The composition of stable-isotope labelled isotopologues/isotopomers in metabolic products can be measured by mass spectrometry and supports the analysis of pathways and fluxes. As a prerequisite, the original mass spectra have to be processed, managed and stored to rapidly calculate, analyse and compare isotopomer enrichments to study, for instance, bacterial metabolism in infection. For such applications, we provide here the database application 'Isotopo'. This software package includes (i) a database to store and process isotopomer data, (ii) a parser to upload and translate different data formats for such data and (iii) an improved application to process and convert signal intensities from mass spectra of (13)C-labelled metabolites such as tertbutyldimethylsilyl-derivatives of amino acids. Relative mass intensities and isotopomer distributions are calculated applying a partial least square method with iterative refinement for high precision data. The data output includes formats such as graphs for overall enrichments in amino acids. The package is user-friendly for easy and robust data management of multiple experiments. The 'Isotopo' software is available at the following web link (section Download): http://spp1316.uni-wuerzburg.de/bioinformatics/isotopo/. The package contains three additional files: software executable setup (installer), one data set file (discussed in this article) and one excel file (which can be used to convert data from excel to '.iso' format). The 'Isotopo' software is compatible only with the Microsoft Windows operating system. http://spp1316.uni-wuerzburg.de/bioinformatics/isotopo/. © The Author(s) 2014. Published by Oxford University Press.

  15. Framing and Claiming: How Information-Framing Affects Expected Social Security Claiming Behavior.

    Science.gov (United States)

    Brown, Jeffrey R; Kapteyn, Arie; Mitchell, Olivia S

    2016-03-01

    This paper provides evidence that Social Security benefit claiming decisions are strongly affected by framing and are thus inconsistent with expected utility theory. Using a randomized experiment that controls for both observable and unobservable differences across individuals, we find that the use of a "breakeven analysis" encourages early claiming. Respondents are more likely to delay when later claiming is framed as a gain, and the claiming age is anchored at older ages. Additionally, the financially less literate, individuals with credit card debt, and those with lower earnings are more influenced by framing than others.

  16. TRAM (Transcriptome Mapper): database-driven creation and analysis of transcriptome maps from multiple sources

    Directory of Open Access Journals (Sweden)

    Danieli Gian

    2011-02-01

    Full Text Available Abstract Background Several tools have been developed to perform global gene expression profile data analysis, to search for specific chromosomal regions whose features meet defined criteria as well as to study neighbouring gene expression. However, most of these tools are tailored for a specific use in a particular context (e.g. they are species-specific, or limited to a particular data format) and they typically accept only gene lists as input. Results TRAM (Transcriptome Mapper) is a new general tool that allows the simple generation and analysis of quantitative transcriptome maps, starting from any source listing gene expression values for a given gene set (e.g. expression microarrays), implemented as a relational database. It includes a parser able to assign univocal and updated gene symbols to gene identifiers from different data sources. Moreover, TRAM is able to perform intra-sample and inter-sample data normalization, including an original variant of quantile normalization (scaled quantile), useful to normalize data from platforms with highly different numbers of investigated genes. When in 'Map' mode, the software generates a quantitative representation of the transcriptome of a sample (or of a pool of samples) and identifies if segments of defined lengths are over/under-expressed compared to the desired threshold. When in 'Cluster' mode, the software searches for a set of over/under-expressed consecutive genes. Statistical significance for all results is calculated with respect to genes localized on the same chromosome or to all genome genes. Transcriptome maps, showing differential expression between two sample groups, relative to two different biological conditions, may be easily generated. We present the results of a biological model test, based on a meta-analysis comparison between a sample pool of human CD34+ hematopoietic progenitor cells and a sample pool of megakaryocytic cells. Biologically relevant chromosomal segments and gene
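
    Quantile normalization, mentioned above as the basis for TRAM's 'scaled quantile' variant, can be sketched in a few lines. The function below is the standard form of the procedure applied to a toy genes-by-samples matrix; it is not TRAM's implementation, and ties are handled naively.

```python
import numpy as np

def quantile_normalize(matrix):
    """Standard quantile normalization of an expression matrix (genes x samples).

    Each sample's values are replaced by the mean of the values that share the
    same rank across samples, so all columns end up with identical distributions
    (ties are not averaged in this minimal version).
    """
    ranks = np.argsort(np.argsort(matrix, axis=0), axis=0)   # rank of each value per column
    sorted_cols = np.sort(matrix, axis=0)
    rank_means = sorted_cols.mean(axis=1)                    # mean across samples at each rank
    return rank_means[ranks]

# Toy expression matrix: 5 genes measured in 3 samples.
expr = np.array([
    [5.0, 4.0, 3.0],
    [2.0, 1.0, 4.0],
    [3.0, 4.0, 6.0],
    [4.0, 2.0, 8.0],
    [1.0, 3.0, 1.0],
])

print(quantile_normalize(expr))
```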

  17. Insulin use and persistence in patients with type 2 diabetes adding mealtime insulin to a basal regimen: a retrospective database analysis

    Directory of Open Access Journals (Sweden)

    Torres Amelito M

    2011-01-01

    Full Text Available Abstract Background The objective of this study was to characterize insulin use and examine factors associated with persistence to mealtime insulin among patients with type 2 diabetes (T2D) on stable basal insulin therapy initiating mealtime insulin therapy. Methods Insulin use among patients with T2D initiating mealtime insulin was investigated using the Thomson Reuters MarketScan® research databases from July 2001 through September 2006. The first mealtime insulin claim preceded by 6 months with 2 claims for basal insulin was used as the index event. A total of 21 months of continuous health plan enrollment was required. Patients were required to have a second mealtime insulin claim during the 12-month follow-up period. Persistence measure 1 defined non-persistence as the presence of a 90-day gap in mealtime insulin claims, effective the date of the last claim prior to the gap. Persistence measure 2 required 1 claim per quarter to be persistent. Risk factors for non-persistence were assessed using logistic regression. Results Patients initiating mealtime insulin (n = 4752; 51% male, mean age = 60.3 years) primarily used vial/syringe (87%) and insulin analogs (60%). Patients filled a median of 2, 3, and 4 mealtime insulin claims at 3, 6, and 12 months, respectively, with a median time of 76 days between refills. According to measure 1, persistence to mealtime insulin was 40.7%, 30.2%, and 19.1% at 3, 6, and 12 months, respectively. Results for measure 2 were considerably higher: 74.3%, 55.3%, and 42.2% of patients were persistent at 3, 6, and 12 months, respectively. Initiating mealtime insulin with human insulin rather than an insulin analog was a risk factor for non-persistence by both measures. Conclusions Mealtime insulin use and persistence were both considerably lower than expected, and were significantly lower for human insulin compared to analogs.
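
    Persistence measure 1 described above hinges on detecting a 90-day gap between refills. The sketch below shows one way such a rule might be coded; the fill dates, the 365-day follow-up window and the exact tie-breaking choices are illustrative assumptions rather than the study's specification.

```python
from datetime import date, timedelta

# Hypothetical mealtime insulin fill dates for one patient, starting at the index claim.
fills = [date(2005, 1, 5), date(2005, 2, 20), date(2005, 4, 2), date(2005, 8, 15)]

GAP_LIMIT = timedelta(days=90)   # persistence measure 1: a 90-day gap ends persistence
FOLLOW_UP_END = fills[0] + timedelta(days=365)

def persistence_end(fill_dates):
    """Return the date persistence ends under the 90-day-gap rule.

    Non-persistence takes effect at the last claim before the first 90-day gap;
    otherwise the patient is persistent through the end of follow-up.
    """
    for previous, current in zip(fill_dates, fill_dates[1:]):
        if current - previous > GAP_LIMIT:
            return previous
    if FOLLOW_UP_END - fill_dates[-1] > GAP_LIMIT:
        return fill_dates[-1]
    return FOLLOW_UP_END

end = persistence_end(sorted(fills))
print(f"persistent for {(end - fills[0]).days} days after the index claim")
```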

  18. Managing Large Scale Project Analysis Teams through a Web Accessible Database

    Science.gov (United States)

    O'Neil, Daniel A.

    2008-01-01

    Large scale space programs analyze thousands of requirements while mitigating safety, performance, schedule, and cost risks. These efforts involve a variety of roles with interdependent use cases and goals. For example, study managers and facilitators identify ground rules and assumptions for a collection of studies required for a program or project milestone. Task leaders derive product requirements from the ground rules and assumptions and describe the activities needed to produce the required analytical products. Discipline specialists produce the specified products and load the results into a file management system. Organizational and project managers provide the personnel and funds to conduct the tasks. Each role has responsibilities to establish information linkages and provide status reports to management. Projects conduct design and analysis cycles to refine designs to meet the requirements and implement risk mitigation plans. At the program level, integrated design and analysis cycle studies are conducted to eliminate every 'to-be-determined' and develop plans to mitigate every risk. At the agency level, strategic studies analyze different approaches to exploration architectures and campaigns. This paper describes a web-accessible database developed by NASA to coordinate and manage tasks at three organizational levels. Other topics in this paper cover integration technologies and techniques for process modeling and enterprise architectures.

  19. Temporal compression of soil erosion processes. A regional analysis of USLE database

    International Nuclear Information System (INIS)

    Gonzalez-Hidalgo, J. C.; Luis, M.; Lopez-Bermudez, F.

    2009-01-01

    When John Thornes and Denis Brunsden wrote in 1977, "How often one hears the researcher (and no less the undergraduate) complain that after weeks of observation nothing happened only to learn that, the day after his departure, a flood caused unprecedented erosion and channel changes" (Thornes and Brunsden, 1977, p. 57), they were focusing on two important problems in geomorphology: extreme events and the time compression of geomorphological processes. Time compression is a fundamental characteristic of geomorphological processes, sometimes produced by extreme events. Extreme events are rare events, defined by their deviation from mean values. But from magnitude-frequency analysis we know that a few events, not necessarily extreme, are able to produce a large amount of geomorphological work. Finally, the time compression of geomorphological processes can be studied through the analysis of the largest events defined by rank, not magnitude. We have analysed the effects of the largest events on total soil erosion using 594 erosion plots from the USLE database. The plots are located in different climate regions of the USA and have records of different lengths. The mean contribution of the 10 largest daily events is 60% of total soil erosion. There is a relationship between this percentage and the total number of daily erosive events recorded. The pattern seems to be independent of climate conditions. We discuss the nature of this relationship and its implications for soil erosion research. (Author) 17 refs.
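
    The headline statistic in this record, the share of total soil loss contributed by the largest-ranked daily events, is straightforward to compute per plot. The sketch below does so on a synthetic heavy-tailed record; the generated values and the choice of distribution are assumptions for illustration, not USLE data.

```python
import numpy as np

# Synthetic daily soil-loss record for one plot (t/ha per erosive event); a real analysis
# would use the per-event records stored in the USLE plot database.
rng = np.random.default_rng(3)
daily_erosion = rng.pareto(1.5, size=400) * 0.05   # heavy-tailed, as erosion records tend to be

def largest_events_share(events, k=10):
    """Fraction of total soil loss contributed by the k largest daily events."""
    events = np.sort(events)[::-1]
    return events[:k].sum() / events.sum()

share = largest_events_share(daily_erosion, k=10)
print(f"10 largest events account for {share:.0%} of total soil loss")
```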

  20. Analysis of fuel centre temperatures and fission gas release data from the IFPE Database

    International Nuclear Information System (INIS)

    Schubert, A.; Lassmann, K.; Van Uffelen, P.; Van de Laar, J.; Elenkov, D.; Asenov, S.; Boneva, S.; Djourelov, N.; Georgieva, M.

    2003-01-01

    The present work has continued the analysis of fuel centre temperatures and fission gas release, calculated with the standard options of the TRANSURANUS code. The calculations are compared to experimental data from the International Fuel Performance Experiments (IFPE) database. An analysis regarding UO2 fuel for Western-type reactors is reported: fuel centre temperatures measured in the experiments Contact 1 and Contact 2 (in-pile tests of 2 rods performed at the Siloe reactor in Grenoble, France, closely simulating commercial PWR conditions); and fission gas release data derived from post-irradiation examinations of 9 fuel rods belonging to the High-Burnup Effects Programme, task 3 (HBEP3). The results allow the predictions of TRANSURANUS for these Western-type fuels to be compared with those made previously for Russian-type WWER fuel. The comparison has been extended to include fuel centre temperatures as well as fission gas release. The present version of TRANSURANUS includes a model that calculates the production of Helium. The amount of Helium produced is compared to the measured and the calculated release of the fission gases Xenon and Krypton.

  1. Marital status independently predicts testis cancer survival--an analysis of the SEER database.

    Science.gov (United States)

    Abern, Michael R; Dude, Annie M; Coogan, Christopher L

    2012-01-01

    Previous reports have shown that married men with malignancies have improved 10-year survival over unmarried men. We sought to investigate the effect of marital status on 10-year survival in a U.S. population-based cohort of men with testis cancer. We examined 30,789 cases of testis cancer reported to the Surveillance, Epidemiology, and End Results (SEER 17) database between 1973 and 2005. All staging was converted to the 1997 AJCC TNM system. Patients less than 18 years of age at the time of diagnosis were excluded. A subgroup analysis of patients with stage I or II non-seminomatous germ cell tumors (NSGCT) was performed. Univariate analysis using t-tests and χ(2) tests compared characteristics of patients separated by marital status. Multivariate analysis was performed using a Cox proportional hazard model to generate Kaplan-Meier survival curves, with all-cause and cancer-specific mortality as the primary endpoints. 20,245 cases met the inclusion criteria. Married men were more likely to be older (38.9 vs. 31.4 years), Caucasian (94.4% vs. 92.1%), stage I (73.1% vs. 61.4%), and to have seminoma as the tumor histology (57.3% vs. 43.4%). On multivariate analysis, married status was independently associated with improved survival (HR 0.58 and HR 0.60 in the respective analyses), and the proportion of men undergoing RPLND did not differ between married and unmarried men (44.8% vs. 43.4%, P = 0.33). Marital status is an independent predictor of improved overall and cancer-specific survival in men with testis cancer. In men with stage I or II NSGCT, RPLND is an additional predictor of improved overall survival. Marital status does not appear to influence whether men undergo RPLND. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. 32 CFR 536.129 - Claims cognizable as UCMJ claims.

    Science.gov (United States)

    2010-07-01

    ... Personnel Claims Act and chapter 11 of AR 27-20, which provides compensation only for tangible personal... 32 CFR 536.129 (2010-07-01), National Defense, Department of Defense (Continued), Department of the Army, Claims and Accounts: Claims cognizable as UCMJ claims.

  3. An Interoperable Cartographic Database

    OpenAIRE

    Slobodanka Ključanin; Zdravko Galić

    2007-01-01

    The concept of producing a prototype of an interoperable cartographic database is explored in this paper, including the possibilities of integrating different geospatial data into the database management system and visualizing them on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relational database, spatial analysis, and definition and visualization of the database content in the form of a map on the Internet.

  4. Analysis of disease-associated objects at the Rat Genome Database

    Science.gov (United States)

    Wang, Shur-Jen; Laulederkind, Stanley J. F.; Hayman, G. T.; Smith, Jennifer R.; Petri, Victoria; Lowry, Timothy F.; Nigam, Rajni; Dwinell, Melinda R.; Worthey, Elizabeth A.; Munzenmaier, Diane H.; Shimoyama, Mary; Jacob, Howard J.

    2013-01-01

    The Rat Genome Database (RGD) is the premier resource for genetic, genomic and phenotype data for the laboratory rat, Rattus norvegicus. In addition to organizing biological data from rats, the RGD team focuses on manual curation of gene–disease associations for rat, human and mouse. In this work, we have analyzed disease-associated strains, quantitative trait loci (QTL) and genes from rats. These disease objects form the basis for seven disease portals. Among disease portals, the cardiovascular disease and obesity/metabolic syndrome portals have the highest number of rat strains and QTL. These two portals share 398 rat QTL, and these shared QTL are highly concentrated on rat chromosomes 1 and 2. For disease-associated genes, we performed gene ontology (GO) enrichment analysis across portals using RatMine enrichment widgets. Fifteen GO terms, five from each GO aspect, were selected to profile enrichment patterns of each portal. Of the selected biological process (BP) terms, ‘regulation of programmed cell death’ was the top enriched term across all disease portals except in the obesity/metabolic syndrome portal where ‘lipid metabolic process’ was the most enriched term. ‘Cytosol’ and ‘nucleus’ were common cellular component (CC) annotations for disease genes, but only the cancer portal genes were highly enriched with ‘nucleus’ annotations. Similar enrichment patterns were observed in a parallel analysis using the DAVID functional annotation tool. The relationship between the preselected 15 GO terms and disease terms was examined reciprocally by retrieving rat genes annotated with these preselected terms. The individual GO term–annotated gene list showed enrichment in physiologically related diseases. For example, the ‘regulation of blood pressure’ genes were enriched with cardiovascular disease annotations, and the ‘lipid metabolic process’ genes with obesity annotations. Furthermore, we were able to enhance enrichment of neurological

  5. Identification of environmentally relevant chemicals in bibliographic databases: a comparative analysis

    DEFF Research Database (Denmark)

    Ellegaard, Ole; Wallin, Johan Albert

    2013-01-01

    The study takes as its starting point environmentally important chemicals and the retrieval of selectively chosen substances in four databases: SciFinder, Web of Science (WoS), Scopus and Google Scholar. The way chemical data are stored in the databases plays a major role in the recovery process...

  6. Analysis and Design of Web-Based Database Application for Culinary Community

    Directory of Open Access Journals (Sweden)

    Choirul Huda

    2017-03-01

    Full Text Available This research is based on the rapid development of the culinary field and of information technology. The difficulties in communicating with culinary experts and in documenting recipes make proper media support very important. Therefore, a web-based database application for the public is important to help the culinary community with communication, searching and recipe management. The aim of the research was to design a web-based database application that could be used as social media for the culinary community. This research used literature review, user interviews, and questionnaires. Moreover, the database system development life cycle was used as a guide for designing the database, especially for conceptual database design, logical database design, and physical database design. The web-based application design followed the eight golden rules of user interface design. The result of this research is the availability of a web-based database application that can fulfill the needs of users in the culinary field related to communication and recipe management.
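
    As an illustration of the physical-design stage mentioned above, the sketch below creates a minimal relational schema for members, recipes and comments and runs a simple recipe search. It is an assumption-laden example, not the schema from the paper; all table and column names are invented.

```python
# Minimal physical-design sketch for a recipe/community schema (illustrative only).
import sqlite3

conn = sqlite3.connect("culinary.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS member (
    member_id    INTEGER PRIMARY KEY,
    name         TEXT NOT NULL,
    email        TEXT UNIQUE NOT NULL
);
CREATE TABLE IF NOT EXISTS recipe (
    recipe_id    INTEGER PRIMARY KEY,
    member_id    INTEGER NOT NULL REFERENCES member(member_id),
    title        TEXT NOT NULL,
    instructions TEXT
);
CREATE TABLE IF NOT EXISTS comment (
    comment_id   INTEGER PRIMARY KEY,
    recipe_id    INTEGER NOT NULL REFERENCES recipe(recipe_id),
    member_id    INTEGER NOT NULL REFERENCES member(member_id),
    body         TEXT NOT NULL
);
""")
conn.commit()

# Simple keyword search over recipe titles (the "searching" use case)
rows = conn.execute(
    "SELECT title FROM recipe WHERE title LIKE ?", ("%rendang%",)
).fetchall()
print(rows)
```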

  7. Performance analysis of a real-time database with optimistic concurrency control

    NARCIS (Netherlands)

    Sassen, S.A.E.; Wal, van der J.

    1997-01-01

    For a real-time shared-memory database with Optimistic Concurrency Control (OCC), an approximation for the transaction response-time distribution and thus for the deadline miss probability is obtained. Transactions arrive at the database according to a Poisson process. There is a limited number of
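
    As a rough illustration of the setting described above (transactions arriving as a Poisson process and possibly restarting after failed OCC validation), the simulation sketch below estimates a deadline-miss probability. It is a deliberately simplified single-server model with made-up parameters, not the paper's analytical approximation.

```python
# Rough simulation sketch: deadline-miss probability of transactions under
# optimistic concurrency control with Poisson arrivals. Parameters are illustrative.
import random

def simulate(n_txn=100_000, arrival_rate=0.8, exec_time=1.0,
             conflict_prob=0.1, deadline=5.0, seed=1):
    random.seed(seed)
    clock, busy_until, missed = 0.0, 0.0, 0
    for _ in range(n_txn):
        clock += random.expovariate(arrival_rate)       # Poisson arrivals
        start = max(clock, busy_until)                  # wait for a free slot
        finish = start + random.expovariate(1.0 / exec_time)
        # validation phase: on conflict, restart the transaction
        while random.random() < conflict_prob:
            finish += random.expovariate(1.0 / exec_time)
        busy_until = finish
        if finish - clock > deadline:
            missed += 1
    return missed / n_txn

print(f"estimated deadline-miss probability: {simulate():.3f}")
```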

  8. Analysis of Global Horizontal Irradiance in Version 3 of the National Solar Radiation Database.

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford; Martin, Curtis E.; Guay, Nathan Gene

    2015-09-01

    We report an analysis that compares global horizontal irradiance (GHI) estimates from version 3 of the National Solar Radiation Database (NSRDB v3) with surface measurements of GHI at a wide variety of locations over the period spanning from 2005 to 2012. The NSRDB v3 estimates of GHI are derived from the Physical Solar Model (PSM), which employs physics-based models to estimate GHI from measurements of reflected visible and infrared irradiance collected by Geostationary Operational Environmental Satellites (GOES) and several other data sources. Because the ground measurements themselves are uncertain, our analysis does not establish the absolute accuracy of PSM GHI. However, by examining the comparison for trends and for consistency across a large number of sites, we may establish a level of confidence in PSM GHI and identify conditions which indicate opportunities to improve PSM. We focus our evaluation on annual and monthly insolation because these quantities directly relate to prediction of energy production from solar power systems. We find that generally, PSM GHI exhibits a bias towards overestimating insolation, on the order of 5% when all sky conditions are considered, and somewhat less (~3%) when only clear sky conditions are considered. The biases persist across multiple years and are evident at many locations. In our opinion the bias originates with PSM, and we view as less credible that the bias stems from calibration drift or soiling of ground instruments. We observe that PSM GHI may significantly underestimate monthly insolation in locations subject to broad snow cover. We found examples of days where PSM GHI apparently misidentified snow cover as clouds, resulting in significant underestimates of GHI during these days and hence leading to substantial understatement of monthly insolation. Analysis of PSM GHI in adjacent pixels shows that the level of agreement between PSM GHI and ground data can vary substantially over distances on the order of 2 km. We
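
    The annual and monthly comparison described here is essentially an aggregation and relative-bias calculation. A minimal sketch follows, assuming hourly modeled and measured GHI in a CSV with hypothetical column names; it is not the report's processing code.

```python
# Sketch: compare satellite-modeled GHI with ground measurements by
# aggregating both to monthly insolation and computing a relative bias.
import pandas as pd

df = pd.read_csv("site_ghi.csv", parse_dates=["timestamp"])  # hourly W/m^2
df = df.set_index("timestamp")

# Integrate hourly irradiance (W/m^2 over one-hour steps) to monthly kWh/m^2
monthly = df[["ghi_model", "ghi_ground"]].resample("ME").sum() / 1000.0

monthly["rel_bias_pct"] = 100.0 * (
    (monthly["ghi_model"] - monthly["ghi_ground"]) / monthly["ghi_ground"]
)
print(monthly.round(1))
print(f"mean monthly bias: {monthly['rel_bias_pct'].mean():+.1f}%")
```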

  9. RaMP: A Comprehensive Relational Database of Metabolomics Pathways for Pathway Enrichment Analysis of Genes and Metabolites.

    Science.gov (United States)

    Zhang, Bofei; Hu, Senyang; Baskin, Elizabeth; Patt, Andrew; Siddiqui, Jalal K; Mathé, Ewy A

    2018-02-22

    The value of metabolomics in translational research is undeniable, and metabolomics data are increasingly generated in large cohorts. The functional interpretation of disease-associated metabolites though is difficult, and the biological mechanisms that underlie cell type or disease-specific metabolomics profiles are oftentimes unknown. To help fully exploit metabolomics data and to aid in its interpretation, analysis of metabolomics data with other complementary omics data, including transcriptomics, is helpful. To facilitate such analyses at a pathway level, we have developed RaMP (Relational database of Metabolomics Pathways), which combines biological pathways from the Kyoto Encyclopedia of Genes and Genomes (KEGG), Reactome, WikiPathways, and the Human Metabolome DataBase (HMDB). To the best of our knowledge, an off-the-shelf, public database that maps genes and metabolites to biochemical/disease pathways and can readily be integrated into other existing software is currently lacking. For consistent and comprehensive analysis, RaMP enables batch and complex queries (e.g., list all metabolites involved in glycolysis and lung cancer), can readily be integrated into pathway analysis tools, and supports pathway overrepresentation analysis given a list of genes and/or metabolites of interest. For usability, we have developed a RaMP R package (https://github.com/Mathelab/RaMP-DB), including a user-friendly RShiny web application, that supports basic simple and batch queries, pathway overrepresentation analysis given a list of genes or metabolites of interest, and network visualization of gene-metabolite relationships. The package also includes the raw database file (mysql dump), thereby providing a stand-alone downloadable framework for public use and integration with other tools. In addition, the Python code needed to recreate the database on another system is also publicly available (https://github.com/Mathelab/RaMP-BackEnd). Updates for databases in RaMP will be
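
    Pathway overrepresentation analysis of the kind RaMP supports generally reduces to a hypergeometric test of how many analytes of interest fall in a pathway relative to a measurement universe. The sketch below shows that generic statistic; it is not RaMP's implementation, and the identifiers are made up.

```python
# Generic sketch of pathway overrepresentation analysis (hypergeometric test).
from scipy.stats import hypergeom

def pathway_enrichment(hits, pathway, universe):
    """P-value that `pathway` is enriched in `hits`, given `universe`."""
    hits, pathway, universe = set(hits), set(pathway), set(universe)
    k = len(hits & pathway)            # analytes of interest in the pathway
    M = len(universe)                  # all measurable genes/metabolites
    n = len(pathway & universe)        # pathway members in the universe
    N = len(hits & universe)           # analytes of interest in the universe
    return hypergeom.sf(k - 1, M, n, N)    # P(X >= k)

# Toy example with invented identifiers
universe = [f"m{i}" for i in range(1000)]
glycolysis = universe[:30]
my_hits = universe[:10] + universe[500:520]
print(pathway_enrichment(my_hits, glycolysis, universe))
```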

  10. Mini-DIAL system measurements coupled with multivariate data analysis to identify TIC and TIM simulants: preliminary absorption database analysis

    International Nuclear Information System (INIS)

    Gaudio, P; Malizia, A; Gelfusa, M; Poggi, L.A.; Martinelli, E.; Di Natale, C.; Bellecci, C.

    2017-01-01

    Nowadays Toxic Industrial Components (TICs) and Toxic Industrial Materials (TIMs) are among the most dangerous and diffuse vehicles of contamination in urban and industrial areas. The academic world, together with industry and the military, is working on innovative solutions to monitor the diffusion of such pollutants in the atmosphere. At present the most common commercial sensors are based on “point detection” technology, but it is clear that such instruments cannot satisfy the needs of smart cities. The new challenge is to develop stand-off systems that continuously monitor the atmosphere. The Quantum Electronics and Plasma Physics (QEP) research group has long experience in laser system development and has built two demonstrators based on DIAL (Differential Absorption of Light) technology that can identify chemical agents in the atmosphere. In this work the authors present one of those DIAL systems, the miniaturized one, together with the preliminary results of an experimental campaign conducted on TIC and TIM simulants in a cell, with the aim of using the absorption database for subsequent atmospheric analysis with the same DIAL system. The experimental results are analysed with standard multivariate data analysis techniques such as Principal Component Analysis (PCA) to develop a classification model aimed at identifying organic chemical compounds in the atmosphere. Preliminary results for the absorption coefficients of some chemical compounds are shown, together with the preliminary PCA analysis. (paper)
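
    The PCA-based classification step can be sketched generically as follows. The spectra here are random placeholders and the class labels invented; this only illustrates projecting absorption spectra onto principal components before a simple classifier, not the authors' actual model.

```python
# Sketch of PCA followed by a simple classifier on absorption spectra.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))   # placeholder spectra: 60 samples, 200 wavelengths
y = np.repeat(["compound_A", "compound_B", "compound_C"], 20)

# Project spectra onto a few principal components, then classify
model = make_pipeline(PCA(n_components=5), KNeighborsClassifier(n_neighbors=3))
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```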

  11. Updates on the web-based VIOLIN vaccine database and analysis system.

    Science.gov (United States)

    He, Yongqun; Racz, Rebecca; Sayers, Samantha; Lin, Yu; Todd, Thomas; Hur, Junguk; Li, Xinna; Patel, Mukti; Zhao, Boyang; Chung, Monica; Ostrow, Joseph; Sylora, Andrew; Dungarani, Priya; Ulysse, Guerlain; Kochhar, Kanika; Vidri, Boris; Strait, Kelsey; Jourdian, George W; Xiang, Zuoshuang

    2014-01-01

    The integrative Vaccine Investigation and Online Information Network (VIOLIN) vaccine research database and analysis system (http://www.violinet.org) curates, stores, analyses and integrates various vaccine-associated research data. Since its first publication in NAR in 2008, significant updates have been made. Starting from 211 vaccines annotated at the end of 2007, VIOLIN now includes over 3240 vaccines for 192 infectious diseases and eight noninfectious diseases (e.g. cancers and allergies). Under the umbrella of VIOLIN, >10 relatively independent programs are developed. For example, Protegen stores over 800 protective antigens experimentally proven valid for vaccine development. VirmugenDB annotated over 200 'virmugens', a term coined by us to represent those virulence factor genes that can be mutated to generate successful live attenuated vaccines. Specific patterns were identified from the genes collected in Protegen and VirmugenDB. VIOLIN also includes Vaxign, the first web-based vaccine candidate prediction program based on reverse vaccinology. VIOLIN collects and analyzes different vaccine components including vaccine adjuvants (Vaxjo) and DNA vaccine plasmids (DNAVaxDB). VIOLIN includes licensed human vaccines (Huvax) and veterinary vaccines (Vevax). The Vaccine Ontology is applied to standardize and integrate various data in VIOLIN. VIOLIN also hosts the Ontology of Vaccine Adverse Events (OVAE) that logically represents adverse events associated with licensed human vaccines.

  12. An Analysis of Database Replication Technologies with Regard to Deep Space Network Application Requirements

    Science.gov (United States)

    Connell, Andrea M.

    2011-01-01

    The Deep Space Network (DSN) has three communication facilities which handle telemetry, commands, and other data relating to spacecraft missions. The network requires these three sites to share data with each other and with the Jet Propulsion Laboratory for processing and distribution. Many database management systems have replication capabilities built in, which means that data updates made at one location will be automatically propagated to other locations. This project examines multiple replication solutions, looking for stability, automation, flexibility, performance, and cost. After comparing these features, Oracle Streams is chosen for closer analysis. Two Streams environments are configured - one with a Master/Slave architecture, in which a single server is the source for all data updates, and the second with a Multi-Master architecture, in which updates originating from any of the servers will be propagated to all of the others. These environments are tested for data type support, conflict resolution, performance, changes to the data structure, and behavior during and after network or server outages. Through this experimentation, it is determined which requirements of the DSN can be met by Oracle Streams and which cannot.

  13. Clinical and technical characteristics of intraoperative radiotherapy. Analysis of the ISIORT-Europe database

    International Nuclear Information System (INIS)

    Krengli, M.; Sedlmayer, F.

    2013-01-01

    Background: A joint analysis of clinical data from centres within the European section of the International Society of Intraoperative Radiation Therapy (ISIORT-Europe) was undertaken in order to define the range of intraoperative radiotherapy (IORT) techniques and indications encompassed by its member institutions. Materials and methods: In 2007, the ISIORT-Europe centres were invited to record demographic, clinical and technical data relating to their IORT procedures in a joint online database. Retrospective data entry was possible. Results: The survey encompassed 21 centres and data from 3754 IORT procedures performed between 1992 and 2011. The average annual number of patients treated per institution was 42, with three centres treating more than 100 patients per year. The most frequent tumour was breast cancer with 2395 cases (63.8 %), followed by rectal cancer (598 cases, 15.9 %), sarcoma (221 cases, 5.9 %), prostate cancer (108 cases, 2.9 %) and pancreatic cancer (80 cases, 2.1 %). Clinical details and IORT technical data from these five tumour types are reported. Conclusion: This is the first report on a large cohort of patients treated with IORT in Europe. It gives a picture of patient selection methods and treatment modalities, with emphasis on the main tumour types that are typically treated by this technique and may benefit from it. (orig.)

  14. FISH REPRODUCTION: BIBLIOMETRIC ANALYSIS OF WORLDWIDE AND BRAZILIAN PUBLICATIONS IN SCOPUS DATABASE

    Directory of Open Access Journals (Sweden)

    Marcella Costa RADAEL

    2015-12-01

    Full Text Available Reproduction is a fundamental part of life, and studies related to fish reproduction are frequently accessed. The aim of this study was to perform a bibliometric analysis intended to identify trends in this kind of publication. During June 2013, searches were performed on the Scopus database using the term “fish reproduction”, and information was compiled and presented on the number of publications per year, by country, by author, by journal, by institution and by most used keywords. Based on the study, it was possible to obtain the following results: Brazil occupies a prominent position in number of papers, and Brazilian participation in worldwide publication output is increasing exponentially; in Brazil, articles are highly concentrated among the top 10 authors and institutions. The present study shows that the term “fish reproduction” has been the focus of many scientific papers, and that in Brazil there is a particular research effort related to this subject, especially in the last few years. The main contribution concerns the use of bibliometric methods to describe the growth and concentration of research in the area of fish farming and reproduction.

  15. Effect of anaesthesia type on postoperative mortality and morbidities: a matched analysis of the NSQIP database.

    Science.gov (United States)

    Saied, N N; Helwani, M A; Weavind, L M; Shi, Y; Shotwell, M S; Pandharipande, P P

    2017-01-01

    The anaesthetic technique may influence clinical outcomes, but inherent confounding and small effect sizes make this challenging to study. We hypothesized that regional anaesthesia (RA) is associated with higher survival and fewer postoperative organ dysfunctions when compared with general anaesthesia (GA). We matched surgical procedures and type of anaesthesia using the US National Surgical Quality Improvement database, in which 264,421 patients received GA and 64,119 received RA. Procedures were matched according to Current Procedural Terminology (CPT) and ASA physical status classification. Our primary outcome was 30-day postoperative mortality and secondary outcomes were hospital length of stay and postoperative organ system dysfunction. After matching, multiple regression analysis was used to examine associations between anaesthetic type and outcomes, adjusting for covariates. After matching and adjusting for covariates, type of anaesthesia did not significantly impact 30-day mortality. RA was significantly associated with an increased likelihood of early discharge (HR 1.09). After adjusting for patient characteristic confounders, RA was associated with significantly lower odds of several postoperative complications and decreased hospital length of stay, but not mortality, when compared with GA. © The Author 2016. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  16. Evaluation of the TRPM2 channel as a biomarker in breast cancer using public databases analysis.

    Science.gov (United States)

    Sumoza-Toledo, Adriana; Espinoza-Gabriel, Mario Iván; Montiel-Condado, Dvorak

    Breast cancer is one of the most common malignancies affecting women. Recent investigations have revealed a major role of ion channels in cancer. The transient receptor potential melastatin-2 (TRPM2) is a plasma membrane and lysosomal channel with important roles in cell migration and cell death in immune cells and tumor cells. In this study, we investigated the prognostic value of the TRPM2 channel in breast cancer, analyzing public databases compiled in Oncomine™ (Thermo Fisher, Ann Arbor, MI) and the online Kaplan-Meier Plotter platform. The results revealed that TRPM2 mRNA overexpression is significant in in situ and invasive breast carcinoma compared to normal breast tissue. Furthermore, multi-gene validation using Oncomine™ showed that this channel is coexpressed with proteins related to cellular migration, transformation, and apoptosis. On the other hand, Kaplan-Meier analysis showed that low expression of TRPM2 could be used to predict poor outcome in ER- and HER2+ breast carcinoma patients. TRPM2 is a promising biomarker for aggressiveness of breast cancer, and a potential target for the development of new therapies. Copyright © 2016 Hospital Infantil de México Federico Gómez. Publicado por Masson Doyma México S.A. All rights reserved.

  17. CLIPZ: a database and analysis environment for experimentally determined binding sites of RNA-binding proteins.

    Science.gov (United States)

    Khorshid, Mohsen; Rodak, Christoph; Zavolan, Mihaela

    2011-01-01

    The stability, localization and translation rate of mRNAs are regulated by a multitude of RNA-binding proteins (RBPs) that find their targets directly or with the help of guide RNAs. Among the experimental methods for mapping RBP binding sites, cross-linking and immunoprecipitation (CLIP) coupled with deep sequencing provides transcriptome-wide coverage as well as high resolution. However, partly due to their vast volume, the data that were so far generated in CLIP experiments have not been put in a form that enables fast and interactive exploration of binding sites. To address this need, we have developed the CLIPZ database and analysis environment. Binding site data for RBPs such as Argonaute 1-4, Insulin-like growth factor II mRNA-binding protein 1-3, TNRC6 proteins A-C, Pumilio 2, Quaking and Polypyrimidine tract binding protein can be visualized at the level of the genome and of individual transcripts. Individual users can upload their own sequence data sets while being able to limit the access to these data to specific users, and analyses of the public and private data sets can be performed interactively. CLIPZ, available at http://www.clipz.unibas.ch, aims to provide an open access repository of information for post-transcriptional regulatory elements.

  18. PGSB/MIPS PlantsDB Database Framework for the Integration and Analysis of Plant Genome Data.

    Science.gov (United States)

    Spannagl, Manuel; Nussbaumer, Thomas; Bader, Kai; Gundlach, Heidrun; Mayer, Klaus F X

    2017-01-01

    Plant Genome and Systems Biology (PGSB), formerly Munich Institute for Protein Sequences (MIPS) PlantsDB, is a database framework for the integration and analysis of plant genome data, developed and maintained for more than a decade now. Major components of that framework are genome databases and analysis resources focusing on individual (reference) genomes providing flexible and intuitive access to data. Another main focus is the integration of genomes from both model and crop plants to form a scaffold for comparative genomics, assisted by specialized tools such as the CrowsNest viewer to explore conserved gene order (synteny). Data exchange and integrated search functionality with/over many plant genome databases is provided within the transPLANT project.

  19. Oncological Outcomes After Robotic Proctectomy for Rectal Cancer: Analysis of a Prospective Database.

    Science.gov (United States)

    Sammour, Tarik; Malakorn, Songphol; Bednarski, Brian K; Kaur, Harmeet; Shin, Ui Sup; Messick, Craig; You, Yi-Qian Nancy; Chang, George J

    2018-03-01

    The aim of this study is to evaluate the oncological outcomes of robotic total mesorectal excision (TME) at an NCI designated cancer center. The effectiveness of laparoscopic TME could not be established, but the robotic-assisted approach may hold some promise, with improved visualization and ergonomics for pelvic dissection. Oncological outcome data is presently lacking. Patients who underwent total mesorectal excision or tumor-specific mesorectal excision for rectal cancer between April 2009 and April 2016 via a robotic approach were identified from a prospective single-institution database. The circumferential resection margin (CRM), distal resection margin, and TME completeness rates were determined. Kaplan-Meier analysis of disease-free survival and overall survival was performed for all patients treated with curative intent. A total of 276 patients underwent robotic proctectomy during the study period. Robotic surgery was performed initially by 1 surgeon with 3 additional surgeons progressively transitioning from open to robotic during the study period with annual increase in the total number of cases performed robotically. Seven patients had involved circumferential resection margins (2.5%), and there were no positive distal or proximal resection margins. One hundred eighty-six patients had TME quality assessed, and only 1 patient (0.5%) had an incomplete TME. Eighty-three patients were followed up for a minimum of 3 years, with a local recurrence rate of 2.4%, and a distant recurrence rate of 16.9%. Five-year disease-free survival on Kaplan-Meier analysis was 82%, and 5-year overall survival was 87%. Robotic proctectomy for rectal cancer can be performed with good short and medium term oncological outcomes in selected patients.

  20. Lapin Data Interchange Among Database, Analysis and Display Programs Using XML-Based Text Files

    Science.gov (United States)

    2005-01-01

    The purpose of grant NCC3-966 was to investigate and evaluate the interchange of application-specific data among multiple programs, each carrying out part of the analysis and design task. This has previously been done by creating a custom program to read data produced by one application and then write that data to a file whose format is specific to the second application that needs all or part of that data. In this investigation, the data of interest are described using the XML markup language, which allows the data to be stored in a text string. Software to transform the output data of a task into an XML string, and software to read an XML string and extract all or a portion of the data needed for another application, are used to link two independent applications together as part of an overall design effort. This approach was initially used with a standard analysis program, Lapin, along with standard applications (a standard spreadsheet program, a relational database program, and a conventional dialog and display program) to demonstrate the successful sharing of data among independent programs. Most of the effort beyond that demonstration has been concentrated on the inclusion of more complex display programs. Specifically, a custom-written windowing program organized around dialogs to control the interactions has been combined with an independent CAD program (Open Cascade) that supports sophisticated display of CAD elements such as lines, spline curves, and surfaces, and with turbine-blade data produced by an independent blade design program (UD0300).
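
    The XML-string interchange pattern described above can be sketched in a few lines; the element and field names here (e.g. 'blade_id', 'chord') are invented for illustration and are not from the grant software.

```python
# Sketch: serialize records to an XML string in one program and read back
# only the needed fields in another.
import xml.etree.ElementTree as ET

def to_xml_string(records):
    root = ET.Element("dataset")
    for rec in records:
        item = ET.SubElement(root, "record")
        for key, value in rec.items():
            ET.SubElement(item, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

def from_xml_string(xml_text, fields):
    root = ET.fromstring(xml_text)
    return [{f: rec.findtext(f) for f in fields} for rec in root.findall("record")]

# One program writes...
xml_text = to_xml_string([{"blade_id": 1, "chord": 0.12}, {"blade_id": 2, "chord": 0.10}])
# ...and another reads only the fields it needs
print(from_xml_string(xml_text, ["chord"]))
```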

  1. A database for on-line event analysis on a distributed memory machine

    CERN Document Server

    Argante, E; Van der Stok, P D V; Willers, Ian Malcolm

    1995-01-01

    Parallel in-memory databases can enhance the structuring and parallelization of programs used in High Energy Physics (HEP). Efficient database access routines are used as communication primitives which hide the communication topology, in contrast to more explicit communication libraries like PVM or MPI. A parallel in-memory database, called SPIDER, has been implemented on a 32 node Meiko CS-2 distributed memory machine. The SPIDER primitives generate a lower overhead than that generated by PVM or MPI. The event reconstruction program CPREAD of the CPLEAR experiment has been used as a test case. Performance measurements were made using the event rate generated by CPLEAR.

  2. RADARS, a bioinformatics solution that automates proteome mass spectral analysis, optimises protein identification, and archives data in a relational database.

    Science.gov (United States)

    Field, Helen I; Fenyö, David; Beavis, Ronald C

    2002-01-01

    RADARS, a rapid, automated, data archiving and retrieval software system for high-throughput proteomic mass spectral data processing and storage, is described. The majority of mass spectrometer data files are compatible with RADARS, allowing consistent processing. The system automatically takes unprocessed data files, identifies proteins via in silico database searching, then stores the processed data and search results in a relational database suitable for customized reporting. The system is robust, used in 24/7 operation, accessible to multiple users of an intranet through a web browser, may be monitored by Virtual Private Network, and is secure. RADARS is scalable for use on one or many computers, and is suited to multiple-processor systems. It can incorporate any local database in FASTA format, and can search protein and DNA databases online. A key feature is a suite of visualisation tools (many available gratis), allowing facile manipulation of spectra by hand annotation, reanalysis, and access to all procedures. We also describe the use of Sonar MS/MS, a novel, rapid search engine requiring 40 MB RAM per process for searches against a genomic or EST database translated in all six reading frames. RADARS reduces the cost of analysis through its efficient algorithms: Sonar MS/MS can identify proteins without accurate knowledge of the parent ion mass and without protein tags. Statistical scoring methods provide close-to-expert accuracy and bring robust data analysis to the non-expert user.

  3. Patient characteristics of smokers undergoing lumbar spine surgery: an analysis from the Quality Outcomes Database.

    Science.gov (United States)

    Asher, Anthony L; Devin, Clinton J; McCutcheon, Brandon; Chotai, Silky; Archer, Kristin R; Nian, Hui; Harrell, Frank E; McGirt, Matthew; Mummaneni, Praveen V; Shaffrey, Christopher I; Foley, Kevin; Glassman, Steven D; Bydon, Mohamad

    2017-12-01

    OBJECTIVE In this analysis the authors compare the characteristics of smokers to nonsmokers using demographic, socioeconomic, and comorbidity variables. They also investigate which of these characteristics are most strongly associated with smoking status. Finally, the authors investigate whether the association between known patient risk factors and disability outcome is differentially modified by patient smoking status for those who have undergone surgery for lumbar degeneration. METHODS A total of 7547 patients undergoing degenerative lumbar surgery were entered into a prospective multicenter registry (Quality Outcomes Database [QOD]). A retrospective analysis of the prospectively collected data was conducted. Patients were dichotomized as smokers (current smokers) and nonsmokers. Multivariable logistic regression analysis fitted for patient smoking status and subsequent measurement of variable importance was performed to identify the strongest patient characteristics associated with smoking status. Multivariable linear regression models fitted for 12-month Oswestry Disability Index (ODI) scores in subsets of smokers and nonsmokers were used to investigate whether differential effects of risk factors by smoking status might be present. RESULTS In total, 18% (n = 1365) of patients were smokers and 82% (n = 6182) were nonsmokers. In a multivariable logistic regression analysis, the factors significantly associated with patients' smoking status included sex, among others; one of these was associated with lower odds of being a smoker (p = 0.0008), while patients with coronary artery disease had greater odds of being a smoker (p = 0.044). Patients' propensity for smoking was also significantly associated with higher American Society of Anesthesiologists (ASA) class. Differential effects of risk factors on 12-month ODI were examined separately in smokers and nonsmokers. CONCLUSIONS Using a large, national, multiinstitutional registry, the authors described the profile of patients who undergo lumbar spine surgery and its association with their smoking status. Compared with nonsmokers, smokers were younger, male
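
    As a generic illustration of the "logistic regression plus variable importance" step described in the methods, a sketch follows; the file, column names, and importance measure (permutation importance) are assumptions and not the registry's actual analysis.

```python
# Sketch: logistic regression for smoking status with a crude variable-importance ranking.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.inspection import permutation_importance

df = pd.read_csv("qod_lumbar.csv")          # hypothetical registry extract
X = df[["age", "male", "asa_class", "coronary_artery_disease", "education_years"]]
y = df["current_smoker"]

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

# Permutation importance as one way to rank characteristics by association
imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:28s} {score:.4f}")
```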

  4. Repeat workers' compensation claims: risk factors, costs and work disability

    Science.gov (United States)

    2011-01-01

    Background The objective of our study was to describe factors associated with repeat workers' compensation claims and to compare the work disability arising in workers with single and multiple compensation claims. Methods All initial injury claims lodged by persons of working age during a five-year period (1996 to 2000) and any repeat claims were extracted from workers' compensation administrative data in the state of Victoria, Australia. Groups of workers with single and multiple claims were identified. Descriptive analysis of claims by affliction, bodily location, industry segment, occupation, employer and workplace was undertaken. Survival analysis determined the impact of these variables on the time between the claims. The economic impact and duration of work incapacity associated with initial and repeat claims were compared between groups. Results 37% of persons with an initial claim lodged a second claim. This group contained a significantly greater proportion of males, were younger and were more likely to be employed in manual occupations and high-risk industries than those with single claims. 78% of repeat claims were for a second injury. Duration between the claims was shortest when the working conditions had not changed. The initial claims of repeat claimants resulted in significantly higher costs than those of workers with single claims. Conclusions A substantial proportion of injured workers experience a second occupational injury or disease. These workers pose a greater economic burden than those with single claims, and also experience a substantially greater cumulative period of work disability. There is potential to reduce the social, health and economic burden of workplace injury by enacting prevention programs targeted at these workers. PMID:21696637

  5. Development of a reference database for ion beam analysis. Summary report of the first research coordination meeting

    International Nuclear Information System (INIS)

    Vickridge, I.; Schwerer, O.

    2006-01-01

    A summary is given of the First Research Coordination Meeting on the Development of a Reference Database for Ion Beam Analysis, including background information, objectives, recommendations for measurements, and a list of tasks assigned to participants. The next research co-ordination meeting will be held in May 2007. (author)

  6. Analysis of Users' Searches of CD-ROM Databases in the National and University Library in Zagreb.

    Science.gov (United States)

    Jokic, Maja

    1997-01-01

    Investigates the search behavior of CD-ROM database users in Zagreb (Croatia) libraries: one group needed a minimum of technical assistance, and the other was completely independent. Highlights include the use of questionnaires and transaction log analysis and the need for end-user education. The questionnaire and definitions of search process…

  7. Incidence and prevalence rates of diabetes mellitus in Taiwan: Analysis of the 2000–2009 Nationwide Health Insurance database

    Directory of Open Access Journals (Sweden)

    Yi-Der Jiang

    2012-11-01

    Conclusion: The incidence of diabetes, including type 1, remained stable over this 10-year period in Taiwan. However, the incidence rate in men aged 20–59 years was higher than that in age-matched women. With our nationwide database, subgroup analysis of DM incidence can be performed to refine our health policies for the prevention, screening, and treatment of diabetes mellitus.

  8. The analysis of long-term changes in plant communities using large databases: the effect of stratified resampling.

    NARCIS (Netherlands)

    Haveman, R.; Janssen, J.A.M.

    2008-01-01

    Question: Relevés in large phytosociological databases used for analysing long-term changes in plant communities are biased towards easily accessible places and species-rich stands. How does this bias influence trend analysis of floristic composition within a priori determined vegetation types and

  9. Development of a database for prompt γ-ray neutron activation analysis. Summary report of the first research coordination meeting

    International Nuclear Information System (INIS)

    Paviotti-Corcuera, R.; Lindstrom, R.M.

    2000-02-01

    This report summarizes the presentations, recommendations and conclusions of the First Research Co-ordination Meeting on Development of a Database for Prompt γ-ray Neutron Activation Analysis. Neutron-capture Prompt γ-ray Activation Analysis (PGAA) is a non-destructive radioanalytical method, capable of rapid or in-situ simultaneous multielement analysis of many elements of the Periodic Table, from hydrogen to uranium, in the same sample. Inaccuracy and incompleteness of the data available for use in PGAA are a significant handicap in the qualitative and quantitative analysis of capture-gamma spectra. The goal of this CRP is to replace the twenty-year-old data from a single laboratory with something fundamentally new: an evaluated database which includes a combination of evaluated nuclear physics data, physical theory, and recent measurements. The resulting database will be comparable in quality with that for radioactive decay. In addition, more accurate values of neutron capture cross-sections and γ-ray intensities that result from this database will improve the accuracy of radiation shielding calculations. (author)

  10. Non-Price Competition and the Structure of the Online Information Industry: Q-Analysis of Medical Databases and Hosts.

    Science.gov (United States)

    Davies, Roy

    1987-01-01

    Discussion of the online information industry emphasizes the effects of non-price competition on its structure and the firms involved. Q-analysis is applied to data on medical databases and hosts, changes over a three-year period are identified, and an optimum structure for the industry based on economic theory is considered. (Author/LRW)

  11. Analysis of User Need with CD-ROM Databases: A Case Study Based on Work Sampling at One University Library.

    Science.gov (United States)

    Wells, Amy Tracy

    Analysis of the needs of users of Compact Disk-Read Only Memory (CD-ROM) was performed at the Tampa campus of the University of South Florida. A review of the literature indicated that problems associated with selecting the appropriate database, searching, and requiring technical assistance were the probable areas of user need. The library has 17…

  12. CGPD: Cancer Genetics and Proteomics Database - A Dataset for Computational Analysis and Online Cancer Diagnostic Centre

    Directory of Open Access Journals (Sweden)

    Muhammad Rizwan Riaz

    2014-06-01

    Full Text Available Cancer Genetics and Proteomics Database (CGPD) is a repository for genetics and proteomics data of those Homo sapiens genes which are involved in cancer. These genes are categorized in the database on the basis of cancer type. To date, 72 genes from 13 types of cancer are included in the database. Primers, promoters and peptides of these genes are also made available. The primers provided for each gene, with their features and conditions given to assist researchers, are useful in PCR amplification, especially in cloning experiments. CGPD also contains an Online Cancer Diagnostic Center (OCDC). It also contains transcription and translation tools to assist research work in a progressive manner. The database is publicly available at http://www.cgpd.comyr.com.

  13. An analysis of extended entity relationship constructs extraction in database reverse engineering approaches

    International Nuclear Information System (INIS)

    Jilani, M.A.; Aziz, A.; Hussain, T.

    2008-01-01

    Database reverse engineering is a technique for transforming a relational schema into a conceptual schema, for finding and fixing design flaws, for maintaining and re-engineering database systems, for integrating one database system with another, and for migrating a database system from one platform to another. We studied approaches from 1993 to 2006 to find out which EER constructs cannot be retrieved by most of the DBRE approaches, so that they can be retrieved in the future. For each EER construct that can be retrieved by a given DBRE approach, we show whether it is retrieved without user involvement (automatically), with partial user involvement (semi-automatically), or with full user involvement (manually). We also discuss the relevant advantages and limitations of each DBRE technique considered in this paper. (author)

  14. Defense Spending Databases for Countries in the Asia-Pacific Region: An Analysis and Comparison

    National Research Council Canada - National Science Library

    Reuning, Charles

    2001-01-01

    The purpose of this research was to identify and analyze a select number of unclassified databases that cover defense spending and other defense related criteria for countries in the Asia-Pacific region...

  15. An Analysis of an Ultra-High Speed Content-Addressable Database Retrieval System

    National Research Council Canada - National Science Library

    Costianes, Peter

    2001-01-01

    ) and its implementation as a high speed optical chip. The paradigm uses polarization states to represent binary query words and EO modulators to represent database words to perform what are essentially XOR operations...

  16. The Toxin and Virulence Database: A Resource for Signature Development and Analysis of Virulence

    National Research Council Canada - National Science Library

    Wolinsky, Murray A

    2004-01-01

    In this joint effort with the University of Alabama at Birmingham, Walter Reed, MITRE and USAMRIID, we are developing a comprehensive database for microbial toxins and virulence factors (www.tvfac.lanl.gov...

  17. Analysis and Design of Web-Based Database Application for Culinary Community

    OpenAIRE

    Huda, Choirul; Awang, Osel Dharmawan; Raymond, Raymond; Raynaldi, Raynaldi

    2017-01-01

    This research is based on the rapid development of the culinary and information technology. The difficulties in communicating with the culinary expert and on recipe documentation make a proper support for media very important. Therefore, a web-based database application for the public is important to help the culinary community in communication, searching and recipe management. The aim of the research was to design a web-based database application that could be used as social media for the cu...

  18. dbMDEGA: a database for meta-analysis of differentially expressed genes in autism spectrum disorder.

    Science.gov (United States)

    Zhang, Shuyun; Deng, Libin; Jia, Qiyue; Huang, Shaoting; Gu, Junwang; Zhou, Fankun; Gao, Meng; Sun, Xinyi; Feng, Chang; Fan, Guangqin

    2017-11-16

    Autism spectrum disorders (ASD) are hereditary, heterogeneous and biologically complex neurodevelopmental disorders. Individual studies on gene expression in ASD cannot provide clear consensus conclusions. Therefore, a systematic review to synthesize the current findings from brain tissues and a search tool to share the meta-analysis results are urgently needed. Here, we conducted a meta-analysis of brain gene expression profiles in the current reported human ASD expression datasets (with 84 frozen male cortex samples, 17 female cortex samples, 32 cerebellum samples and 4 formalin fixed samples) and knock-out mouse ASD model expression datasets (with 80 collective brain samples). Then, we applied R language software and developed an interactive shared and updated database (dbMDEGA) displaying the results of meta-analysis of data from ASD studies regarding differentially expressed genes (DEGs) in the brain. This database, dbMDEGA ( https://dbmdega.shinyapps.io/dbMDEGA/ ), is a publicly available web-portal for manual annotation and visualization of DEGs in the brain from data from ASD studies. This database uniquely presents meta-analysis values and homologous forest plots of DEGs in brain tissues. Gene entries are annotated with meta-values, statistical values and forest plots of DEGs in brain samples. This database aims to provide searchable meta-analysis results based on the current reported brain gene expression datasets of ASD to help detect candidate genes underlying this disorder. This new analytical tool may provide valuable assistance in the discovery of DEGs and the elucidation of the molecular pathogenicity of ASD. This database model may be replicated to study other disorders.
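
    The per-gene summaries that such a meta-analysis database stores typically come from an inverse-variance pooling of study-level effects. The sketch below shows that calculation for one gene with made-up numbers; it is not dbMDEGA's pipeline.

```python
# Sketch: fixed-effect (inverse-variance) meta-analysis of one gene's
# log2 fold change across several brain expression datasets.
import math

def inverse_variance_meta(effects, std_errors):
    """Return pooled effect, its standard error, and a z statistic."""
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se, pooled / pooled_se

# Made-up log2 fold changes and standard errors for one gene in four datasets
log2fc = [0.42, 0.31, 0.55, 0.12]
se = [0.15, 0.20, 0.25, 0.30]
effect, effect_se, z = inverse_variance_meta(log2fc, se)
print(f"pooled log2FC = {effect:.2f} +/- {effect_se:.2f} (z = {z:.2f})")
```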

  19. Medicaid Expenditures for Fee-for-Service Enrollees with Behavioral Diagnoses: Findings from a 50 State Claims Analysis.

    Science.gov (United States)

    Ward, Martha C; Lally, Cathy; Druss, Benjamin G

    2017-01-01

    Medicaid is an important funder of care for individuals with behavioral (psychiatric and/or substance use) diagnoses, and expenditures will likely increase with expansion of services under the Affordable Care Act. This study provides national estimates of Medicaid expenditures using a comprehensive sample of fee-for-service Medicaid enrollees with behavioral diagnoses. Data for analysis came from 2003 to 2004 Medicaid Analytic eXtract (MAX) files for 50 states and the District of Columbia. Individuals with behavioral diagnoses had high rates of chronic medical comorbidities, and expenditures for medical (non-behavioral) diagnoses accounted for 74 % of their health care expenditures. Total Medicaid expenditure was approximately 15 billion dollars (equivalent to 18.91 billion in 2016 dollars) for individuals with any behavioral diagnosis. Medicaid fee-for-service beneficiaries with behavioral diagnoses have a high treated prevalence of individual medical comorbid conditions, and the majority of health care expenditures in these individuals are for medical, rather than behavioral health, services.

  20. Ethnographic analysis of traumatic brain injury patients in the national Model Systems database.

    Science.gov (United States)

    Burnett, Derek M; Kolakowsky-Hayner, Stephanie A; Slater, Dan; Stringer, Anthony; Bushnik, Tamara; Zafonte, Ross; Cifu, David X

    2003-02-01

    To compare demographics, injury characteristics, therapy service and intensity, and outcome in minority versus nonminority patients with traumatic brain injury (TBI). Retrospective analysis. Twenty medical centers. Two thousand twenty patients (men, n=1,518; women, n=502; nonminority, n=1,168; minority, n=852) with TBI enrolled in the Traumatic Brain Injury Model Systems database. Not applicable. Age, gender, marital status, education, employment status, injury severity (based on Glasgow Coma Scale [GCS] admission score, length of posttraumatic amnesia, duration of unconsciousness), intensity (hours) of therapy rendered, rehabilitation length of stay (LOS), rehabilitation charges, discharge disposition, postinjury employment status, FIM instrument change scores, and FIM efficiency scores. Independent sample t tests were used to analyze continuous variables; chi-square analyses were used to evaluate categorical data. Overall, minorities were found to be mostly young men who were single, unemployed, and less well educated, with a longer work week if employed when injured. Motor vehicle crashes (MVCs) predominated as the cause of injury for both groups; however, minorities were more likely to sustain injury from acts of violence and auto-versus-pedestrian crashes. Minorities also had higher GCS scores on admission and shorter LOS. Rehabilitation services: significant differences were found in the types and intensity of rehabilitation services provided; these included physical therapy, occupational therapy, and speech-language pathology, but not psychology. Minority patients who sustain TBI generally tend to be young men with less social responsibility. Although MVCs predominate as the primary etiology, acts of violence and auto-versus-pedestrian incidents are more common in the minority population. Minorities tend to have higher GCS scores at admission. Also, the type and intensity of rehabilitation services provided differed significantly for the various

  1. Further potential savings attributable to maximum generic substitution of antidepressants in South Africa: A retrospective analysis of medical claims

    Directory of Open Access Journals (Sweden)

    Elmarie van der Westhuizen

    2010-11-01

    Summary: The main aim of this study was to calculate the potential cost savings that could be achieved through maximum generic substitution of antidepressants in the South African private health care sector between 2004 and 2006. Data on computerized medicine claims for patients who received one or more antidepressants during the study period (i.e. 2004, 2005 and 2006) were obtained from a South African pharmaceutical benefit management company. The total study population consisted of 292 071 items (N = 5 982 869) from 273 673 prescriptions (N = 5 213 765) at a total cost of R56 183 697.91 (N = R1 346 210 929.00). A quantitative, retrospective drug utilisation analysis was performed and the data were analysed using the Statistical Analysis System® package. Potential cost savings were calculated for medicines in the study population that met the criteria. Generic products accounted for 58.7% (N = 292 071) of all products prescribed, at 28.2% of total cost (N = R1 346 210 929.00). If the average price of all medicines that met the substitution criteria were replaced with the generic price, a potential saving of 9.3% (N = R56 183 697.91) of the actual cost of antidepressants during the study period would be possible. Generic medicines can be cost-saving alternatives in developing countries with limited health care resources.
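
    As a rough illustration of the substitution calculation just described, the sketch below estimates potential savings by repricing non-generic items at the mean cost of their generic equivalents. The file and column names are assumptions; this is not the authors' SAS analysis.

```python
# Sketch: potential savings from maximum generic substitution in a claims table.
import pandas as pd

claims = pd.read_csv("antidepressant_claims.csv")  # one row per claim item
# Assumed columns: 'ingredient_strength', 'item_cost', 'is_generic' (boolean)

# Mean price of the generic equivalents for each active ingredient/strength
generic_price = (
    claims[claims["is_generic"]]
    .groupby("ingredient_strength")["item_cost"]
    .mean()
    .rename("generic_mean_cost")
)

# Non-generic items that have a generic equivalent are eligible for substitution
eligible = claims[~claims["is_generic"]].join(
    generic_price, on="ingredient_strength", how="inner"
)
savings = (eligible["item_cost"] - eligible["generic_mean_cost"]).clip(lower=0).sum()
total_cost = claims["item_cost"].sum()
print(f"potential saving: R{savings:,.2f} ({100 * savings / total_cost:.1f}% of total)")
```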

  2. Thermodynamic analysis for molten stratification test MASCA with ionic liquid U-Zr-Fe-O-B-C-FPs database

    International Nuclear Information System (INIS)

    Fukasawa, Masanori; Tamura, Shigeyuki

    2007-01-01

    The molten corium stratification tested in the OECD MASCA project was analyzed with our thermodynamic database, and the database was verified to be effective for the stratification analysis. The MASCA test shows that the molten corium can be stratified with the metal layer under the oxide when sub-oxidized corium including iron is retained in the lower head of the reactor vessel. This stratification is caused by the increased density of the metal layer, attributed to a transfer of uranium metal reduced from uranium oxide by zirconium. Thermodynamic equilibrium calculations with the database, which was developed for the corium U-Zr-Fe-O-B-C-FPs system using the ionic two-sublattice model for liquid, show quantitative agreement with the MASCA test in terms of the composition of each layer, fission product (FP) partitioning between the layers and the B4C effect on the stratification. (author)

  3. PmagPy: Software Package for Paleomagnetic Data Analysis and Gateway to the Magnetics Information Consortium (MagIC) Database

    Science.gov (United States)

    Jonestrask, L.; Tauxe, L.; Shaar, R.; Jarboe, N.; Minnett, R.; Koppers, A. A. P.

    2014-12-01

    There are many data types and methods of analysis in rock and paleomagnetic investigations. The MagIC database (http://earthref.org/MAGIC) was designed to accommodate the vast majority of data used in such investigations. Yet getting data from the laboratory into the database, and visualizing and re-analyzing data downloaded from the database, makes special demands on data formatting. There are several recently published programming packages that deal with single types of data: demagnetization experiments (e.g., Lurcock et al., 2012), paleointensity experiments (e.g., Leonhardt et al., 2004), and FORC diagrams (e.g., Harrison et al., 2008). However, there is a need for a unified set of open source, cross-platform software that deals with the great variety of data types in a consistent way and facilitates importing data into the MagIC format, analyzing them and uploading them into the MagIC database. The PmagPy software package (http://earthref.org/PmagPy/cookbook/) comprises such a comprehensive set of tools. It facilitates conversion of many laboratory formats into the common MagIC format and allows interpretation of demagnetization and Thellier-type experimental data. With some 175 programs and over 250 functions, it can be used to create a wide variety of plots and allows manipulation of downloaded data sets as well as preparation of new contributions for uploading to the MagIC database.

  4. A Reference Viral Database (RVDB) To Enhance Bioinformatics Analysis of High-Throughput Sequencing for Novel Virus Detection.

    Science.gov (United States)

    Goodacre, Norman; Aljanahi, Aisha; Nandakumar, Subhiksha; Mikailov, Mike; Khan, Arifa S

    2018-01-01

    Detection of distantly related viruses by high-throughput sequencing (HTS) is bioinformatically challenging because of the lack of a public database containing all viral sequences without abundant nonviral sequences, which can extend runtime and obscure viral hits. Our reference viral database (RVDB) includes all viral, virus-related, and virus-like nucleotide sequences (excluding bacterial viruses), regardless of length, and with overall reduced cellular sequences. Semantic selection criteria (SEM-I) were used to select viral sequences from GenBank, resulting in a first-generation viral database (VDB). This database was manually and computationally reviewed, resulting in refined semantic selection criteria (SEM-R), which were applied to a new download of updated GenBank sequences to create a second-generation VDB. Viral entries in the latter were clustered at 98% by CD-HIT-EST to reduce redundancy while retaining high viral sequence diversity. The viral identity of the clustered representative sequences (creps) was confirmed by BLAST searches in NCBI databases and HMMER searches in PFAM and DFAM databases. The resulting RVDB contained a broad representation of viral families, sequence diversity, and reduced cellular content; it includes full-length and partial sequences and endogenous nonretroviral elements, endogenous retroviruses, and retrotransposons. Testing of RVDBv10.2 with an in-house HTS transcriptomic data set indicated a significantly faster run for virus detection than interrogating the entirety of the NCBI nonredundant nucleotide database, which contains all viral sequences but also nonviral sequences. RVDB is publicly available for facilitating HTS analysis, particularly for novel virus detection. It is meant to be updated on a regular basis to include new viral sequences added to GenBank. IMPORTANCE To facilitate bioinformatics analysis of high-throughput sequencing (HTS) data for the detection of both known and novel viruses, we have

  5. BenefitClaimWebServiceBean/BenefitClaimWebService

    Data.gov (United States)

    Department of Veterans Affairs — A formal or informal request for a type of monetary or non-monetary benefit. This service provides benefit claims and benefit claim special issues data, allows the...

  6. Pathway analysis for intracellular Porphyromonas gingivalis using a strain ATCC 33277 specific database

    Directory of Open Access Journals (Sweden)

    Wang Tiansong

    2009-09-01

    Full Text Available Abstract Background Porphyromonas gingivalis is a Gram-negative intracellular pathogen associated with periodontal disease. We have previously reported on whole-cell quantitative proteomic analyses to investigate the differential expression of virulence factors as the organism transitions from an extracellular to intracellular lifestyle. The original results with the invasive strain P. gingivalis ATCC 33277 were obtained using the genome sequence available at the time, strain W83 [GenBank: AE015924]. We present here a re-processed dataset using the recently published genome annotation specific for strain ATCC 33277 [GenBank: AP009380] and an analysis of differential abundance based on metabolic pathways rather than individual proteins. Results Qualitative detection was observed for 1266 proteins using the strain ATCC 33277 annotation for 18 hour internalized P. gingivalis within human gingival epithelial cells and controls exposed to gingival cell culture medium, an improvement of 7% over the W83 annotation. Internalized cells showed increased abundance of proteins in the energy pathway from asparagine/aspartate amino acids to ATP. The pathway producing one short chain fatty acid, propionate, showed increased abundance, while that of another, butyrate, trended towards decreased abundance. The translational machinery, including ribosomal proteins and tRNA synthetases, showed a significant increase in protein relative abundance, as did proteins responsible for transcription. Conclusion Use of the ATCC 33277 specific genome annotation resulted in improved proteome coverage with respect to the number of proteins observed both qualitatively in terms of protein identifications and quantitatively in terms of the number of calculated abundance ratios. Pathway analysis showed a significant increase in overall protein synthetic and transcriptional machinery in the absence of significant growth. These results suggest that the interior of host cells

  7. Health care resource use and costs associated with possible side effects of high oral corticosteroid use in asthma: a claims-based analysis

    Directory of Open Access Journals (Sweden)

    Luskin AT

    2016-10-01

    Full Text Available Allan T Luskin,1 Evgeniya N Antonova,2 Michael S Broder,3 Eunice Y Chang,3 Theodore A Omachi,2 Dennis K Ledford4 1HealthyAirways, Madison, WI, 2Genentech, Inc., South San Francisco, 3Partnership for Health Analytic Research, LLC, Beverly Hills, CA, 4Division of Allergy and Immunology, Department of Medicine, James A. Haley Veterans’ Hospital, Morsani College of Medicine, University of South Florida, Tampa, FL, USA Background: The objective of this study was to estimate the prevalence of possible oral corticosteroid (OCS)-related side effects and health care resource use and costs in patients with asthma. Methods: This was a cross-sectional, matched-cohort, retrospective study using a commercial claims database. Adults with asthma diagnosis codes and evidence of asthma medication use were studied. Patients with high OCS use (≥30 days of OCS annually) were divided into those who did versus those who did not experience OCS-related possible side effects. Their health care resource use and costs were compared using linear regression or negative binomial regression models, adjusting for age, sex, geographic region, Charlson Comorbidity Index score, and chronic obstructive pulmonary disease status. Results: After adjustment, high OCS users with possible side effects were more likely to have office visits (23.0 vs 19.6; P<0.001) and hospitalizations (0.44 vs 0.22; P<0.001) than those without possible side effects. Emergency department visits were similar between the groups. High OCS users with possible side effects had higher adjusted total annual mean health care costs ($25,168) than those without such side effects ($21,882; P=0.009). Conclusion: Among high OCS users, patients with possible OCS-related side effects are more likely to use health care services than those without such side effects. Although OCS may help control asthma and manage exacerbations, OCS side effects may result in additional health care resource use and costs, highlighting the need
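
    The adjusted comparison of visit counts described in this record is typically fit as a negative binomial regression. A minimal sketch (assuming Python with pandas/statsmodels; the data and the reduced covariate set are invented, not taken from the study) might look like:

```python
# Sketch of an adjusted visit-count comparison using negative binomial regression.
# Data, column names and the reduced covariate set are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "office_visits": [23, 18, 30, 12, 25, 15, 21, 9, 27, 14],
    "side_effects":  [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],   # possible OCS-related side effect
    "age":           [52, 47, 61, 39, 58, 44, 50, 36, 63, 41],
    "charlson":      [2, 1, 3, 0, 2, 1, 1, 0, 3, 1],
})

fit = smf.glm(
    "office_visits ~ side_effects + age + charlson",
    data=df,
    family=sm.families.NegativeBinomial(),
).fit()

# exp(coefficient) is the adjusted rate ratio of office visits
print("adjusted rate ratio:", round(float(np.exp(fit.params["side_effects"])), 2))
```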

  8. 'The surface management system' (SuMS) database: a surface-based database to aid cortical surface reconstruction, visualization and analysis

    Science.gov (United States)

    Dickson, J.; Drury, H.; Van Essen, D. C.

    2001-01-01

    Surface reconstructions of the cerebral cortex are increasingly widely used in the analysis and visualization of cortical structure, function and connectivity. From a neuroinformatics perspective, dealing with surface-related data poses a number of challenges. These include the multiplicity of configurations in which surfaces are routinely viewed (e.g. inflated maps, spheres and flat maps), plus the diversity of experimental data that can be represented on any given surface. To address these challenges, we have developed a surface management system (SuMS) that allows automated storage and retrieval of complex surface-related datasets. SuMS provides a systematic framework for the classification, storage and retrieval of many types of surface-related data and associated volume data. Within this classification framework, it serves as a version-control system capable of handling large numbers of surface and volume datasets. With built-in database management system support, SuMS provides rapid search and retrieval capabilities across all the datasets, while also incorporating multiple security levels to regulate access. SuMS is implemented in Java and can be accessed via a Web interface (WebSuMS) or using downloaded client software. Thus, SuMS is well positioned to act as a multiplatform, multi-user 'surface request broker' for the neuroscience community.

  9. Analysis And Database Design from Import And Export Reporting in Company in Indonesia

    Directory of Open Access Journals (Sweden)

    Novan Zulkarnain

    2016-03-01

    Full Text Available Director General of Customs and Excise (DJBC) is a government agency that oversees exports and imports in Indonesia. Companies that receive exemption and tax returns are required to report their export and import activities using IT-based reporting. This study aimed to analyze and design databases to support customs reporting based on the report format of Director General of Customs and Excise regulation No. PER-09/BC/2014. Data collection used fact-finding techniques consisting of document study, interviews, observation, and a literature study. The method used for database system design was DB-SDLC (Database System Development Life Cycle), namely: conceptual design, logical design, and physical design. The result obtained is an ERD (Entity Relationship Diagram) that can be used in the development of a Customs Reporting System in companies throughout Indonesia. In conclusion, the ERD has been able to meet all the reporting elements of customs.

  10. Legal Services: Claims

    Science.gov (United States)

    1997-12-31

    waive such exemptions or privileges and direct release of the protected documents, upon balancing all pertinent factors, including finding that...injury causing death until expiration of decedent’s worklife expectancy. When requested, the previous five years Federal income tax forms must be...knowing at all times how much of the CEA has been obligated, its remaining balance, and assessing each month whether the balance will cover claims

  11. Stackfile Database

    Science.gov (United States)

    deVarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher

    2013-01-01

    This software provides storage, retrieval, and analysis functionality for managing satellite altimetry data. It improves the efficiency and analysis capabilities of existing database software with improved flexibility and documentation. It offers flexibility in the type of data that can be stored. There is efficient retrieval either across the spatial domain or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as Topex and Jason. It was, however, designed to work with a wide variety of satellite measurement data (e.g., Gravity Recovery And Climate Experiment -- GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.

  12. Outputs and Growth of Primary Care Databases in the United Kingdom: Bibliometric Analysis

    Directory of Open Access Journals (Sweden)

    Zain Chaudhry

    2017-10-01

    Full Text Available Background: Electronic health database (EHD) data is increasingly used by researchers. The major United Kingdom EHDs are the ‘Clinical Practice Research Datalink’ (CPRD), ‘The Health Improvement Network’ (THIN) and ‘QResearch’. Over time, outputs from these databases have increased, but have not been evaluated. Objective: This study compares research outputs from CPRD, THIN and QResearch assessing growth and publication outputs over a 10-year period (2004-2013). CPRD was also reviewed separately over 20 years as a case study. Methods: Publications from CPRD and QResearch were extracted using the Science Citation Index (SCI) of the Thomson Scientific Institute for Scientific Information (Web of Science). THIN data was obtained from University College London and validated in Web of Science. All databases were analysed for growth in publications, the speciality areas and the journals in which their data have been published. Results: These databases collectively produced 1,296 publications over a ten-year period, with CPRD representing 63.6% (n=825 papers), THIN 30.4% (n=394) and QResearch 5.9% (n=77). Pharmacoepidemiology and General Medicine were the most common specialities featured. Over the 9-year period (2004-2013), publications for THIN and QResearch have slowly increased over time, whereas CPRD publications have increased substantially in the last 4 years, with almost 75% of CPRD publications published in the past 9 years. Conclusion: These databases are enhancing scientific research and are growing yearly; however, they display variability in their growth. They could become more powerful research tools if the National Health Service and general practitioners can provide accurate and comprehensive data for inclusion in these databases.

  13. Trends in the treatment changes and medication persistence of chronic myeloid leukemia in Taiwan from 1997 to 2007: a longitudinal population database analysis

    Directory of Open Access Journals (Sweden)

    Chang Chao-Sung

    2012-10-01

    Full Text Available Abstract Background Few studies have examined the longitudinal changes in the patterns, selection, and utilization of treatments for chronic myeloid leukemia (CML) in routine clinical practice since the introduction of imatinib. Therefore, we investigated the trends in CML therapy, including changes, patterns, and persistence to imatinib therapy among patients with newly diagnosed CML. Methods We conducted a cross-sectional and longitudinal analysis of 11 years of claims data for patients with newly diagnosed CML included in the Taiwan National Health Insurance program. Pharmacy and diagnosis claims for newly diagnosed CML recorded between 1997 and 2007 were extracted from the database. Annual overall use, new use of CML therapy, and persistence to imatinib therapy were estimated. The Anatomical Therapeutic Chemical codes for CML therapy [i.e., imatinib and conventional therapy: busulfan, hydroxyurea, interferon-α (IFNα), and cytarabine], and the process code for hematopoietic stem cell transplantation were used to categorize treatment patterns. Associations with patient characteristics were analyzed by multivariate logistic regression. Results Overall, the proportion of patients with newly diagnosed CML to all patients with CML increased by approximately 4-fold between 1998 and 2007. There were steady increases in the proportions of all treated patients and those starting therapy from 2003 to 2007. Fewer comorbid conditions and lower severity of CML were associated with treatment initiation. Medication persistence varied according to treatment duration, as 38.7% of patients continued imatinib for ≥ 18 months without interruption but only 7.7% continued imatinib for ≥ 5 years. Factors associated with persistence to imatinib therapy were removal of the need for prior authorization for imatinib, and prior use of hydroxyurea and IFNα, whereas having undergone hematopoietic stem cell transplantation led to reduced likelihood
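
    Persistence of the kind reported here is usually derived from refill dates and days supplied, with persistence ending at the first gap longer than a grace period. A hedged sketch (assuming Python/pandas, a 60-day grace period and toy claims, none of which are taken from the study) is:

```python
# Sketch of measuring persistence from pharmacy claims: persistence ends at the
# first gap in drug supply longer than a grace period. The 60-day grace period
# and the toy claims are assumptions for illustration, not the study's rule.
import pandas as pd

claims = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "fill_date": pd.to_datetime(
        ["2004-01-10", "2004-02-12", "2004-03-15", "2004-01-05", "2004-06-20"]),
    "days_supply": [30, 30, 30, 30, 30],
})

GRACE_DAYS = 60

def persistence_days(g: pd.DataFrame) -> int:
    g = g.sort_values("fill_date")
    supply_end = g["fill_date"].iloc[0] + pd.Timedelta(days=int(g["days_supply"].iloc[0]))
    for fill, supply in zip(g["fill_date"].iloc[1:], g["days_supply"].iloc[1:]):
        if (fill - supply_end).days > GRACE_DAYS:   # gap too long -> discontinued
            break
        supply_end = max(supply_end, fill) + pd.Timedelta(days=int(supply))
    return (supply_end - g["fill_date"].iloc[0]).days

print(claims.groupby("patient_id").apply(persistence_days))
```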

  14. [Bibliometric analysis of Revista Médica del IMSS in the Scopus database for the period between 2005-2013].

    Science.gov (United States)

    García-Gómez, Francisco; Ramírez-Méndez, Fernando

    2015-01-01

    To analyze the number of articles of Revista Médica del Instituto Mexicano del Seguro Social (Rev Med Inst Mex Seguro Soc) in the Scopus database and describe the principal quantitative bibliometric indicators of its scientific publications for the period from 2005 to 2013. The Scopus database was used, limited to the period from 2005 to 2013. The analysis mainly covers articles published under the title Revista Médica del Instituto Mexicano del Seguro Social and its possible variations. For the analysis, Scopus, Excel and Access were used. A total of 864 articles from the period 2005 to 2013 were found in the Scopus database. We identified the authors with the highest number of contributions, the articles with the highest citation rates, and the forms of documents cited. We also divided articles by subjects, types of documents and other bibliometric indicators which characterize the publications. The use of Scopus makes it possible to analyze, with an external tool, the visibility of the scientific production published in the Revista Médica del IMSS. The use of this database also helps to characterize the state of science in México, as well as in developing countries.

  15. The TREAT-NMD DMD Global Database: Analysis of More than 7,000 Duchenne Muscular Dystrophy Mutations

    Science.gov (United States)

    Bladen, Catherine L; Salgado, David; Monges, Soledad; Foncuberta, Maria E; Kekou, Kyriaki; Kosma, Konstantina; Dawkins, Hugh; Lamont, Leanne; Roy, Anna J; Chamova, Teodora; Guergueltcheva, Velina; Chan, Sophelia; Korngut, Lawrence; Campbell, Craig; Dai, Yi; Wang, Jen; Barišić, Nina; Brabec, Petr; Lahdetie, Jaana; Walter, Maggie C; Schreiber-Katz, Olivia; Karcagi, Veronika; Garami, Marta; Viswanathan, Venkatarman; Bayat, Farhad; Buccella, Filippo; Kimura, En; Koeks, Zaïda; van den Bergen, Janneke C; Rodrigues, Miriam; Roxburgh, Richard; Lusakowska, Anna; Kostera-Pruszczyk, Anna; Zimowski, Janusz; Santos, Rosário; Neagu, Elena; Artemieva, Svetlana; Rasic, Vedrana Milic; Vojinovic, Dina; Posada, Manuel; Bloetzer, Clemens; Jeannet, Pierre-Yves; Joncourt, Franziska; Díaz-Manera, Jordi; Gallardo, Eduard; Karaduman, A Ayşe; Topaloğlu, Haluk; El Sherif, Rasha; Stringer, Angela; Shatillo, Andriy V; Martin, Ann S; Peay, Holly L; Bellgard, Matthew I; Kirschner, Jan; Flanigan, Kevin M; Straub, Volker; Bushby, Kate; Verschuuren, Jan; Aartsma-Rus, Annemieke; Béroud, Christophe; Lochmüller, Hanns

    2015-01-01

    Analyzing the type and frequency of patient-specific mutations that give rise to Duchenne muscular dystrophy (DMD) is an invaluable tool for diagnostics, basic scientific research, trial planning, and improved clinical care. Locus-specific databases allow for the collection, organization, storage, and analysis of genetic variants of disease. Here, we describe the development and analysis of the TREAT-NMD DMD Global database (http://umd.be/TREAT_DMD/). We analyzed genetic data for 7,149 DMD mutations held within the database. A total of 5,682 large mutations were observed (80% of total mutations), of which 4,894 (86%) were deletions (1 exon or larger) and 784 (14%) were duplications (1 exon or larger). There were 1,445 small mutations (smaller than 1 exon, 20% of all mutations), of which 358 (25%) were small deletions and 132 (9%) small insertions and 199 (14%) affected the splice sites. Point mutations totalled 756 (52% of small mutations) with 726 (50%) nonsense mutations and 30 (2%) missense mutations. Finally, 22 (0.3%) mid-intronic mutations were observed. In addition, mutations were identified within the database that would potentially benefit from novel genetic therapies for DMD including stop codon read-through therapies (10% of total mutations) and exon skipping therapy (80% of deletions and 55% of total mutations). PMID:25604253

  16. Possibility of Database Research as a Means of Pharmacovigilance in Japan Based on a Comparison with Sertraline Postmarketing Surveillance.

    Science.gov (United States)

    Hirano, Yoko; Asami, Yuko; Kuribayashi, Kazuhiko; Kitazaki, Shigeru; Yamamoto, Yuji; Fujimoto, Yoko

    2018-05-01

    Many pharmacoepidemiologic studies using large-scale databases have recently been utilized to evaluate the safety and effectiveness of drugs in Western countries. In Japan, however, conventional methodology has been applied to postmarketing surveillance (PMS) to collect safety and effectiveness information on new drugs to meet regulatory requirements. Conventional PMS entails enormous costs and resources despite being an uncontrolled observational study method. This study is aimed at examining the possibility of database research as a more efficient pharmacovigilance approach by comparing a health care claims database and PMS with regard to the characteristics and safety profiles of sertraline-prescribed patients. The characteristics of sertraline-prescribed patients recorded in a large-scale Japanese health insurance claims database developed by MinaCare Co. Ltd. were scanned and compared with the PMS results. We also explored the possibility of detecting signals indicative of adverse reactions based on the claims database by using sequence symmetry analysis. Diabetes mellitus, hyperlipidemia, and hyperthyroidism served as exploratory events, and their detection criteria for the claims database were reported by the Pharmaceuticals and Medical Devices Agency in Japan. Most of the characteristics of sertraline-prescribed patients in the claims database did not differ markedly from those in the PMS. There was no tendency for higher risks of the exploratory events after exposure to sertraline, and this was consistent with sertraline's known safety profile. Our results support the concept of using database research as a cost-effective pharmacovigilance tool that is free of selection bias. Further investigation using database research is required to confirm our preliminary observations. Copyright © 2018. Published by Elsevier Inc.
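
    The sequence symmetry analysis mentioned above compares how often a marker drug for the event is started after versus before sertraline. A simplified sketch of the crude sequence ratio (assuming Python/pandas, an illustrative 365-day window and invented dates; the published method additionally adjusts for prescribing trends) is:

```python
# Hedged sketch of a crude sequence symmetry analysis: among patients who start
# both sertraline and an event-marker drug, compare how often the marker follows
# sertraline versus precedes it. Dates and the 365-day window are illustrative;
# a full analysis also computes a null-effect (trend-adjusted) sequence ratio.
import pandas as pd

first_rx = pd.DataFrame({
    "patient_id":  [1, 2, 3, 4, 5],
    "sertraline":  pd.to_datetime(["2015-01-10", "2015-03-01", "2015-05-20",
                                   "2015-02-14", "2015-07-01"]),
    "marker_drug": pd.to_datetime(["2015-04-02", "2015-01-15", "2016-01-10",
                                   "2015-02-01", "2015-09-09"]),
})

WINDOW = 365  # maximum days between the two initiations for a pair to count

delta = (first_rx["marker_drug"] - first_rx["sertraline"]).dt.days
in_window = delta.abs() <= WINDOW
n_drug_first = int(((delta > 0) & in_window).sum())   # sertraline -> marker
n_event_first = int(((delta < 0) & in_window).sum())  # marker -> sertraline
crude_sr = n_drug_first / n_event_first if n_event_first else float("inf")
print(n_drug_first, n_event_first, round(crude_sr, 2))
```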

  17. Career transitions of inactive nurses: a registration database analysis (1993-2006).

    Science.gov (United States)

    Alameddine, Mohamad; Baumann, Andrea; Onate, Kanecy; Deber, Raisa

    2011-02-01

    One important strategy to address nursing shortages is to tap into the pool of licensed nurses who are not currently working in nursing and induce them to return to the nursing labour market. However, there is a paucity of research examining their likelihood of return to the active labour market. Analyze the career transitions of nurses registered with the College of Nurses of Ontario but not working in the province's nursing labour market to determine the proportion of these nurses rejoining the active nursing workforce and examine the variation by inactive sub-category and age group. Quantitative analysis of a linked longitudinal database for all those registered with the College of Nurses of Ontario for the years 1993-2006. Registration records of all 215,687 nurses registered at any time in those years were merged by their unique registration number. Each nurse was placed for each year into an employment category. Two groups of nurses were defined: active (registered, working in nursing in Ontario) and inactive (registered, not working in nursing in Ontario). Inactive nurses were then sub-categorized into five mutually exclusive sub-categories: 'not working and seeking nursing employment', 'working in non-nursing and seeking nursing employment', 'not working and not seeking nursing employment', 'working in non-nursing and not seeking nursing employment' and 'working outside Ontario'. One-year career movements of nurses were tracked by generating 13 year-to-year transition matrixes. In the short-term, inactive nurses seeking a nursing job had the highest average rate of return to the active workforce (27.3-30.8%), though they might be at high risk of leaving the profession if they do not find employment in a timely manner. Inactive nurses not seeking nursing employment are a heterogeneous group, and include nurses on leave who are likely to subsequently rejoin the active workforce should appropriate opportunities arise. The proportion of nurses rejoining the

  18. Histologic heterogeneity of triple negative breast cancer: A National Cancer Centre Database analysis.

    Science.gov (United States)

    Mills, Matthew N; Yang, George Q; Oliver, Daniel E; Liveringhouse, Casey L; Ahmed, Kamran A; Orman, Amber G; Laronga, Christine; Hoover, Susan J; Khakpour, Nazanin; Costa, Ricardo L B; Diaz, Roberto

    2018-06-02

    Triple negative breast cancer (TNBC) is an aggressive disease, but recent studies have identified heterogeneity in patient outcomes. However, the utility of histologic subtyping in TNBC has not yet been well-characterised. This study utilises data from the National Cancer Center Database (NCDB) to complete the largest series to date investigating the prognostic importance of histology within TNBC. A total of 729,920 patients (pts) with invasive ductal carcinoma (IDC), metaplastic breast carcinoma (MBC), medullary breast carcinoma (MedBC), adenoid cystic carcinoma (ACC), invasive lobular carcinoma (ILC) or apocrine breast carcinoma (ABC) treated between 2004 and 2012 were identified in the NCDB. Of these, 89,222 pts with TNBC that received surgery were analysed. Kaplan-Meier analysis, log-rank testing and multivariate Cox proportional hazards regression were utilised with overall survival (OS) as the primary outcome. MBC (74.1%), MedBC (60.6%), ACC (75.7%), ABC (50.1%) and ILC (1.8%) had significantly different proportions of triple negativity when compared to IDC (14.0%, p < 0.001). TNBC predicted an inferior OS in IDC (p < 0.001) and ILC (p < 0.001). Lumpectomy and radiation (RT) were more common in MedBC (51.7%) and ACC (51.5%) and less common in MBC (33.1%) and ILC (25.4%), when compared to IDC (42.5%, p < 0.001). TNBC patients with MBC (HR 1.39, p < 0.001), MedBC (HR 0.42, p < 0.001) and ACC (HR 0.32, p = 0.003) differed significantly in OS when compared to IDC. Our results indicate that histologic heterogeneity in TNBC significantly informs patient outcomes and thus, has the potential to aid in the development of optimum personalised treatments. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. IDGAM. A PC code and database to help nuclide identification in activation analysis

    International Nuclear Information System (INIS)

    Paviotti Corcuera, R.; Moraes Cunha, M. de; Jayanthi, K.A.

    1994-01-01

    The document describes a PC diskette containing a code and database which help researchers identify the nuclides in a radioactive sample. Data can be retrieved by gamma-ray energy, nuclide or element. The PC diskette is available, cost-free, from the IAEA Nuclear Data Section, upon request. (author). 6 refs, 5 figs

  20. How to deal properly with a natural catastrophe databaseanalysis of flood losses

    Directory of Open Access Journals (Sweden)

    W. Kron

    2012-03-01

    Full Text Available Global reinsurer Munich Re has been collecting data on losses from natural disasters for almost four decades. Together with EM-Dat and sigma, Munich Re's NatCatSERVICE database is currently one of three global databases of its kind, with its more than 30 000 datasets. Although the database was originally designed for reinsurance business purposes, it contains a host of additional information on catastrophic events. Data collection poses difficulties such as not knowing the exact extent of human and material losses, biased reporting by interest groups, including governments, changes over time due to new findings, etc. Loss quantities are often not separable into different causes, e.g., windstorm and flood losses during a hurricane, or windstorm, hail and flooding during a severe storm event. These difficulties should be kept in mind when database figures are analysed statistically, and the results have to be treated with due regard for the characteristics of the underlying data. Comparing events at different locations and on different dates can only be done using normalised data. For most analyses, and in particular trend analyses, socio-economic changes such as inflation or growth in population and values must be considered. Problems encountered when analysing trends are discussed using the example of floods and flood losses.

  1. The ALFAM2 database on ammonia emission from field-applied manure: Description and illustrative analysis

    DEFF Research Database (Denmark)

    Hafner, Sasha D.; Pacholski, Andreas; Bittman, Shabtai

    2018-01-01

    . Data on five manure types (cattle, pig, mink, poultry, mixed, as well as sludge and “other”) applied to three types of crops (grass, small grains, maize, as well as stubble and bare soil) are included. Application methods represented in the database include broadcast, trailing hose, trailing shoe...

  2. Growth hormone treatment in aarskog syndrome: analysis of the KIGS (Pharmacia International Growth Database) data.

    NARCIS (Netherlands)

    Darendeliler, F.; Larsson, P.; Neyzi, O.; Price, A.D.; Hagenas, L.; Sipila, I.; Lindgren, A.; Otten, B.J.; Bakker, B.

    2003-01-01

    Aarskog syndrome is an X-linked disorder characterized by faciogenital dysplasia and short stature. The present study set out to determine the effect of growth hormone (GH) therapy in patients with Aarskog syndrome enrolled in KIGS--the Pharmacia International Growth Database. Twenty-one patients

  3. How to deal properly with a natural catastrophe database - analysis of flood losses

    Science.gov (United States)

    Kron, W.; Steuer, M.; Löw, P.; Wirtz, A.

    2012-03-01

    Global reinsurer Munich Re has been collecting data on losses from natural disasters for almost four decades. Together with EM-Dat and sigma, Munich Re's NatCatSERVICE database is currently one of three global databases of its kind, with its more than 30 000 datasets. Although the database was originally designed for reinsurance business purposes, it contains a host of additional information on catastrophic events. Data collection poses difficulties such as not knowing the exact extent of human and material losses, biased reporting by interest groups, including governments, changes over time due to new findings, etc. Loss quantities are often not separable into different causes, e.g., windstorm and flood losses during a hurricane, or windstorm, hail and flooding during a severe storm event. These difficulties should be kept in mind when database figures are analysed statistically, and the results have to be treated with due regard for the characteristics of the underlying data. Comparing events at different locations and on different dates can only be done using normalised data. For most analyses, and in particular trend analyses, socio-economic changes such as inflation or growth in population and values must be considered. Problems encountered when analysing trends are discussed using the example of floods and flood losses.
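
    One common way to normalise such losses is to correct nominal amounts for inflation and for growth in exposed values. The sketch below (plain Python, with invented index values and a GDP proxy for exposed values; this is only one of several possible normalisation schemes) shows the arithmetic:

```python
# Minimal sketch of loss normalisation: express an original (nominal) flood loss
# in today's values by correcting for inflation and for real growth in exposed
# values (proxied here by GDP). Figures are made up; they are not NatCatSERVICE data.
def normalise_loss(loss_orig, cpi_event, cpi_now, gdp_event, gdp_now):
    """Inflation-adjust the loss, then scale by real growth of exposed values."""
    inflation_factor = cpi_now / cpi_event
    exposure_factor = (gdp_now / cpi_now) / (gdp_event / cpi_event)  # real growth
    return loss_orig * inflation_factor * exposure_factor

# A 1.0 bn loss from an earlier event, normalised to present-day conditions:
print(round(normalise_loss(1.0, cpi_event=70.0, cpi_now=100.0,
                           gdp_event=1.8e12, gdp_now=3.0e12), 2))
```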

  4. Analysis and Development of a Web-Enabled Planning and Scheduling Database Application

    Science.gov (United States)

    2013-09-01

    will be fully functional on the Macintosh Operating System. This is the platform of the original database and the platform of the testing system ...was to explore available scheduling tools operational on the Macintosh Operating System with the smallest practical price tag. The solution was...

  5. High-throughput STR analysis for DNA database using direct PCR.

    Science.gov (United States)

    Sim, Jeong Eun; Park, Su Jeong; Lee, Han Chul; Kim, Se-Yong; Kim, Jong Yeol; Lee, Seung Hwan

    2013-07-01

    Since the Korean criminal DNA database was launched in 2010, we have focused on establishing an automated DNA database profiling system that analyzes short tandem repeat loci in a high-throughput and cost-effective manner. We established a DNA database profiling system without DNA purification using a direct PCR buffer system. The quality of direct PCR procedures was compared with that of conventional PCR system under their respective optimized conditions. The results revealed not only perfect concordance but also an excellent PCR success rate, good electropherogram quality, and an optimal intra/inter-loci peak height ratio. In particular, the proportion of DNA extraction required due to direct PCR failure could be minimized to <3%. In conclusion, the newly developed direct PCR system can be adopted for automated DNA database profiling systems to replace or supplement conventional PCR system in a time- and cost-saving manner. © 2013 American Academy of Forensic Sciences Published 2013. This article is a U.S. Government work and is in the public domain in the U.S.A.

  6. Identification of fire modeling issues based on an analysis of real events from the OECD FIRE database

    Energy Technology Data Exchange (ETDEWEB)

    Hermann, Dominik [Swiss Federal Nuclear Safety Inspectorate ENSI, Brugg (Switzerland)

    2017-03-15

    Precursor analysis is widely used in the nuclear industry to judge the significance of events relevant to safety. However, in the case of events that may damage equipment through effects that are not ordinary functional dependencies, the analysis may not always fully appreciate the potential for further evolution of the event. For fires, which are one class of such events, this paper discusses modelling challenges that need to be overcome when performing a probabilistic precursor analysis. The events used in the analysis are selected from the Organisation for Economic Cooperation and Development (OECD) Fire Incidents Records Exchange (FIRE) Database.

  7. Sting_RDB: a relational database of structural parameters for protein analysis with support for data warehousing and data mining.

    Science.gov (United States)

    Oliveira, S R M; Almeida, G V; Souza, K R R; Rodrigues, D N; Kuser-Falcão, P R; Yamagishi, M E B; Santos, E H; Vieira, F D; Jardine, J G; Neshich, G

    2007-10-05

    An effective strategy for managing protein databases is to provide mechanisms to transform raw data into consistent, accurate and reliable information. Such mechanisms will greatly reduce operational inefficiencies and improve one's ability to better handle scientific objectives and interpret the research results. To achieve this challenging goal for the STING project, we introduce Sting_RDB, a relational database of structural parameters for protein analysis with support for data warehousing and data mining. In this article, we highlight the main features of Sting_RDB and show how a user can explore it for efficient and biologically relevant queries. Considering its importance for molecular biologists, effort has been made to advance Sting_RDB toward data quality assessment. To the best of our knowledge, Sting_RDB is one of the most comprehensive data repositories for protein analysis, now also capable of providing its users with a data quality indicator. This paper differs from our previous study in many aspects. First, we introduce Sting_RDB, a relational database with mechanisms for efficient and relevant queries using SQL. Sting_rdb evolved from the earlier, text (flat file)-based database, in which data consistency and integrity was not guaranteed. Second, we provide support for data warehousing and mining. Third, the data quality indicator was introduced. Finally and probably most importantly, complex queries that could not be posed on a text-based database, are now easily implemented. Further details are accessible at the Sting_RDB demo web page: http://www.cbi.cnptia.embrapa.br/StingRDB.

  8. An integrative clinical database and diagnostics platform for biomarker identification and analysis in ion mobility spectra of human exhaled air

    DEFF Research Database (Denmark)

    Schneider, Till; Hauschild, Anne-Christin; Baumbach, Jörg Ingo

    2013-01-01

    data integration and semi-automated data analysis, in particular with regard to the rapid data accumulation, emerging from the high-throughput nature of the MCC/IMS technology. Here, we present a comprehensive database application and analysis platform, which combines metabolic maps with heterogeneous...... biomedical data in a well-structured manner. The design of the database is based on a hybrid of the entity-attribute-value (EAV) model and the EAV-CR, which incorporates the concepts of classes and relationships. Additionally it offers an intuitive user interface that provides easy and quick access...... to have a clear understanding of the detailed composition of human breath. Therefore, in addition to the clinical studies, there is a need for a flexible and comprehensive centralized data repository, which is capable of gathering all kinds of related information. Moreover, there is a demand for automated...

  9. Informational database methodology for urban risk analysis.Case study: the historic centre of Bucharest

    Science.gov (United States)

    Armas, I.; Dumitrascu, S.

    2009-04-01

    , but is also a very populated area, these being factors that favour a high susceptibility level. In addition, the majority of the buildings are included in the first and second categories of seismic risk, being built between 1875 and 1940, the age of the buildings establishing an increased vulnerability to natural hazards. The methodology was developed through the contribution of three partner universities from Bucharest: the University of Bucharest, the Academy for Economic Studies and the Technical University of Constructions. The method suggested was based on the analysis and processing of digital and statistical spatial information resulting from 1:500 topographical plans, satellite pictures, archives and historical maps used for the identification of the age of the buildings. Also, an important stage was represented by the field investigations that provided the data used in the assessment of the buildings: year of construction, location and vicinity, height, number of floors, state and function of the building, equipment and construction type. The information collected from the field, together with the data resulting from the digitization of the ortophotoplans, was inserted in ArcGIS in order to compile the database. Furthermore, the team from the Cybernetics Faculty developed a special software package in Visual Studio and SQL server in order to insert the sheets in GIS so that they could be statistically processed. The final product of the study is a program whose main functions are editing, analysis based on selected factors (individual or group) and viewing of building information in the form of maps or 3D visualizations. The strengths of the resulting informational system are its extended range of applicability, short processing time, accessibility, and capacity to support a large amount of information, making it an adequate instrument to fit the needs of a susceptible population.

  10. Development of a database for prompt γ-ray neutron activation analysis: Summary report of the third research coordination meeting

    International Nuclear Information System (INIS)

    Lindstrom, Richard M.; Firestone, Richard B.; Paviotti-Corcuera, R.

    2003-01-01

    The main discussions and conclusions from the Third Co-ordination Meeting on the Development of a Database for Prompt Gamma-ray Neutron Activation Analysis are summarized in this report. All results were reviewed in detail, and the final version of the TECDOC and the corresponding software were agreed upon and approved for preparation. Actions were formulated with the aim of completing the final version of the TECDOC and associated software by May 2003

  11. Development of a database for prompt γ-ray neutron activation analysis. Summary report of the third research coordination meeting

    International Nuclear Information System (INIS)

    Lindstrom, Richard M.; Firestone, Richard B.; Paviotti-Corcuera, R.

    2003-04-01

    The main discussions and conclusions from the Third Co-ordination Meeting on the Development of a Database for Prompt γ-ray Neutron Activation Analysis are summarised in this report. All results were reviewed in detail, and the final version of the TECDOC and the corresponding software were agreed upon and approved for preparation. Actions were formulated with the aim of completing the final version of the TECDOC and associated software by May 2003. (author)

  12. MBGD update 2015: microbial genome database for flexible ortholog analysis utilizing a diverse set of genomic data.

    Science.gov (United States)

    Uchiyama, Ikuo; Mihara, Motohiro; Nishide, Hiroyo; Chiba, Hirokazu

    2015-01-01

    The microbial genome database for comparative analysis (MBGD) (available at http://mbgd.genome.ad.jp/) is a comprehensive ortholog database for flexible comparative analysis of microbial genomes, where the users are allowed to create an ortholog table among any specified set of organisms. Because of the rapid increase in microbial genome data owing to the next-generation sequencing technology, it becomes increasingly challenging to maintain high-quality orthology relationships while allowing the users to incorporate the latest genomic data available into an analysis. Because many of the recently accumulating genomic data are draft genome sequences for which some complete genome sequences of the same or closely related species are available, MBGD now stores draft genome data and allows the users to incorporate them into a user-specific ortholog database using the MyMBGD functionality. In this function, draft genome data are incorporated into an existing ortholog table created only from the complete genome data in an incremental manner to prevent low-quality draft data from affecting clustering results. In addition, to provide high-quality orthology relationships, the standard ortholog table containing all the representative genomes, which is first created by the rapid classification program DomClust, is now refined using DomRefine, a recently developed program for improving domain-level clustering using multiple sequence alignment information. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  13. Metadata database and data analysis software for the ground-based upper atmospheric data developed by the IUGONET project

    Science.gov (United States)

    Hayashi, H.; Tanaka, Y.; Hori, T.; Koyama, Y.; Shinbori, A.; Abe, S.; Kagitani, M.; Kouno, T.; Yoshida, D.; Ueno, S.; Kaneda, N.; Yoneda, M.; Tadokoro, H.; Motoba, T.; Umemura, N.; Iugonet Project Team

    2011-12-01

    The Inter-university Upper atmosphere Global Observation NETwork (IUGONET) is a Japanese inter-university project by the National Institute of Polar Research (NIPR), Tohoku University, Nagoya University, Kyoto University, and Kyushu University to build a database of metadata for ground-based observations of the upper atmosphere. The IUGONET institutes/universities have been collecting various types of data by radars, magnetometers, photometers, radio telescopes, helioscopes, etc. at various locations all over the world and at various altitude layers from the Earth's surface to the Sun. The metadata database will be of great help to researchers in efficiently finding and obtaining these observational data spread over the institutes/universities. This should also facilitate synthetic analysis of multi-disciplinary data, which will lead to new types of research in the upper atmosphere. The project has also been developing software to help researchers download, visualize, and analyze the data provided from the IUGONET institutes/universities. The metadata database system is built on the platform of DSpace, which is open source software for digital repositories. The data analysis software is written in the IDL language with the TDAS (THEMIS Data Analysis Software suite) library. These products have just been released for beta-testing.

  14. Identifying the relevant features of the National Digital Cadastral Database (NDCDB) for spatial analysis by using the Delphi Technique

    Science.gov (United States)

    Halim, N. Z. A.; Sulaiman, S. A.; Talib, K.; Ng, E. G.

    2018-02-01

    This paper explains the process carried out in identifying the relevant features of the National Digital Cadastral Database (NDCDB) for spatial analysis. The research was initially a part of a larger research exercise to identify the significance of NDCDB from the legal, technical, role and land-based analysis perspectives. The research methodology of applying the Delphi technique is substantially discussed in this paper. A heterogeneous panel of 14 experts was created to determine the importance of NDCDB from the technical relevance standpoint. Three statements describing the relevant features of NDCDB for spatial analysis were established after three rounds of consensus building. These statements highlighted the NDCDB’s characteristics, such as its spatial accuracy, functions, and criteria as a facilitating tool for spatial analysis. By recognising the relevant features of NDCDB for spatial analysis in this study, practical application of NDCDB for various analyses and purposes can be widely implemented.

  15. Analysis of prescription database extracted from standard textbooks of traditional Dai medicine

    Directory of Open Access Journals (Sweden)

    Zhang Chuang

    2012-08-01

    Full Text Available Abstract Background Traditional Dai Medicine (TDM) is one of the four major ethnomedicines of China. In 2007 a group of experts produced a set of seven Dai medical textbooks on this subject. The first two were selected as the main data source to analyse well recognized prescriptions. Objective To quantify patterns of prescriptions, common ingredients, indications and usages of TDM. Methods A relational database linking the prescriptions, ingredients, herb names, indications, and usages was set up. Frequencies of combination patterns and common ingredients were tabulated. Results A total of 200 prescriptions and 402 herbs were compiled. Prescriptions based on "wind" disorders, a detoxification theory that most commonly deals with symptoms of digestive system diseases, accounted for over one third of all prescriptions. The major methods of preparation mostly used roots and whole herbs. Conclusion The information extracted from the relational database may be useful for understanding symptomatic treatments. Antidote and detoxification theory deserves further research.

  16. Analysis of prescription database extracted from standard textbooks of traditional Dai medicine.

    Science.gov (United States)

    Zhang, Chuang; Chongsuvivatwong, Virasakdi; Keawpradub, Niwat; Lin, Yanfang

    2012-08-29

    Traditional Dai Medicine (TDM) is one of the four major ethnomedicines of China. In 2007 a group of experts produced a set of seven Dai medical textbooks on this subject. The first two were selected as the main data source to analyse well recognized prescriptions. To quantify patterns of prescriptions, common ingredients, indications and usages of TDM. A relational database linking the prescriptions, ingredients, herb names, indications, and usages was set up. Frequencies of combination patterns and common ingredients were tabulated. A total of 200 prescriptions and 402 herbs were compiled. Prescriptions based on "wind" disorders, a detoxification theory that most commonly deals with symptoms of digestive system diseases, accounted for over one third of all prescriptions. The major methods of preparation mostly used roots and whole herbs. The information extracted from the relational database may be useful for understanding symptomatic treatments. Antidote and detoxification theory deserves further research.

  17. Development of database for the divertor recycling in JT-60U and its analysis

    Energy Technology Data Exchange (ETDEWEB)

    Takizuka, Tomonori; Shimizu, Katsuhiro; Hayashi, Nobuhiko; Asakura, Nobuyuki [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment; Arakawa, Kazuya [Komatsu, Ltd., Tokyo (Japan)

    2003-05-01

    We have developed a database for the divertor recycling in JT-60U plasmas. This database makes it possible to investigate behaviors of the neutral-particle flux in plasmas and the ion flux to divertor plates under given core-plasma conditions, such as electron density and heating power. The correlation between the electron density and the heating power is not strong in this database, and parameter scans for the density and the power over wide ranges are realized. On the basis of this database, we have analyzed the ion flux to divertor plates. The divertor-plate ion flux amplified by the recycling grows nonlinearly with the increase of the electron density n_e. Its averaged dependence is a linear growth (~n_e^1.0) at low density, and becomes a nonlinear growth (~n_e^1.5) at high density. The spread about this averaged dependence is very large. This spread is caused mainly by complex physical characteristics of divertor plasmas, and it depends little on the heating power. The behavior of the ion flux depends strongly on divertor configurations and divertor-plate/first-wall conditions. It is confirmed that a bifurcated transition takes place from the low-recycling divertor plasma at low density to the high-recycling divertor plasma at high density. The density at the transition is nearly proportional to the 1/4 power of the heating power. (author)
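
    The density scalings quoted here (roughly flux ∝ n_e^1.0 at low density and ∝ n_e^1.5 at high density) are power-law fits. A minimal sketch of such a fit in log-log space (Python/NumPy, with synthetic data rather than JT-60U measurements) is:

```python
# Illustrative recovery of a power-law exponent by least squares in log-log space.
# The synthetic data assume flux ~ n_e**1.5 with multiplicative scatter.
import numpy as np

rng = np.random.default_rng(0)
n_e = np.linspace(1.0, 5.0, 40)                      # density in arbitrary units
flux = 2.0 * n_e**1.5 * np.exp(rng.normal(0, 0.15, n_e.size))

slope, intercept = np.polyfit(np.log(n_e), np.log(flux), 1)
print(f"fitted exponent ~ {slope:.2f}, prefactor ~ {np.exp(intercept):.2f}")
```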

  18. Clinical characteristics and outcomes of myxedema coma: Analysis of a national inpatient database in Japan

    OpenAIRE

    Ono, Yosuke; Ono, Sachiko; Yasunaga, Hideo; Matsui, Hiroki; Fushimi, Kiyohide; Tanaka, Yuji

    2017-01-01

    Background: Myxedema coma is a life-threatening and emergency presentation of hypothyroidism. However, the clinical features and outcomes of this condition have been poorly defined because of its rarity. Methods: We conducted a retrospective observational study of patients diagnosed with myxedema coma from July 2010 through March 2013 using a national inpatient database in Japan. We investigated characteristics, comorbidities, treatments, and in-hospital mortality of patients with myxedem...

  19. Research Outputs of England's Hospital Episode Statistics (HES) Database: Bibliometric Analysis.

    Science.gov (United States)

    Chaudhry, Zain; Mannan, Fahmida; Gibson-White, Angela; Syed, Usama; Ahmed, Shirin; Majeed, Azeem

    2017-12-06

    Hospital administrative data, such as those provided by the Hospital Episode Statistics (HES) database in England, are increasingly being used for research and quality improvement. To date, no study has tried to quantify and examine trends in the use of HES for research purposes. To examine trends in the use of HES data for research. Publications generated from the use of HES data were extracted from PubMed and analysed. Publications from 1996 to 2014 were then examined further in the Science Citation Index (SCI) of the Thompson Scientific Institute for Science Information (Web of Science) for details of research specialty area. A total of 520 studies, categorised into 44 specialty areas, were extracted from PubMed. The review showed an increase in publications over the 18-year period, with an average of 27 publications per year, though the majority of output was observed in the latter part of the study period. The highest number of publications was in the Health Statistics specialty area. The use of HES data for research is becoming more common. The increase in publications over time shows that researchers are beginning to take advantage of the potential of HES data. Although HES is a valuable database, concerns exist over the accuracy and completeness of the data entered. Clinicians need to be more engaged with HES for the full potential of this database to be harnessed.

  20. Clinical and economic benefits of professional CGM among people with type 2 diabetes in the United States: analysis of claims and lab data.

    Science.gov (United States)

    Sierra, Joseph A; Shah, Mona; Gill, Max S; Flores, Zachery; Chawla, Hiten; Kaufman, Francine R; Vigersky, Robert

    2018-03-01

    It is estimated that one in 10 people in the US have a diagnosis of diabetes. Type 2 diabetes accounts for 95% of all cases in the US, with annual costs estimated to be $246 billion per year. This study investigated the impact of a glucose-measuring intervention to the burden of type 2 diabetes. This analysis seeks to understand how professional continuous glucose monitoring (professional CGM) impacts clinical and economic outcomes when compared to patients who are not prescribed professional CGM. This study utilized a large healthcare claims and lab dataset from the US, and identified a cohort of patients who were prescribed professional CGM as identified by CPT codes 95250 and 95251. It calculated economic and clinical outcomes 1 year before and 1 year after the use of professional CGM, using a generalized linear model. Patients who utilized professional CGM saw an improvement in hemoglobin A1C. The "difference-in-difference" calculation for A1C was shown to be -0.44%. There was no statistically significant difference in growth of total annual costs for people who used professional CGM compared to those who did not ($1,270, p = .08). Patients using professional CGM more than once per year had a -$3,376 difference in the growth of total costs (p = .05). Patients who used professional CGM while changing their diabetes treatment regimen also had a difference of -$3,327 in growth of total costs (p = .0023). Significant clinical benefits were observed for patients who used professional CGM. Economic benefits were observed for patients who utilized professional CGM more than once within a 1-year period or who used it during a change of diabetes therapy. This suggests that professional CGM may help decrease rising trends in healthcare costs for people with type 2 diabetes, while also improving clinical outcomes.
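
    The "difference-in-difference" figure quoted above is the change in the professional CGM group minus the change in the comparison group. A small sketch (Python with pandas/statsmodels, invented values) shows both the direct arithmetic and the equivalent regression with a group-by-period interaction term:

```python
# Hedged sketch of a difference-in-difference estimate; the tiny dataset is
# invented purely to show the arithmetic, not derived from the claims data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "a1c":  [8.6, 8.1, 8.9, 8.2, 8.5, 8.4, 8.8, 8.7],
    "cgm":  [1, 1, 1, 1, 0, 0, 0, 0],     # professional CGM user
    "post": [0, 1, 0, 1, 0, 1, 0, 1],     # before / after the index date
})

# Direct calculation of the difference-in-difference
means = df.groupby(["cgm", "post"])["a1c"].mean()
did = (means[(1, 1)] - means[(1, 0)]) - (means[(0, 1)] - means[(0, 0)])
print("difference-in-difference:", round(did, 2))

# Same estimate from a regression with an interaction term
fit = smf.ols("a1c ~ cgm * post", data=df).fit()
print("interaction coefficient:", round(fit.params["cgm:post"], 2))
```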

  1. Regional Variation of Cost of Care in the Last 12 Months of Life in Switzerland: Small-area Analysis Using Insurance Claims Data.

    Science.gov (United States)

    Panczak, Radoslaw; Luta, Xhyljeta; Maessen, Maud; Stuck, Andreas E; Berlin, Claudia; Schmidlin, Kurt; Reich, Oliver; von Wyl, Viktor; Goodman, David C; Egger, Matthias; Zwahlen, Marcel; Clough-Gorr, Kerri M

    2017-02-01

    Health care spending increases sharply at the end of life. Little is known about variation of cost of end of life care between regions and the drivers of such variation. We studied small-area patterns of cost of care in the last year of life in Switzerland. We used mandatory health insurance claims data of individuals who died between 2008 and 2010 to derive cost of care. We used multilevel regression models to estimate differences in costs across 564 regions of place of residence, nested within 71 hospital service areas. We examined to what extent variation was explained by characteristics of individuals and regions, including measures of health care supply. The study population consisted of 113,277 individuals. The mean cost of care during the last year of life was 32.5k (thousand) Swiss Francs per person (SD=33.2k). Cost differed substantially between regions after adjustment for patient age, sex, and cause of death. Variance was reduced by 52%-95% when we added individual and regional characteristics, with a strong effect of language region. Measures of supply of care did not show associations with costs. Remaining between and within hospital service area variations were most pronounced for older females and least for younger individuals. In Switzerland, small-area analysis revealed variation of cost of care during the last year of life according to linguistic regions and unexplained regional differences for older women. Cultural factors contribute to the delivery and utilization of health care during the last months of life and should be considered by policy makers.
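
    The multilevel structure described here (individuals within regions nested within hospital service areas) is commonly fit with a mixed model containing nested random intercepts. A hedged sketch (Python/statsmodels, synthetic data and illustrative variable names, not the Swiss claims data) is:

```python
# Sketch of a two-level random-intercept model: individual end-of-life costs with
# regions nested within hospital service areas (HSAs). All values are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "hsa":    rng.integers(0, 8, n),          # hospital service area
    "age":    rng.integers(66, 95, n),
    "female": rng.integers(0, 2, n),
})
df["region"] = df["hsa"] * 10 + rng.integers(0, 5, n)   # regions nested in HSAs
hsa_eff = rng.normal(0, 4, 8)
reg_eff = rng.normal(0, 2, 80)
df["cost"] = (30 + 0.2 * (df["age"] - 80) + hsa_eff[df["hsa"]]
              + reg_eff[df["region"]] + rng.normal(0, 5, n))   # kCHF, synthetic

model = smf.mixedlm(
    "cost ~ age + female", df,
    groups="hsa",                               # random intercept per HSA
    vc_formula={"region": "0 + C(region)"},     # region nested within HSA
)
print(model.fit().summary())
```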

  2. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  3. Community Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This excel spreadsheet is the result of merging at the port level of several of the in-house fisheries databases in combination with other demographic databases such...

  4. An Interoperable Cartographic Database

    Directory of Open Access Journals (Sweden)

    Slobodanka Ključanin

    2007-05-01

    Full Text Available The concept of producing a prototype of an interoperable cartographic database is explored in this paper, including the possibilities of integration of different geospatial data into the database management system and their visualization on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relational database, spatial analysis, and definition and visualization of the database content in the form of a map on the Internet.

  5. Bundled payment reimbursement for anterior and posterior approaches for cervical spondylotic myelopathy: an analysis of private payer and Medicare databases.

    Science.gov (United States)

    Virk, Sohrab S; Phillips, Frank M; Khan, Safdar N

    2018-03-01

    OBJECTIVE Cervical spondylotic myelopathy (CSM) is a progressive spinal condition that often requires surgery. Studies have shown the clinical equivalency of anterior versus posterior approaches for CSM surgery. The purpose of this study was to determine the amount and type of resources used for anterior and posterior surgical treatment of CSM by using large national databases of clinical and financial information from patients. METHODS This study consists of 2 large cohorts of patients who underwent either an anterior or posterior approach for treatment of CSM. These patients were selected from the Medicare 5% National Sample Administrative Database (SAF5) and the Humana orthopedic database (HORTHO), which is a database of patients with private payer health insurance. The outcome measures were the cost of a 90-day episode of care, as well as a breakdown of the cost components for each surgical procedure between 2005 and 2014. RESULTS A total of 16,444 patients were included in this analysis. In HORTHO, there were 10,332 and 1556 patients treated with an anterior or posterior approach for CSM, respectively. In SAF5, there were 3851 and 705 patients who were treated by an anterior or posterior approach for CSM, respectively. The mean ± SD reimbursements for anterior and posterior approaches in the HORTHO database were $20,863 ± $2014 and $23,813 ± $4258, respectively (p = 0.048). The mean ± SD reimbursements for anterior and posterior approaches in the SAF5 database were $18,219 ± $1053 and $25,598 ± $1686, respectively. Reimbursements for a rehabilitation/skilled nursing facility and hospital/inpatient care were higher for patients who underwent a posterior approach in both the private payer and Medicare databases. In all cohorts in this study, the hospital-related reimbursement was more than double the surgeon-related reimbursement. CONCLUSIONS This study provides resource utilization information for a 90-day episode of care for both anterior and posterior approaches

  6. Data management and data analysis techniques in pharmacoepidemiological studies using a pre-planned multi-database approach

    DEFF Research Database (Denmark)

    Bazelier, Marloes T.; Eriksson, Irene; de Vries, Frank

    2015-01-01

    pharmacoepidemiological multi-database studies published from 2007 onwards that combined data for a pre-planned common analysis or quantitative synthesis. Information was retrieved about study characteristics, methods used for individual-level analyses and meta-analyses, data management and motivations for performing...... meta-analysis (27%), while a semi-aggregate approach was applied in three studies (14%). Information on central programming or heterogeneity assessment was missing in approximately half of the publications. Most studies were motivated by improving power (86%). CONCLUSIONS: Pharmacoepidemiological multi...
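
    When database-specific estimates are combined in such pre-planned multi-database studies, a standard aggregate approach is inverse-variance pooling of the per-database effect estimates. A minimal sketch (Python/NumPy, with invented hazard ratios rather than results from the review) is:

```python
# Minimal sketch of the "aggregate" multi-database approach: each database
# contributes an adjusted effect estimate, pooled by inverse-variance weighting.
import numpy as np

# (database, hazard ratio, 95% CI lower, 95% CI upper) - illustrative values only
estimates = [("DB_A", 1.30, 1.05, 1.61),
             ("DB_B", 1.18, 0.92, 1.51),
             ("DB_C", 1.45, 1.10, 1.91)]

log_hr = np.log([e[1] for e in estimates])
se = (np.log([e[3] for e in estimates]) - np.log([e[2] for e in estimates])) / (2 * 1.96)
w = 1.0 / se**2                                     # fixed-effect weights
pooled = np.sum(w * log_hr) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
ci = np.exp([pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se])
print(f"pooled HR = {np.exp(pooled):.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```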

  7. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  8. A new database sub-system for grain-size analysis

    Science.gov (United States)

    Suckow, Axel

    2013-04-01

    Detailed grain-size analyses of large depth profiles for palaeoclimate studies create large amounts of data. For instance, Novothny et al. (2011) presented a depth profile of grain-size analyses with 2 cm resolution and a total depth of more than 15 m, where each sample was measured with 5 repetitions on a Beckman Coulter LS13320 with 116 channels. This adds up to a total of more than four million numbers. Such amounts of data are not easily post-processed by spreadsheets or standard software; also MS Access databases would face serious performance problems. The poster describes a database sub-system dedicated to grain-size analyses. It expands the LabData database and laboratory management system published by Suckow and Dumke (2001). This compatibility with a very flexible database system makes it easy to import the grain-size data, and provides the overall infrastructure for also storing geographic context and the ability to organize content, such as comprising several samples into one set or project. It also allows easy export and direct plot generation of final data in MS Excel. The sub-system allows automated import of raw data from the Beckman Coulter LS13320 Laser Diffraction Particle Size Analyzer. During post processing MS Excel is used as a data display, but no number crunching is implemented in Excel. Raw grain size spectra can be exported and controlled as number-, surface- and volume-fractions, while single spectra can be locked for further post-processing. From the spectra the usual statistical values (e.g. mean, median) can be computed, as well as fractions larger than a grain size, smaller than a grain size, fractions between any two grain sizes or any ratio of such values. These deduced values can be easily exported into Excel for one or more depth profiles. However, such a reprocessing for large amounts of data also allows new display possibilities: normally depth profiles of grain-size data are displayed only with summarized parameters like the clay
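
    The derived quantities mentioned above (mean, median, fractions below, above or between grain sizes) can be computed directly from a binned spectrum. A small sketch (Python/NumPy, a toy five-channel spectrum rather than LS13320 output) is:

```python
# Sketch of deriving summary statistics from one binned grain-size spectrum:
# channel sizes and volume fractions are a toy example, not instrument output.
import numpy as np

size_um = np.array([2.0, 8.0, 20.0, 63.0, 200.0])      # channel midpoints (um)
vol_pct = np.array([12.0, 28.0, 35.0, 18.0, 7.0])       # volume per channel (%)
frac = vol_pct / vol_pct.sum()

cum = np.cumsum(frac)
median = np.interp(0.5, cum, size_um)                   # D50 by interpolation
geo_mean = np.exp(np.sum(frac * np.log(size_um)))       # geometric mean size

def fraction_between(lo, hi):
    sel = (size_um >= lo) & (size_um < hi)
    return frac[sel].sum()

print(f"D50 ~ {median:.1f} um, geometric mean ~ {geo_mean:.1f} um")
print(f"fraction 2-63 um (illustrative limits): {fraction_between(2, 63):.2f}")
```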

  9. The drug-minded protein interaction database (DrumPID) for efficient target analysis and drug development.

    Science.gov (United States)

    Kunz, Meik; Liang, Chunguang; Nilla, Santosh; Cecil, Alexander; Dandekar, Thomas

    2016-01-01

    The drug-minded protein interaction database (DrumPID) has been designed to provide fast, tailored information on drugs and their protein networks including indications, protein targets and side-targets. Starting queries include compound, target and protein interactions and organism-specific protein families. Furthermore, drug name, chemical structures and their SMILES notation, affected proteins (potential drug targets), organisms as well as diseases can be queried including various combinations and refinement of searches. Drugs and protein interactions are analyzed in detail with reference to protein structures and catalytic domains, related compound structures as well as potential targets in other organisms. DrumPID considers drug functionality, compound similarity, target structure, interactome analysis and organismic range for a compound, useful for drug development, predicting drug side-effects and structure-activity relationships. Database URL: http://drumpid.bioapps.biozentrum.uni-wuerzburg.de. © The Author(s) 2016. Published by Oxford University Press.
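
    DrumPID reports compound similarity, but its internal method is not described in this abstract. As one common way such a similarity could be computed from SMILES strings, the hedged sketch below uses RDKit Morgan fingerprints and the Tanimoto coefficient; the example SMILES (aspirin vs. salicylic acid) are illustrative only:

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def tanimoto_from_smiles(smiles_a, smiles_b, radius=2, n_bits=2048):
    """Tanimoto similarity of two compounds given as SMILES strings."""
    mol_a = Chem.MolFromSmiles(smiles_a)
    mol_b = Chem.MolFromSmiles(smiles_b)
    fp_a = AllChem.GetMorganFingerprintAsBitVect(mol_a, radius, nBits=n_bits)
    fp_b = AllChem.GetMorganFingerprintAsBitVect(mol_b, radius, nBits=n_bits)
    return DataStructs.TanimotoSimilarity(fp_a, fp_b)

# Aspirin vs. salicylic acid: structurally related, so the similarity is relatively high
print(tanimoto_from_smiles("CC(=O)Oc1ccccc1C(=O)O", "OC(=O)c1ccccc1O"))
```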

  10. Analysis of isotropic turbulence using a public database and the Web service model, and applications to study subgrid models

    Science.gov (United States)

    Meneveau, Charles; Yang, Yunke; Perlman, Eric; Wan, Minpin; Burns, Randal; Szalay, Alex; Chen, Shiyi; Eyink, Gregory

    2008-11-01

    A public database system archiving a direct numerical simulation (DNS) data set of isotropic, forced turbulence is used for studying basic turbulence dynamics. The data set consists of the DNS output on 1024-cubed spatial points and 1024 time-samples spanning about one large-scale turn-over timescale. This complete space-time history of turbulence is accessible to users remotely through an interface that is based on the Web-services model (see http://turbulence.pha.jhu.edu). Users may write and execute analysis programs on their host computers, while the programs make subroutine-like calls that request desired parts of the data over the network. The architecture of the database is briefly explained, as are some of the new functions such as Lagrangian particle tracking and spatial box-filtering. These tools are used to evaluate and compare subgrid stresses and models.
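
    To make the box-filtering and subgrid-stress evaluation concrete, the sketch below applies the standard definition tau_ij = <u_i u_j> - <u_i><u_j> with a top-hat filter on a locally held toy field; it uses scipy.ndimage rather than the database's actual Web-service calls, and the velocity field is synthetic:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def subgrid_stress(u, v, filter_width):
    """Return tau_xy = bar(u*v) - bar(u)*bar(v) for same-shaped velocity components."""
    bar = lambda f: uniform_filter(f, size=filter_width, mode="wrap")  # periodic top-hat filter
    return bar(u * v) - bar(u) * bar(v)

# Synthetic periodic field on a 64^3 grid (the DNS archive itself is 1024^3)
rng = np.random.default_rng(0)
u = rng.standard_normal((64, 64, 64))
v = rng.standard_normal((64, 64, 64))
tau_xy = subgrid_stress(u, v, filter_width=8)
print(tau_xy.mean(), tau_xy.std())
```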

  11. Management of radiological related equipments. Creating the equipment management database and analysis of the repair and maintenance records

    International Nuclear Information System (INIS)

    Eguchi, Megumu; Taguchi, Keiichi; Oota, Takashi; Kajiwara, Hiroki; Ono, Kiyotune; Hagio, Kiyofumi; Uesugi, Ekizo; Kajishima, Tetuo; Ueda, Kenji

    2002-01-01

    In 1997, we established a committee for equipment maintenance and management in our department. We designed a database in Microsoft Access to classify and register all radiology-related equipment. Managing the condition and cost of each piece of equipment has become easier by maintaining the database as an equipment management ledger and by filing the history of repairs and maintenance for each modality. We then tallied the number of repairs, repair costs and downtimes from four years of repair and maintenance records, and reexamined the causal analysis of failures and the content of regular maintenance for the CT and MRI equipment, which had shown the highest numbers of repairs. Consequently, we identified improvements to the data registration method and a more economical use of the repair budget. (author)
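
    The abstract does not give the Access table layout, so the following is only a hypothetical sketch, in SQLite, of the two core tables such an equipment-management ledger needs (an equipment register and a linked repair/maintenance history), together with the kind of per-modality repair summary described above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE equipment (
    equipment_id INTEGER PRIMARY KEY,
    modality     TEXT NOT NULL,   -- e.g. 'CT', 'MRI'
    model        TEXT,
    install_date TEXT
);
CREATE TABLE repair_record (
    record_id    INTEGER PRIMARY KEY,
    equipment_id INTEGER REFERENCES equipment(equipment_id),
    repair_date  TEXT,
    downtime_h   REAL,
    cost         REAL,
    description  TEXT
);
""")
con.execute("INSERT INTO equipment VALUES (1, 'CT', 'ExampleScan 64', '1997-04-01')")
con.execute("INSERT INTO repair_record VALUES (1, 1, '1998-06-12', 8.0, 250000, 'tube replacement')")

# Repairs and total cost per modality, as in the four-year review described above
query = """
    SELECT e.modality, COUNT(*) AS repairs, SUM(r.cost) AS total_cost
    FROM repair_record AS r JOIN equipment AS e USING (equipment_id)
    GROUP BY e.modality
"""
for row in con.execute(query):
    print(row)
```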

  12. Analysis of Surgical Pathology Data in the HIRA Database: Emphasis on Current Status and Endoscopic Submucosal Dissection Specimens

    Directory of Open Access Journals (Sweden)

    Sun-ju Byeon

    2016-05-01

    Background: In Korea, medical institutions make claims for insurance reimbursement to the Health Insurance Review and Assessment Service (HIRA). Thus, HIRA databases reflect the general medical services that are provided in Korea. We conducted two pathology-related studies using a HIRA national patient sample (NPS) dataset (selection probability, 0.03). First, we evaluated the current status of general pathologic examination in Korea. Second, we evaluated pathologic issues associated with endoscopic submucosal dissection (ESD). Methods: The sample data used in this study was HIRA-NPS-2013-0094. Results: In the NPS dataset, 163,372 pathologic examinations were performed in 103,528 patients during the year 2013. Considering the sampling weight (33.3), it is estimated that 5,440,288 (163,372 × 33.3) pathologic examinations were performed nationwide. Internal medicine and general surgery were the most common departments requesting pathologic examinations. The regions performing pathologic examinations differed according to the type of medical institution. In total, 490 patients underwent ESD, and 43.4% (213/490) underwent ESD due to gastric carcinoma. The results of the ESD led to a change in disease code for 10.5% (29/277) of non-gastric carcinoma patients. In addition, 21 patients (4.3%) underwent surgery following the ESD. The average period between ESD and surgery was 44 days. Conclusions: HIRA sample data provide a nation-wide picture of specific procedures. However, in order to reduce statistical error, further studies using the entire HIRA data are needed.
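
    The nation-wide estimate above is a simple scale-up by the sampling weight. A minimal sketch, assuming the figures quoted in the abstract and a hypothetical DataFrame layout:

```python
import pandas as pd

SAMPLING_WEIGHT = 33.3  # reciprocal of the 0.03 selection probability quoted above

sample_counts = pd.DataFrame({
    "measure":  ["pathologic examinations", "patients examined", "ESD patients"],
    "observed": [163_372, 103_528, 490],
})
sample_counts["national_estimate"] = (sample_counts["observed"] * SAMPLING_WEIGHT).round().astype(int)
print(sample_counts)  # e.g. 163,372 x 33.3 = 5,440,288 examinations nationwide
```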

  13. FoodMicrobionet: A database for the visualisation and exploration of food bacterial communities based on network analysis.

    Science.gov (United States)

    Parente, Eugenio; Cocolin, Luca; De Filippis, Francesca; Zotta, Teresa; Ferrocino, Ilario; O'Sullivan, Orla; Neviani, Erasmo; De Angelis, Maria; Cotter, Paul D; Ercolini, Danilo

    2016-02-16

    Amplicon targeted high-throughput sequencing has become a popular tool for the culture-independent analysis of microbial communities. Although the data obtained with this approach are portable and the number of sequences available in public databases is increasing, no tool has been developed yet for the analysis and presentation of data obtained in different studies. This work describes an approach for the development of a database for the rapid exploration and analysis of data on food microbial communities. Data from seventeen studies investigating the structure of bacterial communities in dairy, meat, sourdough and fermented vegetable products, obtained by 16S rRNA gene targeted high-throughput sequencing, were collated and analysed using Gephi, a network analysis software. The resulting database, which we named FoodMicrobionet, was used to analyse nodes and network properties and to build an interactive web-based visualisation. The latter allows the visual exploration of the relationships between Operational Taxonomic Units (OTUs) and samples and the identification of core- and sample-specific bacterial communities. It also provides additional search tools and hyperlinks for the rapid selection of food groups and OTUs and for rapid access to external resources (NCBI taxonomy, digital versions of the original articles). Microbial interaction network analysis was carried out using CoNet on datasets extracted from FoodMicrobionet: the complexity of interaction networks was much lower than that found for other bacterial communities (human microbiome, soil and other environments). This may reflect both a bias in the dataset (which was dominated by fermented foods and starter cultures) and the lower complexity of food bacterial communities. Although some technical challenges exist, and are discussed here, the net result is a valuable tool for the exploration of food bacterial communities by the scientific community and food industry. Copyright © 2015. Published by
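
    FoodMicrobionet itself is built and explored with Gephi, so the following is only an illustrative sketch of the same idea (a network linking samples to the OTUs detected in them, with edge weights as abundances) using networkx and made-up data:

```python
import networkx as nx

# Made-up sample-to-OTU relative abundances
abundances = {
    "cheese_01":    {"Lactococcus": 0.62, "Lactobacillus": 0.25, "Leuconostoc": 0.05},
    "sourdough_01": {"Lactobacillus": 0.70, "Acetobacter": 0.10},
}

G = nx.Graph()
for sample, otus in abundances.items():
    G.add_node(sample, kind="sample")
    for otu, abundance in otus.items():
        G.add_node(otu, kind="OTU")
        G.add_edge(sample, otu, weight=abundance)  # edge weight = relative abundance

# OTUs shared by more than one sample hint at a "core" community in this toy graph
core_otus = [n for n, d in G.nodes(data=True) if d["kind"] == "OTU" and G.degree(n) > 1]
print(core_otus)  # ['Lactobacillus']
```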

  14. Health technology management: a database analysis as support of technology managers in hospitals.

    Science.gov (United States)

    Miniati, Roberto; Dori, Fabrizio; Iadanza, Ernesto; Fregonara, Mario M; Gentili, Guido Biffi

    2011-01-01

    Technology management in healthcare must continually respond and adapt to new developments in medical equipment. Multidisciplinary approaches that consider the interaction of different technologies, their use and user skills are necessary in order to improve safety and quality. An easy and sustainable methodology is vital for Clinical Engineering (CE) services in healthcare organizations to define criteria for technology acquisition and replacement. This article highlights the critical aspects of technology management in hospitals by providing appropriate indicators for benchmarking CE services, derived exclusively from the maintenance database of the CE department at the Careggi Hospital in Florence, Italy.
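
    The abstract does not list the benchmarking indicators, so the sketch below is only a hypothetical example of the kind of figures a CE maintenance database can yield per device class (number of repairs, downtime, cost); all column names and values are invented:

```python
import pandas as pd

# Invented work orders: one row per repair event
work_orders = pd.DataFrame({
    "device_class": ["CT", "CT", "MRI", "Ultrasound"],
    "downtime_h":   [6.0, 14.5, 30.0, 2.5],
    "cost_eur":     [1200, 3400, 8800, 150],
})

# Per-class indicators: number of repairs, total and mean downtime, total cost
indicators = work_orders.groupby("device_class").agg(
    repairs=("downtime_h", "size"),
    total_downtime_h=("downtime_h", "sum"),
    mean_downtime_h=("downtime_h", "mean"),
    total_cost_eur=("cost_eur", "sum"),
)
print(indicators)
```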

  15. A New Breed of Database System: Volcano Global Risk Identification and Analysis Project (VOGRIPA)

    Science.gov (United States)

    Crosweller, H. S.; Sparks, R. S.; Siebert, L.

    2009-12-01

    VOGRIPA originated as part of the Global Risk Identification Programme (GRIP) that is being co-ordinated from the Earth Institute of Columbia University under the auspices of the United Nations and World Bank. GRIP is a five-year programme aiming at improving global knowledge about risk from natural hazards and is part of the international response to the catastrophic 2004 Asian tsunami. VOGRIPA is also a formal IAVCEI project. The objectives of VOGRIPA are to create a global database of volcanic activity, hazards and vulnerability information that can be analysed to identify locations at high risk from volcanism, gaps in knowledge about hazards and risk, and will allow scientists and disaster managers at specific locations to analyse risk within a global context of systematic information. It is this added scope of risk and vulnerability as well as hazard which sets VOGRIPA apart from most previous databases. The University of Bristol is the central coordinating centre for the project, which is an international partnership including the Smithsonian Institution, the Geological Survey of Japan, the Earth Observatory of Singapore (Chris Newhall), the British Geological Survey, the University of Buffalo (SUNY) and Munich Re. The partnership is intended to grow and any individuals or institutions who are able to contribute resources to VOGRIPA objectives are welcome to participate. Work has already begun (funded principally by Munich Re) on populating a database of large magnitude explosive eruptions reaching back to the Quaternary, with extreme-value statistics being used to evaluate the magnitude-frequency relationship of such events, and also an assessment of how the quality of records affect the results. The following 4 years of funding from the European Research Council for VOGRIPA will be used to establish further international collaborations in orde
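
    The extreme-value analysis mentioned above is not specified in detail, so the following is a hedged sketch of one standard approach, a peaks-over-threshold fit with a generalized Pareto distribution via scipy; the eruption magnitudes are synthetic, not VOGRIPA data:

```python
import numpy as np
from scipy.stats import genpareto

# Synthetic eruption magnitudes above a baseline of M 4 (placeholder data)
magnitudes = 4.0 + genpareto.rvs(c=0.1, scale=0.8, size=500, random_state=42)

# Peaks-over-threshold: fit a generalized Pareto distribution to the exceedances
threshold = 4.0
exceedances = magnitudes[magnitudes > threshold] - threshold
shape, loc, scale = genpareto.fit(exceedances, floc=0.0)

# Conditional probability that an eruption exceeding the threshold reaches M >= 6
p_m6 = genpareto.sf(6.0 - threshold, shape, loc=loc, scale=scale)
print(f"shape={shape:.2f}, scale={scale:.2f}, P(M>=6 | M>4)={p_m6:.3f}")
```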