Steeg, L. van de; Langelaan, M.; Wagner, C.
Objective: To develop and validate a predictive model for preventable adverse events (AEs) in hospitalized older patients, using clinically important risk factors that are readily available on admission. Design: Data from two retrospective patient record review studies on AEs were used. Risk factors
Çanga, Aytun; Durakoğlugil, Murtaza Emre; Erdoğan, Turan; Kirbaş, Aynur; Yilmaz, Adnan; Çiçek, Yüksel; Ergül, Elif; Çetin, Mustafa; Kocaman, Sinan Altan
The aim of our study was to investigate whether osteoprotegerin (OPG) is related to in-hospital major adverse cardiac events (MACE) and reperfusion parameters in patients with ST elevation myocardial infarction (STEMI). The OPG/receptor activator of nuclear factor-κB (RANK)/RANK ligand pathway has recently been associated with atherosclerosis. OPG is a predictor of cardiovascular events in patients with acute coronary syndrome. This study included 96 consecutive patients with STEMI undergoing primary percutaneous coronary intervention (PCI). Two groups with an equal number of patients were formed according to the median OPG level. The association of admission OPG levels with post-procedural reperfusion parameters and in-hospital MACE was investigated. Patients with higher OPG levels displayed a higher neutrophil/lymphocyte ratio, admission troponin, admission glucose, and high-sensitivity C-reactive protein. Higher OPG levels were associated with increased thrombolysis in myocardial infarction (TIMI) risk score, TIMI risk index, pain-to-balloon time, need for inotropic support, shock, and MACE, mainly driven by death. Reperfusion parameters were not different between the two groups. TIMI risk score, TIMI risk index, myocardial blush grade, estimated glomerular filtration rate (eGFR), number of obstructed vessels, and OPG significantly predicted adverse cardiac events. Multiple logistic regression analysis revealed OPG as an independent predictor of MACE, as well as eGFR, number of obstructed vessels, and corrected TIMI frame count. OPG, a bidirectional molecule displaying both atheroprotective and pro-atherosclerotic properties, is currently known as a marker of inflammation and a predictor of cardiovascular mortality. The present study, for the first time, demonstrated that an increased OPG level is related to in-hospital adverse cardiovascular events after primary PCI in patients with STEMI. Copyright © 2012 Japanese College of Cardiology. Published by Elsevier Ltd.
Nakamura, Shunichi; Kato, Koji; Yoshida, Asuka; Fukuma, Nagaharu; Okumura, Yasuyuki; Ito, Hiroto; Mizuno, Kyoichi
Although attention has recently been focused on the role of psychosocial factors in patients with cardiovascular disease (CVD), the factors that have the greatest influence on prognosis have not yet been elucidated. The aim of this study was to evaluate the effects of depression, anxiety, and anger on the prognosis of patients with CVD. Four hundred fourteen consecutive patients hospitalized with CVD were prospectively enrolled. Depression was evaluated using the Patient Health Questionnaire, anxiety using the Generalized Anxiety Disorder Questionnaire, and anger using the Spielberger Trait Anger Scale. Cox proportional-hazards regression was used to examine the individual effects of depression, anxiety, and anger on a combined primary end point of cardiac death or cardiac hospitalization and on a combined secondary end point of all-cause death or hospitalization during follow-up (median 14.2 months). Multivariate analysis showed that depression was a significant risk factor for cardiovascular hospitalization or death after adjusting for cardiac risk factors and other psychosocial factors (hazard ratio 2.62, p = 0.02), whereas anxiety was not significantly associated with cardiovascular hospitalization or death after adjustment (hazard ratio 2.35, p = 0.10). Anger was associated with a low rate of cardiovascular hospitalization or death (hazard ratio 0.34). In conclusion, depression in hospitalized patients with CVD is a stronger independent risk factor for adverse cardiac events than either anxiety or anger. Anger may help prevent adverse outcomes. Routine screening for depression should therefore be performed in patients with CVD, and the potential effects of anger in clinical practice should be reconsidered. Copyright © 2013 Elsevier Inc. All rights reserved.
Pavão Ana Luiza B
Full Text Available Abstract Background Adverse events are considered a major international problem related to the performance of health systems. Evaluating the occurrence of adverse events involves, as with any other outcome measure, determining the extent to which the observed differences can be attributed to the patient's risk factors or to variations in the treatment process, and this in turn highlights the importance of measuring differences in the severity of the cases. The current study aims to evaluate the association between deaths and adverse events, adjusted according to patient risk factors. Methods The study is based on a random sample of 1103 patient charts from hospitalizations in the year 2003 in 3 teaching hospitals in the state of Rio de Janeiro, Brazil. The methodology involved a retrospective review of patient charts in two stages - a screening phase and an evaluation phase. Logistic regression was used to evaluate the relationship between hospital deaths and adverse events. Results The overall mortality rate was 8.5%, while the rate related to the occurrence of an adverse event was 2.9% (32/1103) and that related to preventable adverse events was 2.3% (25/1103). Among the 94 deaths analyzed, 34% were related to cases involving adverse events, and 26.6% of deaths occurred in cases whose adverse events were considered preventable. The models tested showed good discriminatory capacity. The unadjusted odds ratio (OR 11.43) and the odds ratio adjusted for patient risk factors (OR 8.23) between death and preventable adverse events were high. Conclusions Despite discussions in the literature regarding the limitations of evaluating preventable adverse events based on peer review, the results presented here emphasize that adverse events are not only prevalent, but are associated with serious harm and even death. These results also highlight the importance of risk adjustment and multivariate models in the study of adverse events.
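The unadjusted odds ratio reported above comes directly from a 2×2 table of death against preventable adverse event. A minimal sketch in Python with a Woolf-type 95% confidence interval; the counts used below are illustrative only, not the study's data:

```python
import math

def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio for a 2x2 table:
    a = deaths with event, b = survivors with event,
    c = deaths without event, d = survivors without event."""
    or_ = (a * d) / (b * c)
    # 95% CI on the log-odds scale (Woolf method)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Illustrative counts only (not taken from the study)
or_, ci = odds_ratio(a=20, b=30, c=74, d=979)
print(round(or_, 2), tuple(round(x, 2) for x in ci))
```

Adjusted odds ratios like the reported OR 8.23 instead come from a multivariate logistic regression over the patient risk factors, not from a single table.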
Full Text Available Abstract Detecting adverse events has become one of the challenges in patient safety, so methods to detect adverse events are critical for improving patient safety. The purpose of this paper is to compare the strengths and weaknesses of several methods of identifying adverse events in hospital, including medical record reviews, self-reported incidents, information technology, and patient self-reports. This study is a literature review comparing and analysing these methods to determine the best approach for hospitals to implement. All four methods have proven able to detect adverse events in hospitals, but each has strengths and limitations to be overcome. There is no single 'best' method that will give the best results for detecting adverse events in hospital. Thus, to detect more adverse events that could have been prevented, or that have already occurred, hospitals should combine more than one detection method, since each method has a different sensitivity.
Andréia Guerra Siman
Full Text Available Abstract OBJECTIVE To understand the practice of reporting adverse events by health professionals. METHOD A qualitative case study carried out in a teaching hospital with participants from the Patient Safety Center and the nursing team. Data collection took place from May to December 2015 through interviews, observation and documentary research, with the data treated using Content Analysis. RESULTS 31 professionals participated in the study. Three categories were elaborated: The practice of reporting adverse events; Barriers to the effective practice of notification; The importance of reporting adverse events. CONCLUSION Notification was permeated by gaps in knowledge, fear of punishment and informal communication, generating underreporting. It is necessary to improve the interaction between leaders and professionals, with an emphasis on communication and educational practice.
This thesis aims to assess trends in adverse event and preventable adverse event rates in hospitals in the Netherlands over the period 2004–2012. Furthermore, patient safety for specific care processes and patient groups is assessed. Patient safety has been high on the international agenda
Aranaz-Andrés, J M; Aibar-Remón, C; Limón-Ramírez, R; Amarilla, A; Restrepo, F R; Urroz, O; Sarabia, O; Inga, R; Santivañez, A; Gonseth-García, J; Larizgoitia-Jauregui, I; Agra-Varela, Y; Terol-García, E
To describe the methodological characteristics of the IBEAS study of adverse event prevalence in Latin American hospitals, with the aims of analysing the magnitude, significance and impact of adverse events (AE); identifying the main patient safety problems associated with AE; increasing the capacity of professionals involved in patient safety; and setting up patient safety agendas in the participating countries. A patient safety study was launched in 35 Latin American hospitals through the analysis of AE in 5 countries: Argentina, Colombia, Costa Rica, Mexico and Peru, using a cross-sectional design with review of clinical records as the main method. The implications of using a cross-sectional design when studying AE are described in terms of resources required, internal validity and usefulness for risk management. The cross-sectional design seems an efficient methodology in terms of time and resources spent, as well as being easy to carry out. Although the cross-sectional design does not review all hospital episodes, it is able to provide a reliable estimate of prevalence and to support a surveillance system. Because of a possible survival bias, it is likely that the AE which led to hospital admissions will be overestimated, as will healthcare-related infections or those adverse events which are difficult to identify if the patient is not examined (e.g. contusions). Communication with the ward staff (if the patient is still hospitalised) helps in establishing causality and preventability. Copyright © 2010 SECA. Published by Elsevier Espana. All rights reserved.
Gallelli, Luca; Siniscalchi, Antonio; Palleria, Caterina; Mumoli, Laura; Staltari, Orietta; Squillace, Aida; Maida, Francesca; Russo, Emilio; Gratteri, Santo; De Sarro, Giovambattista
Drug treatment may be related to the development of adverse drug reactions (ADRs). In this paper, we evaluated the ADRs in patients admitted to Catanzaro Hospital. After obtaining approval from the local Ethics Committee, we performed a retrospective study on clinical records from March 01, 2013 to April 30, 2015. The association between drug and ADR, or between drug and drug-drug interactions (DDIs), was evaluated using the Naranjo probability scale and the Drug Interaction Probability Scale (DIPS), respectively. During the study period, we analyzed 2870 clinical records containing a total of 11,138 prescriptions, and we documented the development of 770 ADRs. The time of hospitalization was significantly longer (P<0.05) in women with ADRs (12.6 ± 1.2 days) than in men (11.8 ± 0.83 days). Using the Naranjo score, we documented a probable association in 78% of these reactions, while DIPS revealed that about 22% of ADRs were related to DDIs. Patients with ADRs received 3052 of the 11,138 prescriptions (27.4%), with a mean of 6.1 ± 0.29 drugs, significantly higher (P<0.01) than in patients not experiencing ADRs (mean of 3.4 ± 0.13 drugs). About 19% of ADRs were not diagnosed and were treated as new diseases. Our results indicate that drug administration also induces the development of ADRs during hospitalization, particularly in elderly women. Moreover, we documented that ADRs in some patients are under-diagnosed; it is therefore important to motivate healthcare professionals to report ADRs in order to optimize patient safety. Copyright© Bentham Science Publishers; For any queries, please email at email@example.com.
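The Naranjo assessment referenced above sums the answers to ten weighted questions and maps the total to a causality category. A small sketch of that final mapping step, using the standard cutoffs (≥9 definite, 5–8 probable, 1–4 possible, ≤0 doubtful):

```python
def naranjo_category(score):
    """Map a total Naranjo score to its causality category
    (standard cutoffs: >=9 definite, 5-8 probable, 1-4 possible, <=0 doubtful)."""
    if score >= 9:
        return "definite"
    if score >= 5:
        return "probable"
    if score >= 1:
        return "possible"
    return "doubtful"

print(naranjo_category(6))  # → probable
```

The "probable association in 78% of these reactions" above corresponds to totals landing in the 5–8 band.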
Ryu, J-H; Kim, E-Y
Objective: To determine and analyse the characteristics of contrast media adverse reactions (CM-ARs) reported in a hospital. Methods: A retrospective review of CM-ARs from the electronic spontaneous adverse drug reaction (ADR) report system between January 2011 and August 2012 was conducted. CM-ARs were evaluated in terms of causality, severity, preventability and affected organs. Agreement and correlation among the tools used to evaluate CM-ARs were also analysed. Results: The overall reaction rate was 1.5% (n = 286). In total, 269 CM-ARs were identified. For ADR causality, 96.7% (n = 260) and 98.5% (n = 265) were evaluated as “probable” using the Naranjo probability scale and the World Health Organization–Uppsala Monitoring Centre causality categories, respectively, whereas 98.1% (n = 264) were evaluated as “certain” with the Korean algorithm v. II. Of these, 91.4% (n = 246) were mild in severity and 96.7% (n = 260) were unpreventable. Most patients (n = 233, 86.7%) could be managed with observation and/or simple treatment. The most frequent reactions (n = 383, 79.5%) were dermatological. Spearman's correlation coefficient was 0.667 (p < 0.01), and the agreement was 98.1% between the Naranjo scale and the World Health Organization–Uppsala Monitoring Centre categories. No relationship was seen between CM-AR severity and gender, or between in- and outpatients. Conclusion: In our study, most CM-ARs were mild and managed with simple treatment. However, as the number of patients undergoing CT procedures continues to increase, it is essential to identify and observe patients at risk for CM-ARs to prevent severe ADRs. Advances in knowledge: Continuous careful review of reporting and treatment protocols for CM-ARs is needed to prevent morbidity and mortality. PMID:24191123
Tamma, Pranita D; Avdic, Edina; Li, David X; Dzintars, Kathryn; Cosgrove, Sara E
Estimates of the incidence of overall antibiotic-associated adverse drug events (ADEs) in hospitalized patients are generally unavailable. To describe the incidence of antibiotic-associated ADEs for adult inpatients receiving systemic antibiotic therapy. Retrospective cohort of adult inpatients admitted to general medicine wards at an academic medical center. At least 24 hours of any parenteral or oral antibiotic therapy. Medical records of 1488 patients were examined for 30 days after antibiotic initiation for the development of the following antibiotic-associated ADEs: gastrointestinal, dermatologic, musculoskeletal, hematologic, hepatobiliary, renal, cardiac, and neurologic; and 90 days for the development of Clostridium difficile infection or incident multidrug-resistant organism infection, based on adjudication by 2 infectious diseases trained clinicians. In 1488 patients, the median age was 59 years (interquartile range, 49-69 years), and 758 (51%) participants were female. A total of 298 (20%) patients experienced at least 1 antibiotic-associated ADE. Furthermore, 56 (20%) non-clinically indicated antibiotic regimens were associated with an ADE, including 7 cases of C difficile infection. Every additional 10 days of antibiotic therapy conferred a 3% increased risk of an ADE. The most common ADEs were gastrointestinal, renal, and hematologic abnormalities, accounting for 78 (42%), 45 (24%), and 28 (15%) 30-day ADEs, respectively. Notable differences were identified between the incidence of ADEs associated with specific antibiotics. Although antibiotics may play a critical role when used appropriately, our findings underscore the importance of judicious antibiotic prescribing to reduce the harm that can result from antibiotic-associated ADEs.
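The reported "3% increased risk per additional 10 days of antibiotic therapy" compounds over longer courses. A small illustration, under the assumption that the increase is multiplicative per 10-day increment (the study reports only the per-10-day figure):

```python
def relative_risk_increase(extra_days, per_10_days=0.03):
    """Cumulative ADE risk multiplier if each additional 10 days of
    antibiotic therapy raises risk by 3%. Treating the increase as
    multiplicative per 10-day increment is an assumption; the study
    reports only the per-10-day figure."""
    return (1 + per_10_days) ** (extra_days / 10)

# 30 extra days of therapy under this assumption:
print(round(relative_risk_increase(30), 3))  # → 1.093
```

Under this reading, a course extended by 30 days carries roughly a 9% higher ADE risk than the baseline course, which is why the authors emphasize judicious prescribing.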
Gerdes, Lars Ulrik; Hardahl, Christian
Manual reviews of health records to identify possible adverse events are time consuming. We are developing a method based on natural language processing to quickly search electronic health records for common triggers and adverse events. Our results agree fairly well with those obtained using manual review.
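A trigger-based search like the one described can start as simple pattern matching over note text. A minimal sketch; the trigger terms below are hypothetical examples, and production tools add negation handling, context windows, and curated trigger vocabularies:

```python
import re

# Hypothetical trigger terms; real trigger tools (e.g. the IHI Global
# Trigger Tool) use curated vocabularies, not this toy list.
TRIGGERS = {
    "fall": r"\b(fall|falls|fell|fallen)\b",
    "pressure ulcer": r"\bpressure\s+ulcer\b",
    "naloxone": r"\bnaloxone\b",  # antidote given suggests an opioid ADE
}

def find_triggers(note: str):
    """Return the trigger labels whose pattern matches the note text."""
    text = note.lower()
    return [label for label, pat in TRIGGERS.items() if re.search(pat, text)]

note = "Patient fell overnight; naloxone given after oversedation."
print(find_triggers(note))  # → ['fall', 'naloxone']
```

Flagged notes would then go to a human reviewer, which is how a keyword pass saves time without replacing manual adjudication.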
Katharina Hauck; Xueyan Zhao; Terri Jackson
We compare adverse event rates for surgical inpatients across 36 public hospitals in the state of Victoria, Australia, conditioning on differences in patient complexity across hospitals. We estimate separate models for elective and emergency patients who stay at least one night in hospital, using fixed-effects complementary log-log models to estimate AEs as a function of patient and episode characteristics and hospital effects. We use 4 years of patient-level administrative hospital data ...
Mahoney, Jane E; Webb, Melissa J; Gray, Shelly L
Zolpidem is prescribed for sleep disruption in hospitalized patients, but data on the incidence of adverse drug reactions (ADRs) are based largely on outpatient studies; the incidence of ADRs in hospitalized patients may be much higher. The goal of this study was to describe prescribing patterns of zolpidem for hospitalized medical patients aged ≥50 years, the incidence of ADRs possibly and probably associated with its use, and the factors associated with central nervous system (CNS) ADRs. This case series was conducted in 4 general medicine wards at a Veterans Affairs hospital and was a consecutive sample of patients aged ≥50 years who were hospitalized between 1993 and 1997 and received zolpidem as a hypnotic during hospitalization, but had not received it in the previous 3 months. Chart review was conducted by 2 evaluators. Data extracted from the medical records included admission demographic characteristics, medications, comorbidities, and levels of function in performing basic and instrumental activities of daily living. The main outcome measure was ADRs possibly or probably related to zolpidem use. The association between zolpidem and the occurrence of CNS ADRs (eg, confusion, dizziness, daytime somnolence) was analyzed separately. The review included 119 medical patients aged ≥50 years who had newly received zolpidem for sleep disruption during hospitalization. The median age of the population was 70 years; 86 (72.3%) patients were aged ≥65 years. The initial zolpidem dose was 5 mg in 42 patients (35.3%) and 10 mg in 77 patients (64.7%). Twenty-three patients had a total of 26 ADRs, 16 possibly and 10 probably related to zolpidem use (19.3% incidence). Of these 26 ADRs, 21 (80.8%) were CNS ADRs, occurring with both zolpidem 5 mg (10.8% of users) and 10 mg (18.3% of users). On univariate analyses, the only factor significantly associated with a CNS ADR was functional impairment at baseline (P = 0.003). Zolpidem was discontinued in 38.8% of
Many people associate hospital treatment with ‘getting better’, the restoration to health and normal life. The onset of a life-threatening disease such as cancer, however, can transform the hospital into a place of constant struggle and suffering. Hospitalisation in this sense coincides with the
Lane, J C; Wright, S; Burch, J; Kennedy, R H; Jenkins, J T
Early identification of patients experiencing postoperative complications is imperative for successful management. C-reactive protein (CRP) is a nonspecific marker of inflammation used in many specialties to monitor patient condition. The role of CRP measurement early in the elective postoperative colorectal patient is unclear, particularly in the context of enhanced recovery (ERAS). Five hundred and thirty-three consecutive patients who underwent elective colorectal surgery between October 2008 and October 2010 within an established ERAS programme were studied. Patients were separated into a development group of 265 patients and a validation group of 268 patients by chronological order. CRP and white cell count were added to a prospectively maintained ERAS database. The primary outcome of the study was all adverse events (including infective complications, postoperative organ dysfunction and prolonged length of stay) during the initial hospital admission. Significant predictors for adverse events on univariate analysis were submitted to multivariate regression analysis and the resulting model applied to the validation group. The validity and predictive accuracy of the regression model was assessed using receiver operating characteristic curve/area under the curve (AUC) analysis. CRP levels >150 mg/l on postoperative day 2 and a rising CRP on day 3 were independently associated with all adverse events during the hospital admission. A weighted model was applied to the validation group yielding an AUC of 0.65 (95% CI 0.58-0.73) indicating, at best, modest discrimination and predictive accuracy for adverse events. Measurement of CRP in patients after elective colorectal surgery in the first few days after surgery within ERAS can assist in identifying those at risk of adverse events and a prolonged hospital stay. A CRP value of >150 mg/l on day 2 and a rising CRP on day 3 should alert the surgeon to an increased likelihood of such events. © 2012 The Authors
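The AUC reported for the validation group is the probability that a randomly chosen patient with an adverse event has a higher predictor value than one without. A minimal rank-comparison implementation; the CRP values below are illustrative, not the study's data:

```python
def auc(scores_pos, scores_neg):
    """AUC = probability that a randomly chosen case with an adverse
    event scores higher than one without (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Illustrative day-2 CRP values (mg/l), not the study's data
crp_with_event = [180, 210, 140, 250]
crp_no_event = [90, 160, 70, 110, 130]
print(round(auc(crp_with_event, crp_no_event), 2))  # → 0.95
```

An AUC of 0.5 means no discrimination and 1.0 means perfect separation, which puts the study's 0.65 in the "modest discrimination" range the authors describe.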
Faucon, C; Brillac, T
To assess the safety of planned home birth compared with hospital birth in low-risk pregnancies, an international literature review was conducted. Mortality, adverse outcomes and medical interventions were compared. Home birth was not associated with higher mortality rates, but with lower rates of maternal adverse outcomes. Perinatal adverse outcomes are not significantly different at home and in hospital. Medical interventions are more frequent in hospital births. Home birth attended by a well-trained midwife is not associated with increased mortality and morbidity rates, but is associated with fewer medical interventions. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Hammann, F; Gutmann, H; Vogt, N; Helma, C; Drewe, J
Drug safety is of great importance to public health. The detrimental effects of drugs not only limit their application but also cause suffering in individual patients and evoke distrust of pharmacotherapy. For the purpose of identifying drugs that could be suspected of causing adverse reactions, we present a structure-activity relationship analysis of adverse drug reactions (ADRs) in the central nervous system (CNS), liver, and kidney, and also of allergic reactions, for a broad variety of drugs (n = 507) from the Swiss drug registry. Using decision tree induction, a machine learning method, we determined the chemical, physical, and structural properties of compounds that predispose them to causing ADRs. The models had high predictive accuracies (78.9-90.2%) for allergic, renal, CNS, and hepatic ADRs. We show the feasibility of predicting complex end-organ effects using simple models that involve no expensive computations and that can be used (i) in the selection of the compound during the drug discovery stage, (ii) to understand how drugs interact with the target organ systems, and (iii) for generating alerts in postmarketing drug surveillance and pharmacovigilance.
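Decision tree induction of the kind used above repeatedly picks the descriptor threshold that most reduces class impurity. A minimal sketch of that core split-selection step using Gini impurity; the descriptor values and ADR labels below are invented for illustration:

```python
def gini(labels):
    """Gini impurity of a list of binary class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n  # fraction labelled 1 (ADR-positive)
    return 2 * p * (1 - p)

def best_split(xs, ys):
    """Find the threshold on one numeric descriptor that minimises the
    weighted Gini impurity of the two child nodes - the core step
    repeated at every node of decision tree induction."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        w = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if w < best[1]:
            best = (t, w)
    return best

# Hypothetical descriptor (e.g. lipophilicity) vs. ADR label (1 = ADR seen)
descriptor = [0.5, 1.2, 2.8, 3.1, 3.9, 4.5]
adr_label  = [0,   0,   0,   1,   1,   1]
print(best_split(descriptor, adr_label))  # → (2.8, 0.0)
```

A full tree applies this search recursively across all chemical, physical, and structural descriptors, which is what yields the interpretable rules the authors exploit for alerts.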
Evanthia E. Tripoliti
Full Text Available Heart failure is a serious condition with high prevalence (about 2% in the adult population in developed countries, and more than 8% in patients older than 75 years). About 3–5% of hospital admissions are linked with heart failure incidents. Heart failure is the leading cause of admission that healthcare professionals encounter in clinical practice. The costs are very high, reaching up to 2% of total health costs in developed countries. Building an effective disease management strategy requires analysis of a large amount of data, early detection of the disease, assessment of its severity and early prediction of adverse events. This will inhibit the progression of the disease, improve the quality of life of patients and reduce the associated medical costs. To this end, machine learning techniques have been employed. The aim of this paper is to present the state of the art of machine learning methodologies applied for the assessment of heart failure. More specifically, models predicting the presence of heart failure, estimating its subtype, assessing its severity, and predicting the presence of adverse events, such as destabilizations, re-hospitalizations, and mortality, are presented. To the authors' knowledge, it is the first time that such a comprehensive review, focusing on all aspects of the management of heart failure, is presented. Keywords: Heart failure, Diagnosis, Prediction, Severity estimation, Classification, Data mining
Buyuktiryaki, A Betul; Civelek, Ersoy; Can, Demet; Orhan, Fazıl; Aydogan, Metin; Reisli, Ismail; Keskin, Ozlem; Akcay, Ahmet; Yazicioglu, Mehtap; Cokugras, Haluk; Yuksel, Hasan; Zeyrek, Dost; Kocak, A Kadir; Sekerel, Bulent E
Acute asthma is one of the most common medical emergencies in children. Appropriate assessment/treatment and early identification of factors that predict hospitalization are critical for the effective utilization of emergency services. To identify risk factors that predict hospitalization and to compare the concordance of the Modified Pulmonary Index Score (MPIS) with the Global Initiative for Asthma (GINA) guideline criteria in terms of attack severity. The study population was composed of children aged 5-18 years who presented to the Emergency Departments (ED) of the tertiary reference centers of the country within a period of 3 months. Patients were evaluated at the initial presentation and at the 1st and 4th hours. Of the 304 patients (median age: 8.0 years [interquartile range: 6.5-9.7]), 51.3% and 19.4% required oral corticosteroids (OCS) and hospitalization, respectively. Attack severity and MPIS were found as predicting factors for hospitalization, but none of the demographic characteristics collected predicted OCS use or hospitalization. Hospitalization status at the 1st hour with moderate/severe attack severity showed a sensitivity of 44.1%, specificity of 82.9%, positive predictive value of 38.2%, and negative predictive value of 86.0%; for MPIS ≥ 5, these values were 42.4%, 85.3%, 41.0%, and 86.0%, respectively. Concordance in prediction of hospitalization between the MPIS and the GINA guideline was found to be moderate at the 1st hour (κ = 0.577). Attack severity is a predictive factor for hospitalization in children with acute asthma. Determining attack severity with MPIS and a cut-off value ≥ 5 at the 1st hour may help physicians in EDs. Having fewer variables and the ability to calculate a numeric value with MPIS makes it an easy and useful tool in clinical practice. Copyright © 2013 Elsevier Inc. All rights reserved.
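Sensitivity, specificity, PPV and NPV such as those reported for MPIS ≥ 5 all derive from one 2×2 table of predictor versus outcome. A minimal sketch; the counts below are a reconstruction chosen to approximately reproduce the reported MPIS figures, not the study's raw table:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from the 2x2 counts of a
    predictor (e.g. MPIS >= 5) against actual hospitalization."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Reconstructed counts (not the study's raw table): 59 hospitalized and
# 245 not, chosen to approximate the reported MPIS >= 5 figures.
m = diagnostic_metrics(tp=25, fp=36, fn=34, tn=209)
print({k: round(v, 3) for k, v in m.items()})
```

Note that sensitivity and specificity are properties of the test alone, while PPV and NPV also depend on the 19.4% hospitalization prevalence in this sample.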
Allué, Natalia; Chiarello, Pietro; Bernal Delgado, Enrique; Castells, Xavier; Giraldo, Priscila; Martínez, Natalia; Sarsanedas, Eugenia; Cots, Francesc
To evaluate the incidence and costs of adverse events registered in an administrative dataset in Spanish hospitals from 2008 to 2010. A retrospective study was carried out that estimated the incremental cost per episode, depending on the presence of adverse events. Costs were obtained from the database of the Spanish Network of Hospital Costs. This database contains data from 12 hospitals that have costs per patient records based on activities and clinical records. Adverse events were identified through the Patient Safety Indicators (validated in the Spanish Health System) created by the Agency for Healthcare Research and Quality together with indicators of the EuroDRG European project. This study included 245,320 episodes with a total cost of 1,308,791,871€. Approximately 17,000 patients (6.8%) experienced an adverse event, representing 16.2% of the total cost. Adverse events, adjusted by diagnosis-related groups, added a mean incremental cost of between €5,260 and €11,905. Six of the 10 adverse events with the highest incremental cost were related to surgical interventions. The total incremental cost of adverse events was € 88,268,906, amounting to an additional 6.7% of total health expenditure. Assessment of the impact of adverse events revealed that these episodes represent significant costs that could be reduced by improving the quality and safety of the Spanish Health System. Copyright © 2013 SESPAS. Published by Elsevier Espana. All rights reserved.
Gutiérrez-Mendoza, Luis Meave; Torres-Montes, Abraham; Soria-Orozco, Manuel; Padrón-Salas, Aldanely; Ramírez-Hernández, María Elizabeth
Serious adverse events during hospital care are a worldwide reality and threaten the safety of the hospitalised patient. To identify serious adverse events related to healthcare and direct hospital costs in a Teaching Hospital in México. A study was conducted in a 250-bed Teaching Hospital in San Luis Potosi, Mexico. Data were obtained from the Quality and Patient Safety Department based on 2012 incidents report. Every event was reviewed and analysed by an expert team using the "fish bone" tool. The costs were calculated since the event took place until discharge or death of the patient. A total of 34 serious adverse events were identified. The average cost was $117,440.89 Mexican pesos (approx. €7,000). The great majority (82.35%) were largely preventable and related to the process of care. Undergraduate medical staff were involved in 58.82%, and 14.7% of patients had suffered adverse events in other hospitals. Serious adverse events in a Teaching Hospital setting need to be analysed to learn and deploy interventions to prevent and improve patient safety. The direct costs of these events are similar to those reported in developed countries. Copyright © 2015 Academia Mexicana de Cirugía A.C. Published by Masson Doyma México S.A. All rights reserved.
Latha, S; Choon, S E
Cutaneous adverse drug reactions (cADRs) are common, but there are only a few studies on the incidence of cADRs in Malaysia. To determine the incidence, clinical features and risk factors of cADRs among hospitalized patients, a prospective study was conducted among medical inpatients from July to December 2014. A total of 43 cADRs were seen among 11 017 inpatients, yielding an incidence rate of 0.4%. cADRs accounted for hospitalization in 26 patients. A previous history of cADR was present in 14 patients, with 50% exposed to the same drug taken previously. Potentially life-threatening severe cutaneous adverse reactions (SCAR), namely drug reaction with eosinophilia and systemic symptoms (DRESS: 14 cases) and Stevens-Johnson Syndrome/Toxic Epidermal Necrolysis (SJS/TEN: 6 cases), comprised almost 50% of cADRs. The commonest culprit drug group was antibiotics (37.2%), followed by anticonvulsants (18.6%). Cotrimoxazole, phenytoin and rifampicin were the main causative drugs for DRESS. Anticonvulsants were most frequently implicated in SJS/TEN (66.7%). Most cases had a "probable" causality relationship with the suspected drug (69.8%). The majority of cases were of moderate severity (65.1%), while 18.6% had a severe reaction, with 1 death recorded. Most cases were not preventable (76.7%). Older age (>60 years) and mucosal involvement were significantly associated with a more severe reaction. In conclusion, the incidence of cADRs was 0.4%, with most cases classified as moderate severity and not preventable. The commonest reaction pattern was DRESS, while the main culprit drug group was antibiotics. Older age and mucous membrane involvement predict a severe drug reaction.
Stacy Ackroyd-Stolarz,1,2 Susan K Bowles,3–5 Lorri Giffin6 1Performance Excellence Portfolio, Capital District Health Authority, Halifax, Nova Scotia, Canada; 2Department of Emergency Medicine, Dalhousie University, Halifax, Nova Scotia, Canada; 3Geriatric Medicine, Capital District Health Authority, Halifax, Nova Scotia, Canada; 4College of Pharmacy and Division of Geriatric Medicine, Dalhousie University, Halifax, Nova Scotia, Canada; 5Department of Pharmacy at Capital District Health Authority, Halifax, Nova Scotia, Canada; 6South Shore Family Health, Bridgewater, Nova Scotia, Canada Abstract: Older hospitalized patients are at risk of experiencing adverse events including, but not limited to, hospital-acquired pressure ulcers, fall-related injuries, and adverse drug events. A significant challenge in monitoring and managing adverse events is the lack of readily accessible information on their occurrence. Purpose: The objective of this retrospective cross-sectional study was to validate diagnostic codes for pressure ulcers, fall-related injuries, and adverse drug events found in routinely collected administrative hospitalization data. Methods: All patients 65 years of age or older discharged between April 1, 2009 and March 31, 2011 from a provincial academic health sciences center in Canada were eligible for inclusion in the validation study. For each of the three types of adverse events, a random sample of 50 patients whose records were positive and 50 patients whose records were not positive for an adverse event was sought for review in the validation study (n=300 records in total). A structured health record review was performed independently by two health care providers with experience in geriatrics, both of whom were unaware of the patient's status with respect to adverse event coding. A physician reviewed 40 records (20 reviewed by each health care provider) to establish interrater agreement. Results: A total of 39 pressure ulcers, 56 fall
Borch, Jakob E; Andersen, Klaus E; Bindslev-Jensen, Carsten
Patients with suspected cutaneous adverse drug reactions are often referred to allergy clinics or departments of dermatology for evaluation. These patients are a selected group compared with patients identified in prospective and cross-sectional studies of hospital populations, which explains the observed variation in the prevalence of specific reactions and of eliciting drugs. This study investigated the prevalence of cutaneous adverse drug reactions in a university hospital department of dermatology that is specially focused on allergy. An 8-month survey was carried out during the period April-December 2003...
Shende, Mulchand; Gawali, Sneha; Bhongade, Kanchan; Bhuskade, Vivek; Nandgaonkar, Abhijit
Snake bite is a common problem in rural and periurban areas, and a neglected, frequently devastating environmental and occupational disease, especially in rural areas of tropical developing countries. This study aimed to investigate the adverse drug reaction profile of anti-snake venom (ASV) in a district general hospital. An observational study was conducted in the hospital over six months. A total of 142 indoor case papers of snake bite from October 2016 to April 2017 were...
Odagiri, Hiroyuki; Yasunaga, Hideo; Matsui, Hiroki; Matsui, Shigeru; Fushimi, Kiyohide; Kaise, Mitsuru
Background and study aims: Esophageal endoscopic submucosal dissection (ESD) has gradually acquired popularity as a minimally invasive surgery for early cancers, not only in Japan but also in other countries. However, most reported outcomes have been based on relatively small samples of patients from specialized centers. Therefore, the association between hospital volume and the rate of adverse events following esophageal ESD has been poorly understood. Patients and methods: Using a nationwide administrative database in Japan, we identified patients who underwent esophageal ESD between 1 July 2007 and 31 March 2013. Hospital volume was defined as the number of esophageal ESD procedures performed per year at each hospital and was categorized into quartiles. Results: In total, 12 899 esophageal ESD procedures at 699 institutions were identified during the study period. Perforation and perforation-related disorders were observed in 422 patients (3.3%), and one patient died after perforation. There was a significant association between lower hospital volume and a higher proportion of adverse events following esophageal ESD. Although not statistically significant, a similar tendency was observed in the occurrence of blood transfusion within 1 week after ESD and all-cause in-hospital death. Multivariable logistic regression analysis showed that hospitals with very high case volumes were less likely to experience adverse events following esophageal ESD than hospitals with very low volumes. Conclusions: The proportion of perforation and perforation-related disorders following esophageal ESD was acceptably low, and there was a linear association between higher hospital volume and lower rates of adverse events following esophageal ESD. © Georg Thieme Verlag KG Stuttgart · New York.
Atiqi, R.; Cleophas, T. J.; van Bommel, E.; Zwinderman, A. H.
The use of drugs has expanded during the previous decade. However, earlier studies of patients admitted for adverse drug effects (ADEs) have been heterogeneous. The objectives of this study were to assess the number of recent admissions to hospital due to ADEs and to assess the degree of
Leah L. Shever
The purpose of this study was to examine factors that contribute to adverse incidents by creating a model that included patient characteristics, clinical conditions, nursing unit context of care variables, medical treatments, pharmaceutical treatments, and nursing treatments. Data were abstracted from electronic, administrative, and clinical data repositories. The sample included older adults hospitalized during a four-year period at one academic medical facility in the Midwestern United States who were at risk for falling. Relational databases were built and a multistep, statistical model-building analytic process was used. Total registered nurse (RN) hours per patient day (HPPD) and HPPDs dropping below the nursing unit average were significant explanatory variables for experiencing an adverse incident. The number of medical and pharmaceutical treatments that a patient received during hospitalization, as well as many specific nursing treatments (e.g., restraint use, neurological monitoring), were also contributors to experiencing an adverse incident.
Lord, Kito; Parwani, Vivek; Ulrich, Andrew; Finn, Emily B; Rothenberg, Craig; Emerson, Beth; Rosenberg, Alana; Venkatesh, Arjun K
Overcrowding in the emergency department (ED) has been associated with patient harm, yet little is known about the association between ED boarding and adverse hospitalization outcomes. We sought to examine the association between ED boarding and three common adverse hospitalization outcomes: rapid response team (RRT) activation, escalation in care, and mortality. We conducted an observational analysis of consecutive patient encounters admitted from the ED to the general medical service between February 2013 and June 2015. This study was conducted in an urban, academic hospital with an annual adult ED census over 90,000. We defined boarding as greater than 4 h from ED bed order to ED departure to the hospital ward. The primary outcome was a composite of adverse outcomes in the first 24 h of admission, including RRT activation, care escalation to intensive care, or in-hospital mortality. A total of 31,426 patient encounters were included, of which 3978 (12.7%) boarded in the ED for 4 h or more. Adverse outcomes occurred in 1.92% of all encounters. Comparing boarded vs. non-boarded patients, 41 (1.03%) vs. 244 (0.90%) patients experienced an RRT activation, 53 (1.33%) vs. 387 (1.42%) experienced a care escalation, and 1 (0.03%) vs. 12 (0.04%) experienced unanticipated in-hospital death, within 24 h of ED admission. In unadjusted analysis, there was no difference in the composite outcome between boarding and non-boarding patients (1.91% vs. 1.91%, p=0.994). Regression analysis adjusted for patient demographics, acuity, and comorbidities also showed no association between boarding and the primary outcome. A sensitivity analysis showed an association between ED boarding and the composite outcome inclusive of the entire inpatient hospital stay (5.8% vs. 4.7%, p=0.003). Within the first 24 h of hospital admission to a general medicine service, adverse hospitalization outcomes are rare and not associated with ED boarding. Copyright © 2018 Elsevier Inc. All rights reserved.
Mekonnen, Alemayehu B; Alhawassi, Tariq M; McLachlan, Andrew J; Brien, Jo-Anne E
Medication errors and adverse drug events are universal problems contributing to patient harm but the magnitude of these problems in Africa remains unclear. The objective of this study was to systematically investigate the literature on the extent of medication errors and adverse drug events, and the factors contributing to medication errors in African hospitals. We searched PubMed, MEDLINE, EMBASE, Web of Science and Global Health databases from inception to 31 August, 2017 and hand searched the reference lists of included studies. Original research studies of any design published in English that investigated adverse drug events and/or medication errors in any patient population in the hospital setting in Africa were included. Descriptive statistics including median and interquartile range were presented. Fifty-one studies were included; of these, 33 focused on medication errors, 15 on adverse drug events, and three studies focused on medication errors and adverse drug events. These studies were conducted in nine (of the 54) African countries. In any patient population, the median (interquartile range) percentage of patients reported to have experienced any suspected adverse drug event at hospital admission was 8.4% (4.5-20.1%), while adverse drug events causing admission were reported in 2.8% (0.7-6.4%) of patients but it was reported that a median of 43.5% (20.0-47.0%) of the adverse drug events were deemed preventable. Similarly, the median mortality rate attributed to adverse drug events was reported to be 0.1% (interquartile range 0.0-0.3%). The most commonly reported types of medication errors were prescribing errors, occurring in a median of 57.4% (interquartile range 22.8-72.8%) of all prescriptions and a median of 15.5% (interquartile range 7.5-50.6%) of the prescriptions evaluated had dosing problems. Major contributing factors for medication errors reported in these studies were individual practitioner factors (e.g. fatigue and inadequate knowledge
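The review above summarizes study-level rates as a median with an interquartile range. As a minimal sketch, these statistics can be reproduced with Python's standard library; the per-study rates below are hypothetical stand-ins, not the review's underlying data:

```python
from statistics import quantiles

# Hypothetical per-study ADE-at-admission rates (%), for illustration only.
ade_admission_rates = [4.5, 6.0, 8.4, 15.0, 20.1]

# q1 and q3 bound the interquartile range; q2 is the median.
q1, q2, q3 = quantiles(ade_admission_rates, n=4, method="inclusive")
print(f"median={q2}% IQR={q1}-{q3}%")  # → median=8.4% IQR=6.0-15.0%
```

Note that `method="inclusive"` treats the data as the whole population of studies; the default exclusive method would interpolate beyond the observed values.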
Guzmán-Ruiz, O; Ruiz-López, P; Gómez-Cámara, A; Ramírez-Martín, M
To identify and characterize adverse events (AE) in an Internal Medicine Department of a district hospital using an extension of the Global Trigger Tool (GTT), and to analyze the diagnostic validity of the tool. An observational, analytical, descriptive and retrospective study was conducted on clinical charts from 2013 in an Internal Medicine Department, in order to detect AEs through the identification of 'triggers' (an event often related to an AE). The triggers and AEs were located by systematic review of the clinical documentation, and the AEs were characterized once identified. A total of 149 AEs were detected in 291 clinical charts from 2013, of which 75.3% were detected directly by the tool, while the rest were not associated with a trigger. The percentage of charts with at least one AE was 35.4%. The most frequent AE found was pressure ulcer (12%), followed by delirium, constipation, nosocomial respiratory infection and drug-induced altered level of consciousness. Almost half (47.6%) of the AEs were related to drug use, and 32.2% of all AEs were considered preventable. The tool demonstrated a sensitivity of 91.3% (95%CI: 88.9-93.2) and a specificity of 32.5% (95%CI: 29.9-35.1). It had a positive predictive value of 42.5% (95%CI: 40.1-45.1) and a negative predictive value of 87.1% (95%CI: 83.8-89.9). The tool used in this study is valid, useful and reproducible for the detection of AEs. It also serves to determine rates of injury and to observe their progression over time. A high frequency of both AEs and preventable events was observed in this study. Copyright © 2014 SECA. Published by Elsevier Espana. All rights reserved.
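The diagnostic-validity measures reported in the study above are simple functions of a 2×2 table of triggered vs. non-triggered charts against true AE status. The counts below are hypothetical, chosen only so that the resulting figures land near those reported; they are not the study's data:

```python
def diagnostic_validity(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from confusion counts."""
    sensitivity = tp / (tp + fn)  # true AEs flagged by a trigger
    specificity = tn / (tn + fp)  # AE-free charts with no trigger
    ppv = tp / (tp + fp)          # triggered charts holding a true AE
    npv = tn / (tn + fn)          # untriggered charts truly AE-free
    return sensitivity, specificity, ppv, npv

# Hypothetical counts, for illustration only.
sens, spec, ppv, npv = diagnostic_validity(tp=105, fp=142, fn=10, tn=68)
print(f"sens={sens:.1%} spec={spec:.1%} ppv={ppv:.1%} npv={npv:.1%}")
# → sens=91.3% spec=32.4% ppv=42.5% npv=87.2%
```

The low specificity and PPV are typical of trigger tools: triggers are deliberately over-sensitive, and each triggered chart still requires manual review.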
Cao, Yingjuan; Ball, Marion
Based on the System Development Life Cycle, a hospital-based nursing adverse event reporting system was developed and implemented, integrated with the existing Hospital Information System (HIS). Besides the positive outcomes in terms of timeliness and efficiency, this approach has brought an enormous change in how nurses report, analyze and respond to adverse events.
Wagner, J T; Meier, C; Higdon, T
Adverse events occur in a significant, but undetermined, number of hospitalized patients. These types of patient injuries are more often the result of faulty systems than human maleficence. A culture exists among health care providers that discourages the reporting of such events and resists the implementation of formal efforts to eliminate them. This resistance serves to perpetuate the problem. Both business and clinical ethics argue that sound reasons exist for hospitals to reduce, if not eliminate, adverse events. To do so is cost effective, particularly in a managed care environment. It is also at the heart of responsible professional behavior. Physicians are afforded an opportunity to be at the forefront in this quality improvement effort.
Wilińska, Maria; Warakomska, Małgorzata; Głuszczak-Idziakowska, Ewa; Jackowska, Teresa
There are significant delays in implementing vaccination among preterm infants. The aims of this study were to describe the frequency and kinds of adverse events following immunization in preterm infants, and to establish which preterm infants are particularly susceptible to adverse events. Demographic and clinical data, and the occurrence of adverse events after DTaP, Hib and pneumococcal vaccination among preterm infants during their initial hospitalization, were prospectively collected with the use of an electronic data form between 1st June 2011 and 31st May 2015. The analysis included 138 patients, divided into two groups according to maturity (I: GA ≤28 weeks, n=73; II: GA 29-36 weeks, n=65). There were no statistically significant differences between the groups in the occurrence of adverse events. In the total group, apnoea developed after vaccination in 6 newborns (4%) and activity dysfunctions were observed in 13 newborns (10%). The occurrence of apnoea after vaccination correlated positively with the duration of non-invasive ventilation and the occurrence of late infection. There were no statistically significant demographic or clinical risk factors for the development of activity dysfunctions following vaccination. Timely vaccination of clinically stable preterm infants is a safe medical procedure. However, long-term non-invasive respiratory support and late infections are risk factors for apnoea following vaccination. In these patients vaccinations should be considered during hospitalization.
Martin, Rodney A.; Das, Santanu; Janakiraman, Vijay Manikandan; Hosein, Stefan
The prediction of anomalies or adverse events is a challenging task, and there are a variety of methods which can be used to address the problem. In this paper, we introduce a generic framework developed in MATLAB® called ACCEPT (Adverse Condition and Critical Event Prediction Toolbox). ACCEPT is an architectural framework designed to compare and contrast the performance of a variety of machine learning and early-warning algorithms, and to test the capability of these algorithms to robustly predict the onset of adverse events in any time-series data-generating system or process.
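As a sketch of the kind of early-warning algorithm such a toolbox benchmarks (not ACCEPT's actual implementation, which is MATLAB-based and more sophisticated), a trailing-window z-score detector flags readings that drift far from a recent baseline:

```python
from statistics import mean, stdev

def early_warnings(series, window=5, k=3.0):
    """Return indices where series[i] deviates by more than k standard
    deviations from the preceding `window` samples."""
    alarms = []
    for i in range(window, len(series)):
        base = series[i - window:i]          # trailing baseline window
        mu, sigma = mean(base), stdev(base)  # baseline statistics
        if sigma > 0 and abs(series[i] - mu) > k * sigma:
            alarms.append(i)
    return alarms

# Hypothetical sensor readings with a single spike at index 6.
readings = [10.0, 10.2, 9.9, 10.1, 10.0, 10.1, 14.0, 10.0]
print(early_warnings(readings))  # → [6]
```

Real frameworks add what a fixed threshold lacks: lead-time requirements and explicit tuning of the false-alarm vs. missed-detection trade-off.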
Wentzell, Jason; Nguyen, Tiffany; Bui, Stephanie; MacDonald, Erika
Health Canada relies on health professionals to voluntarily report adverse reactions to the Canada Vigilance Program. Current rates of reporting adverse drug reactions (ADRs) are inadequate to detect important safety issues. To assess the impact of pharmacy student facilitation of ADR reporting by pharmacists at a tertiary care teaching hospital in Canada. The intervention of interest, implemented at one campus of the hospital, was facilitation of ADR reporting by pharmacy students. The students received training on how to submit ADR reports and presented information sessions on the topic to hospital pharmacists; the pharmacists were then encouraged to report ADRs to a designated student for formal reporting. Frequency of reporting by pharmacists at the intervention campus was compared with reporting at a control campus of the same hospital. Data were collected prospectively over a 6-month pilot period, starting in April 2015. During the pilot period, 27 ADR reports were submitted at the intervention campus, and 3 reports at the control campus. All student participants strongly agreed that they would recommend that responsibility for submitting ADR reports to the Canada Vigilance Program remain with pharmacy students during future rotations. Availability of a pharmacy student to facilitate reporting of ADRs may increase the frequency of ADR reporting and could alleviate pharmacist workload; this activity is also a potentially valuable learning experience for students.
As a consequence of demographic changes, hospitals are confronted with increasing numbers of elderly patients, who are at high risk of adverse events during hospitalization. Geriatric risk screening followed by comprehensive geriatric assessment (CGA) and treatment has been requested by geriatric societies and task forces to identify patients at risk. Since empirical evidence on factors predisposing to adverse hospital events is scarce, we prospectively evaluated the implications of geriatric risk screening followed by CGA in a university hospital department of orthopedics and trauma surgery. Three hundred and eighty-one patients ≥75 years admitted to the Department of Orthopedics and Trauma Surgery of the University Hospital Essen received Identification of Seniors at Risk (ISAR) screening, followed by CGA via a geriatric liaison service in case of a positive screening result. Associations between ISAR, CGA, comorbid risk factors and diseases, length of hospital stay, number of nursing and physiotherapy hours, and falls during the hospital stay were analyzed. Of 381 ISAR screenings, 327 (85.8%) were positive, confirming a high percentage of patients at risk of adverse events. Of these, 300 CGAs revealed 82.7% abnormal results, indicating impairment of activities of daily living combined with cognitive, emotional or mobility disturbances. Abnormal CGA resulted in a longer hospital stay (14.0±10.3 days in ISAR+/CGA abnormal, compared with 7.6±7.0 days in ISAR+/CGA normal and 8.1±5.4 days in ISAR-, both p<0.001), increased nursing hours (3.4±1.1 hours/day in ISAR+/CGA abnormal, compared with 2.5±1.0 hours/day in ISAR+/CGA normal and 2.2±0.8 hours/day in ISAR-, both p<0.001), and increased falls (7.3% in ISAR+/CGA abnormal, 0% in ISAR+/CGA normal, 1.9% in ISAR-). Physiotherapy hours were only significantly increased in ISAR+/CGA abnormal (3.0±2.7 hours) compared with ISAR+/CGA normal (1.6±1.4 hours, p<0.001), whereas the comparison with ISAR- (2.4±2
Zhong, Qiu-Yue; Gelaye, Bizu; Smoller, Jordan W; Avillach, Paul; Cai, Tianxi; Williams, Michelle A
The effects of suicidal behavior on obstetric outcomes remain dangerously unquantified. We sought to report on the risk of adverse obstetric outcomes for US women with suicidal behavior at the time of delivery. We performed a cross-sectional analysis of delivery hospitalizations from 2007-2012 National (Nationwide) Inpatient Sample. From the same hospitalization record, International Classification of Diseases codes were used to identify suicidal behavior and adverse obstetric outcomes. Adjusted odds ratios (aOR) and 95% confidence intervals (CI) were obtained using logistic regression. Of the 23,507,597 delivery hospitalizations, 2,180 were complicated by suicidal behavior. Women with suicidal behavior were at a heightened risk for outcomes including antepartum hemorrhage (aOR = 2.34; 95% CI: 1.47-3.74), placental abruption (aOR = 2.07; 95% CI: 1.17-3.66), postpartum hemorrhage (aOR = 2.33; 95% CI: 1.61-3.37), premature delivery (aOR = 3.08; 95% CI: 2.43-3.90), stillbirth (aOR = 10.73; 95% CI: 7.41-15.56), poor fetal growth (aOR = 1.70; 95% CI: 1.10-2.62), and fetal anomalies (aOR = 3.72; 95% CI: 2.57-5.40). No significant association was observed for maternal suicidal behavior with cesarean delivery, induction of labor, premature rupture of membranes, excessive fetal growth, and fetal distress. The mean length of stay was longer for women with suicidal behavior. During delivery hospitalization, women with suicidal behavior are at increased risk for many adverse obstetric outcomes, highlighting the importance of screening for and providing appropriate clinical care for women with suicidal behavior during pregnancy.
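The adjusted odds ratios above come from multivariable logistic regression on hospitalization records. As a simpler, self-contained illustration of the underlying measure, the crude (unadjusted) odds ratio and its Woolf 95% CI can be computed directly from a 2×2 exposure/outcome table; the counts below are hypothetical, not drawn from the study:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude OR and Woolf 95% CI for a 2x2 table:
    a=exposed cases, b=exposed non-cases,
    c=unexposed cases, d=unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Hypothetical counts, for illustration only.
or_, lo, hi = odds_ratio_ci(a=12, b=88, c=30, d=870)
print(f"OR={or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Unlike this crude version, the adjusted odds ratios reported in the study also condition on covariates, which is what the regression model provides.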
Egedorf, Søren; Shaker, Hamid Reza
Failures in complex systems can cause harm to people or assets, or even to the environment. To cope with these risks, adverse condition and critical event prediction plays an important role. The Adverse Condition and Critical Event Prediction Toolbox (ACCEPT) is a tool which has recently been developed by NASA to allow for the timely prediction of an adverse event, with low false alarm and missed detection rates. While ACCEPT has been shown to be an effective tool in some applications, its performance has not yet been evaluated on practical, well-known benchmark examples. In this paper, ACCEPT is used for adverse condition and critical event prediction in a multiphase flow facility. The Cranfield multiphase flow facility is known to be an interesting benchmark which has been used to evaluate different methods from statistical process monitoring. In order to allow the data from the flow facility to be used in ACCEPT, methods such as Kernel Density Estimation (KDE), PCA- and CVA...
Gurulingappa, Harsha; Toldo, Luca; Rajput, Abdul Mateen; Kors, Jan A; Taweel, Adel; Tayrouz, Yorki
The aim of this study was to assess the impact of automatically detected adverse event signals from text and open-source data on the prediction of drug label changes. Open-source adverse effect data were collected from the FAERS, Yellow Card and SIDER databases. A shallow linguistic relation extraction system (JSRE) was applied for the extraction of adverse effects from MEDLINE case reports. A statistical approach was applied to the extracted datasets for signal detection and subsequent prediction of label changes issued for 29 drugs by the UK Regulatory Authority in 2009. 76% of drug label changes were automatically predicted. Of these, 6% were detected only by text mining. JSRE enabled precise identification of four adverse drug events from MEDLINE that were undetectable otherwise. Changes in drug labels can be predicted automatically using data and text mining techniques. Text mining technology is mature and well placed to support pharmacovigilance tasks. Copyright © 2013 John Wiley & Sons, Ltd.
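The abstract above does not name its signal-detection statistic, but a common choice in pharmacovigilance is the proportional reporting ratio (PRR). The sketch below, with made-up report counts, is a generic illustration rather than the authors' method:

```python
def prr(a, b, c, d):
    """Proportional reporting ratio from spontaneous-report counts:
    a = reports of drug-of-interest with the event,
    b = drug-of-interest with other events,
    c = all other drugs with the event,
    d = all other drugs with other events."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts: 20 of 200 reports for the drug mention the event,
# vs. 100 of 10000 reports for all other drugs.
signal = prr(a=20, b=180, c=100, d=9900)
print(f"PRR = {signal:.1f}")  # → PRR = 10.0
```

In practice a disproportionality threshold (e.g. PRR ≥ 2 with a minimum report count) is applied before a drug-event pair is escalated for review.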
García-Peña, Carmen; García-Fabela, Luis C.; Gutiérrez-Robledo, Luis M.; García-González, Jose J.; Arango-Lopera, Victoria E.; Pérez-Zepeda, Mario U.
Functional decline after hospitalization is a common adverse outcome in elderly patients. An easy-to-use, reproducible and accurate tool to identify those at risk would aid in focusing interventions on those at higher risk. Handgrip strength has been shown to predict adverse outcomes in other settings. The aim of this study was to determine whether handgrip strength measured upon admission to an acute care facility would predict functional decline (either incident or worsening of preexisting) at discharge among older Mexican adults, stratified by gender. In addition, cutoff points as a function of specificity were determined. A cohort study was conducted in two hospitals in Mexico City. The primary endpoint was functional decline at discharge, defined as a 30-point reduction in the Barthel Index score from the baseline score. Handgrip strength was measured at initial assessment along with other variables, including instrumental activities of daily living, cognition, depressive symptoms, delirium, length of hospitalization and quality of life. All analyses were stratified by gender. Logistic regression was performed to test the independent association between handgrip strength and functional decline, along with estimation of handgrip strength test values (specificity, sensitivity, area under the curve, etc.). A total of 223 patients admitted to an acute care facility between 2007 and 2009 were recruited. A total of 55 patients (24.7%) had functional decline, 23.46% of men and 25.6% of women. Multivariate analysis showed that only males with low handgrip strength had an increased risk of functional decline at discharge (OR 0.88, 95% CI 0.79–0.98, p = 0.01), with a specificity of 91.3% and a cutoff point of 20.65 kg for handgrip strength. In females there was no significant association between handgrip strength and functional decline. Measurement of handgrip strength on admission to acute care facilities may identify male elderly patients at risk of having functional decline, and
Justine Nicholls,1 Craig MacKenzie,1 Rhiannon Braund2 1Dunedin Hospital Pharmacy, 2School of Pharmacy, University of Otago, Dunedin, New Zealand Abstract: Transition of care (ToC) points, and in particular hospital admission and discharge, can be associated with an increased risk of adverse drug events (ADEs) and other drug-related problems (DRPs). The growing recognition of the pharmacist as an expert in medication management, patient education and communication makes them well placed to intervene. There is evidence to indicate that the inclusion of pharmacists in the health care team at ToC points reduces ADEs and DRPs and improves patient outcomes. The objectives of this paper are to use the current literature to: (1) outline the increased risk of medication-related problems at ToC points; (2) highlight some strategies that have been successful in reducing these problems; and (3) illustrate how the role of the pharmacist across all facets of care can contribute to the reduction of ADEs, particularly for patients at ToC points. Keywords: pharmacist, adverse drug events, drug-related problems, transitions of care, hospital discharge
Santucci, R; Levêque, D; Herbrecht, R; Fischbach, M; Gérout, A C; Untereiner, C; Bouayad-Agha, K; Couturier, F
Medication-related iatrogenic events are responsible for nearly one in five iatrogenic events. The main purpose of this prospective multicenter study was to determine the effect of pharmaceutical consultations on the occurrence of medication adverse events (MAE) during hospitalization. The other objectives were to study the impact of age, the number of medications, and the number of pharmaceutical consultations on the risk of MAE. Each pharmaceutical consultation involved a complete reassessment, by both a physician and a pharmacist, of the home medication, the hospital treatment (3 days after admission), the treatment during chemotherapy, and/or the discharge treatment. All MAE gave rise to advice for the patient, additional clinical-biological monitoring and/or prescription changes. Among the 318 patients, 217 (68%) had 1 or more clinically important MAE (89% drug-drug interactions, 8% dosing errors, 2% indication errors, 1% risk behavior). The patients received a total of 1121 pharmaceutical consultations (3.2±1.4 per patient). Overall, pharmaceutical consultations reduced the risk of MAE by a factor of 2.34 (unadjusted incidence ratio, P≤0.05), with each consultation decreasing the risk of MAE by 24%. Moreover, each additional medication increased the risk of MAE by 14 to 30% in this population. Pharmaceutical consultations during the hospital stay could significantly reduce the number of medication adverse events. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Mira, José Joaquín; Carrillo, Irene; Lorenzo, Susana
To explore what hospitals and primary care (PC) centres are doing to reduce the negative social impact of a serious adverse event (AE), we surveyed 195 hospital (n=113) and PC (n=82) managers from eight autonomous communities on the level of implementation of five interventions recommended after an AE to protect the reputation of healthcare institutions. Most institutions (155 in total: 70 PC centres [45.2%] and 85 hospitals [54.8%]) did not have a crisis plan to protect their reputation after an AE. Internal (p=0.0001) and external (p=0.012) communications were addressed better in PC than in hospitals. Very few institutions had defined the managers' role in case of an AE (10.7% of hospitals versus 6.25% of PC centres). A majority of healthcare institutions have neither planned crisis intervention after an AE with severe consequences nor defined plans to recover citizens' trust after an AE. Copyright © 2016 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
Andersen, Henning Boje; Siemsen, Inger Margrete D.; Petersen, Lene Funck
Patient Safety Database, 200 events) and 47 interviews with staff conducted at a large hospital in the Capital Region (232 events). The most prevalent causes of adverse events are inadequate competence (30%), inadequate infrastructure (22%) and a busy ward (18%). Inter-rater reliability (kappa) was 0.76 and 0.87 for reports and interviews, respectively. Communication in clinical contexts has been widely recognized as giving rise to potentially hazardous events, and handover situations are particularly prone to failures of communication or unclear allocation of responsibility. The taxonomy provides a tool for analyzing adverse handover events to identify frequent causes among reported handover failures. In turn, this provides a basis for selecting safety measures, including handover protocols and training programmes.
Denardo, Scott J; Vock, David M; Schmalfuss, Carsten M; Young, Gregory D; Tcheng, James E; O'Connor, Christopher M
Contrast media administered during cardiac catheterization can affect hemodynamic variables. However, little is documented about the effects of contrast on hemodynamics in heart failure patients or the prognostic value of baseline measures and changes in hemodynamics for predicting subsequent adverse events. In this prospective study of 150 heart failure patients, we measured hemodynamics at baseline and after administration of iodixanol or iopamidol contrast. One-year Kaplan-Meier estimates of adverse event-free survival (death, heart failure hospitalization, and rehospitalization) were generated, grouping patients by baseline measures of pulmonary capillary wedge pressure (PCWP) and cardiac index (CI), and by changes in those measures after contrast administration. We used Cox proportional hazards modeling to assess sequentially adding baseline PCWP and change in CI to 5 validated risk models (Seattle Heart Failure Score, ESCAPE [Evaluation Study of Congestive Heart Failure and Pulmonary Artery Catheterization Effectiveness], CHARM [Candesartan in Heart Failure: Assessment of Reduction in Mortality and Morbidity], CORONA [Controlled Rosuvastatin Multinational Trial in Heart Failure], and MAGGIC [Meta-Analysis Global Group in Chronic Heart Failure]). Median contrast volume was 109 mL. Both contrast media caused similarly small but statistically significant changes in most hemodynamic variables. There were 39 adverse events (26.0%). Adverse event rates increased using the composite metric of baseline PCWP and change in CI; elevated baseline PCWP combined with a decline in CI after contrast correlated with the poorest prognosis. Adding both baseline PCWP and change in CI to the 5 risk models universally improved their predictive value (P≤0.02). In heart failure patients, the administration of contrast causes small but significant changes in hemodynamics. Combining baseline PCWP with change in CI after contrast predicts adverse events and increases the predictive value of existing models. Patients with elevated baseline PCWP and
Petersen, John Asger; Mackel, Rebecca; Antonsen, Kristian
AIM: To evaluate the performance of a new early warning score (EWS) system by reviewing all serious adverse events in our hospital over a 6-month period. METHOD: All incidents of unexpected death (UD), cardiac arrest (CA) and unanticipated intensive care unit admission (UICU) of adult patients … of EWS were recorded in 87, 94 and 75% of UICU, CA and UD. Patients were monitored according to the escalation protocol in 13, 31 and 13% of UICU, CA and UD. Nurses escalated care and contacted physicians in 64% and 60% of UICU events; the corresponding proportions for CA were 58% and 55%. On call…
Shalviri, Gloria; Yazdizadeh, Bahareh; Mirbaha, Fariba; Gholami, Kheirollah; Majdzadeh, Reza
Adverse drug events (ADEs) may cause serious injuries, including death. Spontaneous reporting of ADEs plays a key role in their detection and prevention; however, underreporting always exists. Although several interventions have been used to address this problem, they are mainly based on experience, and the rationale for choosing them has no theoretical basis. The wide variety of behavioural theories makes it difficult to choose an appropriate one; the theoretical domains framework (TDF) is suggested as a solution. The objective of this study was to select the best theory for evaluating ADE reporting in hospitals based on the TDF. We carried out three focus group discussions with hospital pharmacists and nurses, based on TDF questions. The analysis was performed in five steps: coding the discussion transcripts, extracting beliefs, selecting relevant domains, matching related constructs to the extracted beliefs, and determining the appropriate theories in each domain. The theory with the highest number of matched domains and constructs was selected as the theory of choice. A total of six domains were identified as relevant to ADE reporting: "Knowledge", "Skills", "Beliefs about consequences", "Motivation and goals", "Environmental context and resources" and "Social influences". We found the theory of planned behavior to be the most comprehensive theory for studying factors influencing ADE reporting in hospitals, since it was relevant in five of the six relevant domains and was the common theory in 55 of the 75 identified beliefs. In conclusion, we suggest the theory of planned behavior for further studies on designing appropriate interventions to increase ADE reporting in hospitals.
Furukawa, Michael F; Spector, William D; Rhona Limcangco, M; Encinosa, William E
Nationwide initiatives have promoted greater adoption of health information technology as a means to reduce adverse drug events (ADEs). Hospital adoption of electronic health records with Meaningful Use (MU) capabilities expected to improve medication safety has grown rapidly. However, evidence that MU capabilities are associated with declines in in-hospital ADEs is lacking. Data came from the 2010-2013 Medicare Patient Safety Monitoring System and the 2008-2013 Healthcare Information and Management Systems Society (HIMSS) Analytics Database. Two-level random intercept logistic regression was used to estimate the association of MU capabilities with the occurrence of ADEs, adjusting for patient characteristics, hospital characteristics, and year of observation. Rates of in-hospital ADEs declined by 19% from 2010 to 2013. Adoption of MU capabilities was associated with 11% lower odds of an ADE (95% confidence interval [CI], 0.84-0.96). Interoperability capability was associated with 19% lower odds of an ADE (95% CI, 0.67-0.98). Adoption of MU capabilities explained 22% of the observed reduction in ADEs, corresponding to approximately 67,000 ADEs averted by MU. Concurrent with the rapid uptake of MU and interoperability, the occurrence of in-hospital ADEs declined significantly from 2010 to 2013. MU capabilities and interoperability were associated with lower occurrence of ADEs, but the effects did not vary by experience with MU. About one-fifth of the decline in ADEs from 2010 to 2013 was attributable to MU capabilities. The findings support the contention that adoption of MU capabilities and interoperability, spurred by the Health Information Technology for Economic and Clinical Health Act, contributed in part to the recent decline in ADEs. Published by Oxford University Press on behalf of the American Medical Informatics Association 2017. This work is written by US Government employees and is in the public domain in the United States.
Deilkås, Ellen Tveter; Risberg, Madeleine Borgstedt; Haugen, Marion; Lindström, Jonas Christoffer; Nylén, Urban; Rutberg, Hans; Soop, Michael
Objectives: In this paper, we explore similarities and differences in hospital adverse event (AE) rates between Norway and Sweden by reviewing medical records with the Global Trigger Tool (GTT). Design: All acute care hospitals in both countries performed medical record reviews, except one in Norway. Records were randomly selected from all eligible admissions in 2013. Eligible admissions were patients 18 years of age or older, undergoing care with an in-hospital stay of at least 24 hours, exc...
correlating with slower CYP2D6 metabolism. Our study showed that the adverse reactions to β-blockers could be predicted by the length of hospitalization, CYP2D6 poor metabolizer phenotype, and the concomitant use of other CYP2D6-metabolizing drugs. Therefore, in hospitalized patients with polypharmacy CYP2D6 genotyping might be useful in detecting those at risk of ADRs. Keywords: adverse drug reactions, β-blockers, CYP2D6, pharmacogenetics
Daskalou, Efstratia; Galli-Tsinopoulou, Assimina; Karagiozoglou-Lampoudi, Thomais; Augoustides-Savvopoulou, Persefone
Malnutrition is a frequent finding in pediatric health care settings in the form of undernutrition or excess body weight. Its increasing prevalence and impact on overall health status, which is reflected in adverse outcomes, render imperative the application of commonly accepted and evidence-based practices and tools by health care providers. Nutrition risk screening on admission and nutrition status evaluation are key points during clinical management of hospitalized pediatric patients, in order to prevent health deterioration that can lead to serious complications and growth consequences. In addition, anthropometric data based on commonly accepted universal growth standards can give accurate results for nutrition status. Both nutrition risk screening and nutrition status assessment are techniques that should be routinely implemented, based on commonly accepted growth standards and methodology, and linked to clinical outcomes. The aim of the present review was to address the issue of hospital malnutrition in pediatric settings in terms of prevalence, outline the nutrition status evaluation and nutrition screening process using different criteria and available tools, and present its relationship with outcome measures. Key teaching points • Malnutrition, whether underweight or excess body weight, is a frequent imbalance in pediatric settings that affects physical growth and results in undesirable clinical outcomes. • Anthropometry interpretation through growth charts and nutrition screening are cornerstones for the assessment of malnutrition. To date no commonly accepted anthropometric criteria or nutrition screening tools are used in hospitalized pediatric patients. • Commonly accepted nutrition status and screening processes based on the World Health Organization's growth standards can contribute to the overall hospital nutrition care of pediatric patients.
O'Connor, Marie N
Adverse drug reactions (ADRs) are a major cause of morbidity and healthcare utilisation in older people. The GerontoNet ADR risk score aims to identify older people at risk of ADRs during hospitalisation. We aimed to assess the clinical applicability of this score and identify other variables that predict ADRs in hospitalised older people.
Together, potentially inappropriate prescribing of medications (PIP) and appropriate prescribing omission (APO) constitute a problem that requires multiple interventions to reduce its extent and the occurrence of adverse drug events (ADE). This study aims to assess PIP, APO and ADE before and after the intervention of a clinical pharmacist on medical prescriptions for elderly hospitalized patients. In a before-after study, a total of 16,542 prescriptions for 1262 patients were analyzed applying the criteria defined in STOPP-START (screening tool of older people's prescriptions and screening tool to alert to right treatment). The intervention consisted of lectures and publications on the STOPP-START criteria made available to all areas of the hospital, together with suggestions made by the clinical pharmacist to the physician on each individual prescription. Before the intervention, PIP was 48.9% on admission and 46.1% at discharge; after the intervention it was 47.4% on admission and 16.7% at discharge. APO was 10% on admission and 7.6% at discharge before the intervention, and 12.2% on admission and 7.9% at discharge after it. ADE occurred in 50.9% of patients before and 34.4% after the intervention. The frequency of return to the emergency department was 12.2% before and 4.7% after the intervention. PIP, ADE, reconciliation errors, clinically serious drug interactions, and delirium were reduced to statistically significant levels. In line with various international studies, the intervention attained positive results.
Ouadghiri, S; Brick, C; Benseffaj, N; Atouf, O; Essakalli, M
Reporting of recipient adverse reactions (RAR) is one of the core haemovigilance activities. It provides an evaluation of transfusion side effects and thus helps prevent their occurrence. The aim of this study was to analyze, over 14 years, the RAR report forms submitted at Rabat Ibn Sina hospital. All RAR report forms sent to the blood transfusion service were analyzed. The data collected from these forms were: clinical characteristics of the patient, type of incident observed and type of labile blood product (LBP) transfused. A total of 353 RAR were declared, with a mean cumulative incidence of 1.7/1000 LBP delivered. Febrile non-hemolytic transfusion reactions represented 72.8% of the declared RAR. The RAR were classified as grade 1 in 87.1% of cases and were secondary to transfusion of red cell concentrates in 81.9%. ABO incompatibility was found in four cases (0.02/1000 LBP delivered). The number of RAR reported by Rabat Ibn Sina hospital remains underestimated. Management and traceability of RAR, together with rigorous investigation under the responsibility of the haemovigilance correspondent, contribute to the improvement of transfusion safety. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Green, Christopher F; Mottram, David R; Rowe, Philip H; Pirmohamed, Munir
Aims To investigate the attitudes of UK hospital pharmacists towards, and their understanding, of adverse drug reaction (ADR) reporting. Methods A postal questionnaire survey of 600 randomly selected hospital pharmacists was conducted. Results The response rate was 53.7% (n = 322). A total of 217 Yellow Cards had been submitted to the CSM/MCA by 78 (25.6%) of those responding. Half of those responding felt that ADR reporting should be compulsory and over three-quarters felt it was a professional obligation. However, almost half were unclear as to what should be reported, while the time available in clinical practice and time taken to complete forms were deemed to be major deterrents to reporting. Pharmacists were not dissuaded from reporting by the need to consult a medical colleague or by the absence of a fee. Education and training had a significant influence on pharmacists' participation in the Yellow Card Scheme. Conclusions Pharmacists have a reasonable knowledge and are supportive of the Yellow Card spontaneous ADR reporting scheme. However, education and training will be important in maintaining and increasing ADR reports from pharmacists. PMID:11167664
In Young Jung
Background. Adverse drug reactions (ADRs) are any unwanted or uncomfortable effects of medication resulting in physical, mental, or functional injuries. Antibiotics account for up to 40.9% of ADRs and are associated with several serious outcomes. However, few reports on ADRs have evaluated antimicrobial agents alone. In this study, we investigated antibiotic-related ADRs at a tertiary care hospital in South Korea. Methods. This is a retrospective cohort study that evaluated ADRs to antibiotics reported at a 2400-bed tertiary care hospital in 2015. ADRs reported by physicians, pharmacists, and nurses were reviewed. Clinical information on reported ADRs, type of antibiotic, causality assessment, and complications were evaluated. Results. 1,277 (62.8%) patients were considered to have antibiotic-related ADRs based on the World Health Organization-Uppsala Monitoring Centre criteria (certain, 2.2%; probable, 35.7%; possible, 62.1%). In total, 44 (3.4%) patients experienced serious ADRs. Penicillins and quinolones were the most common drugs reported to induce ADRs (both 16.0%), followed by third-generation cephalosporins (14.9%). The most frequently experienced side effects were skin manifestations (45.1%), followed by gastrointestinal disorders (32.6%). Conclusion. Penicillins and quinolones are the most common causative antibiotics for ADRs, and skin manifestations were the most frequently experienced symptom.
Opotowsky, Alexander R; Landzberg, Michael J; Kimmel, Stephen E; Webb, Gary D
Percutaneous closure of patent foramen ovale/atrial septal defect (PFO/ASD) is an increasingly common procedure perceived as having minimal risk. There are no population-based estimates of in-hospital adverse event rates of percutaneous PFO/ASD closure. We used nationally representative data from the 2001-2005 Nationwide Inpatient Sample to identify patients ≥20 years old admitted to an acute care hospital with an International Classification of Diseases, Ninth Revision code designating percutaneous PFO/ASD closure on the first or second hospital day. Variables analyzed included age, sex, number of comorbidities, year, same-day use of intracardiac or other echocardiography, same-day left heart catheterization, hospital size and teaching status, PFO/ASD procedural volume, and coronary intervention volume. Outcomes of interest included length of stay, charges, and adverse events. The study included 2,555 (weighted to United States population: 12,544 ± 1,987) PFO/ASD closure procedures. Mean age was 52.0 ± 0.4 years, and 57.3% ± 1.0% were women. Annual hospital volume averaged 40.8 ± 7.7 procedures (range, 1-114). Overall, 8.2 ± 0.8% of admissions involved an adverse event. Older patients and those with comorbidities were more likely to sustain adverse events. Use of intracardiac echocardiography was associated with fewer adverse events. The risk of adverse events was inversely proportional to annual hospital volume (odds ratio [OR] 0.91, 95% confidence interval [CI] 0.86-0.96, per 10 procedures), even after limiting the analysis to hospitals performing ≥10 procedures annually (OR 0.91, 95% CI 0.85-0.98). Adverse events were more frequent at hospitals in the lowest volume quintile as compared with the highest volume quintile (13.3% vs 5.4%, OR 2.42, 95% CI 1.55-3.78). The risk of adverse events of percutaneous PFO/ASD closure is inversely correlated with hospital volume. This relationship applies even to hospitals meeting the current guidelines
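The odds-ratio arithmetic in the abstract above can be illustrated with a small sketch. Note that the crude OR computed directly from the two quintile event rates differs from the reported OR of 2.42, which comes from count data and model adjustment; the per-10-procedure compounding is an interpretation of the reported OR of 0.91, not a figure from the study.

```python
# Crude odds ratio from the reported quintile AE rates (illustrative only).
p_low, p_high = 0.133, 0.054  # AE rates: lowest- vs highest-volume quintile

def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

crude_or = odds(p_low) / odds(p_high)
print(round(crude_or, 2))  # 2.69 (the reported 2.42 is model-adjusted)

# The per-10-procedure OR of 0.91 compounds multiplicatively with volume:
# a hospital doing 50 more procedures/year has its odds scaled by 0.91**5.
print(round(0.91 ** 5, 2))  # 0.62
```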
Objective: The objective of this study was to analyze various aspects of serious adverse drug reactions (serious ADRs), such as clinical presentation, causality, severity, and preventability, occurring in a hospital setting. Materials and Methods: All serious ADRs reported from January 2010 to May 2015 at the ADR Monitoring Centre, Department of Pharmacology, B. J. Medical College and Civil Hospital, Ahmedabad, were selected as per the World Health Organization-Uppsala Monitoring Centre (WHO-UMC) criteria. A retrospective analysis was carried out for clinical presentation, causality (as per the WHO-UMC scale and Naranjo's algorithm), severity (Hartwig and Siegel scale), and preventability (Schumock and Thornton criteria). Results: Out of 2977 ADRs reported, 375 were serious in nature. The most common clinical presentation involved skin and appendageal disorders (71, 18.9%). The most common causal drug group was antitubercular agents (129, 34.4%) followed by antiretroviral agents (76, 20.3%). The seriousness criterion for the majority of serious ADRs was intervention to prevent permanent impairment or damage (164, 43.7%) followed by hospitalization (158, 42.1%). The majority of serious ADRs were continuing (191, 50.9%) at the time of reporting, a few had recovered (101, 26.9%), and two were fatal. The majority of serious ADRs were categorized as possible (182, 48.8%) followed by probable (173, 46.1%). Conclusion: Antitubercular, antiretroviral, and antimicrobial drugs were the most common causal drug groups for serious ADRs. This calls for a robust ADR monitoring system and education of patients and prescribers for identification and effective management.
C Aneke John; U Ezeh Theodora; A Nwosu Gloria; E Anumba Chika
Background: The occurrence of adverse reactions to blood donation significantly hampers donor retention and negatively impacts on the universal availability of adequate numbers of blood donor units. Objective: To analyze the spectrum and prevalence of adverse reactions in blood donors in a tertiary hospital-based blood bank in Nigeria. Subjects and Methods: The details of 3520 blood donors who presented for donation over a 12 months period were retrieved from the departmental archives for ana...
Rouquié, David; Heneweer, Marjoke; Botham, Jane; Ketelslegers, Hans; Markell, Lauren; Pfister, Thomas; Steiling, Winfried; Strauss, Volker; Hennes, Christa
Identification of the potential hazards of chemicals has traditionally relied on studies in laboratory animals where changes in clinical pathology and histopathology compared to untreated controls defined an adverse effect. In the past decades, increased consistency in the definition of adversity with chemically-induced effects in laboratory animals, as well as in the assessment of human relevance has been reached. More recently, a paradigm shift in toxicity testing has been proposed, mainly driven by concerns over animal welfare but also thanks to the development of new methods. Currently, in vitro approaches, toxicogenomic technologies and computational tools, are available to provide mechanistic insight in toxicological Mode of Action (MOA) of the adverse effects observed in laboratory animals. The vision described as Tox21c (Toxicity Testing in the 21st century) aims at predicting in vivo toxicity using a bottom-up-approach, starting with understanding of MOA based on in vitro data to ultimately predict adverse effects in humans. At present, a practical application of the Tox21c vision is still far away. While moving towards toxicity prediction based on in vitro data, a stepwise reduction of in vivo testing is foreseen by combining in vitro with in vivo tests. Furthermore, newly developed methods will also be increasingly applied, in conjunction with established methods in order to gain trust in these new methods. This confidence is based on a critical scientific prerequisite: the establishment of a causal link between data obtained with new technologies and adverse effects manifested in repeated-dose in vivo toxicity studies. It is proposed to apply the principles described in the WHO/IPCS framework of MOA to obtain this link. Finally, an international database of known MOAs obtained in laboratory animals using data-rich chemicals will facilitate regulatory acceptance and could further help in the validation of the toxicity pathway and adverse outcome pathway
Aim: To monitor the adverse drug reactions (ADRs) caused by antihypertensive medicines prescribed in a university teaching hospital. Methods: The present work was an open, non-comparative, observational study conducted on hypertensive patients attending the Medicine OPD of Majeedia Hospital, Jamia Hamdard, New Delhi, India, by conducting patient interviews and recording the data on the ADR monitoring form recommended by the Central Drugs Standard Control Organization (CDSCO), Government of India. Results: A total of 21 adverse drug reactions were observed in 192 hypertensive patients. The incidence of adverse drug reactions was higher in patients more than 40 years of age, and females experienced more ADRs (n = 14, 7.29%) than males (n = 7, 3.64%). Combination therapy was associated with a greater number of adverse drug reactions (66.7%) than monotherapy (33.3%). Calcium channel blockers were the drug class most frequently associated with adverse drug reactions (n = 7), followed by diuretics (n = 5) and beta-blockers (n = 4). Among individual drugs, amlodipine was the drug most commonly associated with adverse drug reactions (n = 7), followed by torasemide (n = 3). Adverse drug reactions associated with the central nervous system were the most frequent (42.8%), followed by musculoskeletal complaints (23.8%) and gastrointestinal disorders (14.3%). Conclusions: The present pharmacovigilance study describes the adverse drug reaction profile of the antihypertensive medicines prescribed in our university teaching hospital. These findings would be useful to physicians for rational prescribing.
Liu, Ruifeng; AbdulHameed, Mohamed Diwan M; Kumar, Kamal; Yu, Xueping; Wallqvist, Anders; Reifman, Jaques
The expanded use of multiple drugs has increased the occurrence of adverse drug reactions (ADRs) induced by drug-drug interactions (DDIs). However, such reactions are typically not observed in clinical drug-development studies because most of them focus on single-drug therapies. ADR reporting systems collect information on adverse health effects caused by both single drugs and DDIs. A major challenge is to unambiguously identify the effects caused by DDIs and to attribute them to specific drug interactions. A computational method that provides prospective predictions of potential DDI-induced ADRs will help to identify and mitigate these adverse health effects. We hypothesize that drug-protein interactions can be used as independent variables in predicting ADRs. We constructed drug pair-protein interaction profiles for ~800 drugs using drug-protein interaction information in the public domain. We then constructed statistical models to score drug pairs for their potential to induce ADRs based on these profiles. We used extensive clinical database information to construct categorical prediction models for drug pairs that are likely to induce ADRs via synergistic DDIs, and showed that model performance deteriorated only slightly, with a moderate number of false positives and false negatives in the training samples, as evaluated by our cross-validation analysis. The cross-validation calculations showed an average prediction accuracy of 89% across 1,096 ADR models that captured the deleterious effects of synergistic DDIs. Because the models rely on drug-protein interactions, we made predictions for pairwise combinations of 764 drugs that are currently on the market and for which drug-protein interaction information is available. These predictions are publicly accessible at http://avoid-db.bhsai.org . We used the predictive models to analyze broader aspects of DDI-induced ADRs, showing that ~10% of all combinations have the potential to induce ADRs
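The profile-construction step described in the abstract above can be sketched as a toy example. This is not the authors' statistical model: all drug names, target sets, and the Jaccard-similarity scoring here are invented for illustration of the general idea of representing a drug pair by the union of the two drugs' protein-interaction sets.

```python
# Toy drug-pair "protein-interaction profiles" (all names hypothetical).
targets = {
    "drugA": {"CYP3A4", "hERG"},
    "drugB": {"CYP3A4", "P-gp"},
    "drugC": {"COX1"},
}

def pair_profile(d1, d2):
    """A drug pair's profile: the union of the two drugs' target sets."""
    return targets[d1] | targets[d2]

def jaccard(a, b):
    """Set similarity in [0, 1], used here as a naive risk proxy."""
    return len(a & b) / len(a | b)

known_adr_pair = pair_profile("drugA", "drugB")  # pair assumed to cause an ADR
candidate = pair_profile("drugA", "drugC")
score = jaccard(candidate, known_adr_pair)       # higher = more similar profile
print(score)
```

The paper's actual models score pairs with trained statistical classifiers over ~800 drugs; this sketch only shows how a pair-level feature vector can be derived from single-drug interaction data.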
... management in surgery. Knowing which patients to operate on, and which are at high risk of developing complications, contributes significantly to the quality of surgical care and to cost reduction. The postoperative complications of patients who underwent laparotomy in Mulago Hospital were studied using the POSSUM scoring system.
Deshpande, Rushikesh Prabhakar; Motghare, Vijay Motiram; Padwal, Sudhir Laxman; Pore, Rakesh Ramkrishna; Bhamare, Chetanraj Ghanshyam; Deshmukh, Vinod Shivaji; Pise, Harshal Nutan
Objectives: The study was carried out to evaluate the adverse drug reaction profile of anti-snake venom serum (ASV) in a rural tertiary care hospital. Methods: An observational study was conducted at SRTR Medical College, Ambajogai, Maharashtra, India. A total of 296 inpatient case records of snake bite from February to September 2011 and June to August 2012 were retrieved from the record section, and antivenom reactions were assessed. In addition, basic epidemiological data and ASV prescribing practices were analyzed. Results: Vasculotoxic snake bites were more common (50.61%) than neuroparalytic ones (22.56%). Mild envenomation was the commonest presentation. A total of 92 (56.10%) patients who received ASV suffered antivenom reactions. The most common reactions were chills and rigors (69.56%), followed by nausea and vomiting (34.8%); 10-15% of patients suffered moderate to severe reactions such as hypotension and sudden respiratory arrest. We did not find any dose-response relationship between ASV and the risk of reactions (odds ratio 0.37). An intradermal sensitivity test was performed in about 72% of cases. Conclusion: Our study showed a higher incidence of reactions to ASV at our institute. PMID:24396245
Ravelli, A. C. J.; Jager, K. J.; de Groot, M. H.; Erwich, J. J. H. M.; Rijninks-van Driel, G. C.; Tromp, M.; Eskes, M.; Abu-Hanna, A.; Mol, B. W. J.
Objective To study the effect of travel time, at the start or during labour, from home to hospital on mortality and adverse outcomes in pregnant women at term in primary and secondary care. Design Population-based cohort study from 2000 up to and including 2006. Setting The Netherlands Perinatal
Tribiño, Gabriel; Maldonado, Carlos; Segura, Omar; Díaz, Jorge
Adverse drug reactions (ADRs) occur frequently in hospitals and increase the costs of health care; however, few studies have quantified the clinical and economic impact of ADRs in Colombia. These impacts were evaluated by calculating costs associated with ADRs in patients hospitalized in the internal medicine ward of a Level 3 hospital located in Bogotá, Colombia. In addition, salient clinical features of ADRs were identified and characterized. Intensive follow-up of a cohort of patients was conducted over a five-month period in order to detect ADRs; different ways of classifying them, according to the literature, were considered as well. Information was collected using the INVIMA reporting format, and causal probability was evaluated with the Naranjo algorithm. Direct costs were calculated from the perspective of the payer, based on the following costs: additional hospital stay, medications, paraclinical tests, additional procedures, patient transfer to intermediate or intensive care units, and other costs. Of 836 patients admitted to the service, 268 adverse drug reactions were detected in 208 patients (incidence proportion 25.1%, occurrence rate 0.32). Of the ADRs found, 74.3% were classified as probable, 92.5% were type A, and 81.3% were moderate. The body system most often affected was the circulatory system (33.9%). Drugs acting on the blood were those most frequently associated with adverse reactions (37.6%). The costs resulting from medical care of adverse drug reactions varied from COL$93,633,422 (US$35,014.92) to COL$122,155,406 (US$45,680.94), according to insurance type, during the study period. Adverse drug reactions have a significant negative health and financial impact on patient welfare. Because of the substantial resources required for their medical care and the significant proportion of preventable adverse reactions, active programs of institutional pharmacovigilance are highly recommended.
To determine whether adverse events extend the duration of hospitalization, and to evaluate the effectiveness of medical intervention in ameliorating adverse events and reducing the prolonged hospital stay associated with them. A single-arm intervention study was conducted from October 2012 to March 2014 in the otolaryngology ward of a 614-bed, university-affiliated hospital. Adverse events were monitored daily by physicians, pharmacists and nurses, and recorded in the electronic medical chart for each patient. Appropriate drug management of adverse events was performed by physicians in liaison with pharmacists. The Kaplan-Meier method was used to assess the length of hospitalization of patients who underwent medical intervention for adverse events. Of 571 patients admitted to the otolaryngology ward in a year, 219 patients (38.4%) experienced adverse events of grade ≥2. The duration of hospitalization was affected by the grade of adverse events, with a mean hospital stay of 9.2, 17.2, 28.3 and 47.0 days for grades 0, 1, 2, and 3-4, respectively. Medical intervention lowered the incidence of grade ≥2 adverse events to 14.5%. The length of hospitalization was significantly shorter in patients whose adverse events improved after medical intervention than in those whose did not (26.4 days vs. 41.6 days, hazard ratio 1.687, 95% confidence interval: 1.260-2.259, P<0.001). A multivariate Cox proportional hazards analysis indicated that insomnia, constipation, nausea/vomiting, infection, non-cancer pain, oral mucositis, odynophagia and neutropenia were significant risk factors for prolongation of hospital stay. Patients who experience adverse events are at high risk of prolonged hospitalization. Medical intervention for adverse events was effective in reducing the length of hospital stay associated with them.
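The Kaplan-Meier method used in the study above can be sketched with a minimal product-limit estimator. The discharge times below are synthetic, not the study's data; an event flag of 1 marks an observed event (e.g. discharge) and 0 marks censoring.

```python
from itertools import groupby

def kaplan_meier(times, events):
    """Product-limit survival estimate. events: 1 = event, 0 = censored."""
    data = sorted(zip(times, events))
    at_risk, s, curve = len(data), 1.0, []
    for t, grp in groupby(data, key=lambda x: x[0]):
        grp = list(grp)
        d = sum(e for _, e in grp)        # events at time t
        if d:
            s *= (at_risk - d) / at_risk  # survival drops at event times only
            curve.append((t, s))
        at_risk -= len(grp)               # events and censorings leave the risk set
    return curve

# Toy data: stays of 5, 8, 8 (censored), and 12 days
curve = kaplan_meier([5, 8, 8, 12], [1, 1, 0, 1])
print([(t, round(s, 4)) for t, s in curve])  # [(5, 0.75), (8, 0.5), (12, 0.0)]
```

Note how the censored observation at day 8 reduces the risk set without dropping the survival curve, which is the key property the study relies on when comparing lengths of stay.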
Full Text Available A Hughes,1 L Davies,1 R Hale,1 JE Gallagher2; 1King's College Hospital NHS Foundation Trust, 2King's College London Dental Institute, London, United Kingdom. Background: The safety and protection of patients and health care workers is of paramount importance in dentistry, and this includes students in training who provide clinical care. Given the nature of dental care, adverse incidents can and do occur, exposing health care workers to body fluids and putting them at risk of infection, including contracting a blood-borne virus. The aim of this research was to analyze trends in the volume, rate, nature, management, and outcome of adverse incidents reported at one dental teaching hospital from 2005 to 2010. Methods: Descriptive analysis of trends in the volume, rate, nature, management, and outcome of adverse incidents reported at one dental teaching hospital over a six-year period was undertaken in relation to the level of outpatient and day surgery activity. Results: In total, 287 incidents were reported over the six-year period, amounting to 0.039% of outpatient or day surgery appointments. Nearly three quarters of all the incidents (n = 208, 72%) took place during treatment or whilst clearing away after the appointment. The most frequent incidents were associated with administration of local anesthetic (n = 63, 22%), followed by burs used in dental handpieces (n = 51, 18%). Conclusion: This research confirms that adverse incidents are a feature of dental hospitals and reports the common sources. The importance of accurate and consistent reporting of data, to ensure that these issues are monitored to inform action and reduce risks to staff, students, and patients, is highlighted. Keywords: risk management, blood-borne virus, dental hospital, body fluids exposure, adverse event reporting
Bellandi, Tommaso; Tartaglia, Riccardo; Forni, Silvia; D'Arienzo, Sara; Tulli, Giorgio
Adverse events (AEs) are a major concern in surgery, but the evidence in cardiac surgery is limited, especially on contributory factors. Based on data from the National Outcomes Program, a unit was selected for a mixed methods investigation into the incidence, type, and cause of AEs, because its mortality rate was double the national average for coronary artery bypass grafting and valve repair or replacement. We conducted a retrospective investigation of the unit's cardiac surgery performance, combining routinely collected data on process and outcome measures with a 2-stage structured review of 280 medical records performed by 3 expert clinicians with the support of a methodologist. At least one risk was identified in 137 of 280 cases (48.9%, 95% CI, 43.1-54.8). The total number of AEs was 42, with an incidence of 15% (95% CI, 10.8-20.2) and a preventability of 80.9% (95% CI, 69.1-92.8). The AE resulted in death in 11.9% of cases, disability in 40.5%, and extended hospital stay in 69%. Adverse events were associated with problems in care management at the ward (89/137, 64.9%, 95% CI, 56.9-72.9), followed by surgical complications (46/137, 33.6%, 95% CI, 25.7-41.5) and infection/sepsis (32/137, 23.4%, 95% CI, 16.3-30.4). An active error was made by health care workers in 31 of the 42 cases with AEs, either during decision making or during the execution of an action. A total of 36 AEs were due to deficiencies attributed to organizational factors, and 31 were linked to poor teamwork. The mixed methods approach demonstrated how a deep understanding of AEs and poor performance can emerge from the combination of routinely available data and expert evaluation. The main limitation of this study is its focus on the cardiac surgery unit rather than on the entire process of care. The evaluation could have been integrated with on-site observations and the analysis of reported incidents. © 2017 John Wiley & Sons, Ltd.
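The confidence intervals quoted above are consistent with a normal-approximation (Wald) interval for a proportion. A small sketch that reproduces the reported 137/280 figure:

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion, as percentages
    rounded to one decimal: (point estimate, lower bound, upper bound)."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)  # half-width of the interval
    return (round(100 * p, 1),
            round(100 * (p - half), 1),
            round(100 * (p + half), 1))

# Reproduces the figures reported above: 137/280 cases with at least one risk.
print(wald_ci(137, 280))  # → (48.9, 43.1, 54.8)
```

The Wald interval is only one of several interval choices (Wilson and exact intervals behave better near 0% or 100%), but it matches the abstract's numbers here.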
Holmes, George M; Kaufman, Brystana G; Pink, George H
Annual rates of rural hospital closure have been increasing since 2010, and hospitals that close have poor financial performance relative to those that remain open. This study develops and validates a latent index of financial distress to forecast the probability of financial distress and closure within 2 years for rural hospitals. Hospital and community characteristics are used to predict the risk of financial distress 2 years in the future. Financial and community data were drawn for 2,466 rural hospitals from 2000 through 2013. We tested and validated a model predicting a latent index of financial distress (FDI), measured by unprofitability, equity decline, insolvency, and closure. Using the predicted FDI score, hospitals are assigned to high, medium-high, medium-low, and low risk of financial distress for use by practitioners. The FDI forecasts 8.01% of rural hospitals to be at high risk of financial distress in 2015, 16.3% at medium-high, 46.8% at medium-low, and 28.9% at low risk. The rate of closure for hospitals in the high-risk category is 4 times the rate in the medium-high category and 28 times that in the medium-low category. The ability of the FDI to discriminate hospitals experiencing financial distress is supported by a c-statistic of 0.74 in a validation sample. This methodology offers improved specificity and predictive power relative to existing measures of financial distress applied to rural hospitals. This risk assessment tool may inform programs at the federal, state, and local levels that provide funding or support to rural hospitals. © 2016 National Rural Health Association.
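Assigning hospitals to risk bands from a predicted FDI score amounts to simple thresholding. A sketch with hypothetical cutoffs; the published model's actual thresholds are not given in the abstract:

```python
# The cutoffs below are invented placeholders for illustration only; the
# paper's FDI thresholds are not reproduced here.
def risk_band(fdi, cuts=(0.25, 0.5, 0.75)):
    """Map a predicted FDI score in [0, 1] to one of four risk bands."""
    low, medium_low, medium_high = cuts
    if fdi < low:
        return "low"
    if fdi < medium_low:
        return "medium-low"
    if fdi < medium_high:
        return "medium-high"
    return "high"
```

In practice such cutoffs would be chosen on the training sample, e.g. to match target band proportions like the 8.01% / 16.3% / 46.8% / 28.9% split reported above.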
C Aneke John
Full Text Available Background: The occurrence of adverse reactions to blood donation significantly hampers donor retention and negatively impacts the universal availability of adequate numbers of blood donor units. Objective: To analyze the spectrum and prevalence of adverse reactions in blood donors in a tertiary hospital-based blood bank in Nigeria. Subjects and Methods: The details of 3520 blood donors who presented for donation over a 12-month period were retrieved from the departmental archives for analysis. These included sociodemographic information, type of donor, and the type and frequency of adverse reactions to blood donation. Data were analyzed using the Statistical Package for the Social Sciences version 20.0 (SPSS Inc., Chicago, IL, USA) computer software. Descriptive and inferential statistics were employed to represent the distribution of donor characteristics (as percentages) and compare reaction rates by gender and severity, respectively. Results: The prevalence of adverse reactions to blood donation was 1.60% (56/3520); reactions occurred more frequently in male and family replacement donors (55.35% and 100.0%, respectively). The spectrum of donor adverse reactions included anxiety 25 (44.64%), generalized body weakness 11 (19.64%), hematoma 10 (17.86%), fainting 5 (8.93%), and vomiting 5 (8.93%). Vasovagal reactions were the most frequent adverse reaction encountered among the donors (46/56; 82.14%). Conclusion: Vasovagal reactions are common adverse phenomena in our blood donor population; this has implications for transfusion safety and blood donor retention.
Gardiner, L R; Oswald, S L; Jahera, J S
This study investigates the ability of discriminant analysis to provide accurate predictions of hospital failure. Using data from the period following the introduction of the Prospective Payment System, we developed discriminant functions for each of two hospital ownership categories: not-for-profit and proprietary. The resulting discriminant models contain six and seven variables, respectively. For each ownership category, the variables represent four major aspects of financial health (liquidity, leverage, profitability, and efficiency) plus county market share and length of stay. The proportion of closed hospitals misclassified as open one year before closure does not exceed 0.05 for either ownership type. Our results show that discriminant functions based on a small set of financial and nonfinancial variables provide the capability to predict hospital failure reliably for both not-for-profit and proprietary hospitals.
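Discriminant analysis of the kind described above can be illustrated with a two-class Fisher linear discriminant: observations are projected onto the direction that best separates the class means relative to the pooled within-class scatter. A minimal NumPy sketch on synthetic data; the feature vectors and group labels are invented for the demo, not the study's variables:

```python
import numpy as np

def fisher_discriminant(X0, X1):
    """Two-class Fisher linear discriminant.
    Returns (w, threshold); classify x as class 1 when w @ x > threshold."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix, lightly regularized for stability.
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1)
          + 1e-6 * np.eye(X0.shape[1]))
    w = np.linalg.solve(Sw, m1 - m0)
    # Midpoint of the projected class means as a simple decision threshold.
    return w, w @ (m0 + m1) / 2

# Demo on well-separated synthetic "open" vs "closed" groups in two features.
rng = np.random.default_rng(0)
open_h = rng.normal(0.0, 1.0, size=(50, 2))
closed_h = rng.normal(5.0, 1.0, size=(50, 2))
w, thresh = fisher_discriminant(open_h, closed_h)
```

The study's six- and seven-variable functions follow the same construction with more predictors and, presumably, prior-adjusted cutoffs.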
Zhao, Jing; Henriksson, Aron; Asker, Lars; Boström, Henrik
The digitization of healthcare data, resulting from the increasingly widespread adoption of electronic health records, has greatly facilitated its analysis by computational methods and thereby enabled large-scale secondary use thereof. This can be exploited to support public health activities such as pharmacovigilance, wherein the safety of drugs is monitored to inform regulatory decisions about sustained use. To that end, electronic health records have emerged as a potentially valuable data source, providing access to longitudinal observations of patient treatment and drug use. A nascent line of research concerns predictive modeling of healthcare data for the automatic detection of adverse drug events, which presents its own set of challenges: it is not yet clear how to represent the heterogeneous data types in a manner conducive to learning high-performing machine learning models. Datasets from an electronic health record database are used for learning predictive models with the purpose of detecting adverse drug events. The use and representation of two data types, as well as their combination, are studied: clinical codes, describing prescribed drugs and assigned diagnoses, and measurements. Feature selection is conducted on the various types of data to reduce dimensionality and sparsity, while allowing for an in-depth feature analysis of the usefulness of each data type and representation. Within each data type, combining multiple representations yields better predictive performance compared to using any single representation. The use of clinical codes for adverse drug event detection significantly outperforms the use of measurements; however, there is no significant difference over datasets between using only clinical codes and their combination with measurements. For certain adverse drug events, the combination does, however, outperform using only clinical codes. Feature selection leads to increased predictive performance for both data types, in isolation and
McFarland, Daniel C.; Ornstein, Katherine; Holcombe, Randall F.
Background Hospital Value-Based Purchasing (HVBP) incentivizes quality performance based healthcare by linking payments directly to patient satisfaction scores obtained from Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) surveys. Lower HCAHPS scores appear to cluster in heterogeneous population dense areas and could bias CMS reimbursement. Objective Assess nonrandom variation in patient satisfaction as determined by HCAHPS. Design Multivariate regression modeling was performed for individual dimensions of HCAHPS and aggregate scores. Standardized partial regression coefficients assessed strengths of predictors. Weighted Individual (hospital) Patient Satisfaction Adjusted Score (WIPSAS) utilized four highly predictive variables and hospitals were re-ranked accordingly. Setting 3,907 HVBP-participating hospitals. Patients 934,800 patient surveys, by most conservative estimate. Measurements 3,144 county demographics (U.S. Census), and HCAHPS. Results Hospital size and primary language (‘non-English speaking’) most strongly predicted unfavorable HCAHPS scores while education and white ethnicity most strongly predicted favorable HCAHPS scores. The average adjusted patient satisfaction scores calculated by WIPSAS approximated the national average of HCAHPS scores. However, WIPSAS changed hospital rankings by variable amounts depending on the strength of the predictive variables in the hospitals’ locations. Structural and demographic characteristics that predict lower scores were accounted for by WIPSAS that also improved rankings of many safety-net hospitals and academic medical centers in diverse areas. Conclusions Demographic and structural factors (e.g., hospital beds) predict patient satisfaction scores even after CMS adjustments. CMS should consider WIPSAS or a similar adjustment to account for the severity of patient satisfaction inequities that hospitals could strive to correct. PMID:25940305
Huang, Liang-Chin; Wu, Xiaogang; Chen, Jake Y
The prediction of adverse drug reactions (ADRs) has become increasingly important, due to the rising concern over serious ADRs that can cause drugs to fail to reach or stay in the market. We proposed a framework for predicting ADR profiles by integrating protein-protein interaction (PPI) networks with drug structures. We compared ADR prediction performance over 18 ADR categories through four feature groups: drug targets only, drug targets with PPI networks, drug structures, and drug targets with PPI networks plus drug structures. The results showed that the integration of PPI networks and drug structures can significantly improve ADR prediction performance. The median AUC values for the four groups were 0.59, 0.61, 0.65, and 0.70. We used the protein features in the best two models, "Cardiac disorders" (median AUC: 0.82) and "Psychiatric disorders" (median AUC: 0.76), to build ADR-specific PPI networks with literature support. For validation, we examined 30 drugs withdrawn from the U.S. market to see if our approach could predict their ADR profiles and explain why they were withdrawn. Excluding three drugs with ADRs in categories we did not predict, 25 of the 27 withdrawn drugs (92.6%) with severe ADRs were successfully predicted by our approach. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
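The median AUC values reported above refer to the area under the ROC curve, which equals the probability that a randomly chosen positive is ranked above a randomly chosen negative (ties counted as half). A minimal rank-based computation:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney formulation:
    the fraction of (positive, negative) pairs where the positive
    receives the higher score, counting ties as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.70, as for the combined feature group above, means a randomly chosen drug with the ADR outscores a randomly chosen drug without it 70% of the time.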
Rutten, Bert; Roest, Mark; McClellan, Elizabeth A; Sels, Jan W; Stubbs, Andrew; Jukema, J Wouter; Doevendans, Pieter A; Waltenberger, Johannes; van Zonneveld, Anton-Jan; Pasterkamp, Gerard; De Groot, Philip G; Hoefer, Imo E
Monocyte recruitment to damaged endothelium is enhanced by platelet binding to monocytes and contributes to vascular repair. Therefore, we studied whether the number of platelets per monocyte affects the recurrence of adverse events in patients after percutaneous coronary intervention (PCI). Platelet-monocyte complexes with high and low median fluorescence intensities (MFI) of the platelet marker CD42b were isolated using cell sorting. Microscopic analysis revealed that a high platelet marker MFI on monocytes corresponded with a high platelet density per monocyte while a low platelet marker MFI corresponded with a low platelet density per monocyte (3.4 ± 0.7 vs 1.4 ± 0.1 platelets per monocyte, P=0.01). Using real-time video microscopy, we observed increased recruitment of high platelet density monocytes to endothelial cells as compared with low platelet density monocytes (P=0.01). Next, we classified PCI scheduled patients (N=263) into groups with high, medium and low platelet densities per monocyte and assessed the recurrence of adverse events. After multivariate adjustment for potential confounders, we observed a 2.5-fold reduction in the recurrence of adverse events in patients with a high platelet density per monocyte as compared with a low platelet density per monocyte [hazard ratio=0.4 (95% confidence interval, 0.2-0.8), P=0.01]. We show that a high platelet density per monocyte increases monocyte recruitment to endothelial cells and predicts a reduction in the recurrence of adverse events in patients after PCI. These findings may imply that a high platelet density per monocyte protects against recurrence of adverse events.
Barbara A Jennings
Full Text Available The potential clinical utility of genetic markers associated with response to fluoropyrimidine treatment in colorectal cancer patients remains controversial despite extensive study. Our aim was to test the clinical validity of both novel and previously identified markers of adverse events in a broad clinical setting. We have conducted an observational pharmacogenetic study of early adverse events in a cohort of 254 colorectal cancer patients treated with 5-fluorouracil or capecitabine. Sixteen variants of nine key folate (pharmacodynamic) and drug-metabolising (pharmacokinetic) enzymes were analysed as individual markers and/or signatures of markers. We found a significant association between TYMP S471L (rs11479) and early dose modifications and/or severe adverse events (adjusted OR = 2.02 [1.03; 4.00], p = 0.042, and adjusted OR = 2.70 [1.23; 5.92], p = 0.01, respectively). There was also a significant association between these phenotypes and a signature of DPYD mutations (adjusted OR = 3.96 [1.17; 13.33], p = 0.03, and adjusted OR = 6.76 [1.99; 22.96], p = 0.002, respectively). We did not identify any significant associations between the individual candidate pharmacodynamic markers and toxicity. If a predictive test for early adverse events analysed the TYMP and DPYD variants as a signature, the sensitivity would be 45.5%, with a positive predictive value of just 33.9%, and thus poor clinical validity. Most studies to date have been under-powered to consider multiple pharmacokinetic and pharmacodynamic variants simultaneously, but this and similar individualised data sets could be pooled in meta-analyses to resolve uncertainties about the potential clinical utility of these markers.
Ratan J. Lihite
Jun 27, 2016 ... Patients of all age and either sex were included. Adverse drug ... adverse drug reactions in majority of the patients. The commonly .... ten prescription drugs were excluded. .... Pneumonia with respiratory distress, Vision problem, Knee pain, .... back of spontaneous reporting system i.e. underreporting. Thus ...
Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong
Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance for constructing more effective models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods for predicting ADRs, by implementing and evaluating additional algorithms previously used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent, and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that the final formulas of these algorithms can all be converted to a linear model in form. Based on this finding, we propose a new algorithm, the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.
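The role of the Jaccard coefficient can be illustrated with a neighbourhood-style scorer for candidate drug-ADR pairs: a drug is judged likely to cause an ADR if drugs with similar known ADR profiles cause it. This sketch is an interpretation of the weighted-profile idea, not the paper's exact formula, and the drug names and profiles are invented for the demo:

```python
def jaccard(a, b):
    """Jaccard coefficient between two sets, e.g. known ADR profiles."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def weighted_profile_score(drug, adr, profiles):
    """Score a candidate drug-ADR pair as a Jaccard-similarity-weighted
    average of whether other drugs are known to cause the ADR."""
    others = [d for d in profiles if d != drug]
    num = sum(jaccard(profiles[drug], profiles[d])
              for d in others if adr in profiles[d])
    den = sum(jaccard(profiles[drug], profiles[d]) for d in others)
    return num / den if den else 0.0

# Toy knowledge base: hypothetical drugs with their known ADR sets.
profiles = {
    "d1": {"nausea", "rash"},
    "d2": {"nausea", "rash", "headache"},
    "d3": {"insomnia"},
}
```

Here d1 scores highly for "headache" because its only similar neighbour, d2, is known to cause it.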
Jones, Michael J; Neal, Christopher P; Ngu, Wee Sing; Dennison, Ashley R; Garcea, Giuseppe
The aim of this study was to compare the prognostic value of established scoring systems with early warning scores in a large cohort of patients with acute pancreatitis. In patients presenting with acute pancreatitis, age, sex, American Society of Anaesthesiologists (ASA) grade, Modified Glasgow Score, Ranson criteria, APACHE II scores and early warning score (EWS) were recorded for the first 72 h following admission. These variables were compared between survivors and non-survivors, between patients with mild/moderate and severe pancreatitis (based on the 2012 Atlanta Classification) and between patients with a favourable or adverse outcome. A total of 629 patients were identified. EWS was the best predictor of adverse outcome amongst all of the assessed variables (area under curve (AUC) values 0.81, 0.84 and 0.83 for days 1, 2 and 3, respectively) and was the most accurate predictor of mortality on both days 2 and 3 (AUC values of 0.88 and 0.89, respectively). Multivariable analysis revealed that an EWS ≥2 was independently associated with severity of pancreatitis, adverse outcome and mortality. This study confirms the usefulness of EWS in predicting the outcome of acute pancreatitis. It should become the mainstay of risk stratification in patients with acute pancreatitis.
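An aggregate early warning score like the EWS above sums per-parameter points from banded vital signs, with higher points for more deranged values. The bands below are invented for demonstration and are not the validated chart used in the study:

```python
# Illustrative scoring bands: each list gives (upper_bound_inclusive, points),
# scanned in order. These thresholds are hypothetical, not a clinical chart.
BANDS = {
    "resp_rate":  [(9, 3), (11, 1), (20, 0), (24, 2), (float("inf"), 3)],
    "heart_rate": [(40, 3), (50, 1), (90, 0), (110, 1), (130, 2), (float("inf"), 3)],
    "sbp":        [(90, 3), (100, 2), (110, 1), (float("inf"), 0)],
    "temp":       [(35.0, 3), (36.0, 1), (38.0, 0), (39.0, 1), (float("inf"), 2)],
}

def ews(vitals):
    """Sum of per-parameter points; vitals maps parameter name -> value."""
    total = 0
    for name, value in vitals.items():
        for upper, points in BANDS[name]:
            if value <= upper:
                total += points
                break
    return total
```

Under a scheme like this, the study's cutoff of EWS ≥2 flags any patient with one moderately or two mildly deranged parameters.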
Hospital admission, especially for the elderly, can be a seminal event, as many patients die within a year. This study reports how well the Simple Clinical Score (SCS) and ECG dispersion mapping (ECG-DM) predict death within a year of admission to hospital. ECG-DM is a novel technique that analyzes low-amplitude ECG oscillations and reports them as the myocardial micro-alternation index (MMI).
Uematsu, Hironori; Yamashita, Kazuto; Kunisawa, Susumu; Otsubo, Tetsuya; Imanaka, Yuichi
Community-acquired pneumonia is a common cause of hospitalization, and pneumococcal vaccinations are recommended for high-risk individuals. Although risk factors for pneumonia have been identified, there are currently no pneumonia hospitalization prediction models based on the risk profiles of healthy subjects. This study aimed to develop a predictive model for pneumonia hospitalization in adults to accurately identify high-risk individuals to facilitate the efficient prevention of pneumonia. We conducted a retrospective database analysis using health checkup data and health insurance claims data for residents of Kyoto prefecture, Japan, between April 2010 and March 2015. We chose adults who had undergone health checkups in the first year of the study period, and tracked pneumonia hospitalizations over the next 5 years. Subjects were randomly divided into training and test sets. The outcome measure was pneumonia hospitalization, and candidate predictors were obtained from the health checkup data. The prediction model was developed and internally validated using a LASSO logistic regression analysis. Lastly, we compared the new model with comparative models. The study sample comprised 54,907 people who had undergone health checkups. Among these, 921 were hospitalized for pneumonia during the study period. The c-statistic for the prediction model in the test set was 0.71 (95% confidence interval: 0.69-0.73). In contrast, a comparative model with only age and comorbidities as predictors had a lower c-statistic of 0.55 (95% confidence interval: 0.54-0.56). Our predictive model for pneumonia hospitalization performed better than comparative models, and may be useful for supporting the development of pneumonia prevention measures.
Hornik, Christoph P; Herring, Amy H; Benjamin, Daniel K; Capparelli, Edmund V; Kearns, Gregory L; van den Anker, John; Cohen-Wolkowiez, Michael; Clark, Reese H; Smith, P Brian
Carbapenems are commonly used in hospitalized infants despite a lack of complete safety data and associations with seizures in older children. We compared the incidence of adverse events in hospitalized infants receiving meropenem versus imipenem/cilastatin. We conducted a retrospective cohort study of 5566 infants treated with meropenem or imipenem/cilastatin in neonatal intensive care units managed by the Pediatrix Medical Group between 1997 and 2010. Multivariable conditional logistic regression was performed to evaluate the association between carbapenem therapy and adverse events, controlling for infant factors and severity of illness. Adverse events were more common with use of meropenem compared with imipenem/cilastatin (62.8/1000 infant days versus 40.7/1000 infant days, P < 0.001). The adjusted odds of seizures did not differ between meropenem and imipenem/cilastatin (adjusted odds ratio 0.96; 95% confidence interval: 0.68, 1.32). The incidence of death, as well as the combined outcome of death or seizure, was lower with meropenem use: odds ratio 0.68 (0.50, 0.88) and odds ratio 0.77 (0.62, 0.95), respectively. In this cohort of infants, meropenem was associated with more frequent but less severe adverse events when compared with imipenem/cilastatin.
Full Text Available Abstract Objective: In-hospital mortality is an important performance measure for quality improvement, although it requires proper risk adjustment. We set out to develop in-hospital mortality prediction models for acute hospitalization using a nation-wide electronic administrative record system in Japan. Methods: Administrative records of 224,207 patients (discharged from 82 hospitals in Japan between July 1, 2002 and October 31, 2002) were randomly split into preliminary (179,156 records) and test (45,051 records) groups. Study variables included Major Diagnostic Category, age, gender, ambulance use, admission status, length of hospital stay, comorbidity, and in-hospital mortality. ICD-10 codes were converted to comorbidity scores based on Quan's methodology. Multivariate logistic regression analysis was then performed using in-hospital mortality as a dependent variable. C-indexes were calculated across risk groups in order to evaluate model performance. Results: In-hospital mortality rates were 2.68% and 2.76% for the preliminary and test datasets, respectively. C-index values were 0.869 for the model that excluded length of stay and 0.841 for the model that included length of stay. Conclusion: The risk models developed in this study included a set of variables easily accessible from administrative data, and still exhibited a high degree of prediction accuracy. These models can be used to estimate in-hospital mortality rates for various diagnoses and procedures.
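Converting ICD-10 codes to comorbidity scores in the spirit of Quan's methodology reduces to prefix matching against per-category code lists and summing category weights, counting each category at most once. A sketch showing only a few of the many published categories; the full mapping is far larger:

```python
# Partial, illustrative subset of a Charlson-style mapping: each category
# lists ICD-10 code prefixes and carries a weight. Not the complete table.
CATEGORIES = {
    "myocardial_infarction":   (["I21", "I22"], 1),
    "congestive_heart_failure": (["I50"], 1),
    "renal_disease":           (["N18", "N19"], 2),
    "metastatic_cancer":       (["C77", "C78", "C79", "C80"], 6),
}

def comorbidity_score(icd10_codes):
    """Sum category weights for categories matched by any code prefix;
    each category contributes at most once regardless of code count."""
    score = 0
    for prefixes, weight in CATEGORIES.values():
        if any(code.startswith(p) for p in prefixes for code in icd10_codes):
            score += weight
    return score
```

The resulting score then enters the mortality model above as one covariate alongside age, admission status, and the other administrative variables.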
Full Text Available Jie Han,* Xiaona Wang,* Ping Ye, Ruihua Cao, Xu Yang, Wenkai Xiao, Yun Zhang, Yongyi Bai, Hongmei Wu; Department of Geriatric Cardiology, Chinese PLA General Hospital, Beijing, People's Republic of China. *These authors contributed equally to this work. Objectives: Despite growing evidence that arterial stiffness has important predictive value for cardiovascular disease in patients with advanced stages of chronic kidney disease, the predictive significance of arterial stiffness in individuals with mildly impaired renal function has not been established. The aim of this study was to evaluate the predictive value of arterial stiffness for cardiovascular disease in this specific population. Materials and methods: We analyzed measurements of arterial stiffness (carotid–femoral pulse-wave velocity [cf-PWV]) and the incidence of major adverse cardiovascular events (MACEs) in 1,499 subjects from a 4.8-year longitudinal study. Results: A multivariate Cox proportional-hazard regression analysis showed that in individuals with normal renal function (estimated glomerular filtration rate [eGFR] ≥90 mL/min/1.73 m2), baseline cf-PWV was not associated with the occurrence of MACEs (hazard ratio 1.398, 95% confidence interval 0.748–2.613; P=0.293). In individuals with mildly impaired renal function (eGFR <90 mL/min/1.73 m2), a higher baseline cf-PWV level was associated with a higher risk of MACEs (hazard ratio 2.334, 95% confidence interval 1.082–5.036; P=0.031). Conclusion: Arterial stiffness is a moderate and independent predictive factor for MACEs in individuals with mildly impaired renal function (eGFR <90 mL/min/1.73 m2). Keywords: epidemiology, arterial stiffness, impaired renal function, predictive value, MACEs
Michelle M Foisy
Full Text Available OBJECTIVE: To characterize and compare the rates of adverse drug reactions (ADRs) and interactions on admission in two one-year periods: pre-highly active antiretroviral therapy (HAART) (phase 1) and post-HAART (phase 2).
Manan, Norhafizah A.; Abidin, Basir
Five percent of patients who undergo Percutaneous Coronary Intervention (PCI) experience Major Adverse Cardiac Events (MACE) after the procedure. Risk prediction of MACE following a PCI procedure is therefore helpful. This work describes a review of such prediction models currently in use. A literature search was done on the PubMed and SCOPUS databases. Thirty publications were found, but only 4 studies were chosen based on the data used, design, and outcome of the study. Particular emphasis was placed on the study design, population, sample size, modeling method, predictors, outcomes, and the discrimination and calibration of each model. All the models had acceptable discrimination ability (C-statistic >0.7) and good calibration (Hosmer-Lemeshow P-value >0.05). The most common modeling method was multivariate logistic regression, and the most popular predictor was age.
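The Hosmer-Lemeshow calibration statistic mentioned above compares observed and expected event counts across groups ordered by predicted risk; a small chi-square value (large P-value) indicates good calibration. A minimal sketch using equal-size groups:

```python
def hosmer_lemeshow(probs, outcomes, groups=10):
    """Hosmer-Lemeshow chi-square statistic: sort by predicted probability,
    split into equal-size groups, and compare observed vs expected events.
    (The P-value would come from a chi-square distribution, groups - 2 df.)"""
    paired = sorted(zip(probs, outcomes))
    n = len(paired)
    chi2 = 0.0
    for g in range(groups):
        chunk = paired[g * n // groups:(g + 1) * n // groups]
        if not chunk:
            continue
        observed = sum(y for _, y in chunk)
        expected = sum(p for p, _ in chunk)
        m = len(chunk)
        pbar = expected / m
        if 0 < pbar < 1:
            chi2 += (observed - expected) ** 2 / (m * pbar * (1 - pbar))
    return chi2
```

A well-calibrated model yields a statistic near its degrees of freedom, which is what the reviewed models' P-values >0.05 reflect.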
Rafael San-Miguel Carrasco
Full Text Available Geriatric Medicine constitutes a clinical research field in which data analytics, particularly predictive modeling, can deliver compelling, reliable and long-lasting benefits, as well as non-intuitive clinical insights and net new knowledge. The research work described in this paper leverages predictive modeling to uncover new insights related to adverse drug reactions in elderly patients. What sets this research exercise apart from traditional clinical research is that it was not designed around a particular hypothesis to be validated. Instead, it was data-centric: data were mined to discover relationships or correlations among variables. Regression techniques were systematically applied to the data through multiple iterations and under different configurations. The results obtained are explained and discussed below.
Eliana Auxiliadora M. Costa
Full Text Available Surgical adverse events are especially relevant because of their impact on patients' health and because they are preventable. Despite the growing number of publications in this area, gaps remain in knowledge about these events in the ambulatory surgical care modality. This study aimed to estimate the incidence of surgical adverse events at a day hospital. It is a retrospective cohort study of 55,879 patients operated on at a day hospital between 2010 and 2014. The incidence of surgical adverse events was 0.51%. Of these, 0.31% were surgical site infections and 0.19% were other surgical adverse events, distributed proportionally as surgical wound dehiscence (12.90%), hemorrhage (5.20%), phlebitis (5.20%), and lower-limb thrombosis (4.90%). The results of this study confirm that surgery performed on an ambulatory, day-hospital basis is associated with lower incidences of surgical adverse events; nevertheless, a system for following patients after discharge is indispensable, in order to avoid underreporting and under-registration of data, without which rates may be concealed or unrealistic.
Mangoni, Arduino A
Increased, often inappropriate, drug exposure, pharmacokinetic and pharmacodynamic changes, reduced homeostatic reserve and frailty increase the risk of adverse drug reactions (ADRs) in the older population, thereby imposing a significant public health burden. Predicting and diagnosing ADRs in old age presents significant challenges for the clinician, even when specific risk scoring systems are available. The picture is further compounded by the potential adverse impact of several drugs on more 'global' health indicators, for example, physical function and independence, and the fragmentation of care (e.g., increased number of treating doctors and care transitions) experienced by older patients during their clinical journey. The current knowledge of drug safety in old age is also curtailed by the lack of efficacy and safety data from pre-marketing studies. Moreover, little consideration is given to individual patients' experiences and reporting of specific ADRs, particularly in the presence of cognitive impairment. Pending additional data on these issues, the close review and monitoring of individual patients' drug prescribing, clinical status and biochemical parameters remain essential to predict and detect ADRs in old age. Recently developed strategies, for example, medication reconciliation and trigger tool methodology, have the potential for ADRs risk mitigation in this population. However, more information is required on their efficacy and applicability in different healthcare settings.
There has been increasing demand for improved service provisioning in hospital resource management. Hospitals work under strict budget constraints while at the same time assuring quality care. Achieving quality care within budget constraints requires an efficient prediction model. Recently, various time-series-based prediction models have been proposed to manage hospital resources such as ambulance monitoring and emergency care. These models are not efficient because they do not consider contextual factors such as climate conditions. To address this, artificial intelligence is adopted. The issue with existing prediction approaches is that training suffers from local optima error, which induces overhead and affects prediction accuracy. To overcome the local minima error, this work presents a patient inflow prediction model that adopts a resilient backpropagation neural network. Experiments are conducted to evaluate the performance of the proposed model in terms of RMSE and MAPE. The outcomes show that the proposed model reduces RMSE and MAPE compared with an existing backpropagation-based artificial neural network. Overall, the proposed prediction model improves the accuracy of prediction, which aids in improving the quality of healthcare management.
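The two error measures used to compare the models above can be computed directly; a minimal sketch (function names are illustrative, not taken from the paper):

```python
import math

def rmse(actual, predicted):
    # Root mean squared error: penalizes large deviations quadratically.
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mape(actual, predicted):
    # Mean absolute percentage error: scale-free, assumes no zero actuals.
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)
```

For example, `rmse([100, 120], [110, 110])` yields 10.0, and `mape([100, 200], [110, 180])` yields 10.0 (percent).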
Background: Increased impedance to flow in the uterine arteries, assessed by Doppler, is associated with adverse pregnancy outcomes, especially pre-eclampsia. We investigated the predictive value of uterine artery Doppler in the identification of adverse pregnancy outcomes such as pre-eclampsia and 'small for gestational age' (SGA). Materials and Methods: Three hundred and seventy-nine women with singleton pregnancies, between 18 and 40 years of age, without risk factors, randomly underwent Doppler interrogation of the uterine arteries between 16-22 weeks of gestation. Those who had a mean pulsatility index (PI) of >1.45 were considered to have an abnormal result, and were evaluated and compared with those who had normal results for adverse pregnancy outcomes, including pre-eclampsia and small for gestational age. The relationship between the variables was assessed with the chi-square test. Results: There were 17 cases (4.5%) of abnormal uterine artery Doppler results; 15 of them (88.2%) developed pre-eclampsia and four cases (23.5%) had neonates small for gestational age. For predicting pre-eclampsia, a mean uterine artery PI >1.45 had a specificity of 95.5% (95% CI, 70-92%), a sensitivity of 79% (95% CI, 43-82%), a negative predictive value (NPV) of 98.9% (95% CI, 72-96%), and a positive predictive value (PPV) of 88.2% (95% CI, 68-98%). For 'small for gestational age' it had a specificity of 96.5% (95% CI, 42-68%), a sensitivity of 57% (95% CI, 53-76%), an NPV of 99.2% (95% CI, 70-92%), and a PPV of 23.5% (95% CI, 30-72%). Conclusion: Uterine artery Doppler evaluation at 16-22 weeks of gestation might be an appropriate tool for identifying pregnancies at increased risk of developing pre-eclampsia and a small-for-gestational-age fetus.
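The screening metrics reported in this abstract follow from a standard 2x2 contingency table; a sketch, with counts reconstructed approximately from the abstract for illustration (this is not the authors' analysis code):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    # Standard screening-test metrics from a 2x2 contingency table.
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Illustrative: 17 abnormal Doppler results, 15 of which developed
# pre-eclampsia, in a cohort of 379 (approximate reconstruction).
m = diagnostic_metrics(tp=15, fp=2, fn=4, tn=358)
```

With these counts, PPV is 15/17, approximately the 88.2% quoted, and NPV is 358/362, approximately the 98.9% quoted.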
Wang, Liangcheng; Matsunaga, Shigetaka; Mikami, Yukiko; Takai, Yasushi; Terui, Katsuo; Seki, Hiroyuki
Placental abruption is a severe obstetric complication of pregnancy that can cause disseminated intravascular coagulation and progress to massive post-partum hemorrhage. Coagulation disorder due to extreme consumption of fibrinogen is considered the main pathogenesis of disseminated intravascular coagulation in patients with placental abruption. The present study sought to determine if the pre-delivery fibrinogen level could predict adverse maternal or neonatal outcomes in patients with placental abruption. This retrospective medical chart review was conducted in a center for maternal, fetal, and neonatal medicine in Japan with 61 patients with placental abruption. Fibrinogen levels prior to delivery were collected and evaluated for the prediction of maternal and neonatal outcomes. The main outcome measures for maternal outcomes were disseminated intravascular coagulation and hemorrhage, and the main outcome measures for neonatal outcomes were Apgar score at 5 min, umbilical artery pH, and stillbirth. The receiver operating characteristic curve and multivariate logistic regression analyses indicated that fibrinogen significantly predicted overt disseminated intravascular coagulation and the requirement of ≥6 red blood cell units, ≥10 fresh frozen plasma units, and ≥20 fresh frozen plasma units for transfusion. Moderate hemorrhage occurred in 71.5% of patients with a decrease in fibrinogen levels to 155 mg/dL. Fibrinogen could also predict neonatal outcomes, including umbilical artery pH, in pregnancies complicated by placental abruption. © 2016 Japan Society of Obstetrics and Gynecology.
Ivanov, Sergey M; Lagunin, Alexey A; Rudik, Anastasia V; Filimonov, Dmitry A; Poroikov, Vladimir V
Application of structure-activity relationships (SARs) for the prediction of adverse effects of drugs (ADEs) has been reported in many published studies. Training sets for the creation of SAR models are usually based on drug label information, which allows for the generation of data sets for many hundreds of drugs. Since many ADEs may not be related to drug consumption, one of the main problems in such studies is the quality of data on drug-ADE pairs obtained from labels. The information on ADEs may be included in three sections of the drug labels: "Boxed warning," "Warnings and Precautions," and "Adverse reactions." The first two sections, especially Boxed warning, usually contain the most frequent and severe ADEs that have either known or probable relationships to drug consumption. Using this information, we have created manually curated data sets for the five most frequent and severe ADEs: myocardial infarction, arrhythmia, cardiac failure, severe hepatotoxicity, and nephrotoxicity, with more than 850 drugs on average for each effect. The corresponding SARs were built with PASS (Prediction of Activity Spectra for Substances) software and had balanced accuracy values of 0.74, 0.7, 0.77, 0.67, and 0.75, respectively. They were implemented in a freely available ADVERPred web service ( http://www.way2drug.com/adverpred/ ), which enables a user to predict five ADEs based on the structural formula of a compound. This web service can be applied to estimate the corresponding ADEs for hits and lead compounds at the early stages of drug discovery.
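Balanced accuracy, the figure of merit quoted for the PASS models, is the mean of the true positive and true negative rates; a minimal sketch of the metric itself (not the PASS implementation):

```python
def balanced_accuracy(tp, fp, fn, tn):
    # Mean of sensitivity and specificity; robust to class imbalance,
    # which matters when positives (drugs causing a given ADE) are rare.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (sensitivity + specificity) / 2
```

For example, a model with sensitivity 0.8 and specificity 0.7 has a balanced accuracy of 0.75, regardless of how skewed the class sizes are.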
Zinnat Ara Begum
The study, conducted in the Medicine and Skin outpatient departments of Dhaka Medical College, Dhaka, revealed 19 cases (7 males, 12 females) of adverse drug reactions (ADRs) out of 160 patients. Of these ADRs, 31.58% were mild, 42.1% moderate and 26.32% severe. Gastrointestinal complications were the most frequent adverse effect (56%). Antimicrobial drugs were the most common cause of ADRs (42.86%), followed by NSAIDs (33.33%). This preliminary study provides information on the pattern of ADRs in Bangladesh and indicates the need for further studies.
Lazzeroni, Davide; Bini, Matteo; Camaiora, Umberto; Castiglioni, Paolo; Moderato, Luca; Bosi, Davide; Geroldi, Simone; Ugolotti, Pietro T; Brambilla, Lorenzo; Brambilla, Valerio; Coruzzi, Paolo
Background High levels of serum uric acid have been associated with adverse outcomes in cardiovascular diseases such as myocardial infarction and heart failure. The aim of the current study was to evaluate the prognostic role of serum uric acid levels in patients undergoing cardiac rehabilitation after myocardial revascularization and/or cardiac valve surgery. Design We performed an observational prospective cohort study. Methods The study included 1440 patients with available serum uric acid levels, prospectively followed for 50 ± 17 months. Mean age was 67 ± 11 years; 781 patients (54%) underwent myocardial revascularization, 474 (33%) cardiac valve surgery and 185 (13%) valve-plus-coronary artery by-pass graft surgery. The primary endpoints were overall and cardiovascular mortality while secondary end-points were combined major adverse cardiac and cerebrovascular events. Results Serum uric acid level mean values were 286 ± 95 µmol/l and elevated serum uric acid levels (≥360 µmol/l or 6 mg/dl) were found in 275 patients (19%). Overall mortality (hazard ratio = 2.1; 95% confidence interval: 1.5-3.0; p uric acid levels, even after adjustment for age, gender, arterial hypertension, diabetes, glomerular filtration rate, atrial fibrillation and medical therapy. Moreover, strong positive correlations between serum uric acid level and probability of overall mortality ( p uric acid levels predict mortality and adverse cardiovascular outcome in patients undergoing myocardial revascularization and/or cardiac valve surgery even after the adjustment for age, gender, arterial hypertension, diabetes, glomerular filtration rate and medical therapy.
Acne (46) was the most commonly reported reaction. Topical steroids, betamethasone sodium phosphate and clobetasol, were reported to induce the greatest number of reactions (59). Skin (227, 66.9%) was the most commonly affected organ system. Most of the adverse drug reactions were possible (240, 94.1%) and mild (222, 87%) in nature.
Saikali, Melody; Tanios, Alain; Saab, Antoine
The aim of the study was to evaluate the sensitivity and resource efficiency of a partially automated adverse event (AE) surveillance system for routine patient safety efforts in hospitals with limited resources. Twenty-eight automated triggers from the hospital information system's clinical and administrative databases identified cases that were then filtered by exclusion criteria per trigger and then reviewed by an interdisciplinary team. The system, developed and implemented using in-house resources, was applied for 45 days of surveillance, for all hospital inpatient admissions (N = 1107). Each trigger was evaluated for its positive predictive value (PPV). Furthermore, the sensitivity of the surveillance system (overall and by AE category) was estimated relative to incidence ranges in the literature. The surveillance system identified a total of 123 AEs among 283 reviewed medical records, yielding an overall PPV of 52%. The tool showed variable levels of sensitivity across and within AE categories when compared with the literature, with a relatively low overall sensitivity estimated between 21% and 44%. Adverse events were detected in 23 of the 36 AE categories defined by an established harm classification system. Furthermore, none of the detected AEs were voluntarily reported. The surveillance system showed variable sensitivity levels across a broad range of AE categories with an acceptable PPV, overcoming certain limitations associated with other harm detection methods. The number of cases captured was substantial, and none had been previously detected or voluntarily reported. For hospitals with limited resources, this methodology provides valuable safety information from which interventions for quality improvement can be formulated.
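The per-trigger positive predictive value used to evaluate this surveillance system is the fraction of trigger-flagged records in which review confirms a true AE, and the sensitivity estimate compares detected AEs against an externally expected count; a sketch with invented numbers (function names and figures are illustrative, not from the study):

```python
def trigger_ppv(flagged_records, confirmed_aes):
    # PPV of a trigger: confirmed AEs / records the trigger flagged.
    return confirmed_aes / flagged_records

def estimated_sensitivity(detected_aes, expected_incidence, admissions):
    # Rough sensitivity against an external incidence figure, as done
    # when no gold-standard review of every record is available.
    return detected_aes / (expected_incidence * admissions)
```

For instance, a trigger that flags 200 records of which 100 are confirmed has a PPV of 0.5; detecting 50 AEs where literature predicts 10% of 1000 admissions gives an estimated sensitivity of 0.5.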
Hogan David B
Background Few studies have directly compared the competing approaches to identifying frailty in more vulnerable older populations. We examined the ability of two versions of a frailty index (43 vs. 83 items), the Cardiovascular Health Study (CHS) frailty criteria, and the CHESS scale to accurately predict the occurrence of three outcomes among Assisted Living (AL) residents followed over one year. Methods The three frailty measures and the CHESS scale were derived from assessment items completed among 1,066 AL residents (aged 65+) participating in the Alberta Continuing Care Epidemiological Studies (ACCES). Adjusted risks of one-year mortality, hospitalization and long-term care placement were estimated for those categorized as frail or pre-frail compared with non-frail (or at high/intermediate vs. low risk on CHESS). The area under the ROC curve (AUC) was calculated for select models to assess the predictive accuracy of the different frailty measures and the CHESS scale in relation to the three outcomes examined. Results Frail subjects defined by the three approaches, and those at high risk for decline on CHESS, showed a statistically significant increased risk for death and long-term care placement compared with those categorized as either not frail or at low risk for decline. The risk estimates for hospitalization associated with the frailty measures and CHESS were generally weaker, with one of the frailty indices (43 items) showing no significant association. For death and long-term care placement, the addition of frailty (however derived) or CHESS significantly improved on the AUC obtained with a model including only age, sex and co-morbidity, though the magnitude of improvement was sometimes small. The different frailty/risk models did not differ significantly from each other in predicting mortality or hospitalization; however, one of the frailty indices (83 items) showed significantly better performance than the other measures in predicting long-term care placement.
Chaya S. Moskowitz
More than 80% of children and young adults diagnosed with invasive cancer will survive five or more years beyond their cancer diagnosis. This population has an increased risk for serious illness- and treatment-related morbidity and premature mortality. A number of these adverse health outcomes, such as cardiovascular disease and some second primary neoplasms, either have modifiable risk factors or can be successfully treated if detected early. Absolute risk models that project a personalized risk of developing a health outcome can be useful in patient counseling, in designing intervention studies, in forming prevention strategies, and in deciding upon surveillance programs. Here, we review existing absolute risk prediction models that are directly applicable to survivors of childhood cancer, discuss the concepts and interpretation of absolute risk models, and examine ways in which these models can be applied in clinical practice and public health.
Gach, Emily J; Ip, Ka I; Sameroff, Arnold J; Olson, Sheryl L
Multiple environmental risk factors in early childhood predict a broad range of adverse developmental outcomes. However, most prior longitudinal research has not illuminated explanatory mechanisms. Our main goals were to examine predictive associations between cumulative ecological risk factors in early childhood and children's later externalizing problems and to determine whether these associations were explained by variations in parenting quality. Participants were 241 children (118 girls) at risk for school-age conduct problems and their parents and teachers. Children were approximately 3 years old at Time 1 (T1) and 10 years old at Time 2 (T2). Reports of contextual risk at T1 were used to develop a cumulative risk index consisting of 6 singular risk variables from 3 ecological levels: social resources (low income; social isolation), family resources (marital aggression; poor total family functioning), and maternal resources (single parent status; poor maternal mental health). At T1, parenting variables were measured (corporal punishment, warm responsiveness, maternal efficacy, and negative perceptions of child behavior). At T2, mothers, fathers, and teachers reported child externalizing problems. Johnson's relative weight analysis revealed that the cumulative risk index was a more powerful predictor of age 10 years externalizing behavior than any of the singular contextual risk variables. Adverse parenting mediated the effects of cumulative risk on later child externalizing problems. Our findings have significant implications for understanding long-term effects of multiple contextual risk factors present in early childhood and for the implementation of positive parenting interventions early on. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Discriminative ability of commonly used indices to predict adverse outcomes after posterior lumbar fusion: a comparison of demographics, ASA, the modified Charlson Comorbidity Index, and the modified Frailty Index.
Ondeck, Nathaniel T; Bohl, Daniel D; Bovonratwet, Patawut; McLynn, Ryan P; Cui, Jonathan J; Shultz, Blake N; Lukasiewicz, Adam M; Grauer, Jonathan N
As research tools, the American Society of Anesthesiologists (ASA) physical status classification system, the modified Charlson Comorbidity Index (mCCI), and the modified Frailty Index (mFI) have been associated with complications following spine procedures. However, with respect to clinical use for various adverse outcomes, no known study has compared the predictive performance of these indices specifically following posterior lumbar fusion (PLF). This study aimed to compare the discriminative ability of ASA, mCCI, and mFI, as well as demographic factors including age, body mass index, and gender for perioperative adverse outcomes following PLF. A retrospective review of prospectively collected data was performed. Patients undergoing elective PLF with or without interbody fusion were extracted from the 2011-2014 American College of Surgeons National Surgical Quality Improvement Program (NSQIP). Perioperative adverse outcome variables assessed included the occurrence of minor adverse events, severe adverse events, infectious adverse events, any adverse event, extended length of hospital stay, and discharge to higher-level care. Patient comorbidity indices and characteristics were delineated and assessed for discriminative ability in predicting perioperative adverse outcomes using an area under the curve analysis from the receiver operating characteristics curves. In total, 16,495 patients were identified who met the inclusion criteria. The most predictive comorbidity index was ASA and demographic factor was age. Of these two factors, age had the larger discriminative ability for three out of the six adverse outcomes and ASA was the most predictive for one out of six adverse outcomes. A combination of the most predictive demographic factor and comorbidity index resulted in improvements in discriminative ability over the individual components for five of the six outcome variables. For PLF, easily obtained patient ASA and age have overall similar or better
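The discriminative ability compared in this study is the area under the ROC curve, which equals the probability that a randomly chosen patient with the adverse outcome receives a higher score than one without it. A pure-Python sketch of that pair-counting view (illustrative, not the study's NSQIP analysis code):

```python
def auroc(scores, outcomes):
    # AUC as the Wilcoxon-Mann-Whitney statistic: the fraction of
    # (positive, negative) pairs ranked correctly, with ties counting half.
    pos = [s for s, y in zip(scores, outcomes) if y]
    neg = [s for s, y in zip(scores, outcomes) if not y]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfectly ordered index scores 1.0; an index no better than chance scores 0.5, which is why small AUC differences between ASA, mCCI, mFI and age are meaningful only relative to that baseline.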
Deilkås, Ellen Tveter; Risberg, Madeleine Borgstedt; Haugen, Marion; Lindstrøm, Jonas Christoffer; Nylén, Urban; Rutberg, Hans; Michael, Soop
In this paper, we explore similarities and differences in hospital adverse event (AE) rates between Norway and Sweden by reviewing medical records with the Global Trigger Tool (GTT). All acute care hospitals in both countries performed medical record reviews, except one in Norway. Records were randomly selected from all eligible admissions in 2013. Eligible admissions were patients 18 years of age or older, undergoing care with an in-hospital stay of at least 24 hours, excluding psychiatric care and rehabilitation. Reviews were done according to GTT methodology. Similar contexts for healthcare and similar socioeconomic and demographic characteristics have inspired the Nordic countries to exchange experiences from measuring and monitoring quality and patient safety in healthcare. The co-operation has promoted the use of GTT to monitor national and local rates of AEs in hospital care. 10 986 medical records were reviewed in Norway and 19 141 medical records in Sweden. No significant difference in overall AE rates was found between the two countries. The rate was 13.0% (95% CI 11.7% to 14.3%) in Norway and 14.4% (95% CI 12.6% to 16.3%) in Sweden. There were significantly higher AE rates of surgical complications in Norwegian hospitals compared with Swedish hospitals. Swedish hospitals had significantly higher rates of pressure ulcers, falls and 'other' AEs. Among more severe AEs, Norwegian hospitals had significantly higher rates of surgical complications than Swedish hospitals. Swedish hospitals had significantly higher rates of postpartum AEs. The level of patient safety in acute care hospitals, as assessed by GTT, was essentially the same in both countries. The differences between the countries in the rates of several types of AEs provide new incentives for Norwegian and Swedish governing bodies to address patient safety issues. Published by the BMJ Publishing Group Limited.
Karpagam, Sylvia; Ma, Nang Laik
Background There has been growing attention over the last few years to non-attendance in hospitals and its clinical and economic consequences, and several studies have documented its various aspects. Project Predicting Appointment Misses (PAM) was started with the intention of predicting the type of patients who would not come for appointments after making bookings. Methods Historic hospital appointment data merged with a "distance from hospital" variable were used to run logistic regression, support vector machine and recursive partitioning models to determine the variables contributing to missed appointments. Results Class-, time- and demographics-related variables have an effect on the target variable; however, prediction models may not perform effectively because of their very subtle influence on the target variable. Previously assumed major contributors such as "age" and "distance" did not have a major effect on the target variable. Conclusions With the given data it will be very difficult to make any moderate or strong prediction of appointment misses. That said, with the help of the cut-off we are able to capture all of the appointment misses, in addition to capturing the actualized appointments. PMID:28567409
Hospital staff should use more than one method to detect adverse events and potential adverse events: incident reporting, pharmacist surveillance and local real‐time record review may all have a place
Olsen, Sisse; Neale, Graham; Schwab, Kat; Psaila, Beth; Patel, Tejal; Chapman, E Jane; Vincent, Charles
Background Over the past five years, in most hospitals in England and Wales, incident reporting has become well established but it remains unclear how well reports match clinical adverse events. International epidemiological studies of adverse events are based on retrospective, multi‐hospital case record review. In this paper the authors describe the use of incident reporting, pharmacist surveillance and local real‐time record review for the recognition of clinical risks associated with hospital inpatient care. Methodology Data on adverse events were collected prospectively on 288 patients discharged from adult acute medical and surgical units in an NHS district general hospital using incident reports, active surveillance of prescription charts by pharmacists and record review at time of discharge. Results Record review detected 26 adverse events (AEs) and 40 potential adverse events (PAEs) occurring during the index admission. In contrast, in the same patient group, incident reporting detected 11 PAEs and no AEs. Pharmacy surveillance found 10 medication errors all of which were PAEs. There was little overlap in the nature of events detected by the three methods. Conclusion The findings suggest that incident reporting does not provide an adequate assessment of clinical adverse events and that this method needs to be supplemented with other more systematic forms of data collection. Structured record review, carried out by clinicians, provides an important component of an integrated approach to identifying risk in the context of developing a safety and quality improvement programme. PMID:17301203
Ariza, F; Montilla-Coral, D; Franco, O; González, L F; Lozano, L C; Torres, A M; Jordán, J; Blanco, L F; Suárez, L; Cruz, G; Cepeda, M
Multiple studies have analyzed perioperative factors related to adverse events (AEs) in children who require gastrointestinal endoscopic procedures (GEP) in settings where deep sedation is the preferred anesthetic technique over general anesthesia (GA) but not for the opposite case. We reviewed our anesthesia institutional database, seeking children less than 12 years who underwent GEP over a 5-year period. A logistic regression was used to determine significant associations between preoperative conditions, characteristics of the procedure, airway management, anesthetic approaches and the presence of serious and non-serious AEs. GA was preferred over deep sedation [77.8% vs. 22.2% in 2178 GEP under anesthesia care (n=1742)]. We found 96 AEs reported in 77 patients, including hypoxemia (1.82%), bronchospasm (1.14%) and laryngospasm (0.91%) as the most frequent. There were 2 cases of severe bradycardia related to laryngospasm/hypoxemia and a case of aspiration resulting in unplanned hospitalization, but there were no cases of intra- or postoperative deaths. Final predictive model for perioperative AEs included age risk factors and ventilation by facial mask as a protector against these events (prisk factors for AEs in these patients. Copyright © 2013 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Published by Elsevier España. All rights reserved.
Liu, Xiangping; Nestic, Danijel; Vukina, Tomislav
We use invoices for hospital services from a regional hospital in Croatia to test for adverse selection and moral hazard. There are three categories of patients: with no supplemental insurance, who bought it, and who are entitled to it for free. Our identification procedure relies on the premise that the difference in the observed medical care consumption between the patients who bought the insurance and those entitled to free insurance is caused by pure selection effect, whereas the difference in healthcare consumption between the group that received the free insurance and the group that has no insurance is due to moral hazard. Results show favorable selection for patients in 20- to 30-year-old cohort and significant moral hazard for all age cohorts. The selection effect reverses its sign in older cohorts explained by the differences in risk aversion across cohorts caused by the timing of transition from socialism to market economy. Copyright © 2011 John Wiley & Sons, Ltd.
BACKGROUND: Pregabalin administration is occasionally abandoned due to adverse events such as somnolence, dizziness, unsteadiness, weight gain and edema. However, the exact causes of these differences in adverse events associated with pregabalin have not been elucidated.
Cami, Aurel; Reis, Ben Y
Accurate prediction of adverse drug events (ADEs) is an important means of controlling and reducing drug-related morbidity and mortality. Since no single "gold standard" ADE data set exists, a range of different drug safety data sets are currently used for developing ADE prediction models. There is a critical need to assess the degree of concordance between these various ADE data sets and to validate ADE prediction models against multiple reference standards. We systematically evaluated the concordance of two widely used ADE data sets - Lexi-comp from 2010 and SIDER from 2012. The strength of the association between ADE (drug) counts in Lexi-comp and SIDER was assessed using Spearman rank correlation, while the differences between the two data sets were characterized in terms of drug categories, ADE categories and ADE frequencies. We also performed a comparative validation of the Predictive Pharmacosafety Networks (PPN) model using both ADE data sets. The predictive power of PPN using each of the two validation sets was assessed using the area under Receiver Operating Characteristic curve (AUROC). The correlations between the counts of ADEs and drugs in the two data sets were 0.84 (95% CI: 0.82-0.86) and 0.92 (95% CI: 0.91-0.93), respectively. Relative to an earlier snapshot of Lexi-comp from 2005, Lexi-comp 2010 and SIDER 2012 introduced a mean of 1,973 and 4,810 new drug-ADE associations per year, respectively. The difference between these two data sets was most pronounced for Nervous System and Anti-infective drugs, Gastrointestinal and Nervous System ADEs, and postmarketing ADEs. A minor difference of 1.1% was found in the AUROC of PPN when SIDER 2012 was used for validation instead of Lexi-comp 2010. In conclusion, the ADE and drug counts in Lexi-comp and SIDER data sets were highly correlated and the choice of validation set did not greatly affect the overall prediction performance of PPN. Our results also suggest that it is important to be aware of the
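The Spearman rank correlation used to compare the Lexi-comp and SIDER counts is simply the Pearson correlation of the rank vectors; a stdlib-only sketch with average ranks for ties (illustrative, not the study's code):

```python
from statistics import mean

def _avg_ranks(values):
    # 1-based ranks, averaging within tie groups.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    # Spearman's rho is the Pearson correlation of the rank vectors.
    rx, ry = _avg_ranks(x), _avg_ranks(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Because it works on ranks, rho is insensitive to the heavily skewed distribution of ADE counts per drug, which is presumably why it was chosen over plain Pearson correlation here.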
Subramanyam, Rajeev; Yeramaneni, Samrat; Hossain, Mohamed Monir; Anneken, Amy M; Varughese, Anna M
Perioperative respiratory adverse events (PRAEs) are the most common cause of serious adverse events in children receiving anesthesia. Our primary aim of this study was to develop and validate a risk prediction tool for the occurrence of PRAE from the onset of anesthesia induction until discharge from the postanesthesia care unit in children younger than 18 years undergoing elective ambulatory anesthesia for surgery and radiology. The incidence of PRAE was studied. We analyzed data from 19,059 patients from our department's quality improvement database. The predictor variables were age, sex, ASA physical status, morbid obesity, preexisting pulmonary disorder, preexisting neurologic disorder, and location of ambulatory anesthesia (surgery or radiology). Composite PRAE was defined as the presence of any 1 of the following events: intraoperative bronchospasm, intraoperative laryngospasm, postoperative apnea, postoperative laryngospasm, postoperative bronchospasm, or postoperative prolonged oxygen requirement. Development and validation of the risk prediction tool for PRAE were performed using a split sampling technique to split the database into 2 independent cohorts based on the year when the patient received ambulatory anesthesia for surgery and radiology using logistic regression. A risk score was developed based on the regression coefficients from the validation tool. The performance of the risk prediction tool was assessed by using tests of discrimination and calibration. The overall incidence of composite PRAE was 2.8%. The derivation cohort included 8904 patients, and the validation cohort included 10,155 patients. The risk of PRAE was 3.9% in the development cohort and 1.8% in the validation cohort. Age ≤ 3 years (versus >3 years), ASA physical status II or III (versus ASA physical status I), morbid obesity, preexisting pulmonary disorder, and surgery (versus radiology) significantly predicted the occurrence of PRAE in a multivariable logistic regression
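Deriving an integer risk score from logistic regression coefficients, as described above, is commonly done by scaling each coefficient by the smallest one and rounding; a hedged sketch (the coefficients below are invented for illustration, not the study's fitted values):

```python
import math

def make_points(coefficients):
    # Scale each log-odds coefficient by the smallest in magnitude and
    # round to the nearest integer to get bedside-friendly points.
    base = min(abs(b) for b in coefficients.values())
    return {name: round(b / base) for name, b in coefficients.items()}

def predicted_risk(intercept, coefficients, patient):
    # Probability of the outcome from the underlying logistic model;
    # `patient` maps predictor names to 0/1 indicator values.
    z = intercept + sum(b * patient.get(name, 0)
                        for name, b in coefficients.items())
    return 1 / (1 + math.exp(-z))

# Invented example coefficients for three binary predictors:
coefs = {"age_le_3": 0.7, "asa_2_or_3": 0.5, "pulmonary_disorder": 1.0}
points = make_points(coefs)  # {"age_le_3": 1, "asa_2_or_3": 1, "pulmonary_disorder": 2}
```

A patient's total points are then the sum over their positive predictors, and calibration checks compare the score strata against observed PRAE rates in the validation cohort.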
Courjon, J; Pulcini, C; Cua, E; Risso, K; Guillouet, F; Bernard, E; Roger, P-M
Antibiotics are a significant cause of adverse events (AE), but few studies have focused on prescriptions in hospitalized patients. In infectious diseases departments, the high frequency and diversity of antibiotics prescribed make post-marketing AE monitoring easier. The aim of our study was to assess the incidence and type of AE in the infectious diseases department of a French teaching tertiary-care hospital. The main characteristics of each hospitalization, including all antibiotics prescribed and any significant AE, were recorded prospectively in the medical dashboard of the department. We included all patients having suffered an AE due to systemic antibiotics between January 2008 and March 2011. Among the 3963 hospitalized patients, 2682 (68%) received an antibiotic and 151/2682 (5.6%) suffered an AE. Fifty-two (34%) AE were gastrointestinal disorders, 32 (21%) dermatological, 20 (13%) hepatobiliary, 16 (11%) renal and urinary disorders, 13 (9%) neurological and 11 (7%) blood disorders. Rifampin, fosfomycin, cotrimoxazole and linezolid were the leading causes of AE. Sixty-two percent of the antibiotics causing an AE were stopped and 38% were continued (including 11% with a dose modification). Patients suffering from AE had an increased length of stay (18 vs 10 days). The risk of AE should be taken into account when choosing an antibiotic when several options are possible.
Prasad, G V Ramesh; Huang, Michael; Silver, Samuel A; Al-Lawati, Ali I; Rapi, Lindita; Nash, Michelle M; Zaltzman, Jeffrey S
Metabolic syndrome (MetS) associates with cardiovascular risk post-kidney transplantation, but its ambiguity impairs understanding of its diagnostic utility relative to its components. We compared five MetS definitions and the predictive value of the constituent components of significant definitions for major adverse cardiovascular events (MACE) in a cohort of 1182 kidney transplant recipients. MetS definitions were adjusted for noncomponent traditional Framingham risk factors and relevant transplant-related variables. Kaplan-Meier, logistic regression, and Cox proportional hazards analyses were utilized. There were 143 MACE over 7447 patient-years of follow-up. Only the World Health Organization (WHO) 1998 definition predicted MACE (25.3 vs 15.5 events/1000 patient-years, P = 0.019). Time-to-MACE was 5.5 ± 3.5 years with MetS and 6.8 ± 3.9 years without MetS (P < 0.0001). MetS was independent of pertinent MACE risk factors except age and previous cardiac disease. Among MetS components, dysglycemia provided the greatest hazard ratio (HR) for MACE (1.814 [95% confidence interval 1.26-2.60]), increased successively by microalbuminuria (HR 1.946 [1.37-2.75]), dyslipidemia (3.284 [1.72-6.26]), hypertension (4.127 [2.16-7.86]), and central obesity (4.282 [2.09-8.76]). MetS did not affect graft survival. In summary, although the WHO 1998 definition provides the greatest predictive value for post-transplant MACE, most of this is conferred by dysglycemia and is overshadowed by age and previous cardiac disease. © 2014 Steunstichting ESOT.
Corley, K T T; Corley, M M B
Many Thoroughbred foals are intended to be sold at public auction. The impact of disease conditions necessitating hospital treatment as a foal on future sales performance is unknown. The objective was to determine whether Thoroughbred horses that were treated in a hospital before age 125 days and presented to public auction sell for a different mean price than controls. Controls were horses that were presented to the same sale immediately before and immediately after the subject. Results were controlled for the sale at which the animal presented and the sex of the subject and controls. Sixty-three subjects were presented to public auction: 19 at the foal sales, 39 at the yearling sales and 5 at the 2-year-old sales. Forty-five subjects were sold. There was no difference in the mean sales price (subjects Euros 38,207; controls Euros 35,026) or percentage of animals sold (subjects 71.4%; controls 66.4%) between subjects and controls. If Thoroughbred horses are presented for public auction following hospital treatment as a foal, there is no impact on sales outcome. This information may help commercial breeders of Thoroughbred foals make informed decisions about treatment of their foals.
Starmer, Amy J; Sectish, Theodore C; Simon, Dennis W; Keohane, Carol; McSweeney, Maireade E; Chung, Erica Y; Yoon, Catherine S; Lipsitz, Stuart R; Wassner, Ari J; Harper, Marvin B; Landrigan, Christopher P
Handoff miscommunications are a leading cause of medical errors. Studies comprehensively assessing handoff improvement programs are lacking. To determine whether introduction of a multifaceted handoff program was associated with reduced rates of medical errors and preventable adverse events, fewer omissions of key data in written handoffs, improved verbal handoffs, and changes in resident-physician workflow. Prospective intervention study of 1255 patient admissions (642 before and 613 after the intervention) involving 84 resident physicians (42 before and 42 after the intervention) from July-September 2009 and November 2009-January 2010 on 2 inpatient units at Boston Children's Hospital. Resident handoff bundle, consisting of standardized communication and handoff training, a verbal mnemonic, and a new team handoff structure. On one unit, a computerized handoff tool linked to the electronic medical record was introduced. The primary outcomes were the rates of medical errors and preventable adverse events measured by daily systematic surveillance. The secondary outcomes were omissions in the printed handoff document and resident time-motion activity. Medical errors decreased from 33.8 per 100 admissions (95% CI, 27.3-40.3) to 18.3 per 100 admissions (95% CI, 14.7-21.9; P < .001), and preventable adverse events decreased from 3.3 per 100 admissions (95% CI, 1.7-4.8) to 1.5 (95% CI, 0.51-2.4) per 100 admissions (P = .04) following the intervention. There were fewer omissions of key handoff elements on printed handoff documents, especially on the unit that received the computerized handoff tool (significant reductions of omissions in 11 of 14 categories with computerized tool; significant reductions in 2 of 14 categories without computerized tool). The percentage of time physicians spent at the patient bedside in a 24-hour period increased from 8.3% (95% CI, 7.1%-9.8%) before the intervention to 10.6% (95% CI, 9.2%-12.2%) after (P = .03). The average duration of verbal
Ding, Wen Yi; Lee, Chew Kek; Choon, Siew Eng
Adverse drug reactions are most commonly cutaneous in nature. Patterns of cutaneous adverse drug reactions (ADRs) and their causative drugs vary among the different populations previously studied. Our aim was to determine the clinical pattern of drug eruptions and the common drugs implicated, particularly in severe cutaneous ADRs, in our population. This study was done by analyzing the database established for all adverse cutaneous drug reactions seen from January 2001 until December 2008. A total of 281 cutaneous ADRs were seen in 280 patients. The most common reaction pattern was maculopapular eruption (111 cases, 39.5%), followed by Stevens-Johnson syndrome (SJS: 79 cases, 28.1%), drug reaction with eosinophilia and systemic symptoms (DRESS: 19 cases, 6.8%), toxic epidermal necrolysis (TEN: 16 cases, 5.7%), urticaria/angioedema (15 cases, 5.3%) and fixed drug eruptions (15 cases, 5.3%). Antibiotics (38.8%) and anticonvulsants (23.8%) accounted for 62.6% of the 281 cutaneous ADRs seen. Allopurinol was implicated in 39 (13.9%), carbamazepine in 29 (10.3%), phenytoin in 27 (9.6%) and cotrimoxazole in 26 (9.3%) cases. Carbamazepine, allopurinol and cotrimoxazole were the three main causative drugs of SJS/TEN, accounting for 24.0%, 18.8% and 12.5% respectively of the 96 cases seen, whereas DRESS was mainly caused by allopurinol (10 cases, 52.6%) and phenytoin (3 cases, 15.8%). The reaction patterns and drugs causing cutaneous ADRs in our population are similar to those seen in other countries, although we have a much higher proportion of severe cutaneous ADRs, probably due to referral bias, different prescribing habits and a higher prevalence of HLA-B*1502 and HLA-B*5801, which are genetic markers for carbamazepine-induced SJS/TEN and allopurinol-induced SJS/TEN/DRESS respectively. The most common reaction pattern seen in our study population was maculopapular eruptions. Antibiotics, anticonvulsants and NSAIDs were the most frequently implicated drug groups. Carbamazepine
Rothberger, Gary D; Gadhvi, Sonya; Michelakis, Nickolaos; Kumar, Amit; Calixte, Rose; Shapiro, Lawrence E
Thyroid hormone (TH) plays an important role in cardiac function. Low levels of serum triiodothyronine (T3) due to nonthyroidal illness syndrome may have adverse effects in heart failure (HF). This study was designed to assess the ability of T3 to predict in-hospital outcomes in patients with acute HF. In total, 137 patients without thyroid disease or treatment with drugs that affect TH levels, who were hospitalized with acute HF, were prospectively enrolled and studied. TH levels were tested upon hospital admission, and outcomes were compared between patients with low (<2.3 pg/ml) and normal (≥2.3 pg/ml) free T3 levels as well as between those with low (<0.6 ng/ml) and normal (≥0.6 ng/ml) total T3 levels. Low free T3 correlated with an increased length of stay in the hospital (median 11 vs 7 days, p <0.001) and higher rates of intensive care unit admission (31.8% vs 16.9%, p = 0.047), with a trend toward increased need for invasive mechanical ventilation (9.0% vs 1.4%, p = 0.056). Low total T3 correlated with an increased length of stay in the hospital (median 11 vs 7 days, p <0.001) and increased need for invasive mechanical ventilation (9.8% vs 1.3%, p = 0.045). In conclusion, low T3 predicts worse hospital outcomes in patients with acute HF and can be useful in the risk stratification of these patients. Copyright © 2016 Elsevier Inc. All rights reserved.
Miranda, J; Triunfo, S; Rodriguez-Lopez, M; Sairanen, M; Kouru, H; Parra-Saavedra, M; Crovetto, F; Figueras, F; Crispi, F; Gratacós, E
To explore the potential value of third-trimester combined screening for the prediction of adverse perinatal outcome (APO) in the general population and among small-for-gestational-age (SGA) fetuses. This was a nested case-control study within a prospective cohort of 1590 singleton gestations undergoing third-trimester evaluation (32 + 0 to 36 + 6 weeks' gestation). Maternal baseline characteristics, mean arterial blood pressure, fetoplacental ultrasound and circulating biochemical markers (placental growth factor (PlGF), lipocalin-2, unconjugated estriol and inhibin A) were assessed in all women who subsequently had an APO (n = 148) and in a control group without perinatal complications (n = 902). APO was defined as the occurrence of stillbirth, umbilical artery cord blood pH < 7.15, 5-min Apgar score < 7 or emergency operative delivery for fetal distress. Logistic regression models were developed for the prediction of APO in the general population and among SGA cases (defined as customized birth weight <10th centile). The prevalence of APO was 9.3% in the general population and 27.4% among SGA cases. In the general population, a combined screening model including a-priori risk (maternal characteristics), estimated fetal weight (EFW) centile, umbilical artery pulsatility index (UA-PI), estriol and PlGF achieved a detection rate for APO of 26% (area under receiver-operating characteristics curve (AUC), 0.59 (95% CI, 0.54-0.65)), at a 10% false-positive rate (FPR). Among SGA cases, a model including a-priori risk, EFW centile, UA-PI, cerebroplacental ratio, estriol and PlGF predicted 62% of APO (AUC, 0.86 (95% CI, 0.80-0.92)) at an FPR of 10%. The use of fetal ultrasound and maternal biochemical markers at 32-36 weeks provides a poor prediction of APO in the general population. Although it remains limited, the performance of the screening model is improved when applied to fetuses with suboptimal fetal growth. Copyright © 2016 ISUOG. Published by John Wiley & Sons
Kondalsamy-Chennakesavan, Srinivas; Bouman, Chantal; De Jong, Suzanne; Sanday, Karen; Nicklin, Jim; Land, Russell; Obermair, Andreas
Advanced gynecological surgery undertaken in a specialized gynecologic oncology unit may be associated with significant perioperative morbidity. Validated risk prediction models are available for general surgical specialties but currently not for gynecological cancer surgery. The objective of this study was to evaluate risk factors for adverse events (AEs) of patients treated for suspected or proven gynecological cancer and to develop a clinical risk score (RS) to predict such AEs. AEs were prospectively recorded and matched with demographical, clinical and histopathological data on 369 patients who had an abdominal or laparoscopic procedure for proven or suspected gynecological cancer at a tertiary gynecological cancer center. Stepwise multiple logistic regression was used to determine the best predictors of AEs. For the risk score (RS), the coefficients from the model were scaled using a factor of 2 and rounded to the nearest integer to derive the risk points. The sum of all risk points forms the RS. Ninety-five patients (25.8%) had at least one AE. Twenty-nine (7.9%) and 77 (20.9%) patients experienced intra- and postoperative AEs respectively, with 11 patients (3.0%) experiencing both. The independent predictors for any AE were complexity of the surgical procedure, elevated SGOT (serum glutamic oxaloacetic transaminase, ≥35 U/L), higher ASA scores and overweight. The RS can vary from 0 to 14. The risk of developing any AE is described by the formula 100 / (1 + e^(3.697 - RS/2)). The RS allows quantification of the risk for AEs. Risk factors are generally not modifiable, with the possible exception of obesity.
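The scoring scheme this abstract describes (regression coefficients scaled by a factor of 2 and rounded to integers, summed into the RS, then converted to a risk via the published logistic formula) can be sketched in Python. Only the scaling rule and the formula come from the abstract; the coefficient values below are hypothetical illustrations, not the study's fitted model:

```python
import math

def risk_points(coefficients):
    # Scale each logistic regression coefficient by 2 and round to the
    # nearest integer to obtain per-factor risk points (per the abstract).
    return {factor: round(2 * beta) for factor, beta in coefficients.items()}

def ae_risk_percent(rs):
    # Risk (%) of any adverse event for risk score RS, using the
    # published formula 100 / (1 + e^(3.697 - RS/2)).
    return 100.0 / (1.0 + math.exp(3.697 - rs / 2.0))

# Hypothetical coefficients for illustration only.
coeffs = {"complex_surgery": 1.6, "elevated_SGOT": 0.9,
          "high_ASA": 1.1, "overweight": 0.6}
points = risk_points(coeffs)       # e.g. complex_surgery -> 3 points
rs = sum(points.values())          # RS = 8 for these illustrative values
print(round(ae_risk_percent(rs), 1))
```

Under this formula the mapping from score to risk is steep: an RS of 0 corresponds to roughly 2.4% risk, while the maximum RS of 14 corresponds to about 96%.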
OBJECTIVE: Antibiotics are the drugs that most frequently cause adverse events, generating problems for patients and additional costs for the health system. The objective was to analyze the occurrence of adverse events to antibiotics in hospitalized patients. METHODS: Intensive monitoring of antibiotic use was carried out in adult inpatients in the municipality of Maringá, Paraná, Brazil, from September 2002 to February 2003. Variables related to the drugs in use, particularly the antibiotics, and to adverse events were investigated. Based on criteria for evaluating the correct use of antibiotics, the observed events were classified as adverse reactions, medication errors, and "near misses." The Naranjo algorithm was used to establish causality between drug administration and the occurrence of events. RESULTS: Eighty-seven patients were followed and 91 adverse events were identified, of which three (3.3%) were adverse drug reactions, seven (7.7%) medication errors, and 81 (89.0%) "near misses." The drug reactions were due to the use of quinolones and were considered "probable" according to the Naranjo algorithm. The seven medication errors comprised four incorrect dose prescriptions and three drug interactions. CONCLUSIONS: The results suggest that lack of knowledge about the drug, or lack of information about the patient at the time of prescribing, were the main factors involved in the occurrence of the drug reactions.
Ramchandani, Manisha; Siddiqui, Muniza; Kanwar, Raveena; Lakha, Manwinder; Phi, Linda; Giacomelli, Luca; Chiappelli, Francesco
The rate of preterm birth is a public health concern worldwide because it is increasing and efforts to prevent it have failed. We report a Clinically Relevant Complex Systematic Review (CRCSR) designed to identify and evaluate the best available evidence in support of the association between periodontal status in women and the pregnancy outcome of preterm low birth weight. We hypothesize that the traditional limits of research synthesis must be expanded to incorporate a translational component. As a proof-of-concept model, we propose that this CRCSR can yield greater validity of efficacy and effectiveness by supplementing its recommendations with data on the proteomic signature of periodontal disease in pregnancy, which can contribute to addressing specifically the predictive validity for adverse outcomes. For this CRCSR, systematic reviews were identified through the National Library of Medicine PubMed, the Cochrane Library, CINAHL, Google Scholar, Web of Science, and the American Dental Association web library. Independent reviewers quantified the relevance and quality of this literature with R-AMSTAR. Homogeneity and inter-rater reliability testing were supplemented with acceptable sampling analysis. Research synthesis outcomes were analyzed qualitatively toward a Bayesian inference, and converged to demonstrate a definite association between maternal periodontal disease and pregnancy outcome. This CRCSR limits heterogeneity in terms of periodontal disease, outcome measure, selection bias, uncontrolled confounders and effect modifiers. Taken together, the translational CRCSR model we propose suggests that further research is advocated to explore the fundamental mechanisms underlying this association, from a molecular and proteomic perspective.
Introduction: Lactate levels are increasingly used to risk stratify emergency department (ED) patients with and without infection. Whether a serum lactate provides similar prognostic value across diseases is not fully elucidated. This study assesses the prognostic value of serum lactate in ED patients with and without infection, to both report and compare relative predictive value across etiologies. Methods: We conducted a prospective, observational study of ED patients displaying abnormal vital signs (AVS), defined as heart rate ≥130 bpm, respiratory rate ≥24 bpm, shock index ≥1, and/or hypotension. Results: Trended stratified lactate levels were associated with deterioration for both infected and non-infected patients, and lactate >4.0 mmol/L was an independent predictor of deterioration for patients with infection (OR 4.8, 95% CI: 1.7-14.1) and without infection (OR 4.4, 1.7-11.5). Conclusion: Lactate levels can risk stratify patients with AVS who have increased risk of adverse outcomes regardless of infection status. [West J Emerg Med. 2017;18(2):258-266.]
Rothman, Emily F; Edwards, Erika M; Heeren, Timothy; Hingson, Ralph W
Our goal was to determine whether adverse childhood experiences predicted the age at which drinking was initiated and drinking motives in a representative sample of current or former drinkers in the United States. In 2006, a probability sample of 3592 US current or former drinkers aged 18 to 39 were surveyed. Multinomial logistic regression examined whether each of 10 adverse childhood experiences was associated with earlier ages of drinking onset, controlling for demographics, parental alcohol use, parental attitudes toward drinking, and peers' drinking in adolescence. We also examined whether there was a graded relationship between the number of adverse childhood experiences and age of drinking onset and whether adverse childhood experiences were related to self-reported motives for drinking during the first year that respondents drank. Sixty-six percent of respondents reported ≥1 adverse childhood experiences, and 19% reported experiencing ≥4. The most commonly reported adverse childhood experiences were parental separation/divorce (41.3%), living with a household member who was a problem drinker (28.7%), mental illness of a household member (24.8%), and sexual abuse (19.1%). Of the 10 specific adverse childhood experiences assessed, 5 were significantly associated with initiating drinking at younger ages (versus ≥21 years of age) after adjustment for confounders, including physical abuse, sexual abuse, having a mentally ill household member, substance abuse in the home, and parental discord or divorce. Compared with those without adverse childhood experiences, respondents with adverse childhood experiences were substantially more likely to report that they drank to cope during the first year that they used alcohol. Results suggest that children with particular adverse childhood experiences may initiate drinking earlier than their peers and that they may be more likely to drink to cope with problems (rather than for pleasure or to be social).
Doran, Diane; Hirdes, John P.; Blais, Régis; Baker, G. Ross; Poss, Jeff W.; Li, Xiaoqiang; Dill, Donna; Gruneir, Andrea; Heckman, George; Lacroix, Hélène; Mitchell, Lori; O'Beirne, Maeve; Foebel, Andrea; White, Nancy; Qian, Gan; Nahm, Sang-Myong; Yim, Odilia; Droppo, Lisa; McIsaac, Corrine
Background: The occurrence of adverse events (AEs) in care settings is a patient safety concern that has significant consequences across healthcare systems. Patient safety problems have been well documented in acute care settings; however, similar data for clients in home care (HC) settings in Canada are limited. The purpose of this Canadian study was to investigate AEs in HC, specifically those associated with hospitalization or detected through the Resident Assessment Instrument for Home Care (RAI-HC). Method: A retrospective cohort design was used. The cohort consisted of HC clients from the provinces of Nova Scotia, Ontario, British Columbia and the Winnipeg Regional Health Authority. Results: The overall incidence rate of AEs associated with hospitalization ranged from 6% to 9%. The incidence rate of AEs determined from the RAI-HC was 4%. Injurious falls, injuries other than falls and medication-related events were the most frequent AEs associated with hospitalization, whereas new caregiver distress was the most frequent AE identified through the RAI-HC. Conclusion: The incidence of AEs from all sources of data ranged from 4% to 9%. More resources are needed to target strategies for addressing safety risks in HC in a broader context. Tools such as the RAI-HC and its Clinical Assessment Protocols, already available in Canada, could be very useful in the assessment and management of HC clients who are at safety risk. PMID:23968676
Shaikh, Sajid A; Robinson, Richard D; Cheeti, Radhika; Rath, Shyamanand; Cowden, Chad D; Rosinia, Frank; Zenarosa, Nestor R; Wang, Hao
Prolonged hospital discharge boarding can impact patient flow, resulting in upstream Emergency Department crowding. We aim to determine the risks predicting prolonged hospital discharge boarding and their direct and indirect effects on patient flow. A retrospective review of a single hospital discharge database was conducted. Variables including type of disposition, disposition boarding time, case management consultation, discharge medication prescriptions, severity of illness, and patient homeless status were analyzed in a multivariate logistic regression model. Hospital charges, potential savings of hospital bed hours, and whether detailed discharge instructions provided adequate explanations to patients were also analyzed. A total of 11,527 admissions were entered into the final analysis. The median discharge boarding time was approximately 2 h. The Adjusted Odds Ratio (AOR) of patients transferring to other hospitals was 7.45 (95% CI 5.35-10.37), to court or law enforcement custody was 2.51 (95% CI 1.84-3.42), and to a skilled nursing facility was 2.48 (95% CI 2.10-2.93). AOR was 0.57 (95% CI 0.47-0.71) if the disposition order was placed during normal office hours (0800-1700). AOR of early case management consultation was 1.52 (95% CI 1.37-1.68) versus 1.73 (95% CI 1.03-2.89) for late consultation. Eighty-eight percent of patients experiencing discharge boarding times within 2 h of disposition expressed positive responses when questioned about the quality of explanations of discharge instructions and follow-up plans based on satisfaction surveys. Similar results (86% positive responses) were noted among patients whose discharge boarding times were prolonged (> 2 h, p = 0.44). An average charge of $6/bed/h was noted in all hospital discharges. Maximizing early discharge boarding (≤ 2 h) would have resulted in 16,376 hospital bed hours saved, thereby averting $98,256.00 in unnecessary dwell time charges in this study population alone. Type of disposition, case
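The savings figure reported in this abstract follows directly from multiplying the averted boarding bed-hours by the average hourly bed charge; a minimal sketch of that arithmetic:

```python
def boarding_savings(saved_bed_hours, charge_per_bed_hour=6):
    # Dollar charges averted by eliminating prolonged discharge boarding,
    # at the study's reported average charge of $6 per bed-hour.
    return saved_bed_hours * charge_per_bed_hour

# 16,376 saved bed hours at $6/bed/h, as reported in the study
print(boarding_savings(16376))  # → 98256
```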
Brown, Adam J; Teng, Zhongzhao; Calvert, Patrick A; Rajani, Nikil K; Hennessy, Orla; Nerlekar, Nitesh; Obaid, Daniel R; Costopoulos, Charis; Huang, Yuan; Hoole, Stephen P; Goddard, Martin; West, Nick E J; Gillard, Jonathan H; Bennett, Martin R
Although plaque rupture is responsible for most myocardial infarctions, few high-risk plaques identified by intracoronary imaging actually result in future major adverse cardiovascular events (MACE). Nonimaging markers of individual plaque behavior are therefore required. Rupture occurs when plaque structural stress (PSS) exceeds material strength. We therefore assessed whether PSS could predict future MACE in high-risk nonculprit lesions identified on virtual-histology intravascular ultrasound. Baseline nonculprit lesion features associated with MACE during long-term follow-up (median: 1115 days) were determined in 170 patients undergoing 3-vessel virtual-histology intravascular ultrasound. MACE was associated with plaque burden ≥70% (hazard ratio: 8.6; 95% confidence interval, 2.5-30.6; P<0.001) and minimal luminal area ≤4 mm(2) (hazard ratio: 6.6; 95% confidence interval, 2.1-20.1; P=0.036), although absolute event rates for high-risk lesions remained <10%. PSS derived from virtual-histology intravascular ultrasound was subsequently estimated in nonculprit lesions responsible for MACE (n=22) versus matched control lesions (n=22). PSS showed marked heterogeneity across and between similar lesions but was significantly increased in MACE lesions at high-risk regions, including plaque burden ≥70% (13.9±11.5 versus 10.2±4.7; P<0.001) and thin-cap fibroatheroma (14.0±8.9 versus 11.6±4.5; P=0.02). Furthermore, PSS improved the ability of virtual-histology intravascular ultrasound to predict MACE in plaques with plaque burden ≥70% (adjusted log-rank, P=0.003) and minimal luminal area ≤4 mm(2) (P=0.002). Plaques responsible for MACE had larger superficial calcium inclusions, which acted to increase PSS (P<0.05). Baseline PSS is increased in plaques responsible for MACE and improves the ability of intracoronary imaging to predict events. Biomechanical modeling may complement plaque imaging for risk stratification of coronary nonculprit lesions. © 2016
Ekmekci, Ahmet; Cicek, Gokhan; Uluganyan, Mahmut; Gungor, Baris; Osman, Faizel; Ozcan, Kazim Serhan; Bozbay, Mehmet; Ertas, Gokhan; Zencirci, Aycan; Sayar, Nurten; Eren, Mehmet
Admission hyperglycemia is associated with high inhospital and long-term adverse events in patients who undergo primary percutaneous coronary intervention (PCI). We aimed to evaluate whether hyperglycemia predicts inhospital mortality. We prospectively analyzed 503 consecutive patients, divided into tertiles according to admission glucose level; tertile III comprised patients with admission glucose >145 mg/dL (n = 169). Inhospital mortality was 0 in tertile I, 2 in tertile II, and 9 in tertile III (P < .02). Cardiogenic shock occurred more frequently in tertile III compared with tertiles I and II (10% vs 4.1% and 0.6%, respectively, P = .01). Multivariate logistic regression analysis revealed that patients in tertile III had a significantly higher risk of inhospital major adverse cardiac events compared with patients in tertile I (odds ratio: 9.55, P < .02). Admission hyperglycemia predicts inhospital mortality and adverse cardiac events in patients with acute ST-segment elevation myocardial infarction who underwent primary PCI.
Background: The epidemiological data based on intensive monitoring studies are limited for cutaneous adverse drug reactions (CADRs) in terms of incidence. Most earlier Indian studies focused only on the types and causative drugs of CADRs. Aim: The aim of this study is to analyze CADRs with reference to incidence, its subgroup analysis, causative drugs, and other clinical characteristics in the Indian population. Methodology: An intensive monitoring study was carried out over a period of 3 years in the dermatology outpatient and inpatient department. Only CADRs due to systemically administered drugs were considered. The WHO definition for CADR, the WHO causality definitions, modified Schumock and Thornton's criteria for preventability, and International Conference on Harmonisation E2A guidelines for seriousness were applied. Incidence was expressed as a percentage with its 95% confidence interval, and was analyzed on the basis of the characteristics of the study population and the CADRs. Results: A total of 171 CADRs were observed from 37,623 patients. The CADR incidence was 0.45% (95% CI: 0.39-0.53). The incidence did not significantly differ across age groups or gender. Commonly observed CADRs were maculopapular rash (23.98%), urticaria (21.64%), and fixed drug eruptions (FDEs) (18.13%). Antimicrobials (35.18%) and nonsteroidal anti-inflammatory drugs (NSAIDs) were suspected in all common CADRs. Anti-infectives and NSAIDs were the most commonly suspected drugs in overall CADRs, maculopapular rash, urticaria, FDEs, and erythema multiforme. The exact nature of the drugs remained inaccessible in one-fourth of cases due to use of over-the-counter self-medications. The incidence of preventable, serious, and fatal CADRs was 0.08% (95% CI: 0.05-0.11), 0.04% (95% CI: 0.02-0.06), and 0.003% (95% CI: 0.000-0.001), respectively. Conclusion: Ethnic characteristics should be considered while interpreting incidence from the international studies. The
David Franklin Niedrig
Benzodiazepines and "Z-drug" GABA-receptor modulators (BDZ) are among the most frequently used drugs in hospitals. Adverse drug events (ADE) associated with BDZ can be the result of preventable medication errors (ME) related to dosing, drug interactions and comorbidities. The present study evaluated inpatient use of BDZ and related ME and ADE. We conducted an observational study within a pharmacoepidemiological database derived from the clinical information system of a tertiary care hospital. We developed algorithms that identified dosing errors and interacting comedication for all administered BDZ. Associated ADE and risk factors were validated in medical records. Among 53,081 patients contributing 495,813 patient-days, BDZ were administered to 25,626 patients (48.3%) on 115,150 patient-days (23.2%). We identified 3,372 patient-days (2.9%) with comedication that inhibits BDZ metabolism, and 1,197 (1.0%) with lorazepam administration in severe renal impairment. After validation, we classified 134, 56, 12, and 3 cases involving lorazepam, zolpidem, midazolam and triazolam, respectively, as clinically relevant ME. Among those there were 23 cases with associated adverse drug events, including severe CNS depression, falls with subsequent injuries and severe dyspnea. Causality for BDZ was formally assessed as 'possible' or 'probable' in 20 of those cases. Four cases with ME and associated severe ADE required administration of the BDZ antagonist flumazenil. BDZ use was remarkably high in the studied setting, frequently involved potential ME related to dosing, co-medication and comorbidities, and rarely involved cases with associated ADE. We propose the implementation of automated ME screening and validation for the prevention of BDZ-related ADE.
La, Mary K; Sedykh, Alexander; Fourches, Denis; Muratov, Eugene; Tropsha, Alexander
Given that adverse drug effects (ADEs) have led to post-market patient harm and subsequent drug withdrawal, failure of candidate agents in the drug development process, and other negative outcomes, it is essential to attempt to forecast ADEs and other relevant drug-target-effect relationships as early as possible. Current pharmacologic data sources, providing multiple complementary perspectives on the drug-target-effect paradigm, can be integrated to facilitate the inference of relationships between these entities. This study aims to identify both existing and unknown relationships between chemicals (C), protein targets (T), and ADEs (E) based on evidence in the literature. Cheminformatics and data mining approaches were employed to integrate and analyze publicly available clinical pharmacology data and literature assertions interrelating drugs, targets, and ADEs. Based on these assertions, a C-T-E relationship knowledge base was developed. Known pairwise relationships between chemicals, targets, and ADEs were collected from several pharmacological and biomedical data sources. These relationships were curated and integrated according to Swanson's paradigm to form C-T-E triangles. Missing C-E edges were then inferred as C-E relationships. Unreported associations between drugs, targets, and ADEs were inferred, and inferences were prioritized as testable hypotheses. Several C-E inferences, including testosterone → myocardial infarction, were identified using inferences based on the literature sources published prior to confirmatory case reports. Timestamping approaches confirmed the predictive ability of this inference strategy on a larger scale. The presented workflow, based on free-access databases and an association-based inference scheme, provided novel C-E relationships that have been validated post hoc in case reports. With refinement of prioritization schemes for the generated C-E inferences, this workflow may provide an effective computational method for
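Swanson's paradigm, as used here, closes known C-T and T-E edges into C-T-E triangles and proposes the unreported C-E edge as a hypothesis. A toy sketch of that closure step, with hypothetical edges (the testosterone → myocardial infarction pair echoes the inference named in the abstract):

```python
# Known pairwise relationships (hypothetical illustrative edges).
chem_target = {("testosterone", "AR"), ("drugX", "hERG")}            # C-T
target_effect = {("AR", "myocardial infarction"),
                 ("hERG", "QT prolongation")}                        # T-E
known_chem_effect = {("drugX", "QT prolongation")}                   # reported C-E

def infer_ce(ct, te, known_ce):
    """Close C-T-E triangles; keep only unreported C-E pairs as hypotheses."""
    candidates = {(c, e) for (c, t1) in ct for (t2, e) in te if t1 == t2}
    return candidates - known_ce

print(infer_ce(chem_target, target_effect, known_chem_effect))
# → {('testosterone', 'myocardial infarction')}
```

The real workflow adds curation and prioritization on top of this closure; the set algebra above is only the inference core.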
Berger, Kathryn A.; Ginsberg, Howard S.; Dugas, Katherine D.; Hamel, Lutz H.; Mather, Thomas N.
Background: Lyme borreliosis (LB) is the most commonly reported vector-borne disease in north temperate regions worldwide, affecting an estimated 300,000 people annually in the United States alone. The incidence of LB is correlated with human exposure to its vector, the blacklegged tick (Ixodes scapularis). To date, attempts to model tick encounter risk based on environmental parameters have been equivocal. Previous studies have not considered (1) the differences between relative humidity (RH) in leaf litter and at weather stations, (2) the RH threshold that affects nymphal blacklegged tick survival, and (3) the time required below the threshold to induce mortality. We clarify the association between environmental moisture and tick survival by presenting a significant relationship between the total number of tick adverse moisture events (TAMEs, calculated as microclimatic periods below a RH threshold) and tick abundance each year. Methods: We used a 14-year continuous statewide tick surveillance database and corresponding weather data from Rhode Island (RI), USA, to assess the effects of TAMEs on nymphal populations of I. scapularis. These TAMEs were defined as extended periods of time (>8 h below 82% RH in leaf litter). We fit a sigmoid curve comparing weather station data to those collected by loggers placed in tick habitats to estimate RH experienced by nymphal ticks, and compiled the number of historical TAMEs during the 14-year record. Results: The total number of TAMEs in June of each year was negatively related to total seasonal nymphal tick densities, suggesting that sub-threshold humidity episodes >8 h in duration naturally lowered nymphal blacklegged tick abundance. Furthermore, TAMEs were positively related to the ratio of tick abundance early in the season when compared to late season, suggesting that lower than average tick abundance for a given year resulted from tick mortality and not from other factors. Conclusions: Our results clarify the mechanism
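Counting TAMEs from an hourly leaf-litter RH series, under the study's definition (an extended run of more than 8 h below 82% RH), can be sketched as a simple run-length scan; the series below is invented for illustration:

```python
def count_tames(rh_hourly, threshold=82.0, min_hours=8):
    """Count runs of more than min_hours consecutive hours below threshold RH."""
    tames, run = 0, 0
    for rh in rh_hourly:
        if rh < threshold:
            run += 1
        else:
            if run > min_hours:
                tames += 1
            run = 0
    if run > min_hours:  # close out a run that reaches the end of the record
        tames += 1
    return tames

# 10 dry hours, a 3-hour humid break, then 9 dry hours -> two TAMEs
series = [70] * 10 + [90] * 3 + [75] * 9
print(count_tames(series))  # → 2
```

Note the strict inequality: a dry spell of exactly 8 hours does not qualify under the ">8 h" definition.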
Serup, Jørgen; Sepehri, Mitra; Hutton Carlsen, Katrina
Tattooing is a global trend. Clinical knowledge of complications is based on case reports collected over a century. Larger cohorts reflecting complications associated with contemporary trends are lacking. The study was a retrospective review of a consecutive cohort of patients with tattoo complications diagnosed in the "Tattoo Clinic" of Bispebjerg University Hospital in Copenhagen, Denmark, from 2008 to 2015, based on patient history and systematic clinical examination. A total of 493 tattoo complications in 405 patients were studied. Overall, 184 (37%) presented allergic reactions with plaque elevation in 32.2%, excessive hyperkeratosis in 3.7%, and ulceration in 1.4%, predominantly observed in red tattoos and nuances of red; 66 (13%) presented papulo-nodular reactions, mainly observed in black tattoos (considered non-allergic) and due to pigment agglomeration; 53 (11%) had bacterial infections; 46 (9%) were psycho-social complications; 144 (30%) belonged to several specific diagnostic entities, including photosensitivity, pain syndrome, and lymphopathy. We found no cases of cutaneous or other malignancies. Sarcoidosis was primarily seen in black tattoos and was a common associated disease, found in 23 reactions (5%), compared to the background population. The study introduces a new concept of classification of tattoo complications based on simple tools such as patient history and objective findings supplemented with histology. The study reflects complications originating from presently used tattoo inks, often with organic pigments. The introduced classification has been submitted to the World Health Organisation (WHO) as a proposal to the 11th revision of the International Classification of Diseases. © 2016 S. Karger AG, Basel.
Koller, Tomáš; Piešťanská, Zuzana; Hlavatý, Tibor; Holomáň, Jozef; Glasa, Jozef; Payer, Juraj
Hepatic transit times measured by contrast-enhanced ultrasonography, and liver elasticity, have been found to predict clinically significant portal hypertension. However, these modalities have not yet been sufficiently evaluated for predicting adverse clinical outcomes in patients with clinically diagnosed cirrhosis (D'Amico stages > 1) and clinically significant portal hypertension. The aim of our study was to assess the predictive power of liver transit times and liver elasticity for an adverse clinical outcome of clinically diagnosed cirrhosis, compared with the MELD score. The study group included 48 consecutive outpatients with cirrhosis in D'Amico stages 2, 3 and 4. Patients in stage 4 could have jaundice; patients with other complications of portal hypertension were excluded. Transit times were measured in seconds from the intravenous administration of contrast agent (SonoVue) to signal appearance in a hepatic vein (hepatic vein arrival time, HVAT), or as the time difference between the contrast signal in the hepatic artery and the hepatic vein (hepatic transit time, HTT). Elasticity was measured using transient elastography (FibroScan). The transit times and elasticity were measured at baseline and patients were followed up for 1 year. An adverse outcome of cirrhosis was defined as the appearance of clinically apparent ascites and/or hospitalization for liver disease and/or death within 1 year. The mean age was 61 years, with a female/male ratio of 23/25. At baseline, the median Child-Pugh score was 5 (IQR 5.0-6.0), MELD 9.5 (IQR 7.6-12.1), median HVAT 22 s (IQR 19-25) and HTT 6 s (IQR 5-9). HTT and HVAT correlated negatively with the Child-Pugh (-0.351 and -0.441, p = 0.002) and MELD (-0.479 and -0.388, p = 0.006) scores. The adverse outcome at 1 year was observed in 11 cases (22.9%), including 6 deaths and 5 hospitalizations. Median HVAT in those with/without the adverse outcome was 20 s (IQR 19.3-23.5) compared with 22 s (IQR 19-26, p
Understanding the features of chemical structures related to the adverse effects of drugs is useful for identifying potential adverse effects of new drugs. Applications include settings where only limited information is available from post-marketing surveillance, assessment of the potential toxicities of metabolites and of illegal drugs with unclear characteristics, screening of lead compounds at the drug discovery stage, and identification of leads for the discovery of new pharmacological mechanisms. The present paper describes techniques used in computational toxicology to investigate the content of large-scale spontaneous report databases of adverse effects, illustrated with examples. Furthermore, volcano plotting, a new visualization method for clarifying the relationships between drugs and adverse effects via comprehensive analyses, is introduced. These analyses may produce a great amount of data that can be applied to drug repositioning.
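Spontaneous report databases of this kind are commonly mined with disproportionality statistics computed per drug-event pair from a 2×2 table of reports; a volcano plot then places each pair at (effect size, significance). As an illustration of one common effect measure (not necessarily the paper's exact method, and with invented counts), the reporting odds ratio:

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """ROR for one drug-event pair from a 2x2 spontaneous-report table.
    a: reports with drug & event    b: drug & other events
    c: other drugs & event          d: other drugs & other events"""
    ror = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(ROR)
    lo = math.exp(math.log(ror) - 1.96 * se)
    hi = math.exp(math.log(ror) + 1.96 * se)
    return ror, lo, hi

# Hypothetical counts; a pair is often flagged when the lower CI bound > 1.
ror, lo, hi = reporting_odds_ratio(20, 180, 100, 9700)
print(f"ROR={ror:.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")
```

For a volcano plot, each pair would contribute a point such as (log2 of the effect size, -log10 of a p-value), so strongly and significantly associated pairs stand out in the upper corners.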
The adverse outcome pathway (AOP) framework provides a way of organizing knowledge related to the key biological events that result in a particular health outcome. For the majority of environmental chemicals, the availability of curated pathways characterizing potential toxicity ...
Corona, Giovanni; Cipriani, Sarah; Rastrelli, Giulia; Sforza, Alessandra; Mannucci, Edoardo; Maggi, Mario
The atherogenic role of triglycerides (TG) remains controversial. The aim of the present study is to analyze the contribution of TG in the pathogenesis of erectile dysfunction (ED) and to verify the value of elevated TG in predicting major adverse cardiovascular events (MACE). An unselected series of 3,990 men attending our outpatient clinic for sexual dysfunction was retrospectively studied. A subset of this sample (n = 1,687) was enrolled in a longitudinal study. Several clinical, biochemical, and instrumental (penile color Doppler ultrasound; PCDU) factors were evaluated. Among the patients studied, after adjustment for confounders, higher TG levels were associated with arteriogenic ED and a higher risk of clinical and biochemical hypogonadism. Conversely, no association between TG and other sexual dysfunctions was observed. When pathological PCDU parameters, including flaccid acceleration (<1.17 m/sec²) or dynamic peak systolic velocity (PSV <35 cm/sec), were considered, the negative association between impaired penile flow and higher TG levels was confirmed, even when subjects taking lipid-lowering drugs or those with diabetes were excluded from the analysis (OR = 6.343 [1.243;32.362], P = .026 and 3.576 [1.104;11.578]; P = .34 for impaired acceleration and PSV, respectively). Similarly, when the same adjusted models were applied, TG levels were associated with a higher risk of hypogonadism, independently of the definition criteria (OR = 2.892 [1.643;5.410], P < .0001 and 4.853 [1.965;11.990]; P = .001 for total T <12 and 8 nM, respectively). In the longitudinal study, after adjusting for confounders, elevated TG levels (upper quartile: 162-1686 mg/dL) were independently associated with a higher incidence of MACE (HR = 2.469 [1.019;5.981]; P = .045), when compared to the rest of the sample. Our data suggest an association between elevated TG and arteriogenic ED and its cardiovascular (CV) risk stratification. Whether the use of TG lowering drugs
Nordanger, Dag Ø.; Breivik, Kyrre; Haugland, Bente Storm; Lehmann, Stine; Mæhle, Magne; Braarud, Hanne Cecilie; Hysing, Mari
Background Former studies suggest that prior exposure to adverse experiences such as violence or sexual abuse increases vulnerability to posttraumatic stress reactions in victims of subsequent trauma. However, little is known about how such a history affects responses to terror in the general adolescent population. Objective To explore the role of prior exposure to adverse experiences as risk factors for posttraumatic stress reactions to the Oslo Terror events. Method We used data from 10,220 high school students in a large cross-sectional survey of adolescents in Norway that took place seven months after the Oslo Terror events. Prior exposure assessed was: direct exposure to violence, witnessing of violence, and unwanted sexual acts. We explored how these prior adversities interact with well-established risk factors such as proximity to the events, perceived life threat during the terror events, and gender. Results All types of prior exposure as well as the other risk factors were associated with terror-related posttraumatic stress reactions. The effects of prior adversities were, although small, independent of adolescents’ proximity to the terror events. Among prior adversities, only the effect of direct exposure to violence was moderated by perceived life threat. Exposure to prior adversities increased the risk of posttraumatic stress reactions equally for both genders, but proximity to the terror events and perceived life threat increased the risk more in females. Conclusions Terror events can have a more destabilizing impact on victims of prior adversities, independent of their level of exposure. The findings may be relevant to mental health workers and others providing post-trauma health care. PMID:24872862
McFarland, Daniel C; Ornstein, Katherine A; Holcombe, Randall F
Hospital Value-Based Purchasing (HVBP) incentivizes quality performance-based healthcare by linking payments directly to patient satisfaction scores obtained from Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) surveys. Lower HCAHPS scores appear to cluster in heterogeneous population-dense areas and could bias Centers for Medicare & Medicaid Services (CMS) reimbursement. The objective was to assess nonrandom variation in patient satisfaction as determined by HCAHPS. Multivariate regression modeling was performed for individual dimensions of HCAHPS and aggregate scores. Standardized partial regression coefficients assessed the strengths of predictors. A Weighted Individual (hospital) Patient Satisfaction Adjusted Score (WIPSAS) utilized 4 highly predictive variables, and hospitals were reranked accordingly. The study covered 3,907 HVBP-participating hospitals, with 934,800 patient surveys by the most conservative estimate, and demographics for 3,144 counties (US Census) matched to HCAHPS surveys. Hospital size and primary language (non-English speaking) most strongly predicted unfavorable HCAHPS scores, whereas education and white ethnicity most strongly predicted favorable HCAHPS scores. The average adjusted patient satisfaction scores calculated by WIPSAS approximated the national average of HCAHPS scores. However, WIPSAS changed hospital rankings by variable amounts depending on the strength of the predictive variables in the hospitals' locations. Structural and demographic characteristics that predict lower scores were accounted for by WIPSAS, which also improved the rankings of many safety-net hospitals and academic medical centers in diverse areas. Demographic and structural factors (eg, hospital beds) predict patient satisfaction scores even after CMS adjustments. CMS should consider WIPSAS or a similar adjustment to account for the severity of patient satisfaction inequities that hospitals could strive to correct. © 2015 Society of Hospital Medicine.
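The exact WIPSAS formula is not given in the abstract; one way to sketch the general idea, reranking hospitals on the gap between observed scores and scores expected from demographic/structural predictors, with entirely hypothetical numbers:

```python
# Illustrative WIPSAS-style adjustment: the "expected" score would come from
# a regression on the four predictive variables; hospitals are then reranked
# on observed-minus-expected. All names and values here are hypothetical.
hospitals = {
    "A": {"score": 71.0, "expected": 68.0},
    "B": {"score": 74.0, "expected": 75.5},
    "C": {"score": 69.0, "expected": 64.0},  # outperforms its demographics most
}

adjusted = {h: v["score"] - v["expected"] for h, v in hospitals.items()}
ranking = sorted(adjusted, key=adjusted.get, reverse=True)
print(ranking)  # → ['C', 'A', 'B']
```

Hospital C has the lowest raw score but the largest positive residual, which is exactly the kind of rank movement the study describes for safety-net hospitals in demographically disadvantaged areas.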
Traditional methods for carcinogenicity testing are resource-intensive, retrospective, and time consuming. An increasing testing burden has generated interest in the adverse outcome pathway (AOP) concept as a tool to evaluate chemical safety in a more efficient, rapid and effecti...
Full Text Available Abstract Background Although numerous risk factors for adverse outcomes for older persons after an acute hospital stay have been identified, a decision-making tool combining all available information in a clinically meaningful way would be helpful for daily hospital practice. The purpose of this study was to evaluate the ability of the Method for Assigning Priority Levels for Acute Care (MAPLe-AC) to predict adverse outcomes in acute care for older people and to assess its usability as a decision-making tool for discharge planning. Methods Data from a prospective multicenter study in five Nordic acute care hospitals, with information from admission to a one-year follow-up of older acute care patients, were compared with a prospective study of acute care patients from admission to discharge in eight hospitals in Canada. The interRAI Acute Care assessment instrument (v1.1) was used for data collection. Data were collected during the first 24 hours in hospital, including pre-morbid and admission information, and at day 7 or at discharge, whichever came first. Based on this information a crosswalk was developed from the original MAPLe algorithm for home care settings to acute care (MAPLe-AC). The sample included persons 75 years or older who were admitted to acute internal medical services in one hospital in each of the five Nordic countries (n = 763) or to acute hospital care in either internal medical or combined medical-surgical services in eight hospitals in Ontario, Canada (n = 393). The outcome measures considered were discharge home, discharge to an institution, or death. Outcomes in the 1-year follow-up in the Nordic hospitals were living at home, living in an institution, death, and survival. Logistic regression with ROC curves and Cox regression analyses were used in the analyses. Results Low and mild priority levels of MAPLe-AC predicted discharge home, and high and very high priority levels predicted adverse outcome at discharge, both in the Nordic
Velasco Munoz, Cesar; Sequera, Víctor-Guillermo; Vilajeliu, Alba; Aldea, Marta; Mena, Guillermo; Quesada, Sebastiana; Varela, Pilar; Olivé, Victoria; Bayas, José M; Trilla, Antoni
During the influenza vaccination campaign 2011-2012 we established a self-declaration system of adverse events (AEs) in healthcare workers (HCW). The aim of this study is to describe the vaccinated population and analyse vaccination coverage and self-declared AEs after voluntary flu vaccination in a university hospital in Barcelona. Observational study. We used the HCW immunization record to calculate the vaccination coverage. We collected AEs using a voluntary, anonymous, self-administered survey during the 2011-2012 flu vaccination campaign. We fitted a logistic regression model to determine the factors associated with declaring AEs. The influenza vaccination coverage in HCW was 30.5% (n=1,507/4,944). We received completed surveys from 358 vaccinated HCW (23.8% of all vaccinated). We registered AEs in 186 respondents to the survey (52.0% of all respondents). Of these, 75.3% (n=140) reported local symptoms after the flu vaccination, 9.7% (n=18) reported systemic symptoms and 15.1% (n=28) both local and systemic symptoms. No serious AEs were self-reported. Female sex and age under 35 were both factors associated with declaring AEs. Our self-reporting system did not register serious AEs in HCW, presenting an opportunity to improve HCW trust in flu vaccination. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.
Full Text Available Abstract Background Adverse events are unintended patient injuries that arise from healthcare management resulting in disability, prolonged hospital stay or death. Adverse events that require intensive care admission imply a considerable financial burden to the healthcare system. The epidemiology of adverse events in Belgian hospitals has never been assessed systematically. Findings A multistage retrospective review study of patients requiring a transfer to a higher level of care will be conducted in six hospitals in the province of Limburg. Patient records are reviewed starting from January 2012 by a clinical team consisting of a research nurse, a physician and a clinical pharmacist. Besides the incidence and the level of causation and preventability, also the type of adverse events and their consequences (patient harm, mortality and length of stay will be assessed. Moreover, the adequacy of the patient records and quality/usefulness of the method of medical record review will be evaluated. Discussion This paper describes the rationale for a retrospective review study of adverse events that necessitate a higher level of care. More specifically, we are particularly interested in increasing our understanding in the preventability and root causes of these events in order to implement improvement strategies. Attention is paid to the strengths and limitations of the study design.
Pedersen, Susanne S.; Denollet, Johan; Erdman, Ruud A M
We examined the impact of co-occurring diabetes and hopelessness on 3-year prognosis in percutaneous coronary intervention patients. Consecutive patients (n = 534) treated with the paclitaxel-eluting stent completed a set of questionnaires at baseline and were followed up for 3-year adverse clinical events. The incidence of 3-year death/non-fatal myocardial infarction was 3.5% in patients with no risk factors (neither hopelessness nor diabetes), 8.2% in patients with diabetes, 11.2% in patients with high hopelessness, and 15.9% in patients with both factors (p = 0.001). Patients with hopelessness (HR: 3.28; 95% CI: 1.49-7.23) and co-occurring diabetes and hopelessness (HR: 4.89; 95% CI: 1.86-12.85) were at increased risk of 3-year adverse clinical events compared to patients with no risk factors, whereas patients with diabetes were at a clinically relevant but not statistically significant increased risk.
Hayhurst, Caroline; Monsalves, Eric; Bernstein, Mark; Gentili, Fred; Heydarian, Mostafa; Tsao, May; Schwartz, Michael; Prooijen, Monique van; Millar, Barbara-Ann; Ménard, Cynthia; Kulkarni, Abhaya V.; Laperriere, Norm; Zadeh, Gelareh
Purpose: To define clinical and dosimetric predictors of nonauditory adverse radiation effects after radiosurgery for vestibular schwannoma treated with a 12 Gy prescription dose. Methods: We retrospectively reviewed our experience of vestibular schwannoma patients treated between September 2005 and December 2009. Two hundred patients were treated at a 12 Gy prescription dose; 80 had complete clinical and radiological follow-up for at least 24 months (median, 28.5 months). All treatment plans were reviewed for target volume and dosimetry characteristics; gradient index; homogeneity index, defined as the maximum dose in the treatment volume divided by the prescription dose; conformity index; brainstem; and trigeminal nerve dose. All adverse radiation effects (ARE) were recorded. Because the intent of our study was to focus on the nonauditory adverse effects, hearing outcome was not evaluated in this study. Results: Twenty-seven (33.8%) patients developed ARE, 5 (6%) developed hydrocephalus, 10 (12.5%) reported new ataxia, 17 (21%) developed trigeminal dysfunction, 3 (3.75%) had facial weakness, and 1 patient developed hemifacial spasm. The development of edema within the pons was significantly associated with ARE (p = 0.001). On multivariate analysis, only target volume is a significant predictor of ARE (p = 0.001). There is a target volume threshold of 5 cm³, above which ARE are more likely. The treatment plan dosimetric characteristics are not associated with ARE, although the maximum dose to the 5th nerve is a significant predictor of trigeminal dysfunction, with a threshold of 9 Gy. The overall 2-year tumor control rate was 96%. Conclusions: Target volume is the most important predictor of adverse radiation effects, and we identified the significant treatment volume threshold to be 5 cm³. We also established through our series that the maximum tolerable dose to the 5th nerve is 9 Gy.
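The homogeneity index defined in this abstract, together with the study's 5 cm³ target-volume threshold, translates directly into code (the example values are invented):

```python
def homogeneity_index(max_dose_gy: float, prescription_dose_gy: float) -> float:
    """HI as defined in the study: maximum dose in the treatment volume
    divided by the prescription dose."""
    return max_dose_gy / prescription_dose_gy

def exceeds_volume_threshold(target_volume_cm3: float, threshold_cm3: float = 5.0) -> bool:
    """Flag plans above the 5 cm^3 target-volume threshold, above which
    the study found ARE to be more likely."""
    return target_volume_cm3 > threshold_cm3

# Hypothetical plan: 24 Gy maximum dose at the 12 Gy prescription, 6.2 cm^3 target
print(homogeneity_index(24.0, 12.0), exceeds_volume_threshold(6.2))
# → 2.0 True
```

An HI of 2.0 corresponds to prescribing to the 50% isodose line, a common Gamma Knife convention.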
Hayhurst, Caroline; Monsalves, Eric; Bernstein, Mark; Gentili, Fred [Gamma Knife Unit, Division of Neurosurgery, University Health Network, Toronto (Canada); Heydarian, Mostafa; Tsao, May [Radiation Medicine Program, Princess Margaret Hospital, Toronto (Canada); Schwartz, Michael [Radiation Oncology Program and Division of Neurosurgery, Sunnybrook Hospital, Toronto (Canada); Prooijen, Monique van [Radiation Medicine Program, Princess Margaret Hospital, Toronto (Canada); Millar, Barbara-Ann; Menard, Cynthia [Radiation Oncology Program, Princess Margaret Hospital, Toronto (Canada); Kulkarni, Abhaya V. [Division of Neurosurgery, Hospital for Sick Children, University of Toronto (Canada); Laperriere, Norm [Radiation Oncology Program, Princess Margaret Hospital, Toronto (Canada); Zadeh, Gelareh, E-mail: Gelareh.Zadeh@uhn.on.ca [Gamma Knife Unit, Division of Neurosurgery, University Health Network, Toronto (Canada)
de Jonge, Ank; Mesman, Jeanette A J M; Manniën, Judith; Zwart, Joost J; van Dillen, Jeroen; van Roosmalen, Jos
Women in primary care at the onset of labour with planned home birth had lower rates of severe acute maternal morbidity, postpartum haemorrhage, and manual removal of the placenta than those with planned hospital birth. For parous women these differences were statistically significant. Absolute risks were small in both groups. There was no evidence that planned home birth among low-risk women leads to an increased risk of severe adverse maternal outcomes in a maternity care system with well-trained midwives and a good referral and transportation system.
Posthumus, A G; Birnie, E; van Veen, M J; Steegers, E A P; Bonsel, G J
In the Netherlands the perinatal mortality rate is high compared to other European countries. Around eighty percent of perinatal mortality cases are preceded by being small for gestational age (SGA), preterm birth and/or a low Apgar score at 5 minutes after birth. Current risk detection in pregnancy focuses primarily on medical risks. However, non-medical risk factors may be relevant too. Both non-medical and medical risk factors are incorporated in the Rotterdam Reproductive Risk Reduction (R4U) scorecard. We investigated the associations between R4U risk factors and preterm birth, SGA and a low Apgar score in a prospective cohort study under routine practice conditions, conducted in six midwifery practices and two hospitals in Rotterdam, the Netherlands, among 836 pregnant women. The R4U scorecard was filled out at the booking visit. After birth, follow-up data on pregnancy outcomes were collected. Multivariate logistic regression was used to fit models for the prediction of any adverse outcome (preterm birth, SGA and/or a low Apgar score), stratified by ethnicity and socio-economic status (SES). Factors predicting any adverse outcome for Western women were smoking during the first trimester and over-the-counter medication. For non-Western women, risk factors were teenage pregnancy, advanced maternal age and an obstetric history of SGA. Risk factors for high-SES women were low family income, no daily intake of vegetables and a history of preterm birth. For low-SES women, risk factors were low family income, non-Western ethnicity, smoking during the first trimester and a history of SGA. The presence of both medical and non-medical risk factors early in pregnancy predicts the occurrence of adverse outcomes at birth. Furthermore, the risk profiles for adverse outcomes differed according to SES and ethnicity. To optimise effective risk selection, both medical and non-medical risk factors should be taken into account in midwifery and obstetric care at the booking visit.
Fuchs, Felipe C; Ribeiro, Jorge P; Fuchs, Flávio D; Wainstein, Marco V; Bergoli, Luis C; Wainstein, Rodrigo V; Zen, Vanessa; Kerkhoff, Alessandra C; Moreira, Leila B; Fuchs, Sandra C
The importance of coronary anatomy in predicting cardiovascular events is well known. The use of traditional anatomical scores in routine angiography, however, has not been incorporated into clinical practice. The SYNTAX score (SXscore) is a scoring system that estimates the anatomical extent of coronary artery disease (CAD). Its ability to predict outcomes based on a baseline diagnostic angiography has not been tested to date. To evaluate the performance of the SXscore in predicting major adverse cardiac events (MACE) in patients referred for diagnostic angiography. Prospective cohort of 895 patients with suspected CAD referred for elective diagnostic coronary angiography from 2008 to 2011, at a university-affiliated hospital in Brazil. They had their SXscores calculated and were stratified into three categories: no significant CAD (n = 495); SXscoreLOW-INTERMEDIATE: < 23 (n = 346); and SXscoreHIGH: ≥ 23 (n = 54). The primary outcome was a composite of cardiac death, myocardial infarction and late revascularization. Secondary outcomes were MACE and all-cause death. On average, patients were followed for 1.8 ± 1.4 years. The primary outcome occurred in 2.2%, 15.3% and 20.4% of the no significant CAD, SXscoreLOW-INTERMEDIATE and SXscoreHIGH groups, respectively (p < 0.001). All-cause death was significantly more frequent in the SXscoreHIGH group than in the no significant CAD group (16.7% vs. 3.8%, respectively; p < 0.001). After adjustment for confounders, all outcomes remained associated with the SXscore. The SXscore independently predicts MACE in patients undergoing diagnostic coronary angiography.
Chen, Nan; Wen, Xiao-Hong; Huang, Jin-Hua; Wang, Shui-Yun; Zhu, Yue-E
To investigate the predictive value of the qualitative assessment of general movements (GMs) for adverse outcomes at 24 months of age in full-term infants with asphyxia. A total of 114 full-term asphyxiated infants, who were admitted to the neonatal intensive care unit between 2009 and 2012 and took part in follow-ups after discharge, were included in the study. All of them received the qualitative assessment of GMs within 3 months after birth. The development quotient was determined with the Bayley Scales of Infant Development at 24 months of age. The results of the qualitative assessment of GMs within 3 months after birth showed that among 114 infants, 20 (17.5%) had poor repertoire movements and 7 (6.1%) had cramped-synchronized movements during the writhing movements period; 8 infants (7.0%) had the absence of fidgety movements during the fidgety movements period. The results of development quotient at 24 months of age showed that 7 infants (6.1%) had adverse developmental outcomes: 6 cases of cerebral palsy and mental retardation and 1 case of mental retardation. There was a poor consistency between poor repertoire movements during the writhing movements period and the developmental outcomes at 24 months of age (Kappa=-0.019; P>0.05). There was a high consistency between cramped-synchronized movements during the writhing movements period and the developmental outcomes at 24 months of age (Kappa=0.848; P<0.05). The predictive values of cramped-synchronized movements were shown as follows: predictive validity 98.2%, sensitivity 85.7%, specificity 99.1%, positive predictive value 85.7%, and negative predictive value 99.1%. There was a high consistency between the absence of fidgety movements during the fidgety movements period and the developmental outcomes at 24 months of age (Kappa=0.786; P<0.05). The predictive values were expressed as follows: predictive validity 97.4%, sensitivity 85.7%, specificity 98.1%, positive predictive value 75.0%, and negative predictive value 99.1%. Cramped-synchronized movements during the writhing movements period and the absence of fidgety movements during the fidgety movements period have high predictive value for adverse developmental outcomes at 24 months of age in full-term infants with asphyxia.
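The agreement and predictive-value figures for cramped-synchronized movements are consistent with a 2×2 table of TP=6, FP=1, FN=1, TN=106; those counts are an assumption reconstructed here from the reported percentages, not taken verbatim from the paper. A minimal sketch reproduces the reported metrics:

```python
# Sketch: diagnostic metrics and Cohen's kappa from a 2x2 table.
# The counts (TP=6, FP=1, FN=1, TN=106) are an assumption reconstructed
# from the reported percentages, not quoted from the study.

def diagnostic_metrics(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)                  # sensitivity
    spec = tn / (tn + fp)                  # specificity
    ppv = tp / (tp + fp)                   # positive predictive value
    npv = tn / (tn + fn)                   # negative predictive value
    acc = (tp + tn) / n                    # "predictive validity" (accuracy)
    po = acc                               # observed agreement
    # chance agreement from the row/column marginals
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)           # Cohen's kappa
    return sens, spec, ppv, npv, acc, kappa

sens, spec, ppv, npv, acc, kappa = diagnostic_metrics(6, 1, 1, 106)
print(round(sens, 3), round(spec, 3), round(ppv, 3),
      round(npv, 3), round(acc, 3), round(kappa, 3))
# -> 0.857 0.991 0.857 0.991 0.982 0.848
```

Under this assumed table the output matches the reported sensitivity 85.7%, specificity 99.1%, predictive validity 98.2%, and Kappa=0.848.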
Mouton, Johannes P; Njuguna, Christine; Kramer, Nicole; Stewart, Annemie; Mehta, Ushma; Blockman, Marc; Fortuin-De Smidt, Melony; De Waal, Reneé; Parrish, Andy G; Wilson, Douglas P K; Igumbor, Ehimario U; Aynalem, Getahun; Dheda, Mukesh; Maartens, Gary; Cohen, Karen
Limited data exist on the burden of serious adverse drug reactions (ADRs) in sub-Saharan Africa, which has high HIV and tuberculosis prevalence. We determined the proportion of adult admissions attributable to ADRs at 4 hospitals in South Africa. We characterized drugs implicated in, risk factors for, and the preventability of ADR-related admissions. We prospectively followed patients admitted to 4 hospitals' medical wards over sequential 30-day periods in 2013 and identified suspected ADRs with the aid of a trigger tool. A multidisciplinary team performed causality, preventability, and severity assessment using published criteria. We categorized an admission as ADR-related if the ADR was the primary reason for admission. There were 1951 admissions involving 1904 patients: median age was 50 years (interquartile range 34-65), 1057 of 1904 (56%) were female, 559 of 1904 (29%) were HIV-infected, and 183 of 1904 (10%) were on antituberculosis therapy (ATT). There were 164 of 1951 (8.4%) ADR-related admissions. After adjustment for age and ATT, ADR-related admission was independently associated (P ≤ 0.02) with female sex (adjusted odds ratio [aOR] 1.51, 95% confidence interval [95% CI] 1.06-2.14), increasing drug count (aOR 1.14 per additional drug, 95% CI 1.09-1.20), increasing comorbidity score (aOR 1.23 per additional point, 95% CI 1.07-1.41), and use of antiretroviral therapy (ART) if HIV-infected (aOR 1.92 compared with HIV-negative/unknown, 95% CI 1.17-3.14). The most common ADRs were renal impairment, hypoglycemia, liver injury, and hemorrhage. Tenofovir disoproxil fumarate, insulin, rifampicin, and warfarin were most commonly implicated, respectively, in these 4 ADRs. ART, ATT, and/or co-trimoxazole were implicated in 56 of 164 (34%) ADR-related admissions. Seventy-three of 164 (45%) ADRs were assessed as preventable. In our survey, approximately 1 in 12 admissions was because of an ADR. The range of ADRs and implicated drugs reflect South Africa's high HIV and tuberculosis prevalence.
Chao, Angel; Lai, Chyong-Huey; Wang, Tzu-Hao; Jung, Shih-Ming; Lee, Yun-Shien; Chang, Wei-Yang; Yang, Lan-Yang; Ku, Fei-Chun; Huang, Huei-Jean; Chao, An-Shine; Wang, Chin-Jung; Chang, Ting-Chang; Wu, Ren-Chin
We investigated whether genomic scar signatures associated with homologous recombination deficiency (HRD), which include telomeric allelic imbalance (TAI), large-scale transition (LST), and loss of heterozygosity (LOH), can predict clinical outcomes in patients with ovarian clear cell carcinoma (OCCC). We enrolled patients with OCCC (n = 80) and high-grade serous carcinoma (HGSC; n = 92) subjected to primary cytoreductive surgery, most of whom received platinum-based adjuvant chemotherapy. Genomic scar signatures based on genome-wide copy number data were determined in all participants and investigated in relation to prognosis. OCCC had significantly lower genomic scar signature scores than HGSC (p < 0.001). Near-triploid OCCC specimens showed higher TAI and LST scores compared with diploid tumors (p < 0.001). While high scores of these genomic scar signatures were significantly associated with better clinical outcomes in patients with HGSC, the opposite was evident for OCCC. Multivariate survival analysis in patients with OCCC identified high LOH scores as the main independent adverse predictor for both cancer-specific (hazard ratio [HR] = 3.22, p = 0.005) and progression-free survival (HR = 2.54, p = 0.01). In conclusion, genomic scar signatures associated with HRD predict adverse clinical outcomes in patients with OCCC. The LOH score was identified as the strongest prognostic indicator in this patient group. Genomic scar signatures associated with HRD are less frequent in OCCC than in HGSC. Genomic scar signatures associated with HRD have an adverse prognostic impact in patients with OCCC. LOH score is the strongest adverse prognostic factor in patients with OCCC.
Egeberg, Alexander; Bruun, Louise E; Mallbris, Lotus
BACKGROUND: Patients with psoriasis may have increased risk of major adverse cardiovascular (CV) events (MACE), and a family history of CV disease (CVD) is an independent risk factor for MACE. OBJECTIVE: We investigated the risk of first-time MACE in patients with psoriasis with or without a family history of CVD. METHODS: Between January 1, 1997, and December 31, 2011, we identified 2,722,375 individuals, including 25,774 and 4504 patients with mild and severe psoriasis, through administrative registers. Incidence rate ratios were estimated by Poisson regression. RESULTS: Mean baseline age was 26... The findings call for increased focus on a family history of CVD in CV risk assessment of patients with psoriasis.
Wang, Florence T; Xue, Fei; Ding, Yan; Ng, Eva; Critchlow, Cathy W; Dore, David D
Post-marketing safety studies of medicines often rely on administrative claims databases to identify adverse outcomes following drug exposure. Valid ascertainment of outcomes is essential for accurate results. We aim to quantify the validity of diagnostic codes for serious hypocalcemia and dermatologic adverse events from insurance claims data among women with postmenopausal osteoporosis (PMO). We identified potential cases of serious hypocalcemia and dermatologic events through ICD-9 diagnosis codes among women with PMO within claims from a large US healthcare insurer (June 2005-May 2010). A physician adjudicated potential hypocalcemic and dermatologic events identified from the primary position on emergency department (ED) or inpatient claims through medical record review. Positive predictive values (PPVs) and 95% confidence intervals (CIs) quantified the fraction of potential cases that were confirmed. Among 165,729 patients with PMO, medical charts were obtained for 40 of 55 (73%) potential hypocalcemia cases; 16 were confirmed (PPV 40%, 95% CI 25-57%). The PPV was higher for ED than inpatient claims (82 vs. 24%). Among 265 potential dermatologic events (primarily urticaria or rash), we obtained 184 (69%) charts and confirmed 128 (PPV 70%, 95% CI 62-76%). The PPV was higher for ED than inpatient claims (77 vs. 39%). Diagnostic codes for hypocalcemia and dermatologic events may be sufficient to identify events giving rise to emergency care, but are less accurate for identifying events within hospitalizations.
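The PPVs above are proportions of adjudicated cases that were confirmed. A short sketch recomputes them with Wilson score intervals; the paper's exact interval method is not stated, so Wilson is used here as an assumption and the bounds differ slightly from the published 25-57% and 62-76%:

```python
import math

# Sketch: PPV as a confirmed/adjudicated proportion with a 95% Wilson
# score interval. The study's CI method is not stated; Wilson is an
# assumption, so bounds differ slightly from the published CIs.

def ppv_wilson(confirmed, adjudicated, z=1.96):
    p = confirmed / adjudicated
    denom = 1 + z * z / adjudicated
    center = p + z * z / (2 * adjudicated)
    half = z * math.sqrt(p * (1 - p) / adjudicated
                         + z * z / (4 * adjudicated ** 2))
    return p, (center - half) / denom, (center + half) / denom

for label, k, n in [("hypocalcemia", 16, 40), ("dermatologic", 128, 184)]:
    p, lo, hi = ppv_wilson(k, n)
    print(f"{label}: PPV={p:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# hypocalcemia: PPV=0.40 (95% CI 0.26-0.55)
# dermatologic: PPV=0.70 (95% CI 0.63-0.76)
```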
To examine the prevalence of snoring during pregnancy and its effects on key pregnancy outcomes. Pregnant women were consecutively recruited in their first trimester. Habitual snoring was screened by using a questionnaire in the 1st and 3rd trimesters, respectively. According to the time of snoring, participants were divided into pregnancy-onset snorers, chronic snorers and non-snorers. Logistic regressions were performed to examine the associations between snoring and pregnancy outcomes. Of 3,079 pregnant women, 16.6% were habitual snorers, with 11.7% pregnancy-onset snorers and 4.9% chronic snorers. After adjusting for potential confounders, chronic snoring was independently associated with gestational diabetes mellitus (GDM) (RR 1.66, 95% CI 1.09-2.53). Both pregnancy-onset and chronic snoring were independently associated with placental adhesion (RR 1.96, 95% CI 1.17-3.27, and RR 2.33, 95% CI 1.22-4.46, respectively). Pregnancy-onset snorers were at higher risk of caesarean delivery (RR 1.37, 95% CI 1.09-1.73) and of having macrosomic (RR 1.54, 95% CI 1.05-2.27) and large-for-gestational-age (LGA) (RR 1.71, 95% CI 1.31-2.24) infants. In addition, being overweight or obese before pregnancy plays an important role in mediating the association between snoring and adverse pregnancy outcomes. Maternal snoring may increase the risk of adverse pregnancy outcomes, and the combination of pre-pregnancy overweight or obesity with snoring deserves particular attention. Further studies are still needed to confirm our results.
Maternal nutritional status is an important predictor of birth outcomes, yet little is known about the nutritional status of HIV-infected pregnant women treated with combination antiretroviral therapy (cART). We therefore examined the relationship between maternal BMI at study enrollment, gestational weight gain (GWG), and hemoglobin concentration (Hb) among 166 women initiating cART in rural Uganda. Prospective cohort. HIV-infected, ART-naïve pregnant women were enrolled between 12 and 28 weeks gestation and treated with a protease inhibitor or non-nucleoside reverse transcriptase inhibitor-based combination regimen. Nutritional status was assessed monthly. Neonatal anthropometry was examined at birth. Outcomes were evaluated using multivariate analysis. Mean GWG was 0.17 kg/week, 14.6% of women experienced weight loss during pregnancy, and 44.9% were anemic. Adverse fetal outcomes included low birth weight (LBW) (19.6%), preterm delivery (17.7%), fetal death (3.9%), stunting (21.1%), small-for-gestational age (15.1%), and head-sparing growth restriction (26%). No infants were HIV-infected. Gaining <0.1 kg/week was associated with LBW, preterm delivery, and a composite adverse obstetric/fetal outcome. Maternal weight at 7 months gestation predicted LBW. For each g/dL higher mean Hb, the odds of small-for-gestational age decreased by 52%. In our cohort of HIV-infected women initiating cART during pregnancy, grossly inadequate GWG was common. Infants whose mothers gained <0.1 kg/week were at increased risk for LBW, preterm delivery, and composite adverse birth outcomes. cART by itself may not be sufficient for decreasing the burden of adverse birth outcomes among HIV-infected women. Clinicaltrials.gov NCT00993031.
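The <0.1 kg/week risk threshold is a simple rate computation between antenatal visits. The helper below is purely illustrative (the weights and visit weeks are hypothetical, not study data):

```python
# Illustrative sketch (hypothetical values, not study data): compute a
# gestational weight gain (GWG) rate between two antenatal visits and
# flag the <0.1 kg/week threshold associated with adverse outcomes.

def gwg_rate(weight_start_kg, weight_end_kg, weeks_start, weeks_end):
    """Weekly weight-gain rate between two visits, in kg/week."""
    return (weight_end_kg - weight_start_kg) / (weeks_end - weeks_start)

def inadequate_gwg(rate_kg_per_week, threshold=0.1):
    """True if the rate falls below the study's risk threshold."""
    return rate_kg_per_week < threshold

rate = gwg_rate(62.0, 63.2, 16, 32)   # hypothetical weights at 16 and 32 weeks
print(round(rate, 3), inadequate_gwg(rate))
# -> 0.075 True
```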
Much has been written on the use of SSRIs during pregnancy and their possible short- and long-term adverse outcomes in neonates. The literature so far has described a varied range of peripartum illness related to SSRI exposure during foetal life, such as increased incidence of low birth weight, respiratory distress, persistent pulmonary hypertension, poor feeding, and neurobehavioural disease. Different degrees of outcome are possible, and not all newborns exposed to SSRIs during pregnancy will develop a negative outcome. So far, little is known about the possible etiologic mechanism that could explain not only the adverse neonatal effects but also the degree of clinical involvement and presentation in the early period after birth. Pharmacogenetics and, moreover, pharmacogenomics (the study of specific genetic variations and their effect on drug response) are not widespread. This review describes possible relationships between SSRI pharmacogenetics and different neonatal outcomes and summarizes the current pharmacogenetic inquiries in relation to the maternal-foetal environment.
Craig, Darren G N; Bates, Caroline M; Davidson, Janice S; Martin, Kirsty G; Hayes, Peter C; Simpson, Kenneth J
AIMS Paracetamol (acetaminophen) poisoning remains the major cause of severe acute hepatotoxicity in the UK. In this large single centre cohort study we examined the clinical impact of staggered overdoses and delayed presentation following paracetamol overdose. RESULTS Between 1992 and 2008, 663 patients were admitted with paracetamol-induced severe liver injury, of whom 161 (24.3%) had taken a staggered overdose. Staggered overdose patients were significantly older and more likely to abuse alcohol than single time point overdose patients. Relief of pain (58.2%) was the commonest rationale for repeated supratherapeutic ingestion. Despite lower total ingested paracetamol doses and lower admission serum alanine aminotransferase concentrations, staggered overdose patients were more likely to be encephalopathic on admission, require renal replacement therapy or mechanical ventilation and had higher mortality rates compared with single time point overdoses (37.3% vs. 27.8%, P = 0.025), although this overdose pattern did not independently predict death. The King's College poor prognostic criteria had reduced sensitivity (77.6, 95% CI 70.8, 81.5) for this pattern of overdose. Of the 396/450 (88.0%) single time point overdoses in whom accurate timings could be obtained, 178 (44.9%) presented to medical services >24 h following overdose. Delayed presentation beyond 24 h post overdose was independently associated with death/liver transplantation (OR 2.25, 95% CI 1.23, 4.12, P = 0.009). CONCLUSIONS Both delayed presentation and staggered overdose pattern are associated with adverse outcomes following paracetamol overdose. These patients are at increased risk of developing multi-organ failure and should be considered for early transfer to specialist liver centres. PMID:22106945
Keuffel, Eric; McCullough, Peter A; Todoran, Thomas M; Brilakis, Emmanouil S; Palli, Swetha R; Ryan, Michael P; Gunnarsson, Candace
To determine the net economic impact of switching from low-osmolar contrast media (LOCM) to iso-osmolar contrast media (IOCM; iodixanol) in patients undergoing inpatient coronary or peripheral angioplasty in the United States (US). A budget impact model (BIM) was developed from a hospital perspective. Nationally representative procedural and contrast media prevalence rates, along with MARCE (major adverse renal cardiovascular event) incidence and episode-related cost data were derived from Premier Hospital Data (October 2014 to September 2015). A previously estimated relative risk reduction in MARCE associated with IOCM usage (9.3%) was applied. The higher cost of IOCM was included when calculating the net impact estimates at the aggregate, hospital type, and per hospital levels. One-way (±25%) and probabilistic sensitivity analyses identified the model's most important inputs. Based on weighted analysis, 513,882 US inpatient angioplasties and 35,610 MARCE cases were estimated annually. Switching to an "IOCM only" strategy from a "LOCM only" strategy increases contrast media cost, but prevents 2,900 MARCE events. The annual budget impact was an estimated saving of $30.71 million, aggregated across all US hospitals, $6,316 per hospital, or $60 per procedure. Net savings were maintained across all univariate sensitivity analyses. While MARCE/event-free cost differential was the most important factor driving total net savings for hospitals in the Northeast and West, procedural volume was important in the Midwest and rural locations. Switching to an "IOCM only" strategy from a "LOCM only" approach yields substantial net global savings to hospitals, both at the national level and within hospital sub-groups. Hospital administrators should maintain awareness of the factors that are likely to be more influential for their hospital and recognize that purchasing on the basis of lower contrast media cost may result in higher overall costs for patients undergoing inpatient
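The per-procedure figure follows directly from the aggregate estimate; a minimal check using only numbers reported in the abstract:

```python
# Minimal sketch: derive the per-procedure saving from the aggregate
# budget-impact figures reported in the abstract.

annual_net_saving_usd = 30.71e6    # aggregate saving across all US hospitals
annual_procedures = 513_882        # weighted inpatient angioplasty estimate

per_procedure = annual_net_saving_usd / annual_procedures
print(round(per_procedure))
# -> 60
```

This reproduces the reported ~$60 saving per procedure.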
Hegendörfer, Eralda; Vaes, Bert; Andreeva, Elena; Matheï, Catharina; Van Pottelbergh, Gijs; Degryse, Jean-Marie
Forced expiratory volume in 1 second (FEV1) is proposed as a marker of healthy ageing, and FEV1 expressions that are independent of reference values have been reported to be better at predicting mortality in older adults. We assess and compare the predictive value of different FEV1 expressions for mortality, hospitalization, and physical and mental decline in adults aged 80 and older. Population-based, prospective, cohort study. The BELFRAIL study, Belgium. A total of 501 community-dwelling adults aged 80 and older (mean age 84.7 years). Baseline FEV1 expressed as percent predicted (FEV1PP) and z-score (FEV1Z) using the Global Lung Function Initiative 2012 reference values; over the lowest sex-specific percentile (FEV1Q); and over height squared (FEV1/Ht²) and cubed (FEV1/Ht³). Mortality data until 5.1 ± 0.2 years from baseline; hospitalization data until 3.0 ± 0.25 years. Activities of daily living, a battery of physical performance tests, the Mini-Mental State Examination, and the 15-item Geriatric Depression Scale at baseline and after 1.7 ± 0.2 years. Individuals in the lowest quartile of FEV1 expressions had higher adjusted risk than the rest of the study population for all-cause mortality (highest hazard ratio 2.05 [95% confidence interval 1.50-2.80] for FEV1Q and 2.01 [1.47-2.76] for FEV1/Ht³), first hospitalization (highest hazard ratio 1.63 [1.21-2.16] for FEV1/Ht² and 1.61 [1.20-2.16] for FEV1/Ht³), mental decline (highest odds ratio 2.80 [1.61-4.89] for FEV1Q), and physical decline (only FEV1/Ht³, with odds ratio 1.93 [1.13-3.30]). Based on risk classification improvement measures, FEV1/Ht³ and FEV1Q performed better than FEV1PP. In a cohort of adults aged 80 and older, FEV1 expressions that are independent of reference values (FEV1/Ht³ and FEV1Q) were better at predicting adverse health outcomes than traditional expressions that depend on reference values, and should be used in further research on FEV1 and ageing
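The reference-free expressions are simple transforms of raw FEV1 and height, while the traditional ones need a predicted mean and SD. The values below are hypothetical, for illustration only; the predicted mean/SD stand in for GLI-2012 reference values and are assumptions:

```python
# Illustrative sketch (all values hypothetical): reference-dependent
# vs reference-free expressions of FEV1 used in the study.

fev1 = 1.20          # measured FEV1 in litres (hypothetical)
height_m = 1.65      # standing height in metres (hypothetical)
pred_mean, pred_sd = 1.80, 0.35   # stand-ins for GLI-2012 reference values

fev1_pp = 100 * fev1 / pred_mean       # percent predicted (reference-dependent)
fev1_z = (fev1 - pred_mean) / pred_sd  # z-score (reference-dependent)
fev1_ht2 = fev1 / height_m ** 2        # FEV1/Ht^2 (reference-free)
fev1_ht3 = fev1 / height_m ** 3        # FEV1/Ht^3 (reference-free)
print(round(fev1_pp, 1), round(fev1_z, 2),
      round(fev1_ht2, 3), round(fev1_ht3, 3))
# -> 66.7 -1.71 0.441 0.267
```

FEV1Q (FEV1 over the lowest sex-specific percentile) additionally requires that percentile from cohort data, so it is not sketched here.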
It is not known whether biomarkers of hemodynamic stress, myocardial necrosis, and renal function might predict adverse outcome in patients undergoing percutaneous repair of severe mitral valve insufficiency. Thus, we aimed to assess the predictive value of various established and emerging biomarkers for major adverse cardiovascular events (MACE) in these patients. Thirty-four patients with symptomatic severe mitral valve insufficiency, with a mean STS-Score for mortality of 12.6% and a mean logistic EuroSCORE of 19.7%, undergoing MitraClip therapy were prospectively included in this study. Plasma concentrations of mid-regional proatrial natriuretic peptide (MR-proANP), Cystatin C, high-sensitive C-reactive protein (hsCRP), high-sensitive troponin T (hsTnT), N-terminal B-type natriuretic peptide (NT-proBNP), galectin-3, and soluble ST-2 (interleukin 1 receptor-like 1) were measured directly before the procedure. MACE was defined as cardiovascular death and hospitalization for heart failure (HF). During a median follow-up of 211 days (interquartile range 133 to 333 days), 9 patients (26.5%) experienced MACE (death: 7 patients; rehospitalization for HF: 2 patients). The thirty-day MACE rate was 5.9% (death: 2 patients; no rehospitalization for HF). Baseline concentrations of hsTnT (median 92.6 vs 25.2 ng/L), NT-proBNP (median 11251 vs 1974 pg/mL), and MR-proANP (median 755.6 vs 318.3 pmol/L) (all p<0.001) were clearly higher in those experiencing an event vs event-free patients, while other clinical variables including STS-Score and logistic EuroSCORE did not differ significantly. In Kaplan-Meier analyses, NT-proBNP and in particular hsTnT and MR-proANP above the median discriminated between those experiencing an event vs event-free patients. This was further corroborated by C-statistics, where areas under the ROC curve for prediction of MACE using the respective median values were 0.960 for MR-proANP, 0.907 for NT-proBNP, and 0.822 for hsTnT. MR-proANP and hsTnT strongly
Rao, B.S. Satish; Mumbrekar, K.D.; Goutham, H.V.; Donald, J.F.; Vadhiraja, M.B.; Satyamoorthy, K.
We aimed to evaluate the predictive potential of DSB repair kinetics (using the γH2AX foci assay) in lymphocytes, and analysed genetic variants in selected radioresponsive candidate genes (XRCC3, LIG4, NBN, CD44, RAD9A, LIG3, SH3GL1, BAXS, XRCC1, MAD2L2) in relation to individual susceptibility to radiotherapy (RT)-induced acute skin reactions among head and neck cancer (HNC) and breast cancer (BC) patients. All 183 HNC and 132 BC patients were treated by a 3-dimensional conformal RT technique
O'Connor, Melissa; Murtaugh, Christopher M.; Shah, Shivani; Barrón-Vaya, Yolanda; Bowles, Kathryn H.; Peng, Timothy R.; Zhu, Carolyn W.; Feldman, Penny H.
Heart failure is difficult to manage and increasingly common with many individuals experiencing frequent hospitalizations. Little is known about patient factors consistently associated with hospital readmission. A literature review was conducted to identify heart failure patient characteristics, measured before discharge, that contribute to variation in hospital readmission rates. Database searches yielded 950 potential articles, of which 34 studies met inclusion criteria. Patient characteristics generally have a very modest effect on all-cause or heart failure–related readmission within 7 to 180 days of index hospital discharge. A range of cardiac diseases and other comorbidities only minimally increase readmission rates. No single patient characteristic stands out as a key contributor across multiple studies underscoring the challenge of developing successful interventions to reduce readmissions. Interventions may need to be general in design with the specific intervention depending on each patient's unique clinical profile. PMID:26180045
Conway, Sadie H; Pompeii, Lisa A; Gimeno Ruiz de Porras, David; Follis, Jack L; Roberts, Robert E
Working long hours has been associated with adverse health outcomes. However, a definition of long work hours relative to adverse health risk has not been established. Repeated measures of work hours among approximately 2,000 participants from the Panel Study of Income Dynamics (1986-2011), conducted in the United States, were retrospectively analyzed to derive statistically optimized cutpoints of long work hours that best predicted three health outcomes. Work-hours cutpoints were assessed for model fit, calibration, and discrimination separately for the outcomes of poor self-reported general health, incident cardiovascular disease, and incident cancer. For each outcome, the work-hours threshold that best predicted increased risk was 52 hours per week or more for a minimum of 10 years. Workers exposed at this level had a higher risk of poor self-reported general health (relative risk (RR) = 1.28; 95% confidence interval (CI): 1.06, 1.53), cardiovascular disease (RR = 1.42; 95% CI: 1.24, 1.63), and cancer (RR = 1.62; 95% CI: 1.22, 2.17) compared with those working 35-51 hours per week for the same duration. This study provides the first health risk-based definition of long work hours. Further examination of the predictive power of this cutpoint on other health outcomes and in other study populations is needed. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: firstname.lastname@example.org.
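Relative risks of the kind reported above, with 95% CIs computed on the log scale, come from a 2×2 exposure-outcome table. The sketch below uses hypothetical counts (not the study's data) to show the standard calculation:

```python
import math

# Sketch: relative risk with a 95% CI on the log scale, from a 2x2
# exposure-outcome table. The counts are hypothetical, not from the study.

def relative_risk(a, n1, c, n2, z=1.96):
    """a cases among n1 exposed; c cases among n2 unexposed."""
    rr = (a / n1) / (c / n2)
    # standard error of log(RR)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

rr, lo, hi = relative_risk(30, 100, 20, 100)
print(f"RR={rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# -> RR=1.50 (95% CI 0.92-2.46)
```

The study's cutpoint search additionally refits such models at each candidate work-hours threshold and compares fit, calibration, and discrimination; that part is not reproduced here.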
Satilmisoglu, Muhammet Hulusi; Ozyilmaz, Sinem Ozbay; Gul, Mehmet; Ak Yildirim, Hayriye; Kayapinar, Osman; Gokturk, Kadir; Aksu, Huseyin; Erkanli, Korhan; Eksik, Abdurrahman
Purpose To determine the predictive values of D-dimer assay, Global Registry of Acute Coronary Events (GRACE) and Thrombolysis in Myocardial Infarction (TIMI) risk scores for adverse outcome in patients with non-ST-segment elevation myocardial infarction (NSTEMI). Patients and methods A total of 234 patients (mean age: 57.2±11.7 years, 75.2% were males) hospitalized with NSTEMI were included. Data on D-dimer assay, GRACE and TIMI risk scores were recorded. Logistic regression analysis was conducted to determine the risk factors predicting increased mortality. Results Median D-dimer levels were 349.5 (48.0–7,210.0) ng/mL, the average TIMI score was 3.2±1.2 and the GRACE score was 90.4±27.6 with high GRACE scores (>118) in 17.5% of patients. The GRACE score was correlated positively with both the D-dimer assay (r=0.215, P=0.01) and TIMI scores (r=0.504, P=0.000). Multivariate logistic regression analysis revealed that higher creatinine levels (odds ratio =18.465, 95% confidence interval: 1.059–322.084, P=0.046) constituted the only significant predictor of increased mortality risk with no predictive values for age, D-dimer assay, ejection fraction, glucose, hemoglobin A1c, sodium, albumin or total cholesterol levels for mortality. Conclusion Serum creatinine levels constituted the sole independent determinant of mortality risk, with no significant values for D-dimer assay, GRACE or TIMI scores for predicting the risk of mortality in NSTEMI patients. PMID:28408834
Blackmore, Amanda Marie; Bear, Natasha; Blair, Eve; Langdon, Katherine; Moshovis, Lisa; Steer, Kellie; Wilson, Andrew C
To determine the early predictors of respiratory hospital admissions in young people with cerebral palsy (CP). A 3-year prospective cohort study using linked data. Children and young people with CP, aged 1 to 26 years. Self-reported and carer-reported respiratory symptoms were linked to respiratory hospital admissions (as defined by the International Statistical Classification of Diseases and Related Health Problems 10th Revision codes) during the following 3 years. 482 participants (including 289 males) were recruited. They were aged 1 to 26 years (mean 10 years, 10 months; SD 5 years, 11 months) at the commencement of the study, and represented all Gross Motor Function Classification Scale (GMFCS) levels. During the 3-year period, 55 (11.4%) participants had a total of 186 respiratory hospital admissions, and spent a total of 1475 days in hospital. Statistically significant risk factors for subsequent respiratory hospital admissions over 3 years in univariate analyses were GMFCS level V, at least one respiratory hospital admission in the year preceding the survey, oropharyngeal dysphagia, seizures, frequent respiratory symptoms, gastro-oesophageal reflux disease, at least two courses of antibiotics in the year preceding the survey, mealtime respiratory symptoms and nightly snoring. Most risk factors for respiratory hospital admissions are potentially modifiable. Early identification of oropharyngeal dysphagia and the management of seizures may help prevent serious respiratory illness. One respiratory hospital admission should trigger further evaluation and management to prevent subsequent respiratory illness. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Jørgensen, Søren; Dau, Torsten
The speech-based envelope power spectrum model (sEPSM) [Jørgensen and Dau (2011). J. Acoust. Soc. Am. 130 (3), 1475-1487] estimates the envelope signal-to-noise ratio (SNRenv) of distorted speech and accurately describes the speech recognition thresholds (SRT) for normal-hearing listeners. The model is evaluated here by comparing predictions to measured data from [Kjems et al. (2009). J. Acoust. Soc. Am. 126 (3), 1415-1426], where speech is mixed with four different interferers, including speech-shaped noise, bottle noise, car noise, and cafe noise. The model accounts well for the differences in intelligibility observed for the different interferers. None of the standardized models successfully describes these data.
Kaminuma, Takuya; Karasawa, Katsuyuki; Hanyu, Nahoko
Recently, the number of human immunodeficiency virus (HIV)-positive patients has increased in Japan. HIV-positive patients are at a higher risk of cancer than the general population. This paper retrospectively reports the acute adverse effects of radiation therapy on HIV-positive patients who were treated at Tokyo Metropolitan Cancer and Infectious diseases Center Komagome Hospital (TMCICK). Thirty-one cases involving 24 HIV-positive cancer patients who were treated at TMCICK from January 1997 to March 2009 were included in this study. All acute adverse effects of radiation therapy were examined during, and one month after, the last radiation therapy session. Acute adverse effects were classified according to the site of radiation therapy treatment and analyzed using the Common Terminology Criteria for Adverse Events (CTCAE) version 3.0. Grade 3 acute adverse effects were seen in 17% of cases, and Grade 2 toxicities were found in 23% of patients. Damage to the skin and mucosa, including stomatitis or diarrhea, tended to occur after low-dose radiation therapy; however, no severe acute adverse effects were seen in other organs, such as the brain, lung, and bone. Acute adverse effects tended to occur earlier in HIV-positive patients and became severe more frequently than in the general population. In particular, disorders of the mucosa, such as those of the oral cavity, pharynx, and intestine, tended to occur rapidly. It was shown that radiation therapy is safe when treatment is performed carefully and that it is a very useful treatment for cancer in HIV-positive patients. (author)
Kosztin, Annamaria; Costa, Jason; Moss, Arthur J; Biton, Yitschak; Nagy, Vivien Klaudia; Solomon, Scott D; Geller, Laszlo; McNitt, Scott; Polonsky, Bronislava; Merkely, Bela; Kutyifa, Valentina
There are limited data on whether clinical presentation at first heart failure (HF) hospitalization predicts recurrent HF events. We aimed to assess predictors of recurrent HF hospitalizations in mild HF patients with an implantable cardioverter defibrillator or cardiac resynchronization therapy with defibrillator. Data on HF hospitalizations were prospectively collected for patients enrolled in MADIT-CRT. Predictors of recurrent HF hospitalization (HF2) after the first HF hospitalization were assessed using Cox proportional hazards regression models including baseline covariates and clinical presentation or management at first HF hospitalization. There were 193 patients with first HF hospitalization, and 156 patients with recurrent HF events. Recurrent HF rate after the first HF hospitalization was 43% at 1 year, 52% at 2 years, and 55% at 2.5 years. Clinical signs and symptoms, medical treatment, or clinical management of HF at first HF admission was not predictive for HF2. Baseline covariates predicting recurrent HF hospitalization included prior HF hospitalization (HR = 1.59, 95% CI: 1.15-2.20, P = 0.005), digitalis therapy (HR = 1.58, 95% CI: 1.13-2.20, P = 0.008), and left ventricular end-diastolic volume >240 mL (HR = 1.62, 95% CI: 1.17-2.25, P = 0.004). Recurrent HF events are frequent following the first HF hospitalization in patients with implanted implantable cardioverter defibrillator or cardiac resynchronization therapy with defibrillator. Neither clinical presentation nor clinical management during first HF admission was predictive of recurrent HF. Prior HF hospitalization, digitalis therapy, and left ventricular end-diastolic volume at enrolment predicted recurrent HF hospitalization, and these covariates could be used as surrogate markers for identifying a high-risk cohort. © 2017 The Authors. ESC Heart Failure published by John Wiley & Sons Ltd on behalf of the European Society of Cardiology.
Various biological factors have been implicated in convulsive seizures, involving side effects of drugs. For the preclinical safety assessment of drug development, it is difficult to predict seizure-inducing side effects. Here, we introduce a machine learning-based in vitro system designed to detect seizure-inducing side effects. We recorded local field potentials from the CA1 alveus in acute mouse neocortico-hippocampal slices while 14 drugs were bath-perfused at 5 different concentrations each. For each experimental condition, we collected seizure-like neuronal activity and merged the waveforms into one graphic image, which was further converted into a feature vector using Caffe, an open framework for deep learning. In the space of the first two principal components, a support vector machine completely separated the vectors (i.e., doses of individual drugs) that induced seizure-like events, and identified diphenhydramine, enoxacin, strychnine and theophylline as "seizure-inducing" drugs, which indeed have been reported to induce seizures in clinical situations. Thus, this artificial intelligence-based classification may provide a new platform to detect the seizure-inducing side effects of preclinical drugs.
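The classification step described above, separating drug-dose feature vectors in principal-component space, can be illustrated with a minimal linear classifier. The sketch below uses synthetic 2-D "component" coordinates and a perceptron as a simple stand-in for the paper's support vector machine: both learn a separating hyperplane w·x + b = 0, though the SVM additionally maximizes the margin.

```python
import random

random.seed(1)

# Hypothetical coordinates in the space of the first two principal
# components: seizure-inducing drug doses cluster upper-right, safe
# doses lower-left (illustrative only).
seizure = [(random.gauss(2.0, 0.4), random.gauss(2.0, 0.4)) for _ in range(40)]
safe = [(random.gauss(-2.0, 0.4), random.gauss(-2.0, 0.4)) for _ in range(40)]
X = seizure + safe
y = [1] * 40 + [-1] * 40          # +1 = seizure-inducing, -1 = safe

# Perceptron training: nudge (w, b) whenever a point is misclassified.
w, b = [0.0, 0.0], 0.0
for _ in range(20):
    for (x1, x2), t in zip(X, y):
        if t * (w[0] * x1 + w[1] * x2 + b) <= 0:    # wrong side of hyperplane
            w[0] += t * x1
            w[1] += t * x2
            b += t

errors = sum(t * (w[0] * x1 + w[1] * x2 + b) <= 0 for (x1, x2), t in zip(X, y))
print(errors)  # clusters this far apart are linearly separable -> 0
```

Complete separation in the sketch mirrors the abstract's report that the SVM "completely separated" the dose vectors; with real, overlapping data a soft-margin SVM would be the appropriate tool.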
Easton, Scott D
Child sexual abuse (CSA) can have a profound effect on the long-term mental health of boys/men. However, not all men with histories of CSA experience psychopathology. To improve prevention and intervention services, more research is needed to understand why some male survivors experience mental health problems and others do not. The purpose of this study was to examine factors related to mental distress among a large, non-clinical sample of men with histories of CSA (N=487). Using a cross-sectional design with purposive sampling from three national survivor organizations, data were collected through an anonymous Internet-based survey. Multivariate analyses found that only one of the four CSA severity variables-use of physical force by the abuser-was related to mental distress. Additional factors that were related to mental distress included the number of other childhood adversities, years until disclosure, overall response to disclosure, and conformity to masculine norms. Overall, the final model predicted 36% of the variance in the number of mental health symptoms. Mental health practitioners should include masculine norms, disclosure history, and childhood adversities in assessments and intervention planning with male survivors. To more fully explicate risk factors for psychopathology in this population, future studies with probability samples of men that focus on mediational processes and use longitudinal designs are needed. Copyright © 2013 Elsevier Ltd. All rights reserved.
Hansen, Tina; Lambert, Heather; Faber, Jens
Purpose: To examine the relationship between ingestive skill performance while eating and drinking and frailty status in acutely hospitalized elderly patients, and to examine whether there is a relationship between the proportion of ingestive skill difficulties and Length of Hospital Stay (LOS)…
Wittwehr, Clemens; Aladjov, Hristo; Ankley, Gerald; Byrne, Hugh J.; de Knecht, Joop; Heinzle, Elmar; Klambauer, Günter; Landesmann, Brigitte; Luijten, Mirjam; MacKay, Cameron; Maxwell, Gavin; Meek, M. E. (Bette); Paini, Alicia; Perkins, Edward; Sobanski, Tomasz; Villeneuve, Dan; Waters, Katrina M.; Whelan, Maurice
Efforts are underway to transform regulatory toxicology and chemical safety assessment from a largely empirical science based on direct observation of apical toxicity outcomes in whole organism toxicity tests to a predictive one in which outcomes and risk are inferred from accumulated mechanistic understanding. The adverse outcome pathway (AOP) framework has emerged as a systematic approach for organizing knowledge that supports such inference. We argue that this systematic organization of knowledge can inform and help direct the design and development of computational prediction models that can further enhance the utility of mechanistic and in silico data for chemical safety assessment. Examples of AOP-informed model development and its application to the assessment of chemicals for skin sensitization and multiple modes of endocrine disruption are provided. The role of problem formulation, not only as a critical phase of risk assessment but also as a guide for both AOP and complementary model development, is described. Finally, a proposal for actively engaging the modeling community in AOP-informed computational model development is made. The contents serve as a vision for how AOPs can be leveraged to facilitate development of computational prediction models needed to support the next generation of chemical safety assessment.
OBJECTIVE: The occurrence of drug-related adverse events in hospital settings is high and generates excess costs. The purpose of the study was to identify drug-related problems occurring during hospital admission and to estimate their prevalence. METHODS: A retrospective study was carried out in the State of Rio de Janeiro, Southeastern Brazil. Admissions paid by the Brazilian Unified Health System between 1999 and 2002 were analyzed, with data extracted from the Hospital Information System. Admissions were selected when one of the ICD-10 (2000) codes suspected of indicating a drug-related adverse event appeared in the main and/or secondary diagnosis fields. For continuous variables, means and standard deviations were estimated, and the statistical significance of differences was tested by analysis of variance (ANOVA) with a 95% confidence interval. RESULTS: A total of 3,421 cases were identified, equivalent to a frequency of 1.8 cases/1,000 admissions. They occurred predominantly in men (64.5%), in contracted hospitals (34.9%) and municipal hospitals (23.1%), and in psychiatric (51.4%) and internal medicine (45.2%) beds; 84.1% of the admissions resulted in discharge. Most of the events were adverse reactions and intoxications, with statistically significant differences between these categories.
Xu, Duo; Zhao, Ruo-Chi; Gao, Wen-Hui; Cui, Han-Bin
Myocarditis is an inflammatory disease of the myocardium that may lead to cardiac death in some patients. However, little is known about the predictors of in-hospital mortality in patients with suspected myocarditis. Thus, the aim of this study was to identify the independent risk factors for in-hospital mortality in patients with suspected myocarditis by establishing a risk prediction model. A retrospective study was performed to analyze the clinical medical records of 403 consecutive patients with suspected myocarditis who were admitted to Ningbo First Hospital between January 2003 and December 2013. A total of 238 males (59%) and 165 females (41%) were enrolled in this study. We divided the above patients into two subgroups (survival and nonsurvival), according to their clinical in-hospital outcomes. To maximize the effectiveness of the prediction model, we first identified the potential risk factors for in-hospital mortality among patients with suspected myocarditis, based on data pertaining to previously established risk factors and basic patient characteristics. We subsequently established a regression model for predicting in-hospital mortality using univariate and multivariate logistic regression analyses. Finally, we identified the independent risk factors for in-hospital mortality using our risk prediction model. The following prediction model for in-hospital mortality in patients with suspected myocarditis, including creatinine clearance rate (Ccr), age, ventricular tachycardia (VT), New York Heart Association (NYHA) classification, gender and cardiac troponin T (cTnT), was established in the study: P = e^a/(1 + e^a), where e is the exponential function, P is the probability of in-hospital death, and a is the linear predictor with intercept -7.34 and coefficient 2.99 for the Ccr term, plus terms for the remaining covariates. The model provides a prediction of in-hospital mortality in patients with suspected myocarditis. In addition, sufficient life support during the early stage of the disease might improve the prognoses of patients with suspected myocarditis.
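Evaluating a risk equation of the form P = e^a/(1 + e^a) is straightforward once the linear predictor a is computed. A minimal sketch, using only the intercept -7.34 quoted in the abstract; the remaining coefficients are not fully reproduced there, so a is taken as given:

```python
import math

def in_hospital_death_probability(a):
    """Logistic transform of the linear predictor a, i.e. the abstract's
    P = e**a / (1 + e**a); equivalently 1 / (1 + e**-a)."""
    return math.exp(a) / (1 + math.exp(a))

# With all covariate terms at zero, a equals the intercept -7.34 and the
# predicted baseline risk is well under 0.1%:
print(round(in_hospital_death_probability(-7.34), 5))
```

Because the transform is monotone, any positive covariate term (e.g. the 2.99 × [Ccr …] term) raises a and therefore raises the predicted probability of in-hospital death.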
TUS findings of fluid bronchogram, multifocal involvement, and pleural effusion were associated with adverse outcomes, including longer hospital stay, ICU admission, and tube thoracostomy, in children hospitalized with CAP. Therefore, TUS is a novel tool for prognostic stratification of CAP in hospitalized children.
Previous studies have not demonstrated a consistent association between potentially inappropriate medicines (PIMs) in older patients as defined by Beers criteria and avoidable adverse drug events (ADEs). This study aimed to assess whether PIMs defined by the new STOPP (Screening Tool of Older Persons' potentially inappropriate Prescriptions) criteria are significantly associated with ADEs in older people with acute illness.
Wittwehr, Clemens; Aladjov, Hristo; Ankley, Gerald; Byrne, Hugh J; de Knecht, Joop; Heinzle, Elmar; Klambauer, Günter; Landesmann, Brigitte; Luijten, Mirjam; MacKay, Cameron; Maxwell, Gavin; Meek, M E Bette; Paini, Alicia; Perkins, Edward; Sobanski, Tomasz; Villeneuve, Dan; Waters, Katrina M; Whelan, Maurice
Efforts are underway to transform regulatory toxicology and chemical safety assessment from a largely empirical science based on direct observation of apical toxicity outcomes in whole organism toxicity tests to a predictive one in which outcomes and risk are inferred from accumulated mechanistic understanding. The adverse outcome pathway (AOP) framework provides a systematic approach for organizing knowledge that may support such inference. Likewise, computational models of biological systems at various scales provide another means and platform to integrate current biological understanding to facilitate inference and extrapolation. We argue that the systematic organization of knowledge into AOP frameworks can inform and help direct the design and development of computational prediction models that can further enhance the utility of mechanistic and in silico data for chemical safety assessment. This concept was explored as part of a workshop on AOP-Informed Predictive Modeling Approaches for Regulatory Toxicology held September 24-25, 2015. Examples of AOP-informed model development and its application to the assessment of chemicals for skin sensitization and multiple modes of endocrine disruption are provided. The role of problem formulation, not only as a critical phase of risk assessment, but also as guide for both AOP and complementary model development is described. Finally, a proposal for actively engaging the modeling community in AOP-informed computational model development is made. The contents serve as a vision for how AOPs can be leveraged to facilitate development of computational prediction models needed to support the next generation of chemical safety assessment. © The Author 2016. Published by Oxford University Press on behalf of the Society of Toxicology.
Gao, L; Chen, Y D; Shi, Y J; Xue, H; Wang, J L
To investigate the predictive value of deceleration capacity of rate (DC) and the GRACE risk score for cardiovascular events in AMI patients. Consecutive AMI patients with sinus rhythm hospitalized in our department from August 2012 to August 2013 were included in this prospective study. 24-hour Holter ECG monitoring was performed within 1 week, the DC value was analyzed, and the GRACE risk score was calculated with the GRACE risk score calculator. Patients were followed up for more than 1 year and major adverse cardiac events (MACE) were recorded. Kaplan-Meier survival was analyzed according to DC and GRACE score risk stratification, respectively. A total of 157 patients were enrolled in the study (average age (58.9±12.7) years). The average follow-up was (20.54±2.85) months. Mortality during follow-up was significantly higher in patients with DC>2.5 compared to patients with DC≤2.5. The areas under the curve for risk stratification were 0.898 (95%CI 0.840-0.940), 0.786 (95%CI 0.714-0.847), and 0.708 (95%CI 0.652-0.769), respectively. MACE occurred more often in high-risk patients than in intermediate- and low-risk patients according to DC risk stratification, including among patients classified as intermediate and low risk by GRACE risk stratification. DC risk stratification is superior to the GRACE risk score for outcome assessment in this AMI patient cohort.
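The survival comparison described in the abstract rests on the Kaplan-Meier product-limit estimator, which steps the survival curve down at each observed death while censored patients simply leave the risk set. A minimal stdlib implementation on illustrative (time, event) data, not the study's follow-up data:

```python
def kaplan_meier(observations):
    """Product-limit survival estimate from (time, event) pairs,
    event = 1 for death, 0 for censoring. Returns (time, S(t)) at
    each event time. Assumes no tied observation times."""
    at_risk = len(observations)
    curve, s = [], 1.0
    for time, event in sorted(observations):
        if event:                               # a death: step the curve down
            s *= (at_risk - 1) / at_risk        # multiply by (1 - 1/n_at_risk)
            curve.append((time, round(s, 3)))
        at_risk -= 1                            # death or censoring leaves the risk set
    return curve

# Hypothetical follow-up, months: deaths at 3, 7, 14; censored at 5, 11, 20.
obs = [(3, 1), (5, 0), (7, 1), (11, 0), (14, 1), (20, 0)]
print(kaplan_meier(obs))
```

Running separate curves for the DC>2.5 and DC≤2.5 groups (or for GRACE risk strata) and comparing them, typically with a log-rank test, is exactly the kind of analysis the abstract summarizes.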
Conclusions: The SAPS 3 scoring system exhibited satisfactory performance, even superior to APACHE II, in discrimination. In predicting hospital mortality, SAPS 3 did not exhibit good calibration and overestimated hospital mortality, demonstrating that SAPS 3 needs improvement in the future.
Monhart, Z.; Reissigová, Jindra; Zvárová, Jana; Grünfeldová, H.; Janský, P.; Vojáček, J.; Widimský, P.
Vol. 1, No. 1 (2013), p. 52. ISSN 1805-8698. [EFMI 2013 Special Topic Conference, 17.04.2013-19.04.2013, Prague] Institutional support: RVO:67985807 Keywords: acute coronary syndrome * in-hospital death * prediction * multilevel logistic regression * non-PCI hospital Subject RIV: IN - Informatics, Computer Science
McCoy, Thomas H; Hart, Kamber L; Perlis, Roy H
To better understand variation in reported rates of delirium, this study characterized delirium occurrence rate by department of service and primary admitting diagnosis. Nine consecutive years (2005-2013) of general hospital admissions (N=831,348) were identified across two academic medical centers using electronic health records. The primary admitting diagnosis and the treating clinical department were used to calculate occurrence rates of a previously published delirium definition composed of billing codes and natural language processing of discharge summaries. Delirium rates varied significantly across admitting diagnosis group (χ2(10)=12,786) and department; rates were as low as 0.08% (86/109,764) in some admission groups, with neurological admissions the greatest (2,851/25,450; 11.2%). Although the rate of delirium varied across the two hospitals, the relative rates within departments were strongly correlated (r=0.96). Overall, the rate of delirium varies significantly across admitting diagnosis and hospital department. Both admitting diagnosis and department of care are even stronger predictors of risk than age; as such, simple risk stratification may offer avenues for targeted prevention and treatment efforts. Copyright © 2017 Elsevier Inc. All rights reserved.
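The department-level figures above are simple occurrence proportions, and a chi-square statistic tests whether two departments differ. A sketch using the two extreme counts quoted in the abstract (the 2×2 comparison is for illustration; the study's test spanned all diagnosis groups):

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Counts from the abstract: 86 delirium cases / 109,764 admissions in the
# lowest-rate group vs. 2,851 / 25,450 for neurological admissions.
low_rate = 86 / 109764
high_rate = 2851 / 25450
stat = chi2_2x2(86, 109764 - 86, 2851, 25450 - 2851)
print(round(low_rate * 100, 2), round(high_rate * 100, 1), round(stat))
```

The enormous statistic simply reflects how different a 0.08% rate and an 11.2% rate are at these sample sizes, consistent with the abstract's conclusion that admitting diagnosis strongly predicts delirium risk.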
Lian Leng Low
To reduce readmissions, it may be cost-effective to consider risk stratification, targeting intervention programs to patients at high risk of readmission. In this study, we aimed to derive and validate a prediction model including several novel markers of hospitalization severity, and to compare the model with the LACE index (Length of stay, Acuity of admission, Charlson comorbidity index, Emergency department visits in past 6 months), an established risk stratification tool. This was a retrospective cohort study of all patients ≥ 21 years of age who were admitted to a tertiary hospital in Singapore from January 1, 2013 through May 31, 2015. Data were extracted from the hospital's electronic health records. The outcome was defined as unplanned readmission within 30 days of discharge from the index hospitalization. Candidate predictive variables were broadly grouped into five categories: patient demographics, social determinants of health, past healthcare utilization, medical comorbidities, and markers of hospitalization severity. Multivariable logistic regression was used to predict the outcome, and receiver operating characteristic analysis was performed to compare our model with the LACE index. 74,102 cases were enrolled for analysis. Of these, 11,492 patient cases (15.5%) were readmitted within 30 days of discharge. A total of fifteen predictive variables were strongly associated with the risk of 30-day readmission, including number of emergency department visits in the past 6 months, Charlson Comorbidity Index, and markers of hospitalization severity such as 'requiring inpatient dialysis during index admission' and 'treatment with intravenous furosemide 40 milligrams or more' during index admission. Our predictive model outperformed the LACE index by achieving larger area under the curve values: 0.78 (95% confidence interval [CI]: 0.77-0.79) versus 0.70 (95% CI: 0.69-0.71). Several factors are important for the risk of 30-day readmissions.
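The model comparison above hinges on the area under the ROC curve. AUC has a convenient rank interpretation: it equals the probability that a randomly chosen readmitted patient receives a higher risk score than a randomly chosen non-readmitted one. A from-scratch sketch on synthetic scores; the 0.78 vs. 0.70 values are the study's and are not reproduced here:

```python
import random

random.seed(2)

def auc(pos_scores, neg_scores):
    """Mann-Whitney form of the ROC AUC: fraction of (positive, negative)
    pairs ranked correctly, counting ties as half."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical risk scores: readmitted patients score higher on average.
pos = [random.gauss(1.0, 1.0) for _ in range(200)]   # readmitted
neg = [random.gauss(0.0, 1.0) for _ in range(200)]   # not readmitted
print(round(auc(pos, neg), 2))
```

Comparing two models then amounts to computing this statistic for each model's scores on the same patients, which is what the receiver operating characteristic analysis in the abstract formalizes (with confidence intervals).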
Fatemeh Saheb Sharif-Askari
BACKGROUND: Anticoagulation therapy is usually required in patients with chronic kidney disease (CKD) for treatment or prevention of thromboembolic diseases. However, this benefit could easily be offset by the risk of bleeding. OBJECTIVES: To determine the incidence of adverse outcomes of anticoagulants in hospitalized patients with CKD, and to compare the rates of major bleeding events between unfractionated heparin (UFH) and enoxaparin users. METHODS: A one-year prospective observational study was conducted in patients with CKD stages 3 to 5 (estimated GFR, 10-59 mL/min/1.73 m2) who were admitted to the renal unit of Dubai Hospital. Propensity scores for the use of anticoagulants, estimated for each of the 488 patients, were used to identify a cohort of 117 pairs of patients. The Cox regression method was used to estimate the association between anticoagulant use and adverse outcomes. RESULTS: Major bleeding occurred in 1 in 3 patients who received anticoagulation during hospitalization (hazard ratio [HR], 4.61 [95% confidence interval [CI], 2.05-10.35]). Compared with enoxaparin users, patients who received anticoagulation with unfractionated heparin had a lower mean [SD] platelet count (139.95 × 10³/µL vs 205.56 × 10³/µL; P<0.001) and a higher risk of major bleeding (HR, 4.79 [95% CI, 1.85-12.36]). Furthermore, compared with those who did not receive anticoagulants, patients who did had higher in-hospital mortality (HR, 2.54 [95% CI, 1.03-6.25]), longer length of hospitalization (HR, 1.04 [95% CI, 1.01-1.06]), and higher hospital readmission at 30 days (HR, 1.79 [95% CI, 1.10-2.91]). CONCLUSIONS: Anticoagulation among hospitalized patients with CKD was significantly associated with an increased risk of bleeding and in-hospital mortality. Hence, intensive monitoring and preventive measures such as laboratory monitoring and/or dose adjustment are warranted.
Yamamoto, Yoshiaki; Tsunedomi, Ryouichi; Fujita, Yusuke; Otori, Toru; Ohba, Mitsuyoshi; Kawai, Yoshihisa; Hirata, Hiroshi; Matsumoto, Hiroaki; Haginaka, Jun; Suzuki, Shigeo; Dahiya, Rajvir; Hamamoto, Yoshihiko; Matsuyama, Kenji; Hazama, Shoichi; Nagano, Hiroaki; Matsuyama, Hideyasu
We investigated the relationship between axitinib pharmacogenetics and clinical efficacy/adverse events in advanced renal cell carcinoma (RCC) and established a model to predict clinical efficacy and adverse events using pharmacokinetics and gene polymorphisms related to drug metabolism and efflux in a phase II trial. We prospectively evaluated the area under the plasma concentration-time curve (AUC) of axitinib, objective response rate, and adverse events in 44 consecutive advanced RCC patients treated with axitinib. To establish a model for predicting clinical efficacy and adverse events, polymorphisms in genes including ABC transporters (ABCB1 and ABCG2), UGT1A, and OR2B11 were analyzed by whole-exome sequencing, Sanger sequencing, and DNA microarray. To validate this prediction model, AUC calculated from 6 gene polymorphisms was compared prospectively with actual AUC in 16 additional consecutive patients. Actual AUC significantly correlated with the objective response rate (P = 0.0002) and adverse events (hand-foot syndrome, P = 0.0055; and hypothyroidism, P = 0.0381). Calculated AUC significantly correlated with actual AUC, and calculated AUC before treatment precisely predicted actual AUC after axitinib treatment (P = 0.0066). Our pharmacogenetics-based AUC prediction model may determine the optimal initial dose of axitinib, and thus facilitate better treatment of patients with advanced RCC.
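The exposure measure in this study, the area under the plasma concentration-time curve, is conventionally computed from sampled concentrations with the trapezoidal rule. A sketch on hypothetical sampling times and concentrations; these are illustrative values, not axitinib data:

```python
def trapezoid_auc(times, conc):
    """Linear trapezoidal AUC over paired (time, concentration) samples."""
    return sum((t1 - t0) * (c0 + c1) / 2
               for (t0, c0), (t1, c1) in zip(zip(times, conc),
                                             zip(times[1:], conc[1:])))

times = [0, 1, 2, 4, 8, 12]                 # hours after dose (hypothetical)
conc = [0.0, 18.0, 25.0, 16.0, 6.0, 2.0]    # ng/mL (hypothetical)
print(round(trapezoid_auc(times, conc), 1))  # AUC in ng*h/mL
```

In the study's design, a genotype-based formula substitutes for this measured AUC before the first dose, which is why agreement between calculated and actual AUC is the key validation result.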
The frequency of the CYP2D6 SNP combination 2988G>A, -1584C and 2850C>T was significantly higher among those with ultrasound-diagnosed fatty liver following the commencement of tamoxifen therapy (P=0.029). Adverse effects occurred at a significantly higher frequency among postmenopausal women (P=0.041). The three patients who developed recurrence of breast cancer had no association with the SNPs tested. Conclusions: The CYP2D6 SNP combination 2988G>A, -1584C and 2850C>T, strongly suggestive of the *41 reduced-function allele, is likely to be useful in predicting the occurrence of the adverse effect of fatty liver in breast cancer patients on tamoxifen, so that alternative treatment can be considered and lifestyle modifications implemented. Larger studies, with measurement of tamoxifen and metabolite levels, are recommended. Alternative therapy should be considered for postmenopausal patients. Keywords: fatty-liver, 2988G>A, CYP2D6*41, intermediate-metabolizer, SNP
Scholl, Joep H G; van Hunsel, Florence P A M; Hak, Eelko; van Puijenbroek, Eugène P
The statistical screening of pharmacovigilance databases containing spontaneously reported adverse drug reactions (ADRs) is mainly based on disproportionality analysis. The aim of this study was to improve the efficiency of full database screening using a prediction model-based approach. A logistic regression-based prediction model containing 5 candidate predictors was developed and internally validated using the Summary of Product Characteristics as the gold standard for the outcome. All drug-ADR associations, with the exception of those related to vaccines, with a minimum of 3 reports formed the training data for the model. Performance was based on the area under the receiver operating characteristic curve (AUC). Results were compared with the current method of database screening based on the number of previously analyzed associations. A total of 25 026 unique drug-ADR associations formed the training data for the model. The final model contained all 5 candidate predictors (number of reports, disproportionality, reports from healthcare professionals, reports from marketing authorization holders, Naranjo score). The AUC for the full model was 0.740 (95% CI; 0.734-0.747). The internal validity was good based on the calibration curve and bootstrapping analysis (AUC after bootstrapping = 0.739). Compared with the old method, the AUC increased from 0.649 to 0.740, and the proportion of potential signals increased by approximately 50% (from 12.3% to 19.4%). A prediction model-based approach can be a useful tool to create priority-based listings for signal detection in databases consisting of spontaneous ADRs. © 2017 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.
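Disproportionality analysis, the screening method this prediction model builds on, compares how often an ADR is reported with a drug versus with all other drugs. A common measure is the reporting odds ratio (ROR) from a 2×2 table; the counts below are illustrative assumptions, not figures from any pharmacovigilance database:

```python
import math

def ror(a, b, c, d):
    """Reporting odds ratio and lower 95% bound from a 2x2 table:
    a: reports of the drug with the ADR of interest
    b: reports of the drug with other ADRs
    c: reports of other drugs with the ADR
    d: reports of other drugs with other ADRs."""
    estimate = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(ROR)
    lower = math.exp(math.log(estimate) - 1.96 * se)
    return estimate, lower                          # a signal often requires lower > 1

est, lower = ror(20, 480, 100, 99400)  # hypothetical counts
print(round(est, 1), round(lower, 1))
```

In the abstract's approach, this disproportionality measure becomes just one of five predictors (alongside report counts, reporter type, and the Naranjo score) feeding a logistic regression that ranks associations for review.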
Shimada, Yutaka; Fujimoto, Makoto; Nogami, Tatsuya; Watari, Hidetoshi; Kitahara, Hideyuki; Misawa, Hiroki; Kimbara, Yoshiyuki
Kampo medicine is traditional Japanese medicine, which originated in ancient traditional Chinese medicine, but was introduced and developed uniquely in Japan. Today, Kampo medicines are integrated into the Japanese national health care system. Incident reporting systems are currently being widely used to collect information about patient safety incidents that occur in hospitals. However, no investigations have been conducted regarding patient safety incident reports related to Kampo medicines. The aim of this study was to survey and analyse incident reports related to Kampo medicines in a Japanese university hospital to improve future patient safety. We selected incident reports related to Kampo medicines filed in Toyama University Hospital from May 2007 to April 2017, and investigated them in terms of medication errors and adverse drug events. Out of 21,324 total incident reports filed in the 10-year survey period, we discovered 108 Kampo medicine-related incident reports. However, five cases were redundantly reported; thus, the number of actual incidents was 103. Of those, 99 incidents were classified as medication errors (77 administration errors, 15 dispensing errors, and 7 prescribing errors), and four were adverse drug events, namely Kampo medicine-induced interstitial pneumonia. The Kampo medicine (crude drug) that was thought to induce interstitial pneumonia in all four cases was Scutellariae Radix, which is consistent with past reports. According to the incident severity classification system recommended by the National University Hospital Council of Japan, of the 99 medication errors, 10 incidents were classified as level 0 (an error occurred, but the patient was not affected) and 89 incidents were level 1 (an error occurred that affected the patient, but did not cause harm). Of the four adverse drug events, two incidents were classified as level 2 (patient was transiently harmed, but required no treatment), and two incidents were level 3b (patient was
Tetteh, Raymond A; Nartey, Edmund T; Lartey, Margaret; Mantel-Teeuwisse, Aukje K; Leufkens, Hubert G M; Nortey, Priscilla A; Dodoo, Alexander N O
There is strong evidence that post-exposure prophylaxis (PEP) with antiretroviral drugs in the timely management of occupational exposures sustained by healthcare workers decreases the risk of HIV infection and PEP is now widely used. Antiretroviral drugs have well documented toxicities and produce adverse events in patients living with HIV/AIDS. In the era of "highly active antiretroviral therapy", non-adherence to treatment has been closely linked to the occurrence of adverse events in HIV patients and this ultimately influences treatment success but the influence of adverse events on adherence during PEP is less well studied. Following the introduction of a HIV post-exposure prophylaxis program in the Korle-Bu Teaching Hospital in January 2005, the incidence of adverse events and adherence were documented in occupationally-exposed healthcare workers (HCWs) and healthcare students (HCSs). Cohort event monitoring was used in following-up on exposed HCWs/HCSs for the two study outcomes; adverse events and adherence. All adverse events reported were grouped by MedDRA system organ classification and then by preferred term according to prophylaxis regimen. Adherence was determined by the completion of prophylaxis schedule. Cox proportional regression analysis was applied to determine the factors associated with the cohort study outcomes. Differences in frequencies were tested using the Chi square test and p < 0.05 was considered statistically significant. A total of 228 exposed HCWs/HCSs were followed up during the study, made up of 101 exposed HCWs/HCSs administered lamivudine/zidovudine (3TC/AZT) for 3 days; 75 exposed HCWs/HCSs administered lamivudine/zidovudine (3TC/AZT) for 28 days; and 52 exposed HCWs/HCSs administered lamivudine/zidovudine/lopinavir-ritonavir (3TC/AZT/LPV-RTV) for 28 days. The frequency of adverse events was 28% (n = 28) in exposed HCWs/HCSs administered 3TC/AZT for 3 days, 91% (n = 68) in exposed HCWs/HCSs administered 3TC/AZT for
de Jonge, J.; Mesman, J.A.J.M.; Manniën, J.; Zwart, J.J.; van Dillen, J.; van Roosmalen, J.
Objectives: To test the hypothesis that low risk women at the onset of labour with planned home birth have a higher rate of severe acute maternal morbidity than women with planned hospital birth, and to compare the rate of postpartum haemorrhage and manual removal of placenta. Design: Cohort study
Furman, Mark I; Gore, Joel M; Anderson, Fredrick A; Budaj, Andrzej; Goodman, Shaun G; Avezum, Avaro; López-Sendón, José; Klein, Werner; Mukherjee, Debabrata; Eagle, Kim A; Dabbous, Omar H; Goldberg, Robert J
To examine the association between elevated leukocyte count and hospital mortality and heart failure in patients enrolled in the multinational, observational Global Registry of Acute Coronary Events (GRACE). Elevated leukocyte count is associated with adverse hospital outcomes in patients presenting with acute myocardial infarction (AMI). The association of this prognostic factor with hospital mortality and heart failure in patients with other acute coronary syndromes (ACS) is unclear. We examined the association between admission leukocyte count and hospital mortality and heart failure in 8269 patients presenting with an ACS. This association was examined separately in patients with ST-segment elevation AMI, non-ST-segment elevation AMI, and unstable angina. Leukocyte count was divided into 4 mutually exclusive groups (Q1-Q4, with Q2 representing the normal range and Q4 comprising counts > 12,000). Multiple logistic regression analysis was performed to examine the association between elevated leukocyte count and hospital events while accounting for the simultaneous effect of several potentially confounding variables. Increasing leukocyte count was significantly associated with hospital death (adjusted odds ratio [OR] 2.8, 95% CI 2.1-3.6 for Q4 compared to Q2 [normal range]) and heart failure (OR 2.7, 95% CI 2.2-3.4) for patients presenting with ACS. This association was seen in patients with ST-segment elevation AMI (OR for hospital death 3.2, 95% CI 2.1-4.7; OR for heart failure 2.4, 95% CI 1.8-3.3), non-ST-segment elevation AMI (OR for hospital death 1.9, 95% CI 1.2-3.0; OR for heart failure 1.7, 95% CI 1.1-2.5), and unstable angina (OR for hospital death 2.8, 95% CI 1.4-5.5; OR for heart failure 2.0, 95% CI 0.9-4.4). In men and women of all ages with the spectrum of ACS, initial leukocyte count is an independent predictor of hospital death and the development of heart failure.
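The adjusted odds ratios above come from multiple logistic regression, but the underlying arithmetic of an odds ratio with a Wald 95% confidence interval can be shown from a 2x2 table. A minimal sketch; the counts below are hypothetical illustrations, not GRACE data:

```python
import math

def odds_ratio_ci(exposed_events, exposed_n, ref_events, ref_n, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from 2x2 counts."""
    a, b = exposed_events, exposed_n - exposed_events   # high-count group
    c, d = ref_events, ref_n - ref_events               # reference (normal-range) group
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)           # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 120/800 deaths in Q4 vs. 50/900 in Q2
or_, lo, hi = odds_ratio_ci(120, 800, 50, 900)
```

The multivariable ORs reported in the abstract additionally adjust for confounders, which this count-based sketch does not do.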
Zhang, Yu; Chen, Deng; Xu, Da; Tan, Ge; Liu, Ling
To explore the applicability of the epidemiology-based mortality score in status epilepticus (EMSE) and the status epilepticus severity score (STESS) in predicting hospital mortality in patients with status epilepticus (SE) in western China. Furthermore, we sought to compare the abilities of the two scales to predict mortality from convulsive status epilepticus (CSE) and non-convulsive status epilepticus (NCSE). Patients with epilepsy (n = 253) were recruited from the West China Hospital of Sichuan University from January 2012 to January 2016. The EMSE and STESS for all patients were calculated immediately after admission. The main outcome was in-hospital death. The predicted values were analysed using SPSS 22.0 receiver operating characteristic (ROC) curves. Of the 253 patients with SE who were included in the study, 39 (15.4%) died in the hospital. Using STESS ≥4 points to predict SE mortality, the area under the ROC curve (AUC) was 0.724 (P < 0.05), while EMSE ≥90 points gave an AUC of 0.666 (P > 0.05). The hospital mortality rate from SE in this study was 15.4%. Those with STESS ≥4 points or EMSE ≥79 points had higher rates of SE mortality. Both STESS and EMSE are less useful for predicting in-hospital mortality in NCSE than in CSE. Furthermore, the EMSE has some advantages over the STESS. Copyright © 2018 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.
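The AUC values reported for STESS and EMSE are equivalent to the Mann-Whitney statistic: the probability that a randomly chosen non-survivor scores higher than a randomly chosen survivor, with ties counted as one half. A minimal sketch with hypothetical score data (not from this study):

```python
def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney probability that a positive case
    (in-hospital death) outscores a negative case; ties count 0.5."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# Hypothetical STESS values for 4 deaths and 6 survivors
died = [5, 4, 6, 4]
survived = [2, 3, 1, 4, 2, 3]
value = auc(died, survived)
```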
Barker, Lucy Church; Gruneir, Andrea; Fung, Kinwah; Herrmann, Nathan; Kurdyak, Paul; Lin, Elizabeth; Rochon, Paula A; Seitz, Dallas; Taylor, Valerie H; Vigod, Simone N
Psychiatric readmission is a common negative outcome. Predictors of readmission may differ by sex. This study aimed to derive and internally validate sex-specific models to predict 30-day psychiatric readmission. We used population-level health administrative data to identify predictors of 30-day psychiatric readmission among women (n = 33,353) and men (n = 32,436) discharged from all psychiatric units in Ontario, Canada (2008-2011). Predictor variables included sociodemographics, health service utilization, and clinical characteristics. Using derivation data sets, multivariable logistic regression models were fit to determine optimal predictive models for each sex separately. Results were presented as adjusted odds ratios (aORs) and 95% confidence intervals (CI). The multivariable models were then applied in the internal validation data sets. The 30-day readmission rates were 9.3% (women) and 9.1% (men). Many predictors were consistent between women and men. For women only, personality disorder (aOR 1.21, 95% CI 1.03-1.42) and positive symptom score (aOR 1.41, 95% CI 1.09-1.82 for score of 1 vs. 0; aOR 1.44, 95% CI 1.26-1.64 for ≥ 2 vs. 0) increased odds of readmission. For men only, self-care problems at admission (aOR 1.20, 95% CI 1.06-1.36) and discharge (aOR 1.44, 95% CI 1.26-1.64 for score of 1 vs. 0; aOR 1.79, 95% CI 1.17-2.74 for 2 vs. 0), and mild anxiety rating (score of 1 vs. 0: aOR 1.30, 95% CI 1.02-1.64, derivation model only) increased odds of readmission. Models had moderate discriminative ability in derivation and internal validation samples for both sexes (c-statistics 0.64-0.65). Certain key predictors of psychiatric readmission differ by sex. This knowledge may help to reduce psychiatric hospital readmission rates by focusing interventions.
Ling, Ru; Liu, Jiawang
To construct prediction models for health workforce and hospital beds in county hospitals of Hunan by multiple linear regression. We surveyed 16 counties in Hunan with stratified random sampling using uniform questionnaires, and performed multiple linear regression analysis with 20 indicators selected by literature review. Independent variables in the multiple linear regression model on medical personnel in county hospitals included the counties' urban residents' income, crude death rate, medical beds, business occupancy, professional equipment value, the number of devices valued above 10,000 yuan, fixed assets, long-term debt, medical income, medical expenses, outpatient and emergency visits, hospital visits, actual available bed days, and utilization rate of hospital beds. Independent variables in the multiple linear regression model on county hospital beds included the population aged 65 and above in the counties, disposable income of urban residents, medical personnel of medical institutions in the county area, business occupancy, the total value of professional equipment, fixed assets, long-term debt, medical income, medical expenses, outpatient and emergency visits, hospital visits, actual available bed days, utilization rate of hospital beds, and length of hospitalization. The prediction models show good explanatory power and fit, and may be used for short- and mid-term forecasting.
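A multiple-linear-regression prediction model of the kind described can be sketched with ordinary least squares. The two predictors and every value below are hypothetical stand-ins for the survey variables, chosen only to make the mechanics concrete:

```python
import numpy as np

# Hypothetical county data: predict hospital beds from population aged 65+
# (thousands) and medical income (millions of yuan).
X = np.array([[45.0, 120.0],
              [60.0, 150.0],
              [38.0,  95.0],
              [72.0, 180.0],
              [55.0, 140.0],
              [30.0,  80.0]])
y = np.array([520.0, 660.0, 430.0, 790.0, 610.0, 350.0])

A = np.column_stack([np.ones(len(X)), X])      # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # ordinary least squares fit
predicted = A @ coef                           # in-sample predictions
```

With an intercept in the model, least-squares residuals sum to zero, which is one quick sanity check on a fitted model.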
Li, Jun; Li, Qian; Zhou, Bei; Gao, Yanli; Ma, Jiehua; Li, Jingyun
Cosmetic surgery is becoming increasingly popular in China. However, reports on the predictive factors for cosmetic surgery in Chinese individuals are scarce in the literature. We retrospectively analyzed 4550 cosmetic surgeries performed from January 2010 to December 2014 at a single center in China. Data collection included patient demographics and type of cosmetic surgery. Predictive factors were age, sex, marital status, occupational status, educational degree, and having had children. Predictive factors for the three major cosmetic surgeries were determined using a logistic regression analysis. Patients aged 19-34 years accounted for most of the surgical procedures (76.9%). The most commonly requested procedures were eye surgery, Botox injection, and nevus removal. Logistic regression analysis showed that higher education level (college, P = 0.01, OR 1.21) was predictive for eye surgery. Age (19-34 years, P = 0.00, OR 33.39; 35-50, P = 0.00, OR 31.34; ≥51, P = 0.00, OR 16.42), female sex (P = 0.00, OR 9.19), employment (service occupations, P = 0.00, OR 2.31; non-service occupations, P = 0.00, OR 1.76), and higher education level (college, P = 0.00, OR 1.39) were independent predictive factors for Botox injection. Married status (P = 0.00, OR 1.57), employment (non-service occupations, P = 0.00, OR 1.50), higher education level (masters, P = 0.00, OR 6.61), and having children (P = 0.00, OR 1.45) were independent predictive factors for nevus removal. The principal three cosmetic surgeries (eye surgery, Botox injection, and nevus removal) were associated with multiple variables. Patients employed in non-service occupations were more inclined to undergo Botox injection and nevus removal. Cohort study, Level III.
Bruno Mendonça Coêlho
Childhood adversities have been associated with a number of medical and psychiatric outcomes. However, the reported effects that specific childhood adversities have on suicidality vary across studies. This was a cross-sectional, stratified, multistage area probability investigation of a general population in Brazil, designated the São Paulo Megacity Mental Health Survey. The World Mental Health Composite International Diagnostic Interview was applied to 5037 individuals ≥ 18 years of age, in order to assess 12 different adversities occurring during childhood and/or adolescence, as well as to look for associations between those adversities and subsequent suicidality in different age strata. Over half of the respondents reported at least one childhood adversity. Only physical abuse was consistently associated with suicide attempts in all subsequent life stages (OR = 2.1). Among adults 20-29 years of age, the likelihood of a suicide attempt was correlated with parental divorce, whereas suicidal ideation was associated with prior sexual abuse. Among adults over 30 years of age, physical illness and economic adversity emerged as relevant childhood adversities associated with suicide attempts, whereas sexual abuse, family violence, and economic adversity were associated with suicidal ideation. Childhood adversities, especially physical abuse, are likely associated with unfavorable consequences in subsequent years. For suicidality across a lifespan, the role of different childhood adversities must be examined independently.
Borer, Steven M.; Kokkirala, Aravind; O'Sullivan, David M.; Silverman, David I.
Background: Despite intensive investigation, the pathogenesis of heart failure with normal ejection fraction (HFNEF) remains unclear. We hypothesized that subtle abnormalities of systolic function might play a role, and that abnormal systolic strain and strain rate would provide a marker for adverse outcomes. Methods: Patients with new CHF and left ventricular ejection fraction > 50% were included. Exclusion criteria were recent myocardial infarction, severe valvular heart disease, severe left ventricular hypertrophy (septum > 1.8 cm), or a technically insufficient echocardiogram. Average peak systolic strain and strain rate were measured using an off-line grey-scale imaging technique. Systolic strain and strain rate for readmitted patients were compared with those of patients who remained readmission-free. Results: One hundred consecutive patients with a first admission for HFNEF from January 1, 2004 through December 31, 2007, inclusive, were analyzed. Fifty-two patients were readmitted with a primary diagnosis of heart failure. Systolic strain and strain rates were reduced in both study groups compared to controls. However, systolic strain did not differ significantly between the two groups (-11.7% for those readmitted compared with -12.9% for those free from readmission, P = 0.198), and systolic strain rates also were similar (-1.05 s-1 versus -1.09 s-1, P = 0.545). E/e' was significantly higher in readmitted patients compared with those who remained free from readmission (14.5 versus 11.0, P = 0.013). E/e' (OR 1.189, 95% CI 1.026-1.378; P = 0.021) was found to be an independent predictor of HFNEF readmission. Conclusions: Among patients with new-onset HFNEF, systolic strain and strain rates are reduced compared with patients free of HFNEF, but do not predict hospital readmission. Elevated E/e' is a predictor of readmission in these patients. PMID:28352395
A retrospective study was conducted in the Department of Pediatrics, SCB Medical College and SVPPGIP, for a period of 2 years (September 2012 to August 2014). All patients from birth to 14 years admitted to the pediatric ward were under ADR surveillance. Patients admitted to our hospital with an adverse drug reaction, or patients developing an adverse drug reaction in our hospital, were studied; only cases in which the central nervous system was involved were included. The cases were compiled and the causality of the offending drugs was assessed using the WHO-UMC causality assessment score. The severity of the drug reaction in every case was determined using Hartwig's severity scoring scale. A total of 350 adverse reactions were reported in this period, with a prevalence rate of 2.04% (i.e., 20 out of 1000 children experienced an ADR), and annual incidence rates of 0.9% and 1.14% over the two years. Of the 350 cases, the dermatological system was most commonly involved (207 cases, 59.14%), followed by the central nervous system (46 cases, 13.14%) and the GI system (34 cases, 9.71%). Immediately life-threatening reactions such as anaphylaxis, angioedema and shock were reported in 16 cases. Our study group comprised the patients in whom the ADR involved the CNS. Of these 46 cases, 25 were female and 21 male. Reactions attributed to drugs included encephalopathy, extrapyramidal symptoms, febrile seizure, tremor, head reeling, ototoxicity, persistent cry, pseudotumor cerebri, psychosis, seizure, status epilepticus, toxic amblyopia and ataxia. The most common CNS manifestation was extrapyramidal side effects (EPS), involving 21% of cases. The most common drugs causing CNS manifestations were anti-tubercular therapy (HRZE), causing blindness, EPS, psychosis and toxic amblyopia.
Duncan, Ian; Huynh, Nhan
Predictive models for hospital readmission rates are in high demand because of the Centers for Medicare & Medicaid Services (CMS) Hospital Readmission Reduction Program (HRRP). The LACE index is one of the most popular predictive tools among hospitals in the United States. The LACE index is a simple tool with 4 parameters: Length of stay, Acuity of admission, Comorbidity, and Emergency visits in the previous 6 months. The authors applied logistic regression to develop a predictive model for a medium-sized not-for-profit community hospital in California using patient-level data with more specific patient information (including 13 explanatory variables). Specifically, the logistic regression is applied to 2 populations: a general population including all patients and the specific group of patients targeted by the CMS penalty (characterized as ages 65 or older with select conditions). The 2 resulting logistic regression models have a higher sensitivity rate compared to the sensitivity of the LACE index. The C statistic values of the model applied to both populations demonstrate moderate levels of predictive power. The authors also build an economic model to demonstrate the potential financial impact of the use of the model for targeting high-risk patients in a sample hospital and demonstrate that, on balance, whether the hospital gains or loses from reducing readmissions depends on its margin and the extent of its readmission penalties.
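The LACE index itself is straightforward to compute. The sketch below uses the point mapping published by van Walraven et al. (2010); the mapping is reproduced from memory of that index and should be checked against the original publication before any use:

```python
def lace_index(los_days, acute_admission, charlson, ed_visits_6mo):
    """LACE readmission-risk score (assumed mapping, per van Walraven 2010):
    L = length of stay, A = acuity, C = Charlson comorbidity, E = ED visits."""
    if los_days < 1:
        L = 0
    elif los_days == 1:
        L = 1
    elif los_days == 2:
        L = 2
    elif los_days == 3:
        L = 3
    elif los_days <= 6:
        L = 4
    elif los_days <= 13:
        L = 5
    else:
        L = 7
    A = 3 if acute_admission else 0          # emergent/acute admission
    C = charlson if charlson <= 3 else 5     # Charlson score, capped at 5 points
    E = min(ed_visits_6mo, 4)                # ED visits in prior 6 months, capped
    return L + A + C + E
```

A regression model such as the one the authors built replaces this fixed point scheme with coefficients estimated from the hospital's own data, which is why it can achieve higher sensitivity.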
Xie, Yang; Schreier, Günter; Chang, David C W; Neubauer, Sandra; Redmond, Stephen J; Lovell, Nigel H
Healthcare administrators worldwide are striving to lower the cost of care while improving the quality of care given. Better clinical and administrative decision making is therefore needed. Anticipating outcomes such as the number of hospitalization days could contribute to addressing this problem. In this paper, a method was developed, using large-scale health insurance claims data, to predict the number of hospitalization days in a population. We utilized a regression decision tree algorithm, along with insurance claim data from 300,000 individuals over three years, to predict the number of days in hospital in the third year, based on medical admissions and claims data from the first two years. Our method performs well in the general population. For the population aged 65 years and over, the predictive model significantly improves on a baseline method (predicting a constant number of days for each patient), achieving a specificity of 70.20% and sensitivity of 75.69% in classifying these subjects into the two categories of 'no hospitalization' and 'at least one day in hospital'.
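The sensitivity and specificity figures come from collapsing predicted days into the binary split 'at least one day in hospital' vs. 'no hospitalization'. A minimal sketch of that evaluation step, with hypothetical labels rather than the paper's data:

```python
def sens_spec(y_true, y_pred):
    """Sensitivity and specificity for a binary outcome where
    1 = at least one hospital day and 0 = no hospitalization."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical: tree-predicted days thresholded at > 0 to form the classes
truth = [1, 1, 1, 0, 0, 0, 1, 0]
pred  = [1, 0, 1, 0, 1, 0, 1, 0]
sens, spec = sens_spec(truth, pred)
```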
Good, V S; Saldaña, M; Gilder, R; Nicewander, D; Kennerly, D A
The Institute for Healthcare Improvement encourages use of the Global Trigger Tool to objectively determine and monitor adverse events (AEs). Baylor Health Care System (BHCS) is an integrated healthcare delivery system in North Texas. The Global Trigger Tool was applied to BHCS's eight general acute care hospitals, two inpatient cardiovascular hospitals and two rehabilitation/long-term acute care hospitals. Data were collected from a monthly random sample of charts for each facility for patients discharged between 1 July 2006 and 30 June 2007 by external professional nurse auditors using an MS Access Tool developed for this initiative. In addition to the data elements recommended by Institute for Healthcare Improvement, BHCS developed fields to permit further characterisation of AEs to identify learning opportunities. A structured narrative description of each identified AE facilitated text mining to further characterise AEs. INITIAL FINDINGS: Based on this sample, AE rates were found to be 68.1 per 1000 patient days, or 50.8 per 100 encounters, and 39.8% of admissions were found to have ≥1 AE. Of all AEs identified, 61.2% were hospital-acquired, 10.1% of which were associated with a National Coordinating Council - Medical Error Reporting and Prevention harm score of "H or I" (near death or death). To enhance learning opportunities and guide quality improvement, BHCS collected data-such as preventability and AE source-to characterise the nature of AEs. Data are provided regularly to hospital teams to direct quality initiatives, moving from a general focus on reducing AEs to more specific programmes based on patterns of harm and preventability.
Taniguchi, Tatsunori; Ohtani, Tomohito; Kioka, Hidetaka; Tsukamoto, Yasumasa; Onishi, Toshinari; Nakamoto, Kei; Katsimichas, Themistoklis; Sengoku, Kaoruko; Chimura, Misato; Hashimoto, Haruko; Yamaguchi, Osamu; Sawa, Yoshiki; Sakata, Yasushi
This study sought to investigate whether elevated liver stiffness (LS) values at discharge reflect residual liver congestion and are associated with worse outcomes in patients with heart failure (HF). Transient elastography is a newly developed, noninvasive method for assessing LS, which can be highly reflective of right-sided filling pressure associated with passive liver congestion in patients with HF. LS values were determined for 171 hospitalized patients with HF before discharge using a Fibroscan device. The median LS value was 5.6 kPa (interquartile range: 4.4 to 8.1; range 2.4 to 39.7) and that of right-sided filling pressure, which was estimated based on LS, was 5.7 mm Hg (interquartile range: 4.1 to 8.2 mm Hg; range 0.1 to 18.9 mm Hg). The patients in the highest LS tertile (>6.9 kPa, corresponding to an estimated right-sided filling pressure of >7.1 mm Hg) had advanced New York Heart Association functional class, high prevalence of jugular venous distention and moderate/severe tricuspid regurgitation, large inferior vena cava (IVC) diameter, low hemoglobin and hematocrit levels, high serum direct bilirubin level, and a similar left ventricular ejection fraction compared with the lower tertiles. During follow-up (median: 203 days), 8 (5%) deaths and 33 (19%) hospitalizations for HF were observed. The patients in the highest LS group had a significantly higher rate of mortality and HF rehospitalization (hazard ratio: 3.57; 95% confidence interval: 1.93 to 6.83; p < 0.05). As with direct bilirubin and brain natriuretic peptide levels, LS values were predictive of worse outcomes, even after adjustment for these indices. These data suggest that LS is a useful index for assessing systemic volume status and predicting the severity of HF, and that the presence of liver congestion at discharge is associated with worse outcomes in patients with HF. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Mariana de Oliveira Brizeno de Souza
OBJECTIVES: to follow up children exposed to oxacillin during hospitalization, focusing on adverse reactions. METHODS: patients were selected from the pediatric wards of two hospitals in Fortaleza (Hospital Universitário Walter Cantídio [HUWC] and Hospital Infantil Albert Sabin [HIAS]) from the first oxacillin prescription, in a prospective cohort study between October 2000 and July 2001 (HUWC) and between July 2001 and March 2002 (HIAS). Patient follow-up was performed through daily visits to the wards and analysis of medical charts and prescriptions. Suspected oxacillin-induced adverse reaction (OxAR) cases were notified and classified according to causality and severity. Related statistical tests were performed. RESULTS: of the 130 patients exposed to oxacillin, 27 had OxAR (20.8%). Fever was the most frequent reaction (50%), followed by rash (35.7%). The majority of reactions were considered probable, as oxacillin was the only medication involved, and 92.6% of the cases were of moderate severity, requiring therapeutic interventions because of OxAR. A significant relation between oxacillin exposure time and OxAR was found, as well as between hospitalization time and the appearance of adverse reactions. Exposure to oxacillin for over 14 days was established as a risk factor for OxAR (relative risk = 5.49). CONCLUSIONS: careful administration of oxacillin in children is recommended, with established treatment duration. Empiric and prolonged use must be avoided.
Pivatto Junior, Fernando; Scheffel, Rafael Selbach; Ries, Lucas; Wolkind, Ricardo Roitman; Marobin, Roberta; Barkan, Sabrina Sigal; Amon, Luís Carlos; Biolo, Andréia
The SAMe-TT2R2 score was developed to predict which patients on oral anticoagulation with vitamin K antagonists (VKAs) will reach an adequate time in therapeutic range (TTR) (> 65%-70%). Studies have reported a relationship between this score and the occurrence of adverse events. To describe the TTR according to the score, and to relate the score obtained to the occurrence of adverse events, in patients with nonvalvular atrial fibrillation (AF) on oral anticoagulation with VKAs. Retrospective cohort study including patients with nonvalvular AF attending an outpatient anticoagulation clinic of a tertiary hospital. Visits to the outpatient clinic and emergency department, as well as hospital admissions to the institution, during January-December 2014 were evaluated. The TTR was calculated using Rosendaal's method. We analyzed 263 patients (median TTR, 62.5%). The low-risk group (score 0-1) had a better median TTR than the high-risk group (score ≥ 2): 69.2% vs. 56.3%, p = 0.002. Similarly, the percentage of patients with TTR ≥ 60%, 65% or 70% was higher in the low-risk group (p < 0.05).
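Rosendaal's method assumes the INR changes linearly between consecutive measurements and credits each inter-visit interval with the fraction of time the interpolated INR spends in range. A sketch, assuming the usual target range of 2.0-3.0:

```python
def rosendaal_ttr(days, inrs, low=2.0, high=3.0):
    """Percent time in therapeutic range by Rosendaal linear interpolation.
    `days` are measurement dates (as day numbers), `inrs` the INR values."""
    in_range = total = 0.0
    for (d0, i0), (d1, i1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
        span = d1 - d0
        total += span
        if i0 == i1:
            in_range += span if low <= i0 <= high else 0.0
        else:
            # fraction of the interpolated segment lying inside [low, high]
            lo_t = (low - i0) / (i1 - i0)
            hi_t = (high - i0) / (i1 - i0)
            t0, t1 = sorted((lo_t, hi_t))
            frac = max(0.0, min(1.0, t1) - max(0.0, t0))
            in_range += frac * span
    return 100.0 * in_range / total

# INR rises from 1.0 to 3.0 over 10 days, then stays at 3.0 for 10 days
ttr = rosendaal_ttr([0, 10, 20], [1.0, 3.0, 3.0])
```

In the first interval the interpolated INR is in range only for the second half (crossing 2.0 at day 5), so the overall TTR here is 75%.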
Zaka Un Nisa
Adverse drug reaction (ADR) underreporting is a great challenge to pharmacovigilance. Healthcare professionals should consider ADR reporting their professional obligation, because an effective system of ADR reporting is important to improve patient care and safety. This study was designed to assess the knowledge, attitude, practice and factors associated with ADR reporting by healthcare professionals (physicians and pharmacists) in secondary and tertiary hospitals of Islamabad. A pretested questionnaire comprising 27 questions (knowledge 12, attitude 4, practice 9, and factors influencing ADR reporting 2) was administered to 384 physicians and pharmacists in public and private hospitals. Respondents were evaluated for their knowledge, attitude and practice related to ADR reporting. Additionally, the factors that encourage and discourage respondents from reporting ADRs were determined. The data were analysed using SPSS statistical software. Among 384 respondents, 367 completed the questionnaire, giving a response rate of 95.5%. The mean age was 28.3 years (SD = 6.7). Most of the respondents showed poor ADR reporting knowledge (83.1%). The majority of respondents (78.2%) presented a positive attitude towards ADR reporting, and only a few (12.3%) hospitals have good ADR reporting practice. The seriousness of the ADR, unusualness of the reaction, involvement of a new drug and confidence in the diagnosis of the ADR are factors that encourage respondents to report an ADR, whereas lack of knowledge of where and how to report an ADR, lack of access to the ADR reporting form, the view that managing the patient is more important than reporting the ADR, and legal liability issues were the major factors that discourage respondents from reporting an ADR. The study reveals poor knowledge and practice regarding ADR reporting. However, most of the respondents showed a positive attitude towards ADR reporting. There is a serious need for educational training as well as sincere and sustained
Can adverse maternal and perinatal outcomes be predicted when blood pressure becomes elevated? Secondary analyses from the CHIPS (Control of Hypertension In Pregnancy Study) randomized controlled trial
Magee, Laura A.; von Dadelszen, Peter; Singer, Joel; Lee, Terry; Rey, Evelyne; Ross, Susan; Asztalos, Elizabeth; Murphy, Kellie E.; Menzies, Jennifer; Sanchez, Johanna; Gafni, Amiram; Gruslin, Andrée; Helewa, Michael; Hutton, Eileen; Lee, Shoo K.; Logan, Alexander G.; Ganzevoort, Wessel; Welch, Ross; Thornton, Jim G.; Moutquin, Jean Marie
Introduction. For women with chronic or gestational hypertension in CHIPS (Control of Hypertension In Pregnancy Study, NCT01192412), we aimed to examine whether clinical predictors collected at randomization could predict adverse outcomes. Material and methods. This was a planned, secondary analysis
The aim of the Prospective Observational Trial to Optimize Pediatric Health in IUGR (PORTO) Study was to evaluate the optimal management of fetuses with estimated fetal weight (EFW) <10th centile. The objective of this secondary analysis was to describe the role of the cerebroplacental ratio (CPR) in the prediction of adverse perinatal outcome.
Tesfahun, Esubalew; Kumie, Abera; Beyene, Abebe
An increase in the number of health institutions, along with frequent use of disposable medical products, has contributed to the increase of healthcare waste generation rate. For proper handling of healthcare waste, it is crucial to predict the amount of waste generation beforehand. Predictive models can help to optimise healthcare waste management systems, set guidelines and evaluate the prevailing strategies for healthcare waste handling and disposal. However, there is no mathematical model developed for Ethiopian hospitals to predict healthcare waste generation rate. Therefore, the objective of this research was to develop models for the prediction of a healthcare waste generation rate. A longitudinal study design was used to generate long-term data on solid healthcare waste composition, generation rate and develop predictive models. The results revealed that the healthcare waste generation rate has a strong linear correlation with the number of inpatients (R(2) = 0.965), and a weak one with the number of outpatients (R(2) = 0.424). Statistical analysis was carried out to develop models for the prediction of the quantity of waste generated at each hospital (public, teaching and private). In these models, the number of inpatients and outpatients were revealed to be significant factors on the quantity of waste generated. The influence of the number of inpatients and outpatients treated varies at different hospitals. Therefore, different models were developed based on the types of hospitals. © The Author(s) 2015.
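The reported R² values come from regressing waste generation on patient counts. A self-contained sketch of a simple least-squares line with its coefficient of determination; the daily counts below are hypothetical, not the Ethiopian survey data:

```python
def fit_line(x, y):
    """Least-squares line y = a + b*x and the coefficient of determination R^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

# Hypothetical daily counts: inpatients vs. kilograms of healthcare waste
inpatients = [80, 95, 110, 130, 150, 170]
waste_kg   = [60, 72, 81, 97, 112, 128]
a, b, r2 = fit_line(inpatients, waste_kg)
```

An R² near 1, as with the inpatient predictor in the study (R² = 0.965), indicates that most of the day-to-day variation in waste quantity tracks the patient count.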
Leegon, Jeffrey; Jones, Ian; Lanaghan, Kevin; Aronsky, Dominik
Hospital admission delays in the Emergency Department (ED) reduce volume capacity and contribute to the nation’s ED diversion problem. This study evaluated the accuracy of a Bayesian network for the early prediction of hospital admission status using data from 16,900 ED encounters. The final model included nine variables that are commonly available in many ED settings. The area under the receiver operating characteristic curve was 0.894 (95% CI: 0.887-0.902) for the validati...
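A full Bayesian network is beyond a short sketch, but the flavor of probabilistic admission prediction can be shown with naive Bayes, a restricted Bayesian network that assumes the features are independent given the class. The ED variables and training rows below are hypothetical, not the nine variables from this study:

```python
from collections import Counter, defaultdict
import math

def train_nb(rows, labels):
    """Naive Bayes with Laplace smoothing over discrete ED variables."""
    prior = Counter(labels)
    counts = defaultdict(Counter)   # (feature, class) -> value counts
    values = defaultdict(set)       # feature -> observed value set
    for row, y in zip(rows, labels):
        for f, v in row.items():
            counts[(f, y)][v] += 1
            values[f].add(v)
    return prior, counts, values

def predict_nb(model, row):
    """Return the class with the highest smoothed log-posterior."""
    prior, counts, values = model
    total = sum(prior.values())
    best, best_lp = None, -math.inf
    for y, n in prior.items():
        lp = math.log(n / total)
        for f, v in row.items():
            k = len(values[f])      # smoothing denominator: #distinct values
            lp += math.log((counts[(f, y)][v] + 1) / (n + k))
        if lp > best_lp:
            best, best_lp = y, lp
    return best

rows = [{"triage": "urgent", "mode": "ambulance"},
        {"triage": "urgent", "mode": "ambulance"},
        {"triage": "minor",  "mode": "walkin"},
        {"triage": "minor",  "mode": "walkin"},
        {"triage": "minor",  "mode": "ambulance"}]
labels = ["admit", "admit", "home", "home", "home"]
model = train_nb(rows, labels)
```

A general Bayesian network, as used in the study, additionally models dependencies between the predictors instead of assuming conditional independence.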
Di Giorgio, Marina; Vallerga, Maria B.; Radl, Analia; Sardi, Mabel
Around 5%-7% of cancer patients develop adverse side effects of radiation therapy in normal tissues within the treatment field, including acute effects, late effects and cancer induction. Such effects are of particular interest as the cancer patient population that reaches prolonged survival has increased with the improvements in cancer therapy and health care. These adverse reactions are mainly influenced by deficiencies in DNA repair pathways. However, tissue response to IR can be modified by several treatment- and patient-related factors. Numerous studies have been carried out to evaluate the correlation between clinical and cellular radiosensitivity by in vitro tests. Our previous studies, characterizing DNA repair capacity in peripheral lymphocytes of cancer patients through the cytokinesis-blocked micronucleus test and alkaline single-cell microgel electrophoresis (comet), indicated that such assays correlated with the clinical signs of radiosensitivity and showed the predictive potential of both techniques in the identification of radiosensitivity subgroups. In this paper, retrospective studies were conducted in 10 representative cases that had developed acute or late toxicity in previous treatments and at present require new radiation treatments due to secondary malignancies or recurrence. Samples were irradiated in vitro with 2 Gy. MN data were analyzed by comparing expected MN frequencies with values observed after in vitro irradiation. DNA repair capacity was evaluated through the comet assay for initial damage and after specific repair times (0-120 minutes). Captured images were analyzed with the CASP image analysis software. Repair capacity was quantified by the Olive tail moment. The Weibull alpha parameter was applied to describe DNA damage at the different repair times evaluated after in vitro irradiation and fitted by a mono-exponential model to describe the kinetic profile. In every evaluated patient a correlation between mean half-time (T1/2) and
Naganathar, Sriveena; De'Ath, Henry D; Wall, Johanna; Brohi, Karim
Secondary cardiac injury and dysfunction may be important contributors to poor outcomes in trauma patients, but the pathophysiology and clinical impact remain unclear. Early elevations in cardiac injury markers have been associated with the development of adverse cardiac events (ACEs), prolonged intensive care unit stays, and increased mortality. Studies of preinjury β-blocker use suggest a potential protective effect in critically ill trauma patients. This study aimed to prospectively examine the association of early biomarker evidence of trauma-induced secondary cardiac injury (TISCI) and ACEs and to examine the potential contribution of circulating catecholamines to its pathophysiology. Injured patients who met the study criteria were recruited at a single major trauma center. A blood sample was collected immediately on arrival. Serum epinephrine (E), norepinephrine (NE), and cardiac biomarkers including heart-related fatty acid binding protein (H-FABP) were assayed. Data were prospectively collected on ACEs. Of 300 patients recruited, 38 (13%) developed an ACE and had increased mortality (19% vs. 9%, p = 0.01) and longer intensive care unit stays (13 days, p < 0.001). H-FABP was elevated on admission in 56% of the patients, predicted the development of ACE, and was associated with higher mortality (14% vs. 5%, p = 0.01). Admission E and NE levels were strongly associated with elevations in H-FABP and ACEs (E, 274.0 pg/mL vs. 622.5 pg/mL, p < 0.001; NE, 1,063.2 pg/mL vs. 2,032.6 pg/mL, p < 0.001). Catecholamine effect on the development of TISCI or ACEs was not statistically independent of injury severity or depth of shock. Admission levels of H-FABP predict the development of ACEs and may be useful for prognosis and stratification of trauma patients. The development of TISCI and ACEs was associated with high admission levels of catecholamines, but their role in pathogenesis remains unclear. Clinical trials of adrenergic blockade may have the potential to
Flaherman, Valerie J; Bokser, Seth; Newman, Thomas B
Exclusive breastfeeding reduces infant infectious disease. Losing ≥10% of birth weight may lead to formula use. The predictive value of first-day weight loss for subsequent weight loss has not been studied. The objective of the present study was to evaluate the relationship between weight loss in the first 24 hours and subsequent in-hospital weight loss of ≥10%. For 1,049 infants, we extracted gestational age, gender, delivery method, feeding type, and weights from medical records. Weight nadir was defined as the lowest weight recorded during the birth hospitalization. We used multivariate logistic regression to assess the effect of first-day weight loss on subsequent in-hospital weight loss. Mean in-hospital weight nadir was 6.0 ± 2.6%, and mean age at in-hospital weight nadir was 38.7 ± 18.5 hours. While in the hospital, 6.4% of infants lost ≥10% of birth weight. Infants losing ≥4.5% of birth weight in the first 24 hours were more likely to eventually lose ≥10% (adjusted odds ratio 3.57 [1.75, 7.28]). In this cohort, 798 (76.1%) infants did not have documented weight gain while in the hospital. Early weight loss predicts a higher risk of ≥10% in-hospital weight loss. Infants with high first-day weight loss could be targeted for further research into improved interventions to promote breastfeeding.
Choon, Siew-Eng; Lai, Nai-Ming
The prevalence, clinical patterns, and causative drugs of cutaneous adverse drug reactions (cADR) vary among the different populations previously studied. To determine the prevalence, the clinical patterns of drug eruptions, and the common drugs implicated, particularly in severe cADR such as Stevens-Johnson Syndrome/Toxic epidermal necrolysis (SJS/TEN) and drug rash with eosinophilia and systemic symptoms (DRESS) in our population. We analyzed the database established for all cADR seen by the department of Dermatology from January 2001 till December 2010. A total of 362 cADR were seen among 42 170 new clinic attendees, yielding an incidence rate of 0.86%. The most common reaction pattern seen was maculopapular eruption (153 cases) followed by SJS/TEN (110 cases) and DRESS (34 cases). Antibiotics was the most commonly implicated drug group (146 cases) followed by anticonvulsants (81 cases) and antigout drugs (50 cases). The most frequently implicated drug was allopurinol (50 cases). Carbamazepine, allopurinol, and cotrimoxazole were the three main causative drugs of SJS/TEN accounting for 21.8%, 20.9%, and 12.7%, respectively, of the 110 cases seen, whereas DRESS was mainly caused by allopurinol (15 cases). Mortality rates for TEN, SJS, and DRESS were 28.6%, 2.2%, and 5.9%, respectively. The low rate of cADR with a high proportion of severe reactions observed in this study was probably due to referral bias. Otherwise, the reaction patterns and drugs causing cADR in our population were similar to those seen in other countries. Carbamazepine, allopurinol, and cotrimoxazole were the three main causative drugs of SJS/TEN in our population.
Objectives: External cause International Classification of Diseases (ICD) codes are commonly used to ascertain adverse drug reactions (ADRs) related to hospitalisation. We quantified ascertainment of ADR-related hospitalisation using external cause codes and additional ICD-based hospital diagnosis codes. Methods: We reviewed the scientific literature to identify different ICD-based criteria for ADR-related hospitalisations, developed algorithms to capture ADRs based on candidate hospital ICD-10 diagnoses and external cause codes (Y40–Y59), and incorporated previously published causality ratings estimating the probability that a specific diagnosis was ADR-related. We applied the algorithms to the NSW Admitted Patient Data Collection records of 45 and Up Study participants (2011–2013). Results: Of 493 442 hospitalisations among 267 153 study participants during 2011–2013, 18.8% (n = 92 953) had hospital diagnosis codes that were potentially ADR-related; 1.1% (n = 5305) had high/very high-probability ADR-related diagnosis codes (causality ratings A1 and A2); and 2.0% (n = 10 039) had ADR-related external cause codes. Overall, 2.2% (n = 11 082) of cases were classified as ADR-related hospitalisations on the basis of either external cause codes or high/very high-probability ADR-related diagnosis codes. Hence, adding high/very high-probability ADR-related diagnosis codes to the standard external cause codes alone (Y40–Y59) increased the number of hospitalisations classified as having an ADR-related diagnosis by 10.4%. Only 6.7% of cases with high-probability ADR-related mental symptoms were captured by external cause codes. Conclusion: Selective use of high-probability ADR-related hospital diagnosis codes in addition to external cause codes yielded a modest increase in hospitalised ADR incidence, which is of potential clinical significance. Clinically validated combinations of diagnosis codes could potentially further enhance capture.
Aguiar, Fabio S; Almeida, Luciana L; Ruffino-Netto, Antonio; Kritski, Afranio Lineu; Mello, Fernanda Cq; Werneck, Guilherme L
Tuberculosis (TB) remains a public health issue worldwide. The lack of clinical symptoms specific to TB makes the decision to admit patients to respiratory isolation a difficult task for the clinician. Isolation of patients without the disease is common and increases health costs. Decision models for the diagnosis of TB in patients attending hospitals can increase the quality of care and decrease costs, without increasing the risk of hospital transmission. We present a model for predicting pulmonary TB in hospitalized patients in a high-prevalence area, in order to contribute to a more rational use of isolation rooms without increasing the risk of transmission. This was a cross-sectional study of patients admitted to the CFFH from March 2003 to December 2004. A classification and regression tree (CART) model was generated and validated. The area under the ROC curve (AUC), sensitivity, specificity, and positive and negative predictive values were used to evaluate the performance of the model. Validation of the model was performed with a different sample of patients admitted to the same hospital from January to December 2005. We studied 290 patients admitted with clinical suspicion of TB. The diagnosis was confirmed in 26.5% of them. Pulmonary TB was present in 83.7% of the patients with TB (62.3% with a positive sputum smear), and HIV/AIDS was present in 56.9% of patients. The validated CART model showed a sensitivity, specificity, positive predictive value and negative predictive value of 60.00%, 76.16%, 33.33%, and 90.55%, respectively. The AUC was 79.70%. The CART model developed for these hospitalized patients with clinical suspicion of TB had fair to good predictive performance for pulmonary TB. The most important variable for prediction of TB diagnosis was the chest radiograph result. Prospective validation is still necessary, but our model offers an alternative for decision making on whether to isolate patients with clinical suspicion of TB in tertiary health facilities in
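The general CART-plus-metrics workflow this abstract describes can be sketched with a standard decision-tree classifier followed by the four reported performance measures. Everything below (the feature set, the data, and the outcome rule) is a synthetic illustration, not the study's actual variables or results:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n = 500
# Hypothetical admission features: an abnormal chest radiograph (the kind of
# variable the study found most predictive), weeks of cough, HIV status.
X = np.column_stack([
    rng.integers(0, 2, n),    # abnormal chest X-ray (0/1)
    rng.exponential(3, n),    # weeks of cough
    rng.integers(0, 2, n),    # HIV positive (0/1)
])
# Synthetic outcome driven mostly by the radiograph finding.
y = (X[:, 0] + rng.random(n) > 1.2).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
pred = tree.predict(X)

tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)  # positive predictive value
npv = tn / (tn + fn)  # negative predictive value
print(sensitivity, specificity, ppv, npv)
```

In practice the tree would be grown on a training sample and these metrics reported on a separate validation sample, as the study did with its 2005 cohort.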
Psychiatric conditions and general practitioner attendance prior to HPV vaccination and the risk of referral to a specialized hospital setting because of suspected adverse events following HPV vaccination
Lützen, Tina Hovgaard; Bech, Bodil Hammer; Mehlsen, Jesper
AIM: No association between human papilloma virus (HPV) vaccination and numerous diseases has been found. Still, a large number of Danish women are reporting suspected adverse events. Other factors may play a role, and the aim of this study is to examine the association between psychiatric … HPV centers, and health data for cases and controls were obtained from national registries. PARTICIPANTS: Cases were defined as women referred to an HPV center between January 1, 2015 and December 31, 2015 (n=1,496). Each case was matched with five controls on age, region and time of first vaccine registration. The total study population consisted of 8,976 women. RESULTS: Overall, women above 18 years who had been referred to an HPV center were more likely to have used psychiatric medication (odds ratio [OR]: 1.88 [95% CI 1.48-2.40]) or to have been hospitalized because of a psychiatric disorder within …
Nakstad Anders R
Introduction: Endotracheal intubation (ETI) has been considered an essential part of pre-hospital advanced life support. Pre-hospital ETI, however, is a complex intervention even for airway specialists such as anaesthesiologists working as pre-hospital emergency physicians. We therefore wanted to investigate the quality of pre-hospital airway management by anaesthesiologists in severely traumatised patients and identify possible areas for improvement. Method: We performed a risk assessment according to the predictive Bayesian approach in a typical anaesthesiologist-manned Norwegian helicopter emergency medical service (HEMS). The main focus of the risk assessment was the event in which a patient arrives in the emergency department without ETI despite a pre-hospital indication for it. Results: In the risk assessment, we assigned a high probability (29%) to the event assessed, that a patient arrives without ETI despite a pre-hospital indication. However, several uncertainty factors in the risk assessment were identified, related to data quality, indications for use of ETI, patient outcome and the need for special training of ETI providers. Conclusion: Our risk assessment indicated a high probability that trauma patients with an indication for pre-hospital ETI do not receive it in the studied HEMS. The uncertainty factors identified in the assessment should be further investigated to better understand the problem assessed and its consequences for patients. Better-quality pre-hospital airway management data could contribute to a reduction of these uncertainties.
Yen, Tin-Wing; Payne, Beth; Qu, Ziguang; Hutcheon, Jennifer A; Lee, Tang; Magee, Laura A; Walters, Barry N; von Dadelszen, Peter
Preeclampsia is a leading cause of maternal morbidity. The clinical challenge lies in predicting which women with preeclampsia will suffer adverse outcomes and would benefit from treatment, while minimizing potentially harmful interventions. Our aim was to determine the ability of maternal symptoms (i.e., severe nausea or vomiting, headache, visual disturbance, right upper quadrant pain or epigastric pain, abdominal pain or vaginal bleeding, and chest pain or dyspnea) to predict adverse maternal or perinatal outcomes. We used data from the PIERS (Pre-eclampsia Integrated Estimate of RiSk) study, a multicentre, prospective cohort study designed to investigate the maternal risks associated with preeclampsia. Relative risks and receiver operating characteristic (ROC) curves were assessed for each preeclampsia symptom and outcome pair. Of 2023 women who underwent assessment, 52% experienced at least one preeclampsia symptom, with 5.2% and 5.3% respectively experiencing an adverse maternal or perinatal outcome. No symptom and outcome pair, in either of the maternal or perinatal groups, achieved an area under the ROC curve value > 0.7, which would be necessary to demonstrate a discriminatory predictive value. Maternal symptoms of preeclampsia are not independently valid predictors of maternal adverse outcome. Caution should be used when making clinical decisions on the basis of symptoms alone in the preeclamptic patient.
Comparison of neutrophil-to-lymphocyte ratio and mean platelet volume in the prediction of adverse events after primary percutaneous coronary intervention in patients with ST-elevation myocardial infarction.
Machado, Guilherme Pinheiro; Araujo, Gustavo Neves de; Carpes, Christian Kunde; Lech, Mateus; Mariani, Stefani; Valle, Felipe Homem; Bergoli, Luiz Carlos Corsetti; Gonçalves, Sandro Cadaval; Wainstein, Rodrigo V; Wainstein, Marco V
Elevated neutrophil-to-lymphocyte ratio (NLR) and mean platelet volume (MPV) are indirect inflammatory markers. There is some evidence that both are associated with worse outcomes in ST-segment elevation myocardial infarction (STEMI) after primary percutaneous coronary intervention (PCI). The aim of the present study was to compare the capacity of NLR and MPV to predict adverse events after primary PCI. In a prospective cohort study, 625 consecutive patients with STEMI who underwent primary PCI were followed. Receiver operating characteristic (ROC) curve analysis was performed to calculate the area under the curve (AUC) for the occurrence of procedural complications, mortality and major adverse cardiovascular events (MACE). Mean age was 60.7 (±12.1) years, and 67.5% of patients were male. The median NLR was 6.17 (3.8-9.4) and the median MPV was 10.7 (10.0-11.3). In multivariate analysis, both NLR and MPV remained independent predictors of no-reflow (relative risk [RR] = 2.26; 95% confidence interval [95%CI] = 1.16-4.32; p = 0.01 and RR = 2.68; 95%CI = 1.40-5.10; p < 0.05). NLR had an excellent negative predictive value (NPV) of 96.7% for no-reflow and 89.0% for in-hospital MACE. Despite no difference in the ROC curve comparison with MPV, only NLR remained an independent predictor of in-hospital MACE. A low NLR has an excellent NPV for no-reflow and in-hospital MACE, and this could be of clinical relevance in the management of low-risk patients. Copyright © 2018 Elsevier B.V. All rights reserved.
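The marker-versus-marker ROC comparison described here can be reproduced in outline with scikit-learn's `roc_auc_score`. The data below are synthetic stand-ins (NLR is made weakly informative and MPV uninformative by construction), not the cohort's measurements:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 625  # cohort size reported in the abstract
# Synthetic adverse event (e.g., no-reflow) at roughly 15% incidence.
event = (rng.random(n) < 0.15).astype(int)
# Synthetic markers: NLR shifted upward in events, MPV pure noise.
nlr = rng.lognormal(1.8, 0.5, n) + 2.0 * event
mpv = rng.normal(10.7, 0.6, n)

auc_nlr = roc_auc_score(event, nlr)
auc_mpv = roc_auc_score(event, mpv)
print(f"AUC NLR={auc_nlr:.2f}, MPV={auc_mpv:.2f}")
```

With real data, the AUCs of the two markers would then be compared formally (e.g., with a DeLong-type test) rather than just side by side.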
Salanitro, Amanda H; Hovater, Martha; Hearld, Kristine R; Roth, David L; Sawyer, Patricia; Locher, Julie L; Bodner, Eric; Brown, Cynthia J; Allman, Richard M; Ritchie, Christine S
To determine whether cumulative symptom burden predicts hospitalization or emergency department (ED) visits in a cohort of older adults. Prospective, observational study with a baseline in-home assessment of symptom burden. Central Alabama. Nine hundred eighty community-dwelling adults aged 65 and older (mean 75.3 ± 6.7) recruited from a random sample of Medicare beneficiaries stratified according to sex, race, and urban/rural residence. Symptom burden score (range 0-10). One point was given for each symptom reported: shortness of breath, tiredness or fatigue, problems with balance or dizziness, leg weakness, poor appetite, pain, stiffness, constipation, anxiety, and loss of interest in activities. Dependent variables were hospitalizations and ED visits, assessed every 6 months during the 8.5-year follow-up period. Using Cox proportional hazards models, time from the baseline in-home assessment to the first hospitalization and first hospitalization or ED visit was determined. During the 8.5-year follow-up period, 545 (55.6%) participants were hospitalized or had an ED visit. Participants with greater symptom burden had higher risk of hospitalization (hazard ratio (HR) = 1.09, 95% confidence interval (CI) = 1.05-1.14) and hospitalization or ED visit (HR = 1.10, 95% CI = 1.06-1.14) than those with lower scores. Participants living in rural areas had significantly lower risk of hospitalization (HR = 0.83, 95% CI = 0.69-0.99) and hospitalization or ED visit (HR = 0.80, 95% CI = 0.70-0.95) than individuals in urban areas, independent of symptom burden and comorbidity. Greater symptom burden was associated with higher risk of hospitalization and ED visits in community-dwelling older adults. Healthcare providers treating older adults should consider symptom burden to be an additional risk factor for subsequent hospital utilization. © 2012, Copyright the Authors Journal compilation © 2012, The American Geriatrics Society.
Coiera, Enrico; Wang, Ying; Magrabi, Farah; Concha, Oscar Perez; Gallego, Blanca; Runciman, William
Current prognostic models factor in patient- and disease-specific variables but do not consider the cumulative risks of hospitalization over time. We developed risk models of the likelihood of death associated with cumulative exposure to hospitalization, based on time-varying risks of hospitalization over any given day, as well as the day of the week. Model performance was evaluated alone and in combination with simple disease-specific models. Patients admitted between 2000 and 2006 to 501 public and private hospitals in NSW, Australia were used for training, and 2007 data were used for evaluation. The impact of hospital care delivered over different days of the week and times of the day was modeled by separating hospitalization risk into 21 separate time periods (morning, day, night across the days of the week). Three models were developed to predict death up to 7 days post-discharge: (1) a simple background risk model using age and gender; (2) a time-varying risk model for exposure to hospitalization (admission time, days in hospital); (3) disease-specific models (Charlson co-morbidity index, DRG). Combining these three generated a full model. Models were evaluated by accuracy, AUC, and Akaike and Bayesian information criteria. There was a clear diurnal rhythm to hospital mortality in the data set, peaking in the evening, as well as the well-known 'weekend effect' in which mortality peaks with weekend admissions. The individual models had modest performance on the test data set (AUC 0.71, 0.79 and 0.79, respectively). The combined model, which included time-varying risk, however, yielded an average AUC of 0.92. This model performed best for stays of up to 7 days (93% of admissions), peaking at days 3 to 5 (AUC 0.94). Risks of hospitalization vary not just with the day of the week but also the time of the day, and can be used to make predictions about the cumulative risk of death associated with an individual's hospitalization. Combining disease-specific models with such time-varying estimates appears to
Martinez, Bruno Prata; Gomes, Isabela Barboza; Oliveira, Carolina Santana de; Ramos, Isis Resende; Rocha, Mônica Diniz Marques; Forgiarini Júnior, Luiz Alberto; Camelier, Fernanda Warken Rosa; Camelier, Aquiles Assunção
The ability of the Timed Up and Go test to predict sarcopenia has not been evaluated previously. The objective of this study was to evaluate the accuracy of the Timed Up and Go test for predicting sarcopenia in elderly hospitalized patients. This cross-sectional study analyzed 68 elderly patients (≥60 years of age) in a private hospital in the city of Salvador-BA, Brazil, between the 1st and 5th day of hospitalization. The predictive variable was the Timed Up and Go test score, and the outcome of interest was the presence of sarcopenia (reduced muscle mass associated with a reduction in handgrip strength and/or weak physical performance in a 6-m gait-speed test). After the descriptive data analyses, the sensitivity, specificity and accuracy of a test using the predictive variable to predict the presence of sarcopenia were calculated. In total, 68 elderly individuals, with a mean age of 70.4±7.7 years, were evaluated. The subjects had a Charlson Comorbidity Index score of 5.35±1.97. Most (64.7%) of the subjects had a clinical admission profile; the main reasons for hospitalization were cardiovascular disorders (22.1%), pneumonia (19.1%) and abdominal disorders (10.2%). The frequency of sarcopenia in the sample was 22.1%, and the mean time spent performing the Timed Up and Go test was 10.02±5.38 s. A time longer than or equal to a cutoff of 10.85 s on the Timed Up and Go test predicted sarcopenia with a sensitivity of 67% and a specificity of 88.7%. The accuracy of this cutoff for the Timed Up and Go test was good (0.80; 95% CI=0.66-0.94; p=0.002). The Timed Up and Go test was shown to be a predictor of sarcopenia in elderly hospitalized patients.
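Evaluating a single published cutoff such as the 10.85 s threshold reduces to a 2x2 table. The sketch below uses synthetic times and a fixed synthetic prevalence (15 of 68 patients), not the study's data:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 68
# Synthetic cohort: 15/68 with sarcopenia (~22%, as in the abstract).
sarcopenia = np.zeros(n, dtype=bool)
sarcopenia[:15] = True
# Hypothetical Timed Up and Go times: slower on average with sarcopenia.
tug = np.where(sarcopenia,
               rng.normal(13.0, 4.0, n),
               rng.normal(9.0, 2.5, n))

cutoff = 10.85  # seconds, the threshold reported in the abstract
pred = tug >= cutoff
tp = int(np.sum(pred & sarcopenia))    # slow and sarcopenic
fn = int(np.sum(~pred & sarcopenia))   # fast but sarcopenic (missed)
tn = int(np.sum(~pred & ~sarcopenia))  # fast and not sarcopenic
fp = int(np.sum(pred & ~sarcopenia))   # slow but not sarcopenic
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

Sweeping `cutoff` over the observed times and plotting sensitivity against 1 - specificity would recover the ROC curve from which such a threshold is usually chosen.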
Jamei, Mehdi; Nisnevich, Aleksandr; Wetchler, Everett; Sudat, Sylvia; Liu, Eric
Avoidable hospital readmissions not only contribute to the high costs of healthcare in the US, but also have an impact on the quality of care for patients. Large scale adoption of Electronic Health Records (EHR) has created the opportunity to proactively identify patients with high risk of hospital readmission, and apply effective interventions to mitigate that risk. To that end, in the past, numerous machine-learning models have been employed to predict the risk of 30-day hospital readmission. However, the need for an accurate and real-time predictive model, suitable for hospital setting applications still exists. Here, using data from more than 300,000 hospital stays in California from Sutter Health's EHR system, we built and tested an artificial neural network (NN) model based on Google's TensorFlow library. Through comparison with other traditional and non-traditional models, we demonstrated that neural networks are great candidates to capture the complexity and interdependency of various data fields in EHRs. LACE, the current industry standard, showed a precision (PPV) of 0.20 in identifying high-risk patients in our database. In contrast, our NN model yielded a PPV of 0.24, which is a 20% improvement over LACE. Additionally, we discussed the predictive power of Social Determinants of Health (SDoH) data, and presented a simple cost analysis to assist hospitalists in implementing helpful and cost-effective post-discharge interventions.
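The reported relative gain over LACE follows directly from the two precision figures:

```python
# Verifying the stated 20% relative improvement in precision (PPV):
# the neural network's 0.24 against LACE's 0.20.
ppv_lace = 0.20
ppv_nn = 0.24
relative_improvement = (ppv_nn - ppv_lace) / ppv_lace
print(f"relative improvement: {relative_improvement:.0%}")  # → 20%
```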
Rubin, Daniel J; Golden, Sherita Hill; McDonnell, Marie E; Zhao, Huaqing
To develop and validate a tool that predicts the 30-day readmission risk of patients with diabetes hospitalized for cardiovascular disease (CVD), the Diabetes Early Readmission Risk Indicator-CVD (DERRI-CVD™). A cohort of 8189 discharges was retrospectively selected from the electronic records of adult patients with diabetes hospitalized for CVD. Discharges of 60% of the patients (n=4950) were randomly selected as a training sample, and the remaining 40% (n=3219) were the validation sample. Statistically significant predictors of all-cause 30-day readmission risk were identified by multivariable logistic regression modeling: education level, employment status, living within 5 miles of the hospital, pre-admission diabetes therapy, macrovascular complications, admission serum creatinine and albumin levels, having a hospital discharge within 90 days pre-admission, and a psychiatric diagnosis. Model discrimination and calibration were good (C-statistic 0.71). Performance in the validation sample was comparable. Predicted 30-day readmission risk was similar in the training and validation samples (38.6% and 35.1% in the highest quintiles). The DERRI-CVD™ may be a valid tool to predict the all-cause 30-day readmission risk of patients with diabetes hospitalized for CVD. Identifying high-risk patients may encourage the use of interventions targeting those at greatest risk, potentially leading to better outcomes and lower healthcare costs. Copyright © 2017 Elsevier Inc. All rights reserved.
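The modeling workflow described (multivariable logistic regression, a 60/40 training/validation split, and a C-statistic for discrimination) can be sketched as follows. The predictors and data are synthetic stand-ins for illustration, not the study's variables:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 8189  # number of discharges in the abstract
X = np.column_stack([
    rng.integers(0, 2, n),    # e.g., recent prior discharge (synthetic)
    rng.normal(1.0, 0.4, n),  # e.g., admission serum creatinine (synthetic)
    rng.integers(0, 2, n),    # e.g., psychiatric diagnosis (synthetic)
])
# Simulate readmission from a known logistic relationship.
logit = -2.0 + 0.9 * X[:, 0] + 0.8 * X[:, 1] + 0.6 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# 60/40 split mirrors the study's training/validation design.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, train_size=0.6, random_state=1)
model = LogisticRegression().fit(X_tr, y_tr)
# The C-statistic equals the ROC AUC of the predicted probabilities.
c_stat = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
print(f"C-statistic: {c_stat:.2f}")
```

Calibration, which the abstract also reports, would additionally be checked by comparing predicted and observed readmission rates across risk deciles.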
Conway, Terry L
This study of women sailors examined whether tobacco use prior to entering the Navy predicted subsequent career outcomes related to length of service, early attrition, misconduct, and hospitalizations...
Rita A. Mukhtar
The annual volume of pancreatic resections has been shown to affect mortality rates, prompting recommendations to regionalize these procedures to high-volume hospitals. Implementation has been difficult, given the paucity of high-volume centers and the logistical hardships facing patients. Some studies have shown that low-volume hospitals achieve good outcomes as well, suggesting that other factors are involved. We sought to determine whether variations in annual volume affected patient outcomes in 511 patients who underwent pancreatic resections at the University of California, San Francisco between 1990 and 2005. We compared postoperative mortality and complication rates between low-, medium-, and high-volume years, designated by the number of resections performed, adjusting for patient characteristics. Postoperative mortality rates did not differ between high-volume years and medium/low-volume years. As the annual hospital volume of pancreatic resections may not predict outcome, identification of the actual predictive factors may allow low-volume centers to achieve excellent outcomes.
Kenneth Anene Agu
Purpose: The study evaluated the knowledge and attitudes of HIV-infected patients on ART regarding ADRs, following routine patient counseling and education, in selected hospitals in Nigeria. Materials and Methods: From the 36,459 HIV-infected patients on ART in the 36 selected hospitals, a study-specific instrument was administered to 3,650 patients in a cross-sectional study. Patients were provided counseling and education on ADRs before and after commencing ART. Factor analysis was performed using principal components extraction. Item score means above the midpoint (3.7) on a 5-point scale were regarded as positive attitudes and those below as negative attitudes. A chi-square test was used for inferential statistics; P < 0.05 … mean item scores were above 3.7, which denotes positive attitudes to ADRs. Three extracted factors accounted for 73.1% of cumulative variability. All attitude items had very significant loadings of ≥0.5. Conclusion: Overall, participants reported good knowledge and positive attitudes to the adverse effects of their medicines compared to what was reported previously. The patient counseling and education on drug therapy provided to patients may have contributed to these findings and are highly recommended.
Cummins, Niamh Maria
BACKGROUND: Accurate patient diagnosis in the prehospital environment is essential to initiate suitable care pathways. The advanced paramedic (AP) is a relatively recent role in Ireland, and refers to a prehospital practitioner with advanced life-support skills and training. OBJECTIVES: The objectives of this study were to compare the diagnostic decisions of APs with emergency medicine (EM) physicians, and to investigate if APs, as currently trained, can predict the requirement for hospital admission. METHODS: A prospective study was initiated, whereby each emergency ambulance call received via the statutory 999 system was recorded by the attending AP. The AP was asked to provide a clinical diagnosis for each patient, and to predict if hospital admission was required. The data were then cross-referenced with the working diagnosis of the receiving emergency physician and the hospital admission records. RESULTS: A total of 17 APs participated in the study, and 1369 emergency calls were recorded over a 6-month period. Cases where a general practitioner attended the scene were excluded from the concordance analysis. Concordance with the receiving emergency physician was 70% (525/748) for all cases of AP diagnosis, and is mirrored by 70% (604/859) correct hospital admission predictions. CONCLUSIONS: AP diagnosis and admission prediction for emergency calls is similar to other emergency medical services systems despite the relative recency of the AP programme in Ireland. Recognition of non-concordance case types may identify priorities for AP education, and drive future AP practice in areas such as 'treat and refer'.
Voors, Adriaan A.; Ouwerkerk, Wouter; Zannad, Faiez; van Veldhuisen, Dirk J.; Samani, Nilesh J.; Ponikowski, Piotr; Ng, Leong L.; Metra, Marco; ter Maaten, Jozine M.; Lang, Chim C.; Hillege, Hans L.; van der Harst, Pim; Filippatos, Gerasimos; Dickstein, Kenneth; Cleland, John G.; Anker, Stefan D.; Zwinderman, Aeilko H.
Introduction From a prospective multicentre multicountry clinical trial, we developed and validated risk models to predict prospective all-cause mortality and hospitalizations because of heart failure (HF) in patients with HF. Methods and results BIOSTAT-CHF is a research programme designed to
Ischemic heart disease (IHD) is a leading cause of death worldwide. Urban public health and medical management in Shenzhen, an international city in the developing country of China, is challenged by an increasing burden of IHD. This study analyzed the spatio-temporal variation of IHD hospital admissions from 2003 to 2012 utilizing spatial statistics, spatial analysis, and space-time scan statistics. The spatial statistics and spatial analysis measured the incidence rate (hospital admissions per 1,000 residents) and the standardized rate (the observed cases standardized by the expected cases of IHD) at the district level to determine the spatio-temporal distribution and identify patterns of change. The space-time scan statistic was used to identify spatio-temporal clusters of IHD hospital admissions at the district level. The other objective of this study was to forecast the IHD hospital admissions over the next three years (2013–2015) to predict the IHD incidence rates and the varying burdens of IHD-related medical services among the districts in Shenzhen. The results show that the highest hospital admissions, incidence rates, and standardized rates of IHD are in Futian. From 2003 to 2012, the IHD hospital admissions exhibited similar mean centers and directional distributions, with a slight increase in admissions toward the north in accordance with the movement of the total population. The incidence rates of IHD exhibited a gradual increase from 2003 to 2012 for all districts in Shenzhen, which may be the result of the rapid development of the economy and the increasing traffic pollution. In addition, some neighboring areas exhibited similar temporal change patterns, which were also detected by the spatio-temporal cluster analysis. Futian and Dapeng would have the highest and the lowest hospital admissions, respectively, although these districts have the highest incidence rates among all of the districts from 2013 to 2015 based on the prediction
Gatti, Giuseppe; Perrotti, Andrea; Obadia, Jean-François; Duval, Xavier; Iung, Bernard; Alla, François; Chirouze, Catherine; Selton-Suty, Christine; Hoen, Bruno; Sinagra, Gianfranco; Delahaye, François; Tattevin, Pierre; Le Moing, Vincent; Pappalardo, Aniello; Chocron, Sidney
Nonspecific scoring systems are used to predict the risk of death after surgery in patients with infective endocarditis (IE). The purpose of the present study was both to analyze the risk factors for in-hospital death complicating surgery for IE and to create a mortality risk score based on the results of this analysis. Outcomes of 361 consecutive patients (mean age, 59.1±15.4 years) who had undergone surgery for IE in 8 European centers of cardiac surgery were recorded prospectively, and a risk factor analysis (multivariable logistic regression) for in-hospital death was performed. The discriminatory power of the new predictive scoring system was assessed with receiver operating characteristic curve analysis. Score validation procedures were carried out. Fifty-six (15.5%) patients died after surgery. BMI >27 kg/m² (odds ratio [OR], 1.79; P=0.049), estimated glomerular filtration rate 55 mm Hg (OR, 1.78; P=0.032), and critical state (OR, 2.37; P=0.017) were independent predictors of in-hospital death. A scoring system was devised to predict in-hospital death after surgery for IE (area under the receiver operating characteristic curve, 0.780; 95% CI, 0.734-0.822). The score performed better than 5 of the 6 scoring systems for in-hospital death after cardiac surgery that were considered. A simple scoring system based on risk factors for in-hospital death was specifically created to predict mortality risk after surgery in patients with IE. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
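The abstract's "discriminatory power … assessed with the receiver operating characteristic curve analysis" reduces to a single number: the probability that a randomly chosen death scores higher than a randomly chosen survivor (the Mann-Whitney formulation of the AUC). A minimal sketch, with invented toy scores and outcomes rather than the study's cohort:

```python
# Hypothetical sketch: the AUC of an additive risk score, computed with the
# Mann-Whitney formulation (ties count 0.5). Scores and outcomes are toy data.

def roc_auc(scores, outcomes):
    """Probability that a randomly chosen death (outcome 1) has a higher
    score than a randomly chosen survivor (outcome 0)."""
    deaths = [s for s, y in zip(scores, outcomes) if y == 1]
    survivors = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = 0.0
    for d in deaths:
        for s in survivors:
            if d > s:
                wins += 1.0
            elif d == s:
                wins += 0.5
    return wins / (len(deaths) * len(survivors))

scores = [1, 2, 2, 3, 4, 5, 6, 7]     # additive risk score per patient
outcomes = [0, 0, 0, 0, 1, 0, 1, 1]   # 1 = in-hospital death

print(round(roc_auc(scores, outcomes), 3))  # → 0.933
```

An AUC near 0.78, as reported for the new IE score, means a death outranks a survivor roughly four times out of five.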
P. Ansuman Abhisek
Introduction: The number of patients receiving renal replacement therapy in the form of dialysis or transplant has been increasing in recent years. The increased frequency of monitoring required by complex therapeutic regimens, and inappropriate use of drugs, may lead to increased adverse events (AEs), hospital stay, and cost of treatment, as well as increased morbidity and mortality. Aim: To analyse the utilisation pattern of drugs and AEs in chronic kidney disease (CKD) patients undergoing maintenance haemodialysis. Materials and Methods: This prospective, observational study was conducted in the Department of Pharmacology in collaboration with the Department of Nephrology, SCB Medical College and Hospital, Cuttack, from 1st June to 31st December, 2015. Demographic, clinical and medicine details were collected from patients' case sheets, matched with nursing case records, and tabulated in a predesigned case study form. The data were analysed in a descriptive manner using percentage calculation, Spearman's correlation, and multiple logistic regression, with trial version SPSS v24. Results: A total of 115 cases were included in this study. The average number of drugs used per prescription was 12.8 during dialysis and non-dialysis days. The most frequently used drugs were antihypertensives; 25% dextrose and heparin (before dialysis and on dialysis days) were prescribed to all patients, followed by haematinics in 90.43% of the patients and proton pump inhibitors in 70.43% of the patients. Among the 1,472 drugs prescribed, 40.96% were prescribed by generic name. It was observed that 72.62% of the prescribed drugs were from the National List of Essential Medicines. AEs of varying severity were observed in all the patients. The AEs most frequently observed on laboratory investigation were hyperphosphataemia, hyponatraemia, metabolic acidosis, hyperkalemia, hypoglycaemia, hypocalcaemia and hypokalemia. AEs were statistically significant with age group
Ramaekers, Rosa; Mukarram, Muhammad; Smith, Christine A M; Thiruganasambandamoorthy, Venkatesh
Risk stratification of emergency department (ED) patients with upper gastrointestinal bleeding (UGIB) using preendoscopic risk scores can aid ED physicians in disposition decision-making. We conducted a systematic review to assess the predictive value of preendoscopic risk scores for 30-day serious adverse events. We searched MEDLINE, PubMed, Embase, and the Cochrane Database of Systematic Reviews from inception to March 2015. We included studies involving adult ED UGIB patients evaluating preendoscopic risk scores and excluded reviews, case reports, and animal studies. The composite outcome included 30-day mortality, recurrent bleeding, and need for intervention. In two phases (screening and full review), two reviewers independently screened articles for inclusion and extracted patient-level data. The consensus data were used for analysis. We reported sensitivity, specificity, positive and negative predictive value, and positive and negative likelihood ratios with 95% confidence intervals. We identified 3,173 articles, of which 16 were included: three studied Glasgow Blatchford score (GBS); one studied clinical Rockall score (cRockall); two studied AIMS65; six compared GBS and cRockall; three compared GBS, a modification of the GBS, and cRockall; and one compared the GBS and AIMS65. Overall, the sensitivity and specificity of the GBS were 0.98 and 0.16, respectively; for the cRockall they were 0.93 and 0.24, respectively; and for the AIMS65 they were 0.79 and 0.61, respectively. The GBS with a cutoff point of 0 had a sensitivity of 0.99 and a specificity of 0.08. The GBS with a cutoff point of 0 was superior over other cutoff points and risk scores for identifying low-risk patients but had a very low specificity. None of the risk scores identified by our systematic review were robust and, hence, cannot be recommended for use in clinical practice. Future prospective studies are needed to develop robust new scores for use in ED patients with UGIB.
The aim of this study was to assess the extent of polypharmacy and the occurrence of, and factors associated with, drug-drug interactions (DDIs) and potential adverse drug reactions (ADRs) in Gondar University Teaching Referral Hospital. This institution-based retrospective cross-sectional study was conducted on prescriptions of both in- and out-patients over a period of 3 months at Gondar University Hospital. Both bivariate analysis and multivariate logistic regression were used to identify risk factors for the occurrence of DDIs and possible ADRs. All statistical calculations were performed using SPSS software. A total of 12,334 prescriptions were dispensed during the study period, of which 2,180 contained two or more drugs per prescription. A total of 21,210 drugs were prescribed, and the average number of drugs per prescription was 1.72. Occurrences of DDIs of all categories (major, moderate, and minor) were analyzed, and DDIs were detected in 711 (32.6%) prescriptions. Sex was not found to be a risk factor for the occurrence of DDIs and ADRs, while age and number of medications per prescription were significant risk factors for both. The mean number of drugs per prescription was 1.72; with regard to the WHO limit of drugs per prescription, Gondar hospital was therefore able to maintain the limit, even for prescriptions containing multiple drugs intended to be taken systemically. Number of drugs per prescription as well as older age were found to be predisposing factors for the occurrence of DDIs and potential ADRs, while sex was not a risk factor.
Lucini, Filipe R; S Fogliatto, Flavio; C da Silveira, Giovani J; L Neyeloff, Jeruza; Anzanello, Michel J; de S Kuchenbecker, Ricardo; D Schaan, Beatriz
Emergency department (ED) overcrowding is a serious issue for hospitals. Early information on short-term inward bed demand from patients receiving care at the ED may reduce the overcrowding problem, and optimize the use of hospital resources. In this study, we use text mining methods to process data from early ED patient records using the SOAP framework, and predict future hospitalizations and discharges. We try different approaches for pre-processing of text records and for predicting hospitalization. Sets-of-words are obtained via binary representation, term frequency, and term frequency-inverse document frequency. Unigrams, bigrams and trigrams are tested for feature formation. Feature selection is based on χ² and F-score metrics. In the prediction module, eight text mining methods are tested: Decision Tree, Random Forest, Extremely Randomized Tree, AdaBoost, Logistic Regression, Multinomial Naïve Bayes, Support Vector Machine (linear kernel) and Nu-Support Vector Machine (linear kernel). Prediction performance is evaluated by F1-scores. Precision and recall values are also reported for all text mining methods tested. Nu-Support Vector Machine was the text mining method with the best overall performance. Its average F1-score in predicting hospitalization was 77.70%, with a standard deviation (SD) of 0.66%. The method could be used to manage daily routines in EDs such as capacity planning and resource allocation. Text mining could provide valuable information and facilitate decision-making by inward bed management teams. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
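The pipeline this abstract describes (TF-IDF n-gram features, χ² feature selection, Nu-Support Vector Machine, F1 evaluation) can be sketched with scikit-learn, assuming that library is available. The six triage notes and labels below are invented toy data, and `k=8` is an arbitrary feature-selection size, not the study's configuration:

```python
# Hedged sketch of the paper's best pipeline: TF-IDF uni/bigram features,
# chi-squared feature selection, and a linear-kernel NuSVC. Toy corpus only.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import NuSVC
from sklearn.metrics import f1_score

notes = [
    "severe chest pain dyspnea hypotension",    # hospitalized
    "chest pain diaphoresis syncope",           # hospitalized
    "acute dyspnea hypoxia confusion",          # hospitalized
    "mild headache resolved quickly",           # discharged
    "minor ankle sprain walking",               # discharged
    "superficial laceration finger cleaned",    # discharged
]
admitted = [1, 1, 1, 0, 0, 0]

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # unigrams + bigrams
    ("select", SelectKBest(chi2, k=8)),              # chi-squared feature selection
    ("clf", NuSVC(kernel="linear", nu=0.5)),
])
model.fit(notes, admitted)

print(f1_score(admitted, model.predict(notes)))  # in-sample F1 on toy data
```

In the real system the SOAP-structured record would supply far richer text, and performance would be estimated out-of-sample (the study reports cross-validated F1), not on the training notes as in this sketch.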
Aurélie Di Bartolomeo
The study aimed to evaluate whether the rate of tissue factor pathway inhibitor during pregnancy and following delivery could be a predictive factor for placenta-mediated adverse pregnancy outcomes in high-risk women. This was a prospective multicentre cohort study of 200 patients at high risk of occurrence or recurrence of placenta-mediated adverse pregnancy outcomes, conducted between June 2008 and October 2010. Measurements of tissue factor pathway inhibitor resistance (normalized ratio) and tissue factor pathway inhibitor activity were performed for the last 72 patients at 20, 24, 28, 32, and 36 weeks of gestation and during the postpartum period. Overall, 15 patients presented a placenta-mediated adverse pregnancy outcome. There was no difference in normalized tissue factor pathway inhibitor ratios between patients with and without placenta-mediated adverse pregnancy outcomes during pregnancy and in the postpartum period. Patients with placenta-mediated adverse pregnancy outcomes had tissue factor pathway inhibitor activity rates that were significantly higher than those in patients without, from as early as 24 weeks of gestation. The same results were observed following delivery. Among high-risk women, the tissue factor pathway inhibitor activity of patients with gestational vascular complications is higher than that in other patients. Hence, these markers could augment a screening strategy that includes an analysis of angiogenic factors as well as clinical and ultrasound imaging with Doppler measurement of the uterine arteries.
Zhang, Xingyu; Kim, Joyce; Patzer, Rachel E; Pitts, Stephen R; Patzer, Aaron; Schrager, Justin D
To describe and compare logistic regression and neural network modeling strategies to predict hospital admission or transfer following initial presentation to Emergency Department (ED) triage, with and without the addition of natural language processing elements. Using data from the National Hospital Ambulatory Medical Care Survey (NHAMCS), a cross-sectional probability sample of United States EDs from the 2012 and 2013 survey years, we developed several predictive models with the outcome being admission to the hospital or transfer vs. discharge home. We included patient characteristics immediately available after the patient has presented to the ED and undergone a triage process. We used this information to construct logistic regression (LR) and multilayer neural network (MLNN) models which included natural language processing (NLP) and principal component analysis of the patient's reason for visit. Ten-fold cross validation was used to test the predictive capacity of each model, and the area under the receiver operating characteristic curve (AUC) was then calculated for each model. Of the 47,200 ED visits from 642 hospitals, 6,335 (13.42%) resulted in hospital admission (or transfer). A total of 48 principal components were extracted by NLP from the reason-for-visit fields, which explained 75% of the overall variance for hospitalization. In the model including only structured variables, the AUC was 0.824 (95% CI 0.818-0.830) for logistic regression and 0.823 (95% CI 0.817-0.829) for MLNN. Models including only free-text information generated AUC of 0.742 (95% CI 0.731-0.753) for logistic regression and 0.753 (95% CI 0.742-0.764) for MLNN. When both structured variables and free-text variables were included, the AUC reached 0.846 (95% CI 0.839-0.853) for logistic regression and 0.844 (95% CI 0.836-0.852) for MLNN. The predictive accuracy of hospital admission or transfer for patients who presented to ED triage overall was good, and was improved with the inclusion of free text data from a patient
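The modeling comparison above can be sketched as follows, assuming scikit-learn. The reason-for-visit strings, the single structured variable (age), and all labels are invented toy data; `TruncatedSVD` stands in for the study's principal component analysis because it is the standard choice for sparse text matrices:

```python
# Hedged sketch: vectorize free-text "reason for visit", reduce it to
# components, concatenate with a structured triage variable, then compare
# logistic regression and a small multilayer neural network by ROC AUC.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

reasons = ["chest pain and shortness of breath", "twisted ankle playing soccer",
           "crushing chest pain sweating", "sore throat two days",
           "difficulty breathing at rest", "rash on arm itching"]
admitted = np.array([1, 0, 1, 0, 1, 0])
age = np.array([[67], [24], [71], [19], [80], [33]])  # structured variable

text = CountVectorizer().fit_transform(reasons)
components = TruncatedSVD(n_components=2, random_state=0).fit_transform(text)
X = np.hstack([age, components])                      # structured + text features

aucs = {}
for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("MLNN", MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                                         max_iter=2000, random_state=0))]:
    clf.fit(X, admitted)
    aucs[name] = roc_auc_score(admitted, clf.predict_proba(X)[:, 1])
    print(name, round(aucs[name], 2))
```

The study's finding, that structured and free-text features together beat either alone, would be tested here by refitting on `age` only, `components` only, and the concatenation, comparing cross-validated rather than in-sample AUC.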
Kelder Johannes C
Background Casemix adjusted in-hospital mortality is one of the measures used to improve quality of care. The adjustment currently used does not take into account the effects of readmission, because reliable data on readmission is not readily available through routinely collected databases. We have studied the impact of readmissions by linking admissions of the same patient, and as a result were able to compare hospital mortality among frequently, as opposed to, non-frequently readmitted patients. We also formulated a method to adjust for readmission for the calculation of hospital standardised mortality ratios (HSMRs). Methods We conducted a longitudinal retrospective analysis of routinely collected hospital data of six large non-university teaching hospitals in the Netherlands with casemix adjusted standardised mortality ratios ranging from 65 to 114 and a combined value of 93 over a five-year period. Participants concerned 240,662 patients admitted 418,566 times in total during the years 2003–2007. Predicted deaths by the HSMR model 2008 over a five-year period were compared with observed deaths. Results Numbers of readmissions per patient differ substantially between the six hospitals, up to a factor of 2. A large interaction was found between numbers of admissions per patient and HSMR-predicted risks. Observed deaths for frequently admitted patients were significantly lower than HSMR-predicted deaths, which could be explained by uncorrected factors surrounding readmissions. Conclusions Patients admitted more frequently show lower risks of dying on average per admission. This decline in risk is only partly detected by the current HSMR. Comparing frequently admitted patients to non-frequently admitted patients commits the constant risk fallacy and potentially lowers HSMRs of hospitals treating many frequently admitted patients and increases HSMRs of hospitals treating many non-frequently admitted patients. This misleading effect can only be demonstrated by an
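The HSMR arithmetic itself is simple: observed deaths divided by casemix-expected deaths, times 100, with 100 meaning "as many deaths as the casemix model predicts". The hospital counts below are invented; they are chosen only to land on the 65 and 114 endpoints of the range quoted in the abstract:

```python
# Toy sketch of the hospital standardised mortality ratio (HSMR):
# 100 * observed deaths / casemix-expected deaths.

def hsmr(observed_deaths, expected_deaths):
    return 100.0 * observed_deaths / expected_deaths

print(round(hsmr(observed_deaths=260, expected_deaths=400), 1))  # → 65.0
print(round(hsmr(observed_deaths=456, expected_deaths=400), 1))  # → 114.0
```

The paper's point is about the denominator: if the casemix model over-predicts risk for frequently readmitted patients, `expected_deaths` is inflated at hospitals that treat many of them, pushing their HSMR down without any real difference in care.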
Oz, Ibrahim Ilker; Altınsoy, Bülent; Serifoglu, Ismail; Sayın, Rasit; Buyukuysal, Mustafa Cagatay; Erboy, Fatma; Akduman, Ece Isin
The aim of this study was to examine the association between right atrium (RA) and right ventricle (RV) diameters on computed tomography (CT) pulmonary angiography in response to acute pulmonary embolism (APE), in addition to 30-day mortality and adverse outcomes in patients with APE. This retrospective study was approved by the institutional ethics committee. From January 2013 to March 2014, 79 hospitalized adult patients with symptomatic APE were included. Inclusion criteria were a CT pulmonary angiography positive for pulmonary embolism, availability of patient records, and a follow-up of at least 30 days. A review of patient records and images was performed. The maximum diameters of the heart chambers were measured on a reconstructed four-chamber heart view, and the vascular obstruction index was calculated on CT pulmonary angiography. There were statistically significant relationships in both the RA/RV diameter ratio and the RV/left ventricle (LV) diameter ratio between patients with and without adverse outcomes (P < 0.05). All rights reserved.
An adverse outcome pathway (AOP) description linking inhibition of aromatase (cytochrome P450 [cyp] 19) to reproductive dysfunction was reviewed for scientific and technical quality and endorsed by the OECD (https://aopwiki.org/wiki/index.php/Aop:25). An intended application of the AOP framework is to support the use of me...
Chida, Yoichi; Vedhara, Kavita
There is a growing epidemiological literature focusing on the association between psychosocial stress and human immunodeficiency virus (HIV) disease progression or acquired immunodeficiency syndrome (AIDS), but inconsistent findings have been published. We aimed to quantify the association between adverse psychosocial factors and HIV disease progression. We searched Medline, PsycINFO, Web of Science, and PubMed up to 19 January 2009, and included population studies with a prospective design that investigated associations between adverse psychosocial factors and HIV disease progression or AIDS. Two reviewers independently extracted data on study characteristics, quality, and estimates of associations. The overall meta-analysis examined 36 articles including 100 psychosocial and disease-related relationships. It exhibited a small, but robust, positive association between adverse psychosocial factors and HIV progression (correlation coefficient as combined effect size 0.059, 95% confidence interval 0.043-0.074, p<0.001). Notably, sensitivity analyses showed that personality types or coping styles and psychological distress were more strongly associated with greater HIV disease progression than stress stimuli per se, and that all of the immunological and clinical outcome indicators (AIDS stage, CD4+ T-cell decline, AIDS diagnosis, AIDS mortality, and HIV disease or AIDS symptoms) except for viral load exhibited detrimental effects of adverse psychosocial factors. In conclusion, the current review reveals a robust relationship between adverse psychosocial factors and HIV disease progression. Furthermore, there would appear to be some evidence that particular psychosocial factors are most strongly associated with HIV disease progression.
Rosa-Jiménez, Francisco; Rosa-Jiménez, Ascensión; Lozano-Rodríguez, Aquiles; Santoro-Martínez, María Del Carmen; Duro-López, María Del Carmen; Carreras-Álvarez de Cienfuegos, Amelia
To compare the efficacy of the most familiar clinical prediction rules in combination with D-dimer testing to rule out a diagnosis of deep vein thrombosis (DVT) in a hospital emergency department. Retrospective cross-sectional analysis of the case records of all patients attending a hospital emergency department with suspected lower-limb DVT between 1998 and 2002. Ten clinical prediction scores were calculated and D-dimer levels were available for all patients. The gold standard was ultrasound diagnosis of DVT by an independent radiologist who was blinded to clinical records. For each prediction rule, we analyzed the effectiveness of the prediction strategy defined by "low clinical probability and negative D-dimer level" against the ultrasound diagnosis. A total of 861 case records were reviewed and 577 cases were selected; the mean (SD) age was 66.7 (14.2) years. DVT was diagnosed in 145 patients (25.1%). Only the Wells clinical prediction rule and 4 other models had a false negative rate under 2%. The Wells criteria and the score published by Johanning and colleagues identified higher percentages of cases (15.6% and 11.6%, respectively). This study shows that several clinical prediction rules can be safely used in the emergency department, although none of them have proven more effective than the Wells criteria.
Active screening of suspected adverse drug reactions in the "Dr. Salvador Allende" Hospital: first semester of 2006
José de Jesús Rego Hernández
A descriptive study was carried out through daily review of the hospital movement records obtained from the Medical Records Department of the "Dr. Salvador Allende" Hospital from January to June 2006, in order to identify admissions potentially attributable to adverse drug reactions, with the objectives of determining the frequency of admissions due to suspected adverse drug reactions and of characterizing these patients. A notification form was completed for each suspected adverse reaction, and the data contained in the corresponding medical records were analyzed. An Excel database designed for this purpose by the Centre for the Development of Pharmacoepidemiology was used. A total of 6,201 patients were admitted, of whom 384 were candidates for an adverse drug reaction (6.2%); ultimately, 57 patients were considered to have an admission related to some drug (0.9%). Of these cases, 66.7% were older than 60 years of age, with no differences by sex. The pharmacological groups most often implicated were non-opioid analgesics (59.7%) and antibacterials (19.4%). Upper gastrointestinal bleeding was the most frequently found adverse reaction (57.9%), predominantly in patients older than 60 years of age (72.7%); acetylsalicylic acid was present in 66.7% of the cases with this diagnosis. In 75.5% of the cases the reaction was considered serious.
Sewunet Admasu Belachew,1 Daniel Asfaw Erku,2 Abebe Basazn Mekuria,3 Begashaw Melaku Gebresillassie1 1Department of Clinical Pharmacy, 2Department of Pharmaceutical Sciences, 3Department of Pharmacology, School of Pharmacy, University of Gondar, Gondar, Ethiopia Background: Adverse drug reactions (ADRs) are a global problem and constitute a major clinical problem in terms of human suffering. The high toxicity and narrow therapeutic index of chemotherapeutic agents make oncology pharmacovigilance essential. The objective of the present study was to assess the pattern of ADRs occurring in cancer patients treated with chemotherapy in a tertiary care teaching hospital in Ethiopia. Methods: A cross-sectional study over a 2-year period from September 2013 to August 2015 was conducted on cancer patients undergoing chemotherapy at Gondar University Referral Hospital Oncology Center. Data were collected directly from patients and their medical case files. The reported ADRs were assessed for causality using the World Health Organization's causality assessment scale and Naranjo's algorithm. The severities of the reported reactions were also assessed using the National Cancer Institute Common Terminology Criteria for Adverse Events (CTCAE) version 4.0. Pearson's chi-square test was employed to examine the association between two categorical variables. Results: A total of 815 ADRs were identified among the 203 patients included in the study. The most commonly occurring ADRs were nausea and vomiting (18.9%), infections (16.7%), neutropenia (14.7%), fever and/or chills (11.3%), and anemia (9.3%). Platinum compounds (31.4%) were the most common group of drugs causing ADRs. Of the reported ADRs, 65.8% were grades 3-4 (severe), 29.9% were grades 1-2 (mild), and 4.3% were grade 5 (toxic). A significant association was found between age, number of chemotherapeutic agents, and dose of chemotherapy and the occurrence of grades 3-5 toxicity. Conclusion: The high incidence of
Aguiar Fabio S
Background: Tuberculosis (TB) remains a public health issue worldwide. The lack of specific clinical symptoms with which to diagnose TB makes the correct decision to admit patients to respiratory isolation a difficult task for the clinician. Isolation of patients without the disease is common and increases health costs. Decision models for the diagnosis of TB in patients attending hospitals can increase the quality of care and decrease costs, without the risk of hospital transmission. We present a model for predicting pulmonary TB in hospitalized patients in a high-prevalence area, in order to contribute to a more rational use of isolation rooms without increasing the risk of transmission. Methods: Cross-sectional study of patients admitted to CFFH from March 2003 to December 2004. A classification and regression tree (CART) model was generated and validated. The area under the ROC curve (AUC), sensitivity, specificity, and positive and negative predictive values were used to evaluate the performance of the model. Validation of the model was performed with a different sample of patients admitted to the same hospital from January to December 2005. Results: We studied 290 patients admitted with clinical suspicion of TB. The diagnosis was confirmed in 26.5% of them. Pulmonary TB was present in 83.7% of the patients with TB (62.3% with positive sputum smear), and HIV/AIDS was present in 56.9% of patients. The validated CART model showed sensitivity, specificity, positive predictive value and negative predictive value of 60.00%, 76.16%, 33.33%, and 90.55%, respectively. The AUC was 79.70%. Conclusions: The CART model developed for these hospitalized patients with clinical suspicion of TB had fair to good predictive performance for pulmonary TB. The most important variable for prediction of TB diagnosis was the chest radiograph result. Prospective validation is still necessary, but our model offers an alternative for decision making in whether to isolate patients with
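A CART model of the kind this abstract describes can be sketched with scikit-learn's decision tree. The eight patient rows and the three binary predictors below are invented stand-ins (the abstract names chest radiograph result as the most important variable; the other two columns are hypothetical), not the study's data:

```python
# Hedged sketch of a CART-style classifier for the isolation decision,
# evaluated by in-sample sensitivity and specificity. Toy data only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# columns: [abnormal chest radiograph, cough > 2 weeks, HIV positive]
X = np.array([[1, 1, 0], [1, 1, 1], [1, 0, 1], [0, 1, 0],
              [0, 0, 0], [0, 0, 1], [1, 1, 0], [0, 1, 1]])
tb = np.array([1, 1, 1, 0, 0, 0, 1, 0])   # confirmed pulmonary TB

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, tb)
pred = tree.predict(X)

sens = ((pred == 1) & (tb == 1)).sum() / (tb == 1).sum()
spec = ((pred == 0) & (tb == 0)).sum() / (tb == 0).sum()
print(sens, spec)  # → 1.0 1.0 (the toy data are perfectly separable)
```

On this toy data the radiograph column alone separates the classes, so the tree is trivially perfect; the study's figures (60% sensitivity, 76% specificity on a held-out 2005 sample) show how much harder the real problem is, which is why it reports validation on a separate cohort rather than in-sample performance.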
Chhabra, Kumar G; Sharma, Ashish; Chhabra, Chaya; Reddy, J Jyothirmai; Deolia, Shravani G; Mittal, Yogesh
This is a cross-sectional knowledge, attitude, and practices (KAP) study on pharmacovigilance (PV) and adverse drug reaction (ADR) reporting among dental students in a teaching hospital in India. The aim of this study was to assess the KAP of dental students regarding PV, ADR reporting, and barriers toward the same. A cross-sectional survey using a self-administered, investigator-developed, close-ended questionnaire was conducted in an academic dental hospital in India. All prescribers, including third-year students, final-year students, and house surgeons of the same institute, were included for assessment of KAP regarding PV using 16, 8, and 8 items, respectively. Data regarding barriers toward ADR reporting and demographics were also collected. The Mann-Whitney U-test and Kruskal-Wallis test were applied, followed by a post hoc test. A total of 241 of 275 respondents participated in the study, a response rate of 87.5%. Overall, 64% reported that they had no idea about the term PV. Age was significantly associated with knowledge (p = 0.045) and attitude (p = 0.016). Barriers contributing to underreporting were difficulty in deciding whether or not an ADR has occurred (52.0%), concerns that the report may be wrong (37%), lack of confidence to discuss ADRs with colleagues (29%), and almost no financial benefits (24%). Participants had a comparatively favorable attitude toward PV, but their knowledge and practice need considerable improvement. This study highlights the need for appropriate dental curriculum changes and further multicentric studies to shed more light on important issues of PV among dentists in India. This study explores dentists' knowledge, attitude, and behavior regarding PV, which could help to improve patients' safety and care. The favorable attitude of dentists is an indication that PV could be added in depth to the curriculum and in general practice. Information on barriers to reporting ADRs could help to find possible solutions for removing the
Young, Sera; Murray, Katherine; Mwesigwa, Julia; Natureeba, Paul; Osterbauer, Beth; Achan, Jane; Arinaitwe, Emmanuel; Clark, Tamara; Ades, Veronica; Plenty, Albert; Charlebois, Edwin; Ruel, Theodore; Kamya, Moses; Havlir, Diane; Cohan, Deborah
Objective Maternal nutritional status is an important predictor of birth outcomes, yet little is known about the nutritional status of HIV-infected pregnant women treated with combination antiretroviral therapy (cART). We therefore examined the relationship between maternal BMI at study enrollment, gestational weight gain (GWG), and hemoglobin concentration (Hb) among 166 women initiating cART in rural Uganda. Design Prospective cohort. Methods HIV-infected, ART-naïve pregnant women were enrolled between 12 and 28 weeks of gestation and treated with a protease inhibitor or non-nucleoside reverse transcriptase inhibitor-based combination regimen. Nutritional status was assessed monthly. Neonatal anthropometry was examined at birth. Outcomes were evaluated using multivariate analysis. Results Mean GWG was 0.17 kg/week, 14.6% of women experienced weight loss during pregnancy, and 44.9% were anemic. Adverse fetal outcomes included low birth weight (LBW) (19.6%), preterm delivery (17.7%), fetal death (3.9%), stunting (21.1%), small-for-gestational age (15.1%), and head-sparing growth restriction (26%). No infants were HIV-infected. During pregnancy, grossly inadequate GWG was common. Infants whose mothers gained <0.1 kg/week were at increased risk for LBW, preterm delivery, and composite adverse birth outcomes. cART by itself may not be sufficient for decreasing the burden of adverse birth outcomes among HIV-infected women. Trial Registration Clinicaltrials.gov NCT00993031 PMID:22879899
Objective: The aim of this study was to assess the predictive value of uterine artery Doppler imaging at 22-24 weeks of gestation for adverse pregnancy outcomes. Materials and Methods: This was a prospective study in which uterine artery Doppler was performed at 22-24 weeks of gestation in 165 pregnant women with singleton pregnancies. A pulsatility index (PI) of more than 1.45 or bilateral uterine notching was labeled as abnormal Doppler. The pregnancy outcome was assessed in terms of normal outcome, preeclampsia, fetal growth restriction (FGR), low birth weight, spontaneous preterm delivery, oligohydramnios, fetal loss, or at least one adverse outcome. Results: Out of 165 patients, 35 (21.2%) had abnormal second-trimester uterine artery Doppler. In pregnancies that resulted in preeclampsia (PE) (n=21), FGR (n=21), and low birth weight (n=39), the median uterine artery PI was higher (1.52, 1.41, and 1.27, respectively). In the presence of abnormal Doppler, the risk of PE [OR=10.7, 95% confidence interval (CI): 3.91-29.1; p<0.001], FGR [OR=4.34, 95% CI: 1.62-11.6; p=0.002], low birth weight [OR=6.39, 95% CI: 3.16-12.9; p<0.001] and the risk of at least one obstetric complication [OR=8.73, 95% CI: 3.5-21.3; p<0.001] were significantly elevated. The positive predictive value of abnormal uterine artery Doppler was highest for preeclampsia (36.84%) among all adverse pregnancy outcomes assessed. Conclusion: Uterine artery Doppler ultrasonography at 22-24 weeks of gestation is a significant predictor of at least one adverse pregnancy outcome, with the highest prediction for preeclampsia.
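The odds ratios above come from 2x2 tables of abnormal versus normal Doppler against outcome versus no outcome. A minimal sketch of an odds ratio with a Wald 95% confidence interval; the counts are hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# Hypothetical counts for illustration only
or_, lo, hi = odds_ratio_ci(a=14, b=21, c=7, d=123)
```

The interval is computed on the log-odds scale and exponentiated back, which is why CIs for odds ratios are asymmetric around the point estimate.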
Opio, Martin Otyek; Namujwiga, Teopista; Nakitende, Imaculate
There are few reports of the association of nutritional status with in-hospital mortality of acutely ill medical patients in sub-Saharan Africa. This is a prospective observational study comparing the predictive value of mid-upper arm circumference (MUAC) with that of mental alertness, mobility and vital signs in 899 acutely ill medical patients admitted to a resource-poor hospital in sub-Saharan Africa. Mid-upper arm circumference ranged from 15 cm to 42 cm, and 12 (24%) of the 50 patients with a MUAC less than 20 cm died (OR 4.84, 95% CI 2.23-10.37). Of the 237 patients with a MUAC more than 28 cm only six...
Department of Homeland Security — This database contains locations of Hospitals for 50 states and Washington D.C. , Puerto Rico and US territories. The dataset only includes hospital facilities and...
Introduction: Community-acquired pneumonia (CAP) may present as a life-threatening infection with uncertain progression and outcome of treatment. The primary aim of the trial was determination of the cut-off values of serum interleukin-6 (IL-6) and procalcitonin (PCT) above which 30-day mortality in hospitalized patients with CAP could be predicted with high sensitivity and specificity. We investigated the correlation between serum levels of IL-6 and PCT at admission and available scoring systems for CAP (pneumonia severity index, PSI; modified early warning score, MEWS; and CURB65: Confusion, Urea nitrogen, Respiratory rate, Blood pressure, age ≥65 years). Methods: This was a prospective, non-randomized trial which included 101 patients with diagnosed CAP. PSI, MEWS and CURB65 were assessed on the first day of hospitalization. IL-6 and PCT were also sampled on the first day of hospitalization. Results: Based on ROC curve analysis (AUC ± SE = 0.934 ± 0.035; 95% CI 0.864-1.0; P = 0.000), hospitalized CAP patients with an elevated IL-6 level have a 93.4% higher risk of lethal outcome. A cut-off value of 20.2 pg/ml IL-6 shows sensitivity of 84% and specificity of 87% in mortality prediction. ROC curve analysis confirmed a significant role of procalcitonin as a mortality predictor in CAP patients (AUC ± SE = 0.667 ± 0.062; 95% CI 0.546-0.789; P = 0.012). Patients with an elevated PCT level have a 66.7% higher risk of lethal outcome. As a predictor of mortality, at the cut-off value of 2.56 ng/ml PCT shows sensitivity of 76% and specificity of 61.8%. Conclusions: Both IL-6 and PCT are significant for prediction of 30-day mortality in hospitalized patients with CAP. Serum levels of IL-6 correlate with major CAP scoring systems.
Tabak, Ying P; Sun, Xiaowu; Nunez, Carlos M; Gupta, Vikas; Johannes, Richard S
Identifying patients at high risk for readmission early during hospitalization may aid efforts to reduce readmissions. We sought to develop an early readmission risk predictive model using automated clinical data available at hospital admission. We developed an early readmission risk model using a derivation cohort and validated the model with a validation cohort. We used a published Acute Laboratory Risk of Mortality Score as an aggregated measure of clinical severity at admission and the number of hospital discharges in the previous 90 days as a measure of disease progression. We then evaluated the administrative data-enhanced model by adding principal and secondary diagnoses and other variables. We examined the change in the c-statistic when additional variables were added to the model. There were 1,195,640 adult discharges from 70 hospitals, 39.8% of them male, with a median age of 63 years (first and third quartiles: 43, 78). The 30-day readmission rate was 11.9% (n=142,211). The early readmission model yielded a graded relationship between readmission and both the Acute Laboratory Risk of Mortality Score and the number of previous discharges within 90 days. The model c-statistic was 0.697 with good calibration. When administrative variables were added to the model, the c-statistic increased to 0.722. Automated clinical data can generate a readmission risk score early in hospitalization with fair discrimination. It may have practical value in aiding early care transitions. Adding administrative data increases predictive accuracy. The administrative data-enhanced model may be used for hospital comparison and outcome research.
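The c-statistic reported here equals the probability that a randomly chosen readmitted patient receives a higher risk score than a randomly chosen non-readmitted one. A small pairwise-comparison sketch of that definition, on toy scores rather than the study's model:

```python
def c_statistic(scores, labels):
    """Concordance (c-statistic / AUROC) by pairwise comparison.
    labels: 1 = readmitted, 0 = not readmitted."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count concordant pairs; ties count as half
    concordant = sum(1.0 if p > n else 0.5 if p == n else 0.0
                     for p in pos for n in neg)
    return concordant / (len(pos) * len(neg))

# Toy example: higher scores should track readmission
auc = c_statistic([0.9, 0.8, 0.3, 0.2, 0.1], [1, 0, 1, 0, 0])
```

A value of 0.5 means the score is no better than chance; 1.0 means perfect rank separation, so the reported 0.697 and 0.722 sit in the "fair discrimination" range.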
Jurado, I; Maestre, J M; Velarde, P; Ocampo-Martinez, C; Fernández, I; Tejera, B Isla; Prado, J R Del
One of the most important problems in the pharmacy department of a hospital is stock management. The clinical need for drugs must be satisfied with a limited workforce while minimizing the use of economic resources. The complexity of the problem resides in the random nature of drug demand and the multiple constraints that must be taken into account in every decision. In this article, chance-constrained model predictive control is proposed to deal with this problem. The flexibility of model predictive control allows the different objectives and constraints involved in the problem to be taken into account explicitly, while the use of chance constraints provides a trade-off between conservativeness and efficiency. The proposed solution is assessed to study its implementation in two Spanish hospitals. Copyright © 2015 Elsevier Ltd. All rights reserved.
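A chance constraint of the kind described requires that the probability of a stockout stay below some tolerance at each decision step. As a much-simplified illustration (not the paper's actual controller formulation), when demand is assumed normally distributed the constraint reduces to a deterministic safety-stock rule via a normal quantile:

```python
from statistics import NormalDist

def order_up_to_level(mean_demand, sd_demand, service_level):
    """Smallest stock level S with P(demand <= S) >= service_level,
    assuming normally distributed demand over the planning period."""
    z = NormalDist().inv_cdf(service_level)  # standard normal quantile
    return mean_demand + z * sd_demand

# e.g. mean weekly demand 100 units, sd 20, 95% service level
s = order_up_to_level(100, 20, 0.95)  # roughly 133 units
```

This reformulation is what makes chance constraints tractable inside an MPC optimization: the probabilistic requirement becomes an ordinary linear inequality on the stock level.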
Launcelott, Sebastian; Ouzounian, Maral; Buth, Karen J; Légaré, Jean-Francois
The present study generated a risk model and an easy-to-use scorecard for the preoperative prediction of in-hospital mortality for patients undergoing redo cardiac operations. All patients who underwent redo cardiac operations in which the initial and subsequent procedures were performed through a median sternotomy were included. A logistic regression model was created to identify independent preoperative predictors of in-hospital mortality. The results were then used to create a scorecard predicting operative risk. A total of 1,521 patients underwent redo procedures between 1995 and 2010 at a single institution. Coronary bypass procedures were the most common previous (58%) or planned operations (54%). The unadjusted in-hospital mortality for all redo cases was higher than for first-time procedures (9.7% vs. 3.4%; p<0.001). A scorecard was generated using these independent predictors, stratifying patients undergoing redo cardiac operations into 6 risk categories of in-hospital mortality, ranging from low risk to >40%. Reoperation represents a significant proportion of modern cardiac surgical procedures and is often associated with significantly higher mortality than first-time operations. We created an easy-to-use scorecard to assist clinicians in estimating operative mortality to ensure optimal decision making in the care of patients facing redo cardiac operations. Copyright © 2012 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Clarke, Damian L; Furlong, Heidi; Laing, Grant L; Aldous, Colleen; Thomson, Sandie Rutherford
Several authors have suggested that the traditional surgical morbidity and mortality meeting be developed as a tool to identify surgical errors and turn them into learning opportunities for staff. We report our experience with these meetings. A structured template was developed for each morbidity and mortality meeting. We used a grid to analyse mortality and classify the death as: (i) death expected/death unexpected; and (ii) death unpreventable/death preventable. Individual cases were then analysed using a combination of error taxonomies. During the period June - December 2011, a total of 400 acute admissions (195 trauma and 205 non-trauma) were managed at Edendale Hospital, Pietermaritzburg, South Africa. During this period, 20 morbidity and mortality meetings were held, at which 30 patients were discussed. There were 10 deaths, of which 5 were unexpected and potentially avoidable. A total of 43 errors were recognised, all in the domain of the acute admissions ward. There were 33 assessment failures, 5 logistical failures, 5 resuscitation failures, 16 errors of execution and 27 errors of planning. Seven patients experienced a number of errors, of whom 5 died. Error theory successfully dissected out the contribution of error to adverse events in our institution. Translating this insight into effective strategies to reduce the incidence of error remains a challenge. Using the examples of error identified at the meetings as educational cases may help with initiatives that directly target human error in trauma care.
Tadesse Melaku Abegaz
Background. There is a paucity of data on the prevalence of adverse drug reactions (ADRs) to antidepressants and on adherence and clinical outcomes. The present study determined the magnitude of ADRs of antidepressants and their impact on the level of adherence and clinical outcome. Methods. A prospective cross-sectional study was conducted among depression patients from September 2016 to January 2017 at the Gondar University Hospital psychiatry clinic. The Naranjo ADR probability scale was employed to assess the ADRs. The rate of medication adherence was determined using the Morisky Medication Adherence Measurement Scale-Eight. Results. Two hundred seventeen patients participated in the study, more than half of them male (122; 56.2%). More than one-half of the subjects had low adherence to their medications (124; 57.1%), and about 186 (85.7%) of the patients encountered an ADR. The most common ADR was weight gain (29; 13.2%). More than one-half (125; 57.6%) of the respondents showed improved clinical outcome. An optimal level of medication adherence decreased the likelihood of poor clinical outcome by 56.8%. Conclusion. ADRs were highly prevalent, and adherence to medications was very poor in this setting. Long duration of depression negatively affects the rate of adherence. In addition, adherence was found to influence the clinical outcome of depression patients.
Can adverse maternal and perinatal outcomes be predicted when blood pressure becomes elevated? Secondary analyses from the CHIPS (Control of Hypertension In Pregnancy Study) randomized controlled trial.
Magee, Laura A; von Dadelszen, Peter; Singer, Joel; Lee, Terry; Rey, Evelyne; Ross, Susan; Asztalos, Elizabeth; Murphy, Kellie E; Menzies, Jennifer; Sanchez, Johanna; Gafni, Amiram; Gruslin, Andrée; Helewa, Michael; Hutton, Eileen; Lee, Shoo K; Logan, Alexander G; Ganzevoort, Wessel; Welch, Ross; Thornton, Jim G; Moutquin, Jean Marie
For women with chronic or gestational hypertension in CHIPS (Control of Hypertension In Pregnancy Study, NCT01192412), we aimed to examine whether clinical predictors collected at randomization could predict adverse outcomes. This was a planned, secondary analysis of data from the 987 women in the CHIPS Trial. Logistic regression was used to examine the impact of 19 candidate predictors on the probability of adverse perinatal outcomes (pregnancy loss, high-level neonatal care for >48 h, or low birthweight) and adverse maternal outcomes (severe hypertension, preeclampsia, or early delivery), based on blood pressure within 1 week before randomization. Continuous variables were represented continuously or dichotomized based on the smaller p-value in univariate analyses. An area under the receiver-operating-characteristic curve (AUC ROC) of ≥0.70 was taken to reflect a potentially useful model. Point estimates for AUC ROC fell below this threshold for all outcomes except severe hypertension (0.70, 95% CI 0.67-0.74) and early delivery. When women with chronic hypertension develop an elevated blood pressure in pregnancy, or formerly normotensive women develop new gestational hypertension, maternal and current-pregnancy clinical characteristics cannot predict adverse outcomes in the index pregnancy. © 2016 The Authors. Acta Obstetricia et Gynecologica Scandinavica published by John Wiley & Sons Ltd on behalf of Nordic Federation of Societies of Obstetrics and Gynecology (NFOG).
Bruno Prata Martinez
OBJECTIVES: The ability of the Timed Up and Go test to predict sarcopenia has not been evaluated previously. The objective of this study was to evaluate the accuracy of the Timed Up and Go test for predicting sarcopenia in elderly hospitalized patients. METHODS: This cross-sectional study analyzed 68 elderly patients (≥60 years of age) in a private hospital in the city of Salvador-BA, Brazil, between the 1st and 5th day of hospitalization. The predictive variable was the Timed Up and Go test score, and the outcome of interest was the presence of sarcopenia (reduced muscle mass associated with a reduction in handgrip strength and/or weak physical performance in a 6-m gait-speed test). After the descriptive data analyses, the sensitivity, specificity and accuracy of a test using the predictive variable to predict the presence of sarcopenia were calculated. RESULTS: In total, 68 elderly individuals, with a mean age of 70.4±7.7 years, were evaluated. The subjects had a Charlson Comorbidity Index score of 5.35±1.97. Most (64.7%) of the subjects had a clinical admission profile; the main reasons for hospitalization were cardiovascular disorders (22.1%), pneumonia (19.1%) and abdominal disorders (10.2%). The frequency of sarcopenia in the sample was 22.1%, and the mean time spent performing the Timed Up and Go test was 10.02±5.38 s. A time longer than or equal to a cutoff of 10.85 s on the Timed Up and Go test predicted sarcopenia with a sensitivity of 67% and a specificity of 88.7%. The accuracy of this cutoff for the Timed Up and Go test was good (0.80; 95% CI 0.66-0.94; p=0.002). CONCLUSION: The Timed Up and Go test was shown to be a predictor of sarcopenia in elderly hospitalized patients.
Thóra Hafsteinsdóttir; Roelof G.A. Ettema; Diederick Grobbee; Prof. Dr. Marieke J. Schuurmans; Janneke van Man-van Ginkel; Eline Lindeman
Background and Purpose—The timely detection of post-stroke depression is complicated by a decreasing length of hospital stay. Therefore, the Post-stroke Depression Prediction Scale was developed and validated. The Post-stroke Depression Prediction Scale is a clinical prediction model for the early
Awad, Aya; Bader-El-Den, Mohamed; McNicholas, James; Briggs, Jim
Mortality prediction of hospitalized patients is an important problem. Over the past few decades, several severity scoring systems and machine learning mortality prediction models have been developed for predicting hospital mortality. By contrast, early mortality prediction for intensive care unit patients remains an open challenge. Most research has focused on severity-of-illness scoring systems or data mining (DM) models designed for risk estimation at least 24 or 48 h after ICU admission. This study highlights the main data challenges in early mortality prediction in ICU patients and introduces a new machine learning based framework for Early Mortality Prediction for Intensive Care Unit patients (EMPICU). The proposed method is evaluated on the Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II) database. Mortality prediction models are developed for patients aged 16 or above in the Medical ICU (MICU), Surgical ICU (SICU) or Cardiac Surgery Recovery Unit (CSRU). We employ the ensemble learning Random Forest (RF), the predictive Decision Tree (DT), the probabilistic Naive Bayes (NB) and the rule-based Projective Adaptive Resonance Theory (PART) models. The primary outcome was hospital mortality. The explanatory variables included demographic, physiological, vital sign and laboratory test variables. Performance measures were calculated using cross-validated area under the receiver operating characteristic curve (AUROC) to minimize bias. A total of 11,722 patients with single ICU stays were considered. The proposed EMPICU framework outperformed standard scoring systems (SOFA, SAPS-I, APACHE-II, NEWS and qSOFA) in terms of AUROC and time (i.e. at 6 h compared to 48 h or more after admission). The results show that although there are many values missing in the first few hours of ICU admission
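A hedged sketch of the kind of pipeline described (a Random Forest evaluated by cross-validated AUROC), using scikit-learn on synthetic data as a stand-in for MIMIC-II, which requires credentialed access; the feature counts and parameters are arbitrary assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for early-admission features (vitals, labs, demographics),
# with a class imbalance loosely mimicking hospital mortality rates
X, y = make_classification(n_samples=1000, n_features=12, n_informative=6,
                           weights=[0.9, 0.1], random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
# Cross-validated AUROC, as in the EMPICU evaluation
auroc = cross_val_score(rf, X, y, cv=5, scoring="roc_auc").mean()
```

Cross-validation matters here because a single train/test split on imbalanced mortality data can give an optimistically biased AUROC.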
Leela K Lella
The significance of right ventricular ejection fraction (RVEF), independent of left ventricular ejection fraction (LVEF), following isolated coronary artery bypass grafting (CABG) and valve procedures remains unknown. The aim of this study is to examine the significance of abnormal RVEF by cardiac magnetic resonance (CMR), independent of LVEF, in predicting outcomes of patients undergoing isolated CABG and valve surgery. From 2007 to 2009, 109 consecutive patients (mean age, 66 years; 38% female) were referred for pre-operative CMR. Abnormal RVEF and LVEF were defined by pre-specified cutoffs. Outcomes beyond 30 days included cardiac re-hospitalization, worsening congestive heart failure and mortality. Mean clinical follow-up was 14 months. Forty-eight patients had reduced RVEF (mean 25%) and 61 patients had normal RVEF (mean 50%) (p<0.001). Fifty-four patients had reduced LVEF (mean 30%) and 55 patients had normal LVEF (mean 59%) (p<0.001). Patients with reduced RVEF had a higher incidence of long-term cardiac re-hospitalization vs. patients with normal RVEF (31% vs. 13%, p<0.05). Abnormal RVEF was a predictor of long-term cardiac re-hospitalization (HR 3.01 [CI 1.5-7.9], p<0.03). Reduced LVEF did not influence long-term cardiac re-hospitalization. Abnormal RVEF is a stronger predictor of long-term cardiac re-hospitalization than abnormal LVEF in patients undergoing isolated CABG and valve procedures.
Suchsland, Till; Aghdassi, Ali; Kühn, Kristina; Simon, Peter; Lerch, Markus M; Mayerle, Julia; Flessa, Steffen
Acute and chronic pancreatitis are common gastroenterological disorders that have a fairly unpredictable long-term course, often associated with unplanned hospital readmissions. Little is known about the factors that increase or decrease the risk of a hospital readmission. The aim of this study was to identify positive and negative predictive factors for hospital readmissions of patients with acute and chronic pancreatitis after in-hospital treatment. In a retrospective analysis, data from the hospital information and reimbursement data system (HIS) were evaluated for 606 hospital stays for either acute or chronic pancreatitis between 2006 and 2011. Additional clinical data were obtained from a questionnaire covering quality of life and socio-economic status. A total of 973 patient variables were assessed by bivariate and multivariate analysis. Between 2006 and 2011, 373 patients were admitted for acute or chronic pancreatitis; 107 of these patients were readmitted and 266 had only one hospitalization. Predictors of readmission were concomitant liver disease, presence of a pseudocyst or a suspected tumor of the pancreas, as well as alcohol, tobacco or substance abuse, or coexisting mental disorders. Patients who had undergone a CT scan were more susceptible to readmission. Lower readmission rates were found in patients with diabetes mellitus or gallstone disease as co-morbidity. While factors like age and severity of the initial disease cannot be influenced to reduce the readmission rate for pancreatitis, variables like alcohol, tobacco and drug abuse can be addressed in outpatient programs to reduce disease recurrence and readmission rates for pancreatitis. Copyright © 2015 IAP and EPC. Published by Elsevier B.V. All rights reserved.
Maclean, Miriam J; Taylor, Catherine L; O'Donnell, Melissa
Maltreatment largely occurs in a multiple-risk context. The few large studies adjusting for confounding factors have raised doubts about whether low educational achievement results from maltreatment or co-occurring risk factors. This study examined prevalence, risk and protective factors for low educational achievement among children involved with the child protection system compared to other children. We conducted a population-based record-linkage study of children born in Western Australia who sat national Year 3 reading achievement tests between 2008 and 2010 (N=46,838). The longitudinal study linked data from the Western Australian Department of Education, Department of Child Protection and Family Support, Department of Health, and the Disability Services Commission. Children with histories of child protection involvement (unsubstantiated maltreatment reports, substantiations or out-of-home care placement) were at three-fold increased risk of low reading scores. Adjusting for socio-demographic adversity partially attenuated the increased risk, however risk remained elevated overall and for substantiated (OR=1.68) and unsubstantiated maltreatment (OR=1.55). Risk of low reading scores in the out-of-home care group was fully attenuated after adjusting for socio-demographic adversity (OR=1.16). Attendance was significantly higher in the out-of-home care group and served a protective role. Neglect, sexual abuse, and physical abuse were associated with low reading scores. Pre-existing adversity was also significantly associated with achievement. Results support policies and practices to engage children and families in regular school attendance, and highlight a need for further strategies to prevent maltreatment and disadvantage from restricting children's opportunities for success. Copyright © 2015 Elsevier Ltd. All rights reserved.
The effect of an active on-ward participation of hospital pharmacists in Internal Medicine teams on preventable Adverse Drug Events in elderly inpatients: protocol of the WINGS study (Ward-oriented pharmacy in newly admitted geriatric seniors)
Klopotowska, J.E.; Wierenga, P.C.; de Rooij, S.E.; Stuijt, C.C.; Arisz, L.; Kuks, P.F.; Dijkgraaf, M.G.; Lie-A-Huen, L.; Smorenburg, S.M.
The potential of clinical interventions, aiming at reduction of preventable Adverse Drug Events (preventable ADEs) during hospital stay, have been studied extensively. Clinical Pharmacy is a well-established and effective service, usually consisting of full-time on-ward participation of clinical
Chouinard, Maud-Christine; Robichaud-Ekstrand, Sylvie
Several authors have questioned the transtheoretical model. Determining the predictive value of each cognitive-behavioural element within this model could explain the multiple successes reported in smoking cessation programmes. The purpose of this study was to predict point-prevalent smoking abstinence at 2 and 6 months, using the constructs of the transtheoretical model, when applied to a pooled sample of individuals who were hospitalized for a cardiovascular event. The study follows a predictive correlational design. Recently hospitalized patients (n=168) with cardiovascular disease were pooled from a randomized, controlled trial. Independent variables of the predictive transtheoretical model comprise stages and processes of change, pros and cons of quitting smoking (decisional balance), self-efficacy, and social support. These were evaluated at baseline, 2 and 6 months. Compared to smokers, individuals who abstained from smoking at 2 and 6 months were more confident at baseline that they would remain non-smokers, perceived fewer pros and cons of continuing to smoke, made less use of the consciousness-raising and self-re-evaluation experiential processes of change, and received more positive reinforcement from their social network with regard to their smoke-free behaviour. Self-efficacy and stages of change at baseline were predictive of smoking abstinence after 6 months. Other variables found to be predictive of smoking abstinence at 6 months were an increase in self-efficacy, an increase in positive social support behaviour, and a decrease in the pros within the decisional balance. The results partially support the predictive value of the transtheoretical model constructs in smoking cessation for cardiovascular disease patients.
Scott B Hu
Clinical deterioration (ICU transfer and cardiac arrest) occurs during approximately 5-10% of hospital admissions. Existing prediction models have a high false positive rate, leading to multiple false alarms and alarm fatigue. We used routine vital signs and laboratory values obtained from the electronic medical record (EMR) along with a machine learning algorithm called a neural network to develop a prediction model that would increase the predictive accuracy and decrease false alarm rates. Design: Retrospective cohort study. Setting: The hematologic malignancy unit in an academic medical center in the United States. Patients: Adult patients admitted to the hematologic malignancy unit from 2009 to 2010. Interventions: None. Measurements: Vital signs and laboratory values were obtained from the electronic medical record system and then used as predictors (features). A neural network was used to build a model to predict clinical deterioration events (ICU transfer and cardiac arrest). The performance of the neural network model was compared to the VitalPac Early Warning Score (ViEWS). Five hundred sixty-five consecutive total admissions were available, with 43 admissions resulting in clinical deterioration. Using simulation, the neural network outperformed the ViEWS model with a positive predictive value of 82% compared to 24%, respectively. We developed and tested a neural network-based prediction model for clinical deterioration in patients hospitalized in the hematologic malignancy unit. Our neural network model outperformed an existing model, substantially increasing the positive predictive value, allowing the clinician to be confident in the alarm raised. This system can be readily implemented in a real-time fashion in existing EMR systems.
De Marco, Maria Francesca; Lorenzoni, Luca; Addari, Piero; Nante, Nicola
Inpatient mortality has increasingly been used as a hospital outcome measure. Comparing mortality rates across hospitals requires adjustment for patient risk before making inferences about quality of care based on patient outcomes. It is therefore essential to have well-performing severity measures. The aim of this study is to evaluate the ability of the All Patient Refined DRG (APR-DRG) system to predict inpatient mortality for congestive heart failure, myocardial infarction, pneumonia and ischemic stroke. Administrative records were used in this analysis. We used two statistical methods to assess the ability of the APR-DRG to predict mortality: the area under the receiver operating characteristic curve (referred to as the c-statistic) and the Hosmer-Lemeshow test. The database for the study included 19,212 discharges for stroke, pneumonia, myocardial infarction and congestive heart failure from fifteen hospitals participating in the Italian APR-DRG Project. A multivariate analysis was performed to predict mortality for each condition under study, using age, sex and APR-DRG risk-of-mortality subclass as independent variables. Inpatient mortality rates ranged from 9.7% (pneumonia) to 16.7% (stroke). Model discrimination, calculated using the c-statistic, was 0.91 for myocardial infarction, 0.68 for stroke, 0.78 for pneumonia and 0.71 for congestive heart failure. Model calibration, assessed using the Hosmer-Lemeshow test, was quite good. The performance of the APR-DRG scheme when used on Italian hospital activity records is similar to that reported in the literature, and it seems to improve when age and sex are added to the model. The APR-DRG system does not completely capture the effects of these variables. In some cases, the better performance might be due to the inclusion of specific complications in the risk-of-mortality subclass assignment.
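The Hosmer-Lemeshow test used above assesses calibration by grouping patients into bins of predicted risk and comparing observed with expected deaths per bin. A bare-bones sketch of the statistic on toy data (in practice the result is compared against a chi-square distribution with groups − 2 degrees of freedom):

```python
def hosmer_lemeshow_stat(probs, outcomes, groups=10):
    """Hosmer-Lemeshow chi-square statistic: sort by predicted probability,
    split into roughly equal-sized groups, compare observed vs expected events."""
    pairs = sorted(zip(probs, outcomes))
    n = len(pairs)
    stat = 0.0
    for g in range(groups):
        chunk = pairs[g * n // groups:(g + 1) * n // groups]
        if not chunk:
            continue
        m = len(chunk)
        observed = sum(y for _, y in chunk)   # deaths seen in this risk group
        expected = sum(p for p, _ in chunk)   # deaths the model predicted
        p_bar = expected / m
        if 0 < p_bar < 1:
            stat += (observed - expected) ** 2 / (m * p_bar * (1 - p_bar))
    return stat

# Toy inputs only: graded risks against deterministic outcomes
probs = [i / 100 for i in range(1, 100)]
outcomes = [1 if p >= 0.5 else 0 for p in probs]
hl = hosmer_lemeshow_stat(probs, outcomes)
```

Large values of the statistic indicate poor calibration (observed event rates drifting away from predicted ones), which is why a non-significant test is read as "calibration quite good."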
Kassomenos, P.; Papaloukas, C.; Petrakis, M.; Karakitsios, S.
The contribution of air pollution to hospital admissions due to respiratory and heart diseases is a major issue from a health-environmental perspective. In the present study, an attempt was made to examine the relationships between air pollution levels, meteorological indices, and corresponding hospital admissions in Athens, Greece. The available data covered a period of eight years (1992-2000), including the daily number of hospital admissions due to respiratory and heart diseases, hourly mean concentrations of CO, NO2, SO2, O3, and particulates at several monitoring stations, as well as meteorological data (temperature, relative humidity, wind speed/direction). The relations among these data were studied through widely used statistical techniques (multivariate stepwise analyses) and Artificial Neural Networks (ANNs). Both techniques revealed that elevated particulate concentrations are the dominant parameter related to hospital admissions (an increase of 10 μg m-3 leads to an increase of 10.2% in the number of admissions), followed by O3 and the rest of the pollutants (CO, NO2, and SO2). Meteorological parameters also play a decisive role in the formation of air pollutant levels affecting public health. Consequently, increased or decreased daily hospital admissions are related to specific types of meteorological conditions that favor or do not favor the accumulation of pollutants in an urban complex. In general, the role of meteorological factors seems to be underestimated by stepwise analyses, while ANNs attribute a more important role to them. Comparison of the two models revealed that applying ANNs to complex environmental problems yields improved modeling results compared with a regression technique. Furthermore, the ANN technique provides a reliable model for predicting daily hospital admissions from air quality data and meteorological indices, which is undoubtedly useful for regulatory purposes.
Despite advances in perinatal care, the outcome of newborns with hypoxic-ischemic encephalopathy is poor, and the issue remains challenging in neonatology. An easily obtainable and practical biomarker could not only identify neonates with severe brain damage and subsequent adverse outcome, but also target the group of infants that would benefit from a neuroprotective intervention. Recent studies have suggested interleukin-1β, interleukin-6, tumour necrosis factor alpha (TNF-α), and neuron-specific enolase (NSE) as potential biomarkers of brain damage in asphyxiated newborns. S100B, lactate dehydrogenase, nitrated albumin-nitrotyrosine, adrenomedullin, activin-A, non-protein-bound iron, isoprostanes, vascular endothelial growth factor, and metalloproteinases have also been proposed by single-centre studies to play a similar role in the field. With this review we aim to provide an overview of existing data in the literature regarding biomarkers for neonatal brain damage.
Arnaoutakis, George J.; George, Timothy J.; Alejo, Diane E.; Merlo, Christian A.; Baumgartner, William A.; Cameron, Duke E.; Shah, Ashish S.
Context The impact of the Society of Thoracic Surgeons (STS) predicted mortality risk score on resource utilization after aortic valve replacement (AVR) has not been previously studied. Objective We hypothesized that increasing STS risk scores in patients having AVR are associated with greater hospital charges. Design, Setting, and Patients Clinical and financial data for patients undergoing AVR at a tertiary care, university hospital over a ten-year period (1/2000-12/2009) were retrospectively reviewed. The current STS formula (v2.61) for in-hospital mortality was used for all patients. After stratification into risk quartiles (Q), index admission hospital charges were compared across risk strata with rank-sum tests. Linear regression and Spearman's coefficient assessed correlation and goodness of fit. Multivariable analysis assessed the relative contributions of individual variables to overall charges. Main Outcome Measures Inflation-adjusted index hospitalization total charges. Results 553 patients had AVR during the study period. Average predicted mortality was 2.9% (±3.4) and actual mortality was 3.4% for AVR. Median charges were greater in the upper quartile of AVR patients (Q1-3, $39,949 [IQR 32,708-51,323] vs Q4, $62,301 [IQR 45,952-97,103]; p<0.01). On univariate linear regression, there was a positive correlation between STS risk score and log-transformed charges (coefficient: 0.06, 95% CI 0.05-0.07, p<0.01). Spearman's correlation R-value was 0.51. This positive correlation persisted in risk-adjusted multivariable linear regression. Each 1% increase in STS risk score was associated with an added $3,000 in hospital charges. Conclusions This study showed that increasing STS risk score predicts greater charges after AVR. As competing therapies such as percutaneous valve replacement emerge to treat high-risk patients, these results serve as a benchmark for comparing resource utilization. PMID:21497834
BACKGROUND: Hospital-acquired infections (HAIs) are associated with increased attributable morbidity, mortality, prolonged hospitalization, and economic costs. A simple, reliable prediction model for HAI has great clinical relevance. The objective of this study was to develop a scoring system to predict HAI that was derived from logistic regression (LR) and validated by artificial neural networks (ANN) simultaneously. METHODOLOGY/PRINCIPAL FINDINGS: A total of 476 patients from all 806 HAI inpatients were included in the study between 2004 and 2005. A sample of 1,376 non-HAI inpatients was randomly drawn from all patients admitted in the same period of time as the control group. External validation of 2,500 patients was abstracted from another academic teaching center. Sixteen variables were extracted from the Electronic Health Records (EHR) and fed into ANN and LR models. With stepwise selection, the following seven variables were identified by the LR models as statistically significant: Foley catheterization, central venous catheterization, arterial line, nasogastric tube, hemodialysis, stress ulcer prophylaxis, and systemic glucocorticosteroids. Both ANN and LR models displayed excellent discrimination (area under the receiver operating characteristic curve [AUC]: 0.964 versus 0.969, p = 0.507) in identifying infection in internal validation. During external validation, high AUCs were obtained from both models (AUC: 0.850 versus 0.870, p = 0.447). The scoring system also performed extremely well in the internal (AUC: 0.965) and external (AUC: 0.871) validations. CONCLUSIONS: We developed a scoring system to predict HAI with simple parameters, validated with ANN and LR models. Armed with this scoring system, infectious disease specialists can more efficiently identify patients at high risk for HAI during hospitalization. Further, using parameters obtained either by observation of the medical devices in use or from EHR data also provided good prediction.
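A logistic regression model of this kind converts the seven binary predictors into a probability through the logit link. The sketch below uses hypothetical coefficients chosen purely for illustration; the abstract names the predictors but does not report their weights:

```python
import math

# Hypothetical coefficients for illustration only; the study reports
# seven significant predictors but not these exact weights.
COEFS = {
    "foley_catheter": 0.9,
    "central_venous_catheter": 1.1,
    "arterial_line": 0.8,
    "nasogastric_tube": 0.7,
    "hemodialysis": 1.2,
    "stress_ulcer_prophylaxis": 0.6,
    "systemic_glucocorticosteroids": 0.5,
}
INTERCEPT = -4.0

def hai_probability(patient):
    """Logistic-regression probability of hospital-acquired infection,
    given a dict of 0/1 exposure indicators (missing keys count as 0)."""
    logit = INTERCEPT + sum(COEFS[k] * patient.get(k, 0) for k in COEFS)
    return 1 / (1 + math.exp(-logit))

low = hai_probability({})                    # no risk exposures
high = hai_probability({k: 1 for k in COEFS})  # all exposures present
```

Rounding such coefficients to integer points is the usual route from a fitted LR model to a bedside scoring system.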
Retrospective analysis of non-laboratory-based adverse drug reactions induced by intravenous radiocontrast agents in a Joint Commission International-accredited academic medical center hospital in China
Qin-lan Chen,1 Xiao-ying Zhao,2 Xiao-mi Wang,1 Na Lv,2 Ling-ling Zhu,3 Hui-min Xu,4 Quan Zhou4 1Radiology Nursing Unit, Division of Nursing, 2Department of Quality Management, 3Geriatric VIP Care Ward, Division of Nursing, 4Department of Pharmacy, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, People's Republic of China Abstract: The authors retrospectively analyzed the pattern and characteristics of non-laboratory-based adverse drug reactions (ADRs) induced by intravenous radiocontrast agents in a large-scale hospital in China during 2014-2015. There were 314 ADR cases among 118,208 patients receiving enhanced CT or MRI examinations. The frequency of moderate/severe ADRs as defined by the Chinese Society of Radiology (i.e., severe vomiting, systemic urticaria, facial swelling, dyspnea, vasovagal reaction, laryngeal edema, seizure, trembling, convulsions, unconsciousness, shock, death, and other unexpected adverse reactions) was rare (0.0431%), whereas mild ADRs were uncommon (0.2225%) and accounted for 83.76% of ADRs. The frequency of ADRs induced by iodinated contrast agents was related to examination site, sex, and type of patient setting (P<0.01) and was higher compared with gadolinium contrast agents (0.3676% vs 0.0504%, P<0.01). From 2014 to 2015, the frequencies of total and moderate/severe ADRs induced by iodinated contrast agents decreased significantly (0.4410% vs 0.2947%, P<0.01; 0.0960% vs 0.0282%, P<0.01, respectively). The frequency of ADRs differed among different iodinated contrast and gadolinium contrast agents (P<0.05). Iopromide's ADR frequency in 2014 was significantly higher compared with iopamidol, ioversol, or iohexol (P<0.01). The frequency of moderate/severe ADRs induced by iodixanol was 4.1-5.4 times that of iohexol, iopromide, or iopamidol. Rash was the predominant ADR subtype (84.39%) and occurred more frequently with iodixanol compared with iohexol, iopamidol, or ioversol (P<0
Jakimov, Tamara; Mrdović, Igor; Filipović, Branka; Zdravković, Marija; Djoković, Aleksandra; Hinić, Saša; Milić, Nataša; Filipović, Branislav
To compare the prognostic performance of three major risk scoring systems: the Global Registry of Acute Coronary Events (GRACE) score, the Thrombolysis in Myocardial Infarction (TIMI) score, and the score for prediction of 30-day major adverse cardiovascular events after primary percutaneous coronary intervention (RISK-PCI). This single-center retrospective study involved 200 patients with acute coronary syndrome (ACS) who underwent an invasive diagnostic approach, i.e., coronary angiography and myocardial revascularization if appropriate, in the period from January 2014 to July 2014. The GRACE, TIMI, and RISK-PCI risk scores were compared for their predictive ability. The primary endpoint was a composite 30-day major adverse cardiovascular event (MACE), which included death, urgent target-vessel revascularization (TVR), stroke, and non-fatal recurrent myocardial infarction (REMI). The c-statistics of the tested scores for 30-day MACE, i.e., the areas under the receiver operating characteristic curve (AUC) with confidence intervals (CI), were as follows: RISK-PCI (AUC=0.94; 95% CI 1.790-4.353), the GRACE score on admission (AUC=0.73; 95% CI 1.013-1.045), and the GRACE score on discharge (AUC=0.65; 95% CI 0.999-1.033). The RISK-PCI score was the only score that could predict TVR (AUC=0.91; 95% CI 1.392-2.882). The RISK-PCI scoring system showed excellent discriminative potential for 30-day death (AUC=0.96; 95% CI 1.339-3.548) in comparison with the GRACE scores on admission (AUC=0.88; 95% CI 1.018-1.072) and on discharge (AUC=0.78; 95% CI 1.000-1.058). In comparison with the GRACE and TIMI scores, the RISK-PCI score showed a non-inferior ability to predict 30-day MACE and death in ACS patients. Moreover, RISK-PCI was the only scoring system that could predict recurrent ischemia requiring TVR.
Harrison, David A; Patel, Krishna; Nixon, Edel; Soar, Jasmeet; Smith, Gary B; Gwinnutt, Carl; Nolan, Jerry P; Rowan, Kathryn M
The National Cardiac Arrest Audit (NCAA) is the UK national clinical audit for in-hospital cardiac arrest. To make fair comparisons among health care providers, clinical indicators require case mix adjustment using a validated risk model. The aim of this study was to develop and validate risk models to predict outcomes following in-hospital cardiac arrest attended by a hospital-based resuscitation team in UK hospitals. Risk models for two outcomes, return of spontaneous circulation (ROSC) for greater than 20 min and survival to hospital discharge, were developed and validated using data for in-hospital cardiac arrests between April 2011 and March 2013. For each outcome, a full model was fitted and then simplified by testing for non-linearity, combining categories, and stepwise reduction. Finally, interactions between predictors were considered. Models were assessed for discrimination, calibration, and accuracy. 22,479 in-hospital cardiac arrests in 143 hospitals were included (14,688 development, 7,791 validation). The final risk model for ROSC > 20 min included: age (non-linear), sex, prior length of stay in hospital, reason for attendance, location of arrest, presenting rhythm, and interactions between presenting rhythm and location of arrest. The model for hospital survival included the same predictors, excluding sex. Both models had acceptable performance across the range of measures, although discrimination for hospital mortality exceeded that for ROSC > 20 min (c index 0.81 versus 0.72). Validated risk models for ROSC > 20 min and hospital survival following in-hospital cardiac arrest have been developed. These models will strengthen comparative reporting in NCAA and support local quality improvement. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
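The interaction between presenting rhythm and location of arrest retained in the final model is, mechanically, just a product term added to the feature set before fitting. A generic sketch (the variable names are illustrative, not NCAA's actual coding):

```python
def with_interactions(row, pairs):
    """Return a copy of a feature row with a multiplicative interaction
    term added for each named pair of predictors."""
    out = dict(row)
    for a, b in pairs:
        out[f"{a}:{b}"] = row[a] * row[b]
    return out

# Illustrative 0/1 indicators for one arrest record
row = {"shockable_rhythm": 1, "arrest_in_monitored_area": 0}
expanded = with_interactions(
    row, [("shockable_rhythm", "arrest_in_monitored_area")]
)
```

The interaction coefficient then lets the effect of presenting rhythm differ by arrest location, which a purely additive model cannot express.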
van Tilburg, Wijnand A P; Sedikides, Constantine; Wildschut, Tim
Four studies examined the link between adverse weather and the palliative role of nostalgia. We proposed and tested that (a) adverse weather evokes nostalgia (Hypothesis 1); (b) adverse weather causes distress, which predicts elevated nostalgia (Hypothesis 2); (c) preventing nostalgia exacerbates weather-induced distress (Hypothesis 3); and (d) weather-evoked nostalgia confers psychological benefits (Hypothesis 4). In Study 1, participants listened to recordings of wind, thunder, rain, and neutral sounds. Adverse weather evoked nostalgia. In Study 2, participants kept a 10-day diary recording weather conditions, distress, and nostalgia. We also obtained meteorological data. Adverse weather perceptions were positively correlated with distress, which predicted higher nostalgia. Also, adverse natural weather was associated with corresponding weather perceptions, which predicted elevated nostalgia. (Results were mixed for rain.) In Study 3, preventing nostalgia (via cognitive load) increased weather-evoked distress. In Study 4, weather-evoked nostalgia was positively associated with psychological benefits. The findings establish the relevance of nostalgia as a source of comfort in adverse weather.
Thangaratinam, Shakila; Koopmans, Corine M.; Iyengar, Shalini; Zamora, Javier; Ismail, Khaled M. K.; Mol, Ben W. J.; Khan, Khalid S.
Background. Liver function tests are routinely performed in women as part of a battery of investigations to assess severity at admission and later to guide appropriate management. Objective. To determine the accuracy with which liver function tests predict complications in women with preeclampsia by
Efforts are underway to transform regulatory toxicology and chemical safety assessment from a largely empirical science based on direct observation of apical toxicity outcomes in whole organism toxicity tests to a predictive one in which outcomes and risk are inferred from accumu...
Ye, Zengpanpan; Ai, Xiaolin; Zheng, Jun; Hu, Xin; You, Chao; Andrew M, Faramand; Fang, Fang
The spot sign is a highly specific and sensitive predictor of hematoma expansion following primary intracerebral hemorrhage (ICH). Rare cases of the spot sign have been documented in patients with intracranial hemorrhage secondary to arteriovenous malformation (AVM). The purpose of this retrospective study was to assess the accuracy of the spot sign in predicting clinical outcomes in patients with ruptured AVM. A retrospective analysis of a prospectively maintained database was performed for patients who presented to West China Hospital with ICH secondary to AVM between January 2009 and September 2016. Two radiologists blinded to the clinical data independently assessed the imaging data, including the presence of the spot sign. Statistical analysis using univariate testing, multivariate logistic regression, and receiver operating characteristic curve analysis was performed. A total of 116 patients were included. Overall, 18.9% (22/116) of subjects had at least 1 spot sign detected by CT angiography, 7% (8/116) died in hospital, and 27% (31/116) of the patients had a poor outcome after 90 days. The spot sign had a sensitivity of 62.5% and specificity of 84.3% for predicting in-hospital mortality (p = .02, AUC 0.734). No correlation was detected between the spot sign and 90-day outcomes under multiple logistic regression (p = .19). The spot sign is an independent predictor of in-hospital mortality. The presence of the spot sign did not correlate with 90-day outcomes in this patient cohort. The results of this report suggest that patients with ruptured AVM who demonstrate the spot sign on imaging must receive aggressive treatment early on due to the high risk of mortality.
Gideon O Emukule
BACKGROUND: Pediatric respiratory disease is a major cause of morbidity and mortality in the developing world. We evaluated a modified respiratory index of severity in children (mRISC) scoring system as a standard tool to identify children at greater risk of death from respiratory illness in Kenya. MATERIALS AND METHODS: We analyzed data from children <5 years old who were hospitalized with respiratory illness at Siaya District Hospital from 2009-2012. We used a multivariable logistic regression model to identify patient characteristics predictive of in-hospital mortality. Model discrimination was evaluated using the concordance statistic. Using bootstrap samples, we re-estimated the coefficients and the optimism of the model. The mRISC score for each child was developed by adding up the points assigned to each factor associated with mortality, based on the coefficients in the multivariable model. RESULTS: We analyzed data from 3,581 children hospitalized with respiratory illness, including 218 (6%) who died. Low weight-for-age [adjusted odds ratio (aOR) = 2.1; 95% CI 1.3-3.2], very low weight-for-age (aOR = 3.8; 95% CI 2.7-5.4), caretaker-reported history of unconsciousness (aOR = 2.3; 95% CI 1.6-3.4), inability to drink or breastfeed (aOR = 1.8; 95% CI 1.2-2.8), chest wall in-drawing (aOR = 2.2; 95% CI 1.5-3.1), and being not fully conscious on physical exam (aOR = 8.0; 95% CI 5.1-12.6) were independently associated with mortality. The positive predictive value for mortality increased with increasing mRISC scores. CONCLUSIONS: A modified RISC scoring system based on a set of easily measurable clinical features at admission was able to identify children at greater risk of death from respiratory illness in Kenya.
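The score construction described above, adding points assigned to each factor from the model coefficients, can be sketched as follows. The point weights here are hypothetical, loosely scaled to the reported odds ratios; the published mRISC point values are not given in the abstract:

```python
# Hypothetical point weights, roughly proportional to the log-odds of
# the reported aORs (illustrative only; not the published mRISC points).
POINTS = {
    "low_weight_for_age": 1,        # aOR 2.1
    "very_low_weight_for_age": 2,   # aOR 3.8
    "history_of_unconsciousness": 1,
    "unable_to_drink_or_feed": 1,
    "chest_wall_indrawing": 1,
    "not_fully_conscious": 3,       # aOR 8.0
}

def mrisc_score(findings):
    """Sum the points for each admission finding that is present."""
    return sum(POINTS[f] for f in findings if f in POINTS)

score = mrisc_score({"very_low_weight_for_age", "chest_wall_indrawing"})
```

Higher totals then map to higher predicted mortality risk, which is exactly the monotone relationship the RESULTS section reports.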
Recent developments in digital technology have facilitated the recording and retrieval of administrative data from multiple sources about children and their families. Combined with new ways to mine such data using algorithms which can 'learn', it has been claimed that it is possible to develop tools that can predict which individual children within a population are most likely to be maltreated. The proposed benefit is that interventions can then be targeted to the most vulnerable children and their families to prevent maltreatment from occurring. As expertise in predictive modelling increases, the approach may also be applied in other areas of social work to predict and prevent adverse outcomes for vulnerable service users. In this article, a glimpse inside the 'black box' of predictive tools is provided to demonstrate how their development for use in social work may not be straightforward, given the nature of the data recorded about service users and service activity. The development of predictive risk modelling (PRM) in New Zealand is focused on as an example as it may be the first such tool to be applied as part of ongoing reforms to child protection services.
Background Limited information is available about predictors of short-term outcomes in patients with exacerbation of chronic obstructive pulmonary disease (eCOPD) attending an emergency department (ED). Such information could help stratify these patients and guide medical decision-making. The aim of this study was to develop a clinical prediction rule for short-term mortality during hospital admission or within a week after the index ED visit. Methods This was a prospective cohort study of patients with eCOPD attending the EDs of 16 participating hospitals. Recruitment started in June 2008 and ended in September 2010. Information on possible predictor variables was recorded during the time the patient was evaluated in the ED, at the time a decision was made to admit the patient to the hospital or discharge home, and during follow-up. The main short-term outcomes were death during hospital admission or within 1 week of discharge to home from the ED, as well as death within 1 month of the index ED visit. Multivariate logistic regression models were developed in a derivation sample and validated in a validation sample. The score was compared with other published prediction rules for patients with stable COPD. Results In total, 2,487 patients were included in the study. Predictors of death during hospital admission, or within 1 week of discharge to home from the ED, were patient age, baseline dyspnea, previous need for long-term home oxygen therapy or non-invasive mechanical ventilation, altered mental status, and use of inspiratory accessory muscles or paradoxical breathing upon ED arrival (area under the curve (AUC) = 0.85). Addition of arterial blood gas parameters (oxygen and carbon dioxide partial pressures [PO2 and PCO2] and pH) did not improve the model. The same variables were predictors of death at 1 month (AUC = 0.85). Compared with other commonly used tools for predicting the severity of COPD in stable patients, our rule was significantly better
O'Connor, Rory C; Smyth, Roger; Williams, J Mark G
Although there is clear evidence that low levels of positive future thinking (anticipation of positive experiences in the future) and hopelessness are associated with suicide risk, the relationship between the content of positive future thinking and suicidal behavior has yet to be investigated. This is the first study to determine whether the positive future thinking-suicide attempt relationship varies as a function of the content of the thoughts, and whether positive future thinking predicts suicide attempts over time. A total of 388 patients hospitalized following a suicide attempt completed a range of clinical and psychological measures (depression, hopelessness, suicidal ideation, suicidal intent, and positive future thinking). Fifteen months later, a nationally linked database was used to determine who had been hospitalized again after a suicide attempt. During follow-up, 25.6% of linked participants were readmitted to hospital following a suicide attempt. In univariate logistic regression analyses, previous suicide attempts, suicidal ideation, hopelessness, and depression, as well as low levels of achievement-related and financial positive future thoughts and high levels of intrapersonal (thoughts about the individual and no one else) positive future thoughts, predicted repeat suicide attempts. However, only previous suicide attempts, suicidal ideation, and high levels of intrapersonal positive future thinking were significant predictors in multivariate analyses. Positive future thinking has predictive utility over time; however, the content of the thinking affects the direction and strength of the positive future thinking-suicidal behavior relationship. Future research is required to understand the mechanisms that link high levels of intrapersonal positive future thinking to suicide risk and how intrapersonal thinking should be targeted in treatment interventions. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Narayan, Angela J; Rivera, Luisa M; Bernstein, Rosemary E; Harris, William W; Lieberman, Alicia F
This pilot study examined the psychometric properties of the Benevolent Childhood Experiences (BCEs) scale, a new instrument designed to assess positive early life experiences in adults with histories of childhood maltreatment and other adversities. A counterpart to the Adverse Childhood Experiences (ACEs) questionnaire, the BCEs scale was developed to be multiculturally sensitive and applicable regardless of socioeconomic position, urban-rural background, or immigration status. Higher levels of BCEs were hypothesized to predict lower levels of psychopathology and stress beyond the effects of ACEs in a sample of ethnically diverse, low-income pregnant women. BCEs were also expected to show adequate internal validity across racial/ethnic groups and test-retest stability from the prenatal to the postnatal period. Participants were 101 pregnant women (M = 29.10 years, SD = 6.56, range = 18-44; 37% Latina, 22% African-American, 20% White, 21% biracial/multiracial/other; 37% foreign-born, 26% Spanish-speaking) who completed the BCEs and ACEs scales; assessments of prenatal depression and post-traumatic stress disorder (PTSD) symptoms, perceived stress, and exposure to stressful life events (SLEs) during pregnancy; and demographic information. Higher levels of BCEs predicted fewer PTSD symptoms and SLEs, above and beyond ACEs. The BCEs scale showed excellent test-retest reliability, and mean levels were comparable across racial/ethnic and Spanish-English groups of women. Person-oriented analyses also showed that higher levels of BCEs offset the effects of ACEs on prenatal stress and psychopathology. The BCEs scale indexes promising promotive factors associated with lower trauma-related symptomatology and stress exposure during pregnancy and illuminates how favorable childhood experiences may counteract long-term effects of childhood adversity. Copyright © 2017 Elsevier Ltd. All rights reserved.
orofacial injuries.10 These and other efforts have been associated with reduced BCT injuries over time, as shown in Figure 1,11 but injury incidence...to predict first episode of low back pain in Soldiers undergoing combat medic training. Moran et al30 reported an AUC of 0.765 for a pragmatic 5...Dugan JL, Robinson ME. Predictors of occurrence and severity of first time low back pain episodes: Findings from a military inception cohort. PLoS
Zhu, K; Lou, Z; Zhou, J; Ballester, N; Kong, N; Parikh, P
This article is part of the Focus Theme of Methods of Information in Medicine on "Big Data and Analytics in Healthcare". Hospital readmissions raise healthcare costs and cause significant distress to providers and patients. It is, therefore, of great interest to healthcare organizations to predict which patients are at risk of being readmitted to their hospitals. However, current logistic regression based risk prediction models have limited predictive power when applied to hospital administrative data. Meanwhile, although decision trees and random forests have been applied, they tend to be too complex for hospital practitioners to interpret. The objective was to explore the use of conditional logistic regression to increase prediction accuracy. We analyzed an HCUP statewide inpatient discharge record dataset, which includes patient demographics and clinical and care utilization data from California. We extracted records of heart failure Medicare beneficiaries who had an inpatient stay during an 11-month period. We corrected the data imbalance issue with under-sampling. In our study, we first applied standard logistic regression and a decision tree to obtain influential variables and derive practically meaningful decision rules. We then stratified the original data set accordingly and applied logistic regression to each data stratum. We further explored the effect of interacting variables in the logistic regression modeling. We conducted cross-validation to assess the overall prediction performance of conditional logistic regression (CLR) and compared it with standard classification models. The developed CLR models outperformed several standard classification models (e.g., straightforward logistic regression, stepwise logistic regression, random forest, support vector machine). For example, the best CLR model improved classification accuracy by nearly 20% over the straightforward logistic regression model. Furthermore, the developed CLR models tend to achieve better sensitivity of
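Correcting class imbalance by under-sampling, as described above, simply drops randomly chosen majority-class records until the two classes are the same size. A generic sketch (the field name is illustrative, not from the HCUP schema):

```python
import random

def undersample(records, label_key="readmitted", seed=42):
    """Balance a binary dataset by randomly dropping majority-class rows
    until both classes have the minority-class count."""
    pos = [r for r in records if r[label_key]]
    neg = [r for r in records if not r[label_key]]
    minority, majority = (pos, neg) if len(pos) <= len(neg) else (neg, pos)
    rng = random.Random(seed)          # fixed seed for reproducibility
    balanced = minority + rng.sample(majority, len(minority))
    rng.shuffle(balanced)
    return balanced

# Toy data: 2 readmissions among 8 discharges
data = [{"id": i, "readmitted": i < 2} for i in range(8)]
balanced = undersample(data)
```

The trade-off is discarding majority-class information, which is why the resulting model is usually evaluated by cross-validation on the original distribution, as the study does.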
Jovanovic, Milos; Radovanovic, Sandro; Vukicevic, Milan; Van Poucke, Sven; Delibasic, Boris
Quantification and early identification of unplanned readmission risk have the potential to improve the quality of care during hospitalization and after discharge. However, high dimensionality, sparsity, and class imbalance of electronic health data and the complexity of risk quantification, challenge the development of accurate predictive models. Predictive models require a certain level of interpretability in order to be applicable in real settings and create actionable insights. This paper aims to develop accurate and interpretable predictive models for readmission in a general pediatric patient population, by integrating a data-driven model (sparse logistic regression) and domain knowledge based on the international classification of diseases 9th-revision clinical modification (ICD-9-CM) hierarchy of diseases. Additionally, we propose a way to quantify the interpretability of a model and inspect the stability of alternative solutions. The analysis was conducted on >66,000 pediatric hospital discharge records from California, State Inpatient Databases, Healthcare Cost and Utilization Project between 2009 and 2011. We incorporated domain knowledge based on the ICD-9-CM hierarchy in a data driven, Tree-Lasso regularized logistic regression model, providing the framework for model interpretation. This approach was compared with traditional Lasso logistic regression resulting in models that are easier to interpret by fewer high-level diagnoses, with comparable prediction accuracy. The results revealed that the use of a Tree-Lasso model was as competitive in terms of accuracy (measured by area under the receiver operating characteristic curve-AUC) as the traditional Lasso logistic regression, but integration with the ICD-9-CM hierarchy of diseases provided more interpretable models in terms of high-level diagnoses. Additionally, interpretations of models are in accordance with existing medical understanding of pediatric readmission. Best performing models have
Fabbian, Fabio; De Giorgi, Alfredo; Maietti, Elisa; Gallerani, Massimo; Pala, Marco; Cappadona, Rosaria; Manfredini, Roberto; Fedeli, Ugo
In-hospital mortality (IHM) is an indicator of the quality of care provided. The two most widely used scores for predicting IHM from International Classification of Diseases (ICD) codes are the Elixhauser index (EI) and the Charlson Comorbidity Index. Our aim was to obtain new measures based on internal medicine ICD codes for the original EI, to detect risk for IHM. This single-center retrospective study included hospital admissions for any cause in the department of internal medicine between January 1, 2000, and December 31, 2013, recorded in the hospital database. The EI was calculated for evaluation of comorbidity; we then added age, gender and diagnosis of ischemic heart disease. IHM was our outcome. Only predictors positively associated with IHM were taken into consideration, and Sullivan's method was applied to convert the parameter estimates of the regression model into an index. We analyzed 75,586 admissions (53.4% females); mean age was 72.7±16.3 years. IHM was 7.9% and the mean score was 12.1±7.6. The points assigned to each condition ranged from 0 to 16, and the possible range of the score varied between 0 and 89. In our population the score ranged from 0 to 54, and it was higher in the deceased group. The area under the receiver operating characteristic curve of the new score was 0.721 (95% CI 0.714-0.727, p < 0.001). Internal Medicine. Published by Elsevier B.V. All rights reserved.
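Sullivan's method, used above to turn regression estimates into an additive point score, can be sketched as follows. The coefficient values and variable names below are invented for illustration, not the study's estimates.

```python
def sullivan_points(betas, reference=None):
    """Convert logistic-regression coefficients into integer risk points.
    Only risk-increasing (positive) coefficients are kept, as in the study;
    each is divided by a reference coefficient and rounded, so the weakest
    retained predictor scores 1 point by default."""
    positive = {name: b for name, b in betas.items() if b > 0}
    if reference is None:
        reference = min(positive.values())
    return {name: round(b / reference) for name, b in positive.items()}

def total_score(points, patient):
    """Sum the points for every condition present in the patient record."""
    return sum(pts for name, pts in points.items() if patient.get(name))
```

A patient's total is then compared against score bands to estimate IHM risk.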
Ana Clara Guerreiro
Objective: To evaluate the capacity of total anterior thigh thickness, quadriceps muscle thickness, and quadriceps contractile index, all measured by bedside ultrasound, to predict rehospitalization, functional decline, and death in elderly patients 3 months after hospital discharge, and to evaluate the intra- and interobserver reproducibility of the point-of-care ultrasound method for evaluating the dominant thigh. Methods: Cohort study of patients aged 65 years or more admitted to a medium-complexity unit in a teaching hospital in southern Brazil. Comprehensive geriatric assessment and ultrasound evaluation of the dominant thigh of each participant were performed. Three months after hospital discharge, telephone contact was made to evaluate the outcomes of rehospitalization or death and functional decline, assessed by the 100-point Barthel scale and defined as a decrease of five or more points. Results: 100 participants were included. There was no statistically significant difference between intraobserver measurements in the GEE analysis (p > 0.05), and the mean bias obtained in Bland–Altman plots was close to zero in all four analyses performed, suggesting good intra- and interobserver agreement. There was a significant correlation between the echographic measurements (quadriceps thickness and contractile index) and the gait speed, timed up and go, and handgrip tests. There was a significant association between a contractile index (quadriceps thickness over total anterior thigh thickness, multiplied by 100) lower than 60% and functional decline (relative risk 1.35; 95% CI 1.10–1.65; p = 0.003), as well as between the thickness of the quadriceps and rehospitalization or death, both in individuals with preserved walking capacity and in bedridden elders (relative risk 1.34; 95% CI 1.02–1.75; p = 0.04). Conclusion: The ultrasonographic method to evaluate thigh thickness was easily applicable and reproducible. The thickness of the quadriceps could
Pecoraro, Anna; Ewen, Edward; Horton, Terry; Mooney, Ruth; Kolm, Paul; McGraw, Patty; Woody, George
Alcohol withdrawal syndrome (AWS) occurs when alcohol-dependent individuals abruptly reduce or stop drinking, and hospitalized alcohol-dependent patients are at risk. Hospitals need a validated screening tool to assess withdrawal risk, but no validated tools are currently available. We examined the ability of the admission Alcohol Use Disorders Identification Test (Piccinelli) Consumption (AUDIT-PC) score to predict the subsequent development of AWS among hospitalized medical-surgical patients admitted to a non-intensive care setting. Retrospective case–control study of patients discharged from the hospital with a diagnosis of AWS. All patients with AWS were classified as presenting with AWS or developing AWS later during admission. Patients admitted to an intensive care setting and those missing AUDIT-PC scores were excluded from analysis. A hierarchical (by hospital unit) logistic regression was performed, and receiver-operating characteristics were examined on those developing AWS after admission and randomly selected controls. Because those diagnosing AWS were not blinded to the AUDIT-PC scores, a sensitivity analysis was performed. The study cohort included all patients age ≥18 years admitted to any medical or surgical units in a single health care system from 6 October 2009 to 7 October 2010. After exclusions, 414 patients were identified with AWS. The 223 (53.9%) who developed AWS after admission were compared to 466 randomly selected controls without AWS. An AUDIT-PC score ≥4 at admission provides 91.0% sensitivity and 89.7% specificity (AUC=0.95; 95% CI, 0.94–0.97) for AWS, and maximizes the correct classification while resulting in 17 false positives for every true positive identified. Performance remained excellent on sensitivity analysis (AUC=0.92; 95% CI, 0.90–0.93). Increasing AUDIT-PC scores were associated with an increased risk of AWS (OR=1.68, 95% CI 1.55–1.82, p < 0.001). The admission AUDIT-PC score is an excellent discriminator of AWS and could be an important component
Deterioration of physiological or laboratory variables may provide important prognostic information. We have studied whether a change in the estimated glomerular filtration rate (eGFR), calculated using the Modification of Diet in Renal Disease (MDRD) formula, over the hospital admission would have predictive value. An analysis was performed on all emergency medical hospital episodes (N = 61,964) admitted between 1 January 2002 and 31 December 2011. A stepwise logistic regression model examined the relationship between mortality and change in renal function from admission to discharge. The fully adjusted odds ratios (ORs) for 5 classes of eGFR deterioration showed a stepwise increased risk of 30-day death, with ORs of 1.42 (95% CI: 1.20, 1.68), 1.59 (1.27, 1.99), 2.71 (2.24, 3.27), 5.56 (4.54, 6.81) and 11.9 (9.0, 15.6) respectively. The change in eGFR during a clinical episode, following an emergency medical admission, powerfully predicts the outcome.
Becker, Daniel F; Grilo, Carlos M
To examine psychological correlates of suicidality and violent behaviour in hospitalized adolescents and the extent to which these associations may be affected by their sex. A sample of 487 psychiatric inpatients (207 male, 280 female), aged 12 to 19 years, completed a battery of psychometrically sound self-report measures of psychological functioning, substance abuse, suicidality, and violent behaviour. We conducted multiple regression analyses to determine the joint and independent predictors of suicide risk and violence risk. In subsequent analyses, we examined these associations separately by sex. Multiple regression analysis revealed that 9 variables (sex, age, hopelessness, self-esteem, depression, impulsivity, alcohol abuse, drug abuse, and violence risk) jointly predicted suicide risk and that an analogous model predicted violence risk. However, we found several differences with respect to which variables made significant independent contributions to these 2 predictive models. Female sex, low self-esteem, depression, drug abuse, and violence risk made independent contributions to suicide risk. Male sex, younger age, hopelessness, impulsivity, drug abuse, and suicide risk made independent contributions to violence risk. We observed a few additional differences when we considered male and female subjects separately. We found overlapping but distinctive patterns of prediction for suicide risk and violence risk, as well as some differences between male and female subjects. These results may reflect distinct psychological and behavioural pathways for suicidality and violence in adolescent psychiatric patients and differing risk factors for each sex. Such differences have potential implications for prevention and treatment programs.
BACKGROUND: Currently, researchers seek to identify factors related to length of hospital stay in the elderly in order to reduce the burden on the health system. The importance of physiological and psychological factors in determining health outcomes has been well established; however, the possible contribution of psychosocial factors, particularly in elderly patients with diabetes, is also of special importance. This study aimed to determine which psychosocial variables predict length of hospital stay in elderly patients with diabetes. METHODS: This was a cross-sectional, correlational study conducted on 150 elderly patients from July to October 2015. A convenience sampling method was used to recruit the subjects. The data were collected by a three-part questionnaire consisting of demographic and health-related characteristics, the 21-item depression anxiety stress scale (DASS-21) and the multidimensional scale of perceived social support (MSPSS). RESULTS: The mean ± standard deviation of length of hospital stay was 15.6 ± 7.7 days. Findings from multiple regression analysis showed that the models predicting length of hospital stay in the subgroups of both women (P = 0.001, F6,77 = 4.45) and men (P = 0.03, F6,71 = 2.43) were significant. The entered variables in the subgroups of women and men accounted for 27% and 18% of the total variance (R2) of the length of hospital stay, respectively. None of the psychosocial variables significantly predicted the length of hospital stay in women. However, one of the three predicting psychosocial variables (stress) significantly predicted the length of hospital stay in men (β = 0.39, t = 2.1, P = 0.04). CONCLUSION: The results emphasized the importance of promoting social support for elderly patients with diabetes, particularly patients who are women, have higher levels of stress, have a longer duration of disease and a history of hospitalization in the past 6 months, in order to lower length of hospital stay and ultimately promote health status
The challenge could be briefly seen in these terms: hospitals as places for treatment, where there's a technology focus, and hospitals as places for healing, where there's a human focus. In the 60s-70s wave of new hospital building, an emphasis on technology can be seen. It's time to move from the technology focus. It is not enough to consider only the factors of function within architecture, hygiene, economy and logistics. We also need to look at aspects of aesthetics, bringing nature into the building, art, color, acoustics, volume and space as we perceive them. Contemporary methods and advances… placed, accessible, provided with plenty of greenery, and maximize sensory impressions, providing sounds, smells, sight and the possibility to be touched. This is a very well documented area, I can say. Hygiene, in terms of architecture, can give attention to hand wash facilities and their positioning…
Mohammed, Mohammed A; Rudge, Gavin; Wood, Gordon; Smith, Gary; Nangalia, Vishal; Prytherch, David; Holder, Roger; Briggs, Jim
Routine blood tests are an integral part of clinical medicine, and in interpreting blood test results clinicians have two broad options: (1) dichotomise the blood tests into normal/abnormal or (2) use the actual values and overlook the reference values. We refer to these as the "binary" and the "non-binary" strategy respectively. We investigate which strategy is better at predicting the risk of death in hospital based on seven routinely undertaken blood tests (albumin, creatinine, haemoglobin, potassium, sodium, urea, and white blood cell count), using tree models to implement the two strategies. A retrospective database study of emergency admissions to an acute hospital during April 2009 to March 2010, involving 10,050 emergency admissions with routine blood tests undertaken within 24 hours of admission. We compared the area under the Receiver Operating Characteristics (ROC) curve for predicting in-hospital mortality using the binary and non-binary strategy. The mortality rate was 6.98% (701/10050). The mean predicted risk of death in those who died was significantly (p < 0.05) lower under the binary strategy than under the non-binary strategy (risk = 0.222, 95%CI: 0.194 to 0.251), representing a risk difference of 28.74 deaths in the deceased patients (n = 701). The binary strategy also had a significantly (p < 0.05) lower ROC curve area than the non-binary strategy (0.853, 95% CI: 0.840 to 0.867). Similar results were obtained using data from another hospital. Dichotomising routine blood test results is less accurate in predicting in-hospital mortality than using actual test values because it underestimates the risk of death in patients who died. Further research into the use of actual blood test values in clinical decision making is required, especially as the infrastructure to implement this potentially promising strategy already exists in most hospitals.
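The binary vs. non-binary comparison above can be illustrated with a rank-based AUC on toy data. The creatinine values and outcomes below are fabricated solely to show that dichotomisation can only lose ranking information; they are not the study's data.

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney formulation:
    the probability that a random positive outranks a random negative,
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Fabricated creatinine values (umol/L) and in-hospital deaths.
creatinine = [60, 70, 80, 90, 100, 120, 150, 200, 300, 400]
died       = [ 0,  0,  0,  0,   0,   0,   1,   0,   1,   1]

auc_actual = auc(creatinine, died)                    # non-binary strategy
abnormal = [1 if c > 110 else 0 for c in creatinine]  # binary strategy
auc_binary = auc(abnormal, died)
```

On these toy data the actual values give the higher AUC, the same ordering the abstract reports on real admissions.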
Mohammed, Mohammed A.; Rudge, Gavin; Wood, Gordon; Smith, Gary; Nangalia, Vishal; Prytherch, David; Holder, Roger; Briggs, Jim
Background Routine blood tests are an integral part of clinical medicine, and in interpreting blood test results clinicians have two broad options: (1) dichotomise the blood tests into normal/abnormal or (2) use the actual values and overlook the reference values. We refer to these as the “binary” and the “non-binary” strategy respectively. We investigate which strategy is better at predicting the risk of death in hospital based on seven routinely undertaken blood tests (albumin, creatinine, haemoglobin, potassium, sodium, urea, and white blood cell count), using tree models to implement the two strategies. Methodology A retrospective database study of emergency admissions to an acute hospital during April 2009 to March 2010, involving 10,050 emergency admissions with routine blood tests undertaken within 24 hours of admission. We compared the area under the Receiver Operating Characteristics (ROC) curve for predicting in-hospital mortality using the binary and non-binary strategy. Results The mortality rate was 6.98% (701/10050). The mean predicted risk of death in those who died was significantly (p < 0.05) lower under the binary strategy than under the non-binary strategy (risk = 0.222, 95%CI: 0.194 to 0.251), representing a risk difference of 28.74 deaths in the deceased patients (n = 701). The binary strategy also had a significantly (p < 0.05) lower ROC curve area than the non-binary strategy (0.853, 95% CI: 0.840 to 0.867). Similar results were obtained using data from another hospital. Conclusions Dichotomising routine blood test results is less accurate in predicting in-hospital mortality than using actual test values because it underestimates the risk of death in patients who died. Further research into the use of actual blood test values in clinical decision making is required, especially as the infrastructure to implement this potentially promising strategy already exists in most hospitals. PMID:23077528
Moonis Mirza; Farooq A. Jan; Rauf Ahmad Wani; Fayaz Ahmad Sofi
Background: A good quality report should lend itself to detailed analysis of the chain of events that led to the incident. This knowledge can then be used to consider what interventions, and at what level in the chain, can prevent the incident from occurring again. The aim was to study the occurrence of adverse events on the basis of incident reporting. Methods: Critical analysis of incident reporting of adverse events occurring in admitted patients over one year, using the WHO Structured q...
Few case-finding instruments are available to community healthcare professionals. This review aims to identify short, valid instruments that detect older community-dwellers' risk of four adverse outcomes: hospitalisation, functional decline, institutionalisation and death. Data sources included PubMed and the Cochrane library. Data on outcome measures, patient and instrument characteristics, and trial quality (using the Quality In Prognosis Studies [QUIPS] tool) were double-extracted for derivation-validation studies in community-dwelling older adults (>50 years). Forty-six publications, representing 23 unique instruments, were included. Only five were externally validated. Mean patient age range was 64.2-84.6 years. Most instruments (n=18, 78%) were derived in North America from secondary analysis of survey data. The majority (n=12, 52%) measured more than one outcome, with hospitalisation and the Probability of Repeated Admission score the most studied outcome and instrument respectively. All instruments incorporated multiple predictors. Activities of daily living (n=16, 70%) was included most often. Accuracy varied according to instruments and outcomes: area under the curve of 0.60-0.73 for hospitalisation, 0.63-0.78 for functional decline, 0.70-0.74 for institutionalisation and 0.56-0.82 for death. The QUIPS tool showed that 5/23 instruments had low potential for bias across all domains. This review highlights the present need to develop short, reliable, valid instruments to case-find older adults at risk in the community.
Jabir, Rafid Salim; Ho, Gwo Fuang; Annuar, Muhammad Azrif Bin Ahmad; Stanslas, Johnson
Rash and oral mucositis are major non-haematological adverse events (AEs) of docetaxel, in addition to fatigue, nausea, vomiting and diarrhoea, which restrict the use of the drug in cancer therapy. Alpha-1-acid glycoprotein (AAG) is an acute phase reactant glycoprotein and a primary carrier of docetaxel in the blood. Docetaxel binds extensively (>98%) to plasma proteins such as AAG, lipoproteins and albumin. We aimed to study the association between plasma AAG level and non-haematological AEs of docetaxel in Malaysian breast cancer patients of three major ethnic groups (Malays, Chinese and Indians). One hundred and twenty Malaysian breast cancer patients receiving docetaxel as single-agent chemotherapy were investigated for plasma AAG level using an enzyme-linked immunosorbent assay technique. Toxicity assessment was performed using the Common Terminology Criteria for Adverse Events v4.0. The association between AAG and toxicity was then established. There was interethnic variation in plasma AAG level: 182 ± 85 mg/dl in Chinese, 237 ± 94 mg/dl in Malays and 240 ± 83 mg/dl in Indians. Low plasma levels of AAG were significantly associated with oral mucositis and rash. This study proposes plasma AAG as a potential predictive biomarker of docetaxel non-haematological AEs, namely oral mucositis and rash.
Frid, Anna A; Matthews, Edwin J
This report describes the use of three quantitative structure-activity relationship (QSAR) programs, BioEpisteme, MC4PC, and Leadscope Predictive Data Miner, to predict drug-related cardiac adverse effects (AEs). QSAR models were constructed for 9 cardiac AE clusters affecting Purkinje nerve fibers (arrhythmia, bradycardia, conduction disorder, electrocardiogram, palpitations, QT prolongation, rate rhythm composite, tachycardia, and Torsades de pointes) and 5 clusters affecting the heart muscle (coronary artery disorders, heart failure, myocardial disorders, myocardial infarction, and valve disorders). The models were based on a database of post-marketing AEs linked to 1632 chemical structures, and identical training data sets were configured for the three QSAR programs. Model performance was optimized and shown to be affected by the ratio of the number of active to inactive drugs. Results revealed that the three programs were complementary; predictive performances using any single positive, consensus two positives, or consensus three positives were, respectively: 70.7%, 91.7%, and 98.0% specificity; 74.7%, 47.2%, and 21.0% sensitivity; and χ² values of 138.2, 206.3, and 144.2. In addition, a prospective study using AE data from the U.S. Food and Drug Administration's (FDA's) MedWatch Program showed 82.4% specificity and 94.3% sensitivity. Furthermore, an external validation study of 18 drugs with serious cardiotoxicity not considered in the models had 88.9% sensitivity. Published by Elsevier Inc.
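The single-positive vs. consensus read-outs reported above amount to k-of-3 voting across the three programs. A minimal sketch (the vote patterns and truth labels below are invented, not the FDA data):

```python
def consensus(model_preds, k):
    """Call a drug positive when at least k of the QSAR programs vote positive."""
    return [int(sum(votes) >= k) for votes in zip(*model_preds)]

def sens_spec(pred, truth):
    """Sensitivity and specificity of binary predictions against truth labels."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))
    return tp / sum(truth), tn / (len(truth) - sum(truth))
```

Raising k trades sensitivity for specificity, the same direction as the abstract's single-positive vs. three-positive figures.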
Jeon, Tae Joo; Park, Ji Young
To investigate the prognostic value of the neutrophil-lymphocyte ratio (NLR) in patients with acute pancreatitis and determine an optimal cut-off value for the prediction of adverse outcomes in these patients, we retrospectively analyzed 490 patients with acute pancreatitis diagnosed between March 2007 and December 2012. NLRs were calculated at admission and 24, 48, and 72 h after admission. Patients were grouped according to acute pancreatitis severity and organ failure occurrence, and a comparative analysis was performed to compare the NLR between groups. Among the 490 patients, 70 had severe acute pancreatitis, with 31 experiencing organ failure. The severe acute pancreatitis group had a significantly higher NLR than the mild acute pancreatitis group on all 4 days (median, 6.14, 6.71, 5.70, and 4.00 vs 4.74, 4.47, 3.20, and 3.30, respectively, P < 0.05). Elevated baseline NLR correlates with severe acute pancreatitis and organ failure.
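Determining an optimal cut-off, as this study does for the NLR, is commonly done by maximising Youden's J over candidate thresholds. A sketch with invented NLR values (not the study's data):

```python
def youden_cutoff(scores, labels):
    """Return the threshold maximising Youden's J = sensitivity + specificity - 1,
    classifying score >= threshold as positive."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    best_t, best_j = None, float("-inf")
    for t in sorted(set(scores)):
        sens = sum(s >= t for s in pos) / len(pos)
        spec = sum(s < t for s in neg) / len(neg)
        if sens + spec - 1 > best_j:
            best_j, best_t = sens + spec - 1, t
    return best_t, best_j

# Invented NLRs: mild cases cluster low, severe cases cluster high.
nlr    = [2.0, 3.0, 3.5, 4.0, 5.0, 6.0, 7.0, 8.0]
severe = [0,   0,   0,   0,   1,   1,   1,   1]
```

On these perfectly separated toy values the optimal threshold sits at the lowest severe-case NLR.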
Hars, Mélany; Audet, Marie-Claude; Herrmann, François; De Chassey, Jean; Rizzoli, René; Reny, Jean-Luc; Gold, Gabriel; Ferrari, Serge; Trombetti, Andrea
Falls are common among older inpatients and remain a great challenge for hospitals. Despite the relevance of physical impairments to falls, the prognostic value of performance-based functional measures for in-hospital falls and injurious falls remains unknown. This study aimed to determine the predictive ability and accuracy of various functional tests administered at or close to admission in a geriatric hospital to identify in-hospital fallers and injurious fallers. In this prospective study, conducted in a geriatric hospital in Geneva, Switzerland, 807 inpatients (mean age 85.0 years) were subjected to a battery of functional tests administered by physiotherapists within 3 days (interquartile range 1 to 6) of admission, including the Short Physical Performance Battery (SPPB), simplified Tinetti, and Timed Up and Go tests. Patients were prospectively followed up for falls and injurious falls until discharge using mandatory standardized incident report forms and electronic patients' records. During a median length of hospital stay of 23 days (interquartile range 14 to 36), 329 falls occurred in 189 (23.4%) patients, including 161 injurious falls of which 24 were serious. In-hospital fallers displayed significantly poorer functional performances at admission on all tests compared with non-fallers (p < 0.05). Poorer performances on all functional tests predicted in-hospital falls and injurious falls (p < 0.05). Poor functional performances, as assessed by the SPPB, are independent predictors of in-hospital falls, injurious falls, and fractures in patients admitted to a geriatric hospital. These findings should help to design preventive strategies for in-hospital falls and support the adoption of objective performance-based functional measures into routine hospital practice. © 2018 American Society for Bone and Mineral Research.
Pei-Fang (Jennifer) Tsai
For hospitals' admission management, the ability to predict length of stay (LOS) as early as the preadmission stage might be helpful for monitoring the quality of inpatient care. This study develops artificial neural network (ANN) models to predict LOS for inpatients with one of three primary diagnoses: coronary atherosclerosis (CAS), heart failure (HF), and acute myocardial infarction (AMI), in a cardiovascular unit in a Christian hospital in Taipei, Taiwan. A total of 2,377 cardiology patients discharged between October 1, 2010, and December 31, 2011, were analyzed. Using an ANN or linear regression model, LOS was predicted correctly for 88.07% to 89.95% of CAS patients at the predischarge stage and for 88.31% to 91.53% at the preadmission stage. For AMI or HF patients, the accuracy ranged from 64.12% to 66.78% at the predischarge stage and 63.69% to 67.47% at the preadmission stage when a tolerance of 2 days was allowed.
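The "correct within a 2-day tolerance" accuracy used above can be computed as follows; the predicted and actual stays in the example are made up for illustration.

```python
def tolerance_accuracy(predicted_los, actual_los, tolerance_days=2):
    """Share of admissions whose predicted length of stay falls within
    +/- tolerance_days of the actual length of stay."""
    hits = sum(abs(p - a) <= tolerance_days
               for p, a in zip(predicted_los, actual_los))
    return hits / len(actual_los)
```

This metric rewards any model (ANN or linear regression) whose error stays inside the clinical tolerance, rather than penalising squared error.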
Williams, Wright; Kunik, Mark E; Springer, Justin; Graham, David P
To examine which personality traits are associated with the new onset of chronic coronary heart disease (CHD) in psychiatric inpatients within 16 years after their initial evaluation. We theorized that personality measures of depression, anxiety, hostility, social isolation, and substance abuse would predict CHD development in psychiatric inpatients. We used a longitudinal database of psychological test data from 349 Veterans first admitted to a psychiatric unit between October 1, 1983, and September 30, 1987. Veterans Affairs and national databases were assessed to determine the development of new-onset chronic CHD over the intervening 16-year period. New-onset CHD developed in 154 of the 349 (44.1%) subjects. Thirty-one psychometric variables from five personality tests significantly predicted the development of CHD. We performed a factor analysis of these variables because they overlapped and four factors emerged, with positive adaptive functioning the only significant factor (OR=0.798, p=0.038). These results support previous research linking personality traits to the development of CHD, extending this association to a population of psychiatric inpatients. Compilation of these personality measures showed that 31 overlapping psychometric variables predicted those Veterans who developed a diagnosis of heart disease within 16 years after their initial psychiatric hospitalization. Our results suggest that personality variables measuring positive adaptive functioning are associated with a reduced risk of developing chronic CHD.
Matthews, Edwin J; Frid, Anna A
This is the first of two reports that describe the compilation of a database of drug-related cardiac adverse effects (AEs) that was used to construct quantitative structure-activity relationship (QSAR) models to predict these AEs, to identify properties of pharmaceuticals correlated with the AEs, and to identify plausible mechanisms of action (MOAs) causing the AEs. This database of 396,985 cardiac AE reports was linked to 1632 approved drugs and their chemical structures, 1851 clinical indications (CIs), 997 therapeutic targets (TTs), 432 pharmacological MOAs, and 21,180 affinity coefficients (ACs) for the MOA receptors. AEs were obtained from the Food and Drug Administration's (FDA's) Spontaneous Reporting System (SRS) and Adverse Event Reporting System (AERS) and publicly available medical literature. Drug TTs were obtained from Integrity; drug MOAs and ACs were predicted by BioEpisteme. Significant cardiac AEs and patient exposures were estimated based on the proportional reporting ratios (PRRs) for each drug and each AE endpoint as a percentage of the total AEs. Cardiac AE endpoints were bundled based on toxicological mechanism and concordance of drug-related findings. Results revealed that significant cardiac AEs formed 9 clusters affecting Purkinje nerve fibers (arrhythmia, bradycardia, conduction disorder, electrocardiogram, palpitations, QT prolongation, rate rhythm composite, tachycardia, and Torsades de pointes) and 5 clusters affecting the heart muscle (coronary artery disorders, heart failure, myocardial disorders, myocardial infarction, and valve disorders). Based on the observation that each drug had one TT and up to 9 off-target MOAs, cardiac AEs were highly correlated with drugs affecting cardiovascular and cardioneurological functions and certain MOAs (e.g., alpha- and beta-adrenergic, dopamine, and hydroxytryptamine receptors). Copyright 2010. Published by Elsevier Inc.
Nilsen, Wendy; Skipstein, Anni; Demerouti, Evangelia
The long-term consequence of experiencing mental health problems may lead to several adverse outcomes. The current study aims to validate previous identified trajectories of mental health problems from 1993 to 2006 in women by examining their implications on subsequent work and family-related outcomes in 2011. Employed women (n = 439) with children were drawn from the Tracking Opportunities and Problems-Study (TOPP), a community-based longitudinal study following Norwegian families across 18 years. Previous identified latent profiles of mental health trajectories (i.e., High; Moderate; Low-rising and Low levels of mental health problems over time) measured at six time points between 1993 and 2006 were examined as predictors of burnout (e.g., exhaustion and disengagement from work) and work-family conflict in 2011 in univariate and multivariate analyses of variance adjusted for potential confounders (age, job demands, and negative emotionality). We found that having consistently High and Moderate symptoms as well as Low-Rising symptoms from 1993 to 2006 predicted higher levels of exhaustion, disengagement from work and work-family conflict in 2011. Findings remained unchanged when adjusting for several potential confounders, but when adjusting for current mental health problems only levels of exhaustion were predicted by the mental health trajectories. The study expands upon previous studies on the field by using a longer time span and by focusing on employed women with children who experience different patterns of mental health trajectories. The long-term effect of these trajectories highlight and validate the importance of early identification and prevention in women experiencing adverse patterns of mental health problems with regards to subsequent work and family-related outcomes.
Jochems, Arthur; Deist, Timo M; van Soest, Johan; Eble, Michael; Bulens, Paul; Coucke, Philippe; Dries, Wim; Lambin, Philippe; Dekker, Andre
One of the major hurdles in enabling personalized medicine is obtaining sufficient patient data to feed into predictive models. Combining data originating from multiple hospitals is difficult because of ethical, legal, political, and administrative barriers associated with data sharing. In order to avoid these issues, a distributed learning approach can be used. Distributed learning is defined as learning from data without the data leaving the hospital. Clinical data from 287 lung cancer patients, treated with curative intent with chemoradiation (CRT) or radiotherapy (RT) alone were collected from and stored in 5 different medical institutes (123 patients at MAASTRO (Netherlands, Dutch), 24 at Jessa (Belgium, Dutch), 34 at Liege (Belgium, Dutch and French), 48 at Aachen (Germany, German) and 58 at Eindhoven (Netherlands, Dutch)). A Bayesian network model is adapted for distributed learning (watch the animation: http://youtu.be/nQpqMIuHyOk). The model predicts dyspnea, which is a common side effect after radiotherapy treatment of lung cancer. We show that it is possible to use the distributed learning approach to train a Bayesian network model on patient data originating from multiple hospitals without these data leaving the individual hospital. The AUC of the model is 0.61 (95%CI, 0.51-0.70) on a 5-fold cross-validation and ranges from 0.59 to 0.71 on external validation sets. Distributed learning can allow the learning of predictive models on data originating from multiple hospitals while avoiding many of the data sharing barriers. Furthermore, the distributed learning approach can be used to extract and employ knowledge from routine patient data from multiple hospitals while being compliant to the various national and European privacy laws. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd.. All rights reserved.
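The study trains a Bayesian network; the privacy mechanics of distributed learning can be illustrated with a simpler learner. The following is a minimal sketch assuming a plain logistic model fit by averaging per-site gradients, not the paper's Bayesian-network method, and all site data below are invented:

```python
import math

def local_gradient(w, X, y):
    """Logistic-loss gradient computed inside one hospital; only this short
    vector, never the patient-level data (X, y), leaves the institution."""
    g = [0.0] * len(w)
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
        for j, xj in enumerate(xi):
            g[j] += (p - yi) * xj
    return [gj / len(y) for gj in g]

def federated_fit(sites, n_features, rounds=300, lr=0.5):
    """A coordinator averages per-hospital gradients (weighted by site size)
    and updates one shared model: learning without pooling the records."""
    w = [0.0] * n_features
    total = sum(len(y) for _, y in sites)
    for _ in range(rounds):
        grads = [(len(y) / total, local_gradient(w, X, y)) for X, y in sites]
        for j in range(n_features):
            w[j] -= lr * sum(mix * g[j] for mix, g in grads)
    return w

# Two hypothetical hospitals; first feature is an intercept column.
site_a = ([[1, 0], [1, 1], [1, 2], [1, 3]], [0, 0, 1, 1])
site_b = ([[1, 0.5], [1, 2.5]], [0, 1])
w = federated_fit([site_a, site_b], 2)
```

The coordinator never sees a patient record, only gradient vectors, which is the property that sidesteps the data-sharing barriers the abstract describes.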
Sadeghian, Afsaneh; Damghanian, Maryam; Shariati, Mohammad
The present study determined the overall incidence, common causes, and main predictors of neonatal seizures among neonates admitted to a rural district hospital in Iran. The study was conducted on 699 neonates who were candidates for admission to the NICU. The study population was categorized into a case group, comprising patients with a final diagnosis of neonatal seizures, and a control group without this diagnosis. Neonatal seizure was reported as the final diagnosis in 25 (3.6%) neonates. The most frequent discharge diagnosis in the seizure group was neonatal sepsis; in the non-seizure group it was respiratory problems. No significant difference was found in the early fatality rate between neonates with and without seizures (8.0% vs. 10.1%). Only gestational age <38 weeks was associated with the appearance of neonatal seizure. Low gestational age plays a crucial role in predicting seizures in Iranian neonates.
Adel Alinezhad Kolaei
Introduction: APACHE (Acute Physiologic and Chronic Health Evaluation) is a medical scoring tool designed to measure the severity of disease for adult patients admitted to intensive care units (ICUs). However, it was designed on the basis of American patients' data and is not well suited for Iranian patients. In addition, Iranian hospitals are not equipped with the high-dependency units required by the original APACHE. Method: We aimed to design an intelligent version of the APACHE system for predicting patients' hospitalization period in ICUs. The new system can be built from Iranian local data and updated locally. Intelligence here means that the system learns from its previous results and does not need manual updates. Results: The new system is introduced and its technical specifications are presented. It is based on neural networks, can be trained, and is capable of auto-learning. The results obtained from the final implemented software show better performance than the non-local version. Conclusion: Using this method, the efficiency of the prediction increased from 80% to 90%. These results were compared with the APACHE outputs to show the superiority of the proposed method.
Verbeke, Frank; Karara, Gustave; Nyssen, Marc
From 2007 through 2014, the authors participated in the implementation of open-source hospital information systems (HIS) in 19 hospitals in Rwanda, Burundi, DR Congo, Congo-Brazzaville, Gabon, and Mali. Most of these implementations were successful, but some failed. At the end of a seven-year implementation effort, a number of risk factors, facilitators, and pragmatic approaches related to the deployment of HIS in Sub-Saharan health facilities were identified. Many of the problems encountered during the HIS implementation process were related not to technical issues but to human, cultural, and environmental factors. This study retrospectively evaluates the predictive value of 14 project failure factors and 15 success factors in HIS implementation in the Sub-Saharan region. Nine of the failure factors were strongly correlated with project failure, three moderately, and one weakly. Regression analysis likewise confirms that eight factors were strongly correlated with project success, four moderately, and two weakly. The study results may help assess the viability of future HIS projects.
Yajnik, Chittaranjan S; Katre, Prachi A; Joshi, Suyog M; Kumaran, Kalyanaraman; Bhat, Dattatray S; Lubree, Himangi G; Memane, Nilam; Kinare, Arun S; Pandit, Anand N; Bhave, Sheila A; Bavdekar, Ashish; Fall, Caroline H D
The Pune Children's Study aimed to test whether glucose and insulin measurements in childhood predict cardiovascular risk factors in young adulthood. We followed up 357 participants (75% follow-up) at 21 years of age who had undergone detailed measurements at 8 years of age (glucose, insulin, HOMA-IR and other indices). Oral glucose tolerance, anthropometry, plasma lipids, BP, carotid intima-media thickness (IMT) and arterial pulse wave velocity (PWV) were measured at 21 years. Higher fasting glucose, insulin and HOMA-IR at 8 years predicted higher glucose, insulin, HOMA-IR, BP, lipids and IMT at 21 years. A 1 SD change in the 8-year variables was associated with a 0.10-0.27 SD change at 21 years, independently of obesity/adiposity at 8 years of age. A greater rise in glucose-insulin variables between 8 and 21 years was associated with higher cardiovascular risk factors, including PWV. Participants whose HOMA-IR measurement remained in the highest quartile (n = 31) had a more adverse cardiovascular risk profile than those whose HOMA-IR measurement remained in the lowest quartile (n = 28). Prepubertal glucose-insulin metabolism is associated with adult cardiovascular risk and markers of atherosclerosis. Our results support interventions to improve glucose-insulin metabolism in childhood to reduce cardiovascular risk in later life.
The Adverse Outcome Pathway (AOP) framework is becoming a widely used tool for organizing and summarizing the mechanistic information connecting molecular perturbations by environmental stressors with adverse ecological and human health outcomes. However, the conventional process...
Kim, Hyosun; Jo, Sion; Lee, Jae Baek; Jin, Youngho; Jeong, Taeoh; Yoon, Jaechol; Lee, Jeong Moon; Park, Boyoung
The predictive value of serum albumin in adult aspiration pneumonia patients remains unknown. Using data collected during a 3-year retrospective cohort of hospitalized adult patients with aspiration pneumonia, we evaluated the predictive value of the serum albumin level at ED presentation for in-hospital mortality. 248 patients were enrolled; of these, 51 died (20.6%). The mean serum albumin level was 3.4±0.7 g/dL, and serum albumin levels were significantly lower in the non-survivor group than in the survivor group (3.0±0.6 g/dL vs. 3.5±0.6 g/dL). In the multivariable logistic regression model, albumin was significantly associated with in-hospital mortality (adjusted odds ratio 0.30, 95% confidence interval (CI) 0.16-0.57). The area under the receiver operating characteristic curve (AUROC) for in-hospital survival was 0.72 (95% CI 0.64-0.80). The Youden index cut-off was 3.2 g/dL, with corresponding sensitivity, specificity, positive predictive value, negative predictive value, and positive and negative likelihood ratios of 68.6%, 66.5%, 34.7%, 89.1%, 2.05, and 0.47, respectively. High sensitivity (98.0%) was shown at an albumin level of 4.0 g/dL and high specificity (94.9%) at a level of 2.5 g/dL. Initial serum albumin levels were independently associated with in-hospital mortality among adult patients hospitalized with aspiration pneumonia and demonstrated fair discriminative performance in the prediction of in-hospital mortality. Copyright © 2017 Elsevier Inc. All rights reserved.
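The optimal cut-off reported above is the threshold maximizing Youden's J = sensitivity + specificity − 1 along the ROC curve. A minimal sketch of that selection, on toy values rather than the study data (lower albumin is treated as test-positive for death):

```python
def youden_cutoff(values, labels):
    """Return the threshold and Youden's J maximizing sens + spec - 1.
    labels: 1 = event (death); a value <= threshold counts as positive."""
    best_t, best_j = None, -1.0
    for t in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if y == 1 and v <= t)
        fn = sum(1 for v, y in zip(values, labels) if y == 1 and v > t)
        tn = sum(1 for v, y in zip(values, labels) if y == 0 and v > t)
        fp = sum(1 for v, y in zip(values, labels) if y == 0 and v <= t)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```

Sweeping every observed value as a candidate threshold and keeping the one with the largest J is exactly how a single "optimal" operating point is read off a ROC analysis.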
Tcheng, James E; Lim, Ing Haan; Srinivasan, Shankar; Jozic, Joseph; Gibson, C Michael; O'Shea, J Conor; Puma, Joseph A; Simon, Daniel I
Only limited data describe the relationships between stent parameters (length and diameter), adverse events after percutaneous coronary intervention, and the effects of platelet glycoprotein IIb/IIIa blockade by stent parameters. In this post hoc analysis of the 1983 patients receiving a stent in the Enhanced Suppression of the Platelet Glycoprotein IIb/IIIa Receptor with Integrilin Therapy randomized percutaneous coronary intervention trial of eptifibatide versus placebo, rates of the major adverse cardiac event (MACE) end point (death, myocardial infarction, urgent target-vessel revascularization, or thrombotic bailout) at 48 hours and 1 year were correlated with stent parameters and then analyzed by randomization to eptifibatide versus placebo. In the placebo group, MACE increased with the number of stents implanted, total stent length (by quartiles, up to ≥30 mm), and total stented vessel area (by quartiles, up to ≥292 mm²). By stent parameters, MACE at 48 hours was reduced in the eptifibatide group at stent lengths of 18 to ≥30 mm (OR, 0.43; 95% CI, 0.25 to 0.75; P=0.003), stent diameters of >2.5 to <3.5 mm (OR, 0.56; 95% CI, 0.39 to 0.82; P=0.002), and with 2 stents implanted (OR, 0.39; 95% CI, 0.22 to 0.69; P=0.001). In the placebo group, near-linear relationships were observed between both increasing stent length and increasing stented vessel area and MACE at 48 hours and 1 year (all P<0.001); these gradients were flattened in the eptifibatide group (P=0.005 for stent length). Stent parameters predict MACE after percutaneous coronary intervention. Glycoprotein IIb/IIIa blockade mitigates much of the hazard of increasing procedural complexity.
Cristina K. Weber
OBJECTIVES: Proper assessment of dyspnea is important in patients with heart failure. Our aim was to evaluate the use of the 5-point Likert scale for dyspnea to assess the degree of pulmonary congestion and to determine the prognostic value of this scale for predicting adverse events in heart failure outpatients. METHODS: We undertook a prospective study of outpatients with moderate to severe heart failure. The 5-point Likert scale was applied during regular outpatient visits, along with clinical assessments. Lung ultrasound with ≥15 B-lines and an amino-terminal portion of pro-B-type natriuretic peptide (NT-proBNP) level >1000 pg/mL were used as references for pulmonary congestion. The patients were then assessed every 30 days during follow-up to identify adverse clinical outcomes. RESULTS: We included 58 patients (65.5% male, age 43.5±11 years) with a mean left ventricular ejection fraction of 27±6%. In total, 29.3% of these patients had heart failure with ischemic etiology. Additionally, pulmonary congestion, as diagnosed by lung ultrasound, was present in 58% of patients. A higher degree of dyspnea (3 or 4 points on the 5-point Likert scale) was significantly correlated with a higher number of B-lines (p = 0.016). Patients stratified into Likert class 3-4 were at increased risk of admission compared with those in class 1-2 after adjusting for age, left ventricular ejection fraction, New York Heart Association functional class and NT-proBNP levels >1000 pg/mL (HR = 4.9, 95% CI 1.33-18.64, p = 0.017). CONCLUSION: In our series, higher baseline scores on the 5-point Likert scale were related to pulmonary congestion and were independently associated with adverse events during follow-up. This simple clinical tool can help to identify patients who are more likely to decompensate and whose treatment should be intensified.
Khanna, Raman R; Kim, Sharon B; Jenkins, Ian; El-Kareh, Robert; Afsarmanesh, Nasim; Amin, Alpesh; Sand, Heather; Auerbach, Andrew; Chia, Catherine Y; Maynard, Gregory; Romano, Patrick S; White, Richard H
Hospital-acquired venous thromboembolic (HA-VTE) events are an important, preventable cause of morbidity and death, but accurately identifying HA-VTE events requires labor-intensive chart review. Administrative diagnosis codes and their associated "present-on-admission" (POA) indicator might allow automated identification of HA-VTE events, but only if VTE codes are accurately flagged "not present-on-admission" (POA=N). New codes were introduced in 2009 to improve accuracy. We identified all medical patients with at least 1 VTE "other" discharge diagnosis code from 5 academic medical centers over a 24-month period. We then sampled, within each center, patients with VTE codes flagged POA=N or POA=U (insufficient documentation) and POA=Y or POA=W (timing clinically uncertain) and abstracted each chart to clarify VTE timing. All events that were not clearly POA were classified as HA-VTE. We then calculated predictive values of the POA=N/U flags for HA-VTE and the POA=Y/W flags for non-HA-VTE. Among 2070 cases with at least 1 "other" VTE code, we found 339 codes flagged POA=N/U and 1941 flagged POA=Y/W. Among 275 POA=N/U abstracted codes, 75.6% (95% CI, 70.1%-80.6%) were HA-VTE; among 291 POA=Y/W abstracted events, 73.5% (95% CI, 68.0%-78.5%) were non-HA-VTE. Extrapolating from this sample, we estimated that 59% of actual HA-VTE codes were incorrectly flagged POA=Y/W. POA indicator predictive values did not improve after new codes were introduced in 2009. The predictive value of VTE events flagged POA=N/U for HA-VTE was 75%. However, sole reliance on this flag may substantially underestimate the incidence of HA-VTE.
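The predictive values above are simple proportions over the abstracted samples. As a sketch, the counts below are back-calculated from the reported percentages (208/275 and 214/291 are assumptions consistent with 75.6% and 73.5%, not figures stated in the abstract):

```python
def predictive_value(confirmed, flagged):
    """Fraction of flagged codes confirmed by chart review."""
    return confirmed / flagged

# Back-calculated counts (assumed): 208 of 275 abstracted POA=N/U codes
# confirmed as HA-VTE, and 214 of 291 POA=Y/W codes confirmed as non-HA-VTE.
ppv_nu = predictive_value(208, 275)  # ~0.756, matching the reported 75.6%
ppv_yw = predictive_value(214, 291)  # ~0.735, matching the reported 73.5%
```

The same two-line calculation underlies any "predictive value of a flag" claim: the numerator is the chart-confirmed subset and the denominator is everything the flag captured.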
The objective of this study was to determine whether organisational culture predicts the turnover intentions of professional nurses. A predictive model with organisational culture and various proposed mediating variables, namely knowledge sharing, organisational commitment, organisational citizenship behaviour and job satisfaction, as well as various demographic variables, was developed to determine turnover intentions through General Linear Modelling. A correlational design with questionnaires was used. A sample of professional nurses (N = 530) in private and provincial hospitals was obtained. The results indicate that organisational culture has a significantly negative correlation with turnover intentions. Organisational culture also interacted with job satisfaction, knowledge sharing, and the white professional nurses' category to decrease turnover intentions, and with organisational citizenship behaviours to increase turnover intentions, in the final predictive model. It is therefore recommended that nursing employers seriously embark on strategies to improve the organisational culture in order to retain their talent.
Wada, Tomoki; Yasunaga, Hideo; Yamana, Hayato; Matsui, Hiroki; Fushimi, Kiyohide; Morimura, Naoto
There was no established disability predictive measure for patients with trauma that could be used in administrative claims databases. The aim of the present study was to develop and validate a diagnosis-based disability predictive index for severe physical disability at discharge using International Classification of Diseases, 10th revision (ICD-10) coding. This retrospective observational study used the Diagnosis Procedure Combination database in Japan. Patients who were admitted to hospitals with trauma and discharged alive from 01 April 2010 to 31 March 2015 were included. Pediatric patients under 15 years old were excluded. Data for patients admitted from 01 April 2010 to 31 March 2013 were used for development of the disability predictive index (derivation cohort), while data for patients admitted from 01 April 2013 to 31 March 2015 were used for internal validation (validation cohort). The outcome of interest was severe physical disability, defined as a Barthel Index score below a prespecified cutoff at discharge; the disability predictive index for each patient was defined as the sum of the scores assigned to that patient's trauma diagnoses. The predictive performance of the index was validated using receiver operating characteristic curve analysis in the validation cohort. The derivation cohort included 1,475,158 patients, while the validation cohort included 939,659 patients. Of the 939,659 patients, 235,382 (25.0%) were discharged with severe physical disability. The c-statistic of the disability predictive index was 0.795 (95% confidence interval [CI] 0.794-0.795), while that of a model using the disability predictive index and patient baseline characteristics was 0.856 (95% CI 0.855-0.857). Severe physical disability at discharge may be well predicted from patient age, sex, Charlson Comorbidity Index (CCI) score, and the diagnosis-based disability predictive index in patients admitted to hospitals with trauma. Copyright © 2018 Elsevier Ltd. All rights reserved.
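The c-statistic reported above is the probability that a randomly chosen patient with the outcome receives a higher index score than a randomly chosen patient without it, with ties counted as half (equivalent to the AUROC). A minimal sketch of that standard pairwise definition, on toy scores:

```python
def c_statistic(scores, labels):
    """Probability that a random event case outscores a random non-event
    case, ties counted as half; equivalent to the area under the ROC curve."""
    events = [s for s, y in zip(scores, labels) if y == 1]
    nonevents = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if e > n else 0.5 if e == n else 0.0
               for e in events for n in nonevents)
    return wins / (len(events) * len(nonevents))
```

A value of 0.5 means the index discriminates no better than chance, while values near 0.8, as in the study, indicate good separation between disabled and non-disabled patients.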
Aparecida D. P. Souza
A Bayesian binary regression model is developed to predict death of patients after acute myocardial infarction (AMI). Markov chain Monte Carlo (MCMC) methods are used to make inference and to evaluate Bayesian binary regression models. A model-building strategy based on the Bayes factor is proposed, and aspects of model validation are extensively discussed in the paper, including the posterior distribution of the c-index and the analysis of residuals. Risk assessment based on variables easily available within minutes of the patient's arrival at the hospital is very important for deciding the course of treatment. The identified model proves strongly reliable and accurate, with a correct classification rate of 88% and a concordance index of 83%.
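The kind of MCMC inference described above can be sketched with a minimal random-walk Metropolis sampler for a one-predictor Bayesian logistic (binary) regression. The prior, proposal scale, and data shape below are illustrative assumptions, not the authors' model:

```python
import math
import random

def log_posterior(beta, data):
    """Log of normal(0, 10^2) prior times Bernoulli likelihood."""
    lp = -(beta[0] ** 2 + beta[1] ** 2) / (2 * 10.0 ** 2)
    for x, y in data:
        z = beta[0] + beta[1] * x
        p = 1.0 / (1.0 + math.exp(-z))
        p = min(max(p, 1e-12), 1 - 1e-12)  # guard against log(0)
        lp += y * math.log(p) + (1 - y) * math.log(1 - p)
    return lp

def metropolis(data, steps=4000, scale=0.3, seed=1):
    """Random-walk Metropolis over (intercept, slope)."""
    random.seed(seed)
    beta = [0.0, 0.0]
    lp = log_posterior(beta, data)
    samples = []
    for _ in range(steps):
        prop = [b + random.gauss(0, scale) for b in beta]
        lp_prop = log_posterior(prop, data)
        # Accept with probability min(1, posterior ratio).
        if math.log(random.random()) < lp_prop - lp:
            beta, lp = prop, lp_prop
        samples.append(beta[:])
    return samples
```

Discarding an initial burn-in and summarizing the remaining draws (posterior means, intervals) gives the kind of inference the paper performs, there extended with Bayes-factor model comparison and posterior checks of the c-index.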
Akter, N; Akkadechanunt, T; Chontawan, R; Klunklin, A
This study examined the level of quality of work life and the predictive power of years of education, monthly income, years of experience, job stress, organizational commitment and work environment on quality of work life among nurses in tertiary-level hospitals in the People's Republic of Bangladesh. There is an acute shortage of nurses worldwide, including in Bangladesh. Quality of work life is important for quality of patient care and nurse retention. Nurses in Bangladesh are striving to provide quality care for emerging health problems toward the achievement of the sustainable development goals. We collected data from 288 randomly selected registered nurses from six tertiary-level hospitals. All nurses were asked to complete a questionnaire consisting of a Demographic Data Sheet, the Quality of Nursing Work Life Survey, the Expanded Nursing Stress Scale, the Questionnaire of Organizational Commitment and the Practice Environment Scale of the Nursing Work Index. Data were analysed by descriptive statistics and multiple regression. The quality of work life as perceived by nurses in Bangladesh was at a moderate level. Monthly income was the best predictor, followed by work environment, organizational commitment and job stress. A higher monthly income helps nurses fulfil their personal needs; a positive work environment helps them provide quality care to patients. Because quality of work life and its predictors were measured by self-report only, the results may not reflect the full picture of quality of work life among nurses. The findings provide information for nursing and health policymakers to develop policies to improve quality of work life among nurses, which can contribute to quality of nursing care. This includes the working environment, commitment to the organization and measures to reduce job stress. © 2017 International Council of Nurses.
Chen, Shu-Huey; Yang, Shang-Hsien; Chu, Sung-Chao; Su, Yu-Chieh; Chang, Chu-Yu; Chiu, Ya-Wen; Kao, Ruey-Ho; Li, Dian-Kun; Yang, Kuo-Liang; Wang, Tso-Fu
Granulocyte colony-stimulating factor (G-CSF) is now widely used for stem cell mobilization. We evaluated the role of post-G-CSF white blood cell (WBC) counts and donor factors in predicting adverse events and yields associated with mobilization. WBC counts were determined at baseline and after the third and fifth doses of G-CSF in 476 healthy donors. Donors with WBC ≥ 50 × 10(3)/μL after the third dose of G-CSF experienced more fatigue, myalgia/arthralgia, and chills, but final post-G-CSF CD34(+) cell counts were similar. Although the final CD34(+) cell count was higher in donors with WBC ≥ 50 × 10(3)/μL after the fifth dose, the incidence of side effects was similar. Females more frequently experienced headache, nausea/anorexia, vomiting, and fever, and had lower final CD34(+) cell counts than males. Donors with body mass index (BMI) ≥ 25 showed higher incidences of sweating and insomnia as well as higher final CD34(+) cell counts. Donors receiving G-CSF ≥ 10 μg/kg tended to experience bone pain, headache and chills more frequently. Multivariate analysis indicated that female gender is an independent factor predictive of the occurrence of most side effects, except for ECOG > 1 and chills. Higher BMI was also an independent predictor of fatigue, myalgia/arthralgia, and sweating. A higher G-CSF dose was associated with bone pain, while the WBC count after the third dose was associated with fatigue only. In addition, one donor in the study period did not complete mobilization due to a suspected anaphylactoid reaction. Observation for 1 h after the first injection of G-CSF is required to prevent complications from unpredictable side effects.
Barker, Anna; Sethi, Ajay; Shulkin, Emily; Caniza, Rachell; Zerbel, Sara; Safdar, Nasia
We examined factors associated with the hand hygiene practices of hospital patients. Hand hygiene decreased in the hospital compared with at home, and home practices were strongly associated with hospital practices. Understanding and leveraging the intrinsic value some patients associate with hand hygiene may be important for improving overall hospital hygiene and decreasing healthcare-associated infections.
Ravi Chethan Kumar A. N
BACKGROUND: Acute exacerbation of chronic obstructive pulmonary disease (AECOPD), an all-too-common cause of hospital admissions worldwide, poses a logistical burden for treating physicians and hospital administrations with regard to morbidity and mortality. Identifying, upon admission, those at higher risk of dying in hospital could be useful for triaging patients to the appropriate level of care, determining the aggressiveness of therapies, and timing safe discharges. The aim of this study was to evaluate the use of the DECAF score in predicting in-hospital outcome in patients with AECOPD in a tertiary care hospital of southern India. MATERIALS AND METHODS: Patients admitted with COPD exacerbations to K.R. Hospital, Mysore Medical College and Research Institute, Mysuru between May 2017 and July 2017 were taken as study subjects. A total of 80 patients were included. The duration of hospital stay, ICU admission and deaths were noted. The DECAF score was applied to all study subjects, and the severity of AECOPD was graded at the time of admission. The data collected and compiled were then analysed for the correlation between the score, subsequent management and overall outcome. RESULTS: A total of 80 patients were recruited. The mean age was 66.47 years for males and 70.86 years for females. Length of hospital stay was longer in patients with a DECAF score of more than 3 (average hospital stay 10 days). Among patients with a DECAF score of 2, 70.4% required inhaled oxygen and the remaining 29.6% were managed with bronchodilators only, whereas among patients with a DECAF score of 5 (the maximum score in our study group) there was 100% initiation of assisted ventilation: 33.3% received NIV while 66.6% required endotracheal intubation with ventilator support. In the present study, 85 percent of patients survived. A total of 6 patients (7.5%) died, all belonging to the high-risk DECAF group (score 3 to 6).
Skovgaard, Marlene; Schønheyder, Henrik Carl; Benfield, Thomas
Little is known about the clinical presentation and outcome of pneumococcal lower respiratory tract infection (LRTI) without positive chest X-ray findings and blood cultures. We investigated the prognostic impact of a pulmonary infiltrate and bacteraemia on the clinical course of hospitalized...
Background. Malnutrition has serious implications for recovery after surgery. Early detection of malnutrition with nutritional support minimizes postoperative complications. Nutritional assessment tools need to be simple and suitable for use in everyday practice. In our study we wanted to determine how many patients might benefit from nutritional support. Methods. From April to August 1999, fifty consecutively admitted patients scheduled for major abdominal surgery were examined. We used the Mini Nutritional Assessment (MNA), Buzby's nutrition risk index (NRI), blood albumin level and weight loss in the 3 months prior to examination to assess nutritional status. Results. We examined 50 patients (27 males and 23 females; age 76.5 ± 16.5 years) and confirmed malnutrition in 40% of patients with the MNA and serum albumin level. An increased risk for nutrition-associated complications was confirmed by the NRI and weight loss in 44%. Conclusions. A confident diagnosis of malnutrition and increased risk for nutrition-associated complications can be established by using a combination of simple methods such as the MNA, NRI, weight loss and serum albumin level. Almost half of the patients admitted for major abdominal surgery at General Hospital Celje suffer from malnutrition, and they may benefit from early nutritional intervention.
Hicks, Caitlin W; Tosoian, Jeffrey J; Craig-Schapiro, Rebecca; Valero, Vicente; Cameron, John L; Eckhauser, Frederic E; Hirose, Kenzo; Makary, Martin A; Pawlik, Timothy M; Ahuja, Nita; Weiss, Matthew J; Wolfgang, Christopher L
The purpose of this study was to investigate the prognostic significance of early (30-day) hospital readmission (EHR) on mortality after pancreatectomy. Using a prospectively collected institutional database linked with a statewide dataset, we evaluated the association between EHR and overall mortality in all patients undergoing pancreatectomy at our tertiary institution (2005 to 2010). Of 595 pancreatectomy patients, EHR occurred in 21.5%. Overall mortality was 29.4% (median follow-up 22.7 months). Patients with EHR had decreased survival compared with those who were not readmitted (P = .011). On multivariate analysis adjusting for baseline group differences, EHR for gastrointestinal-related complications was a significant independent predictor of mortality (hazard ratio 2.30, P = .001). In addition to known risk factors, 30-day readmission for gastrointestinal-related complications following pancreatectomy independently predicts increased mortality. Additional studies are necessary to identify surgical, medical, and social factors contributing to EHR, as well as interventions aimed at decreasing postpancreatectomy morbidity and mortality. Copyright © 2015 Elsevier Inc. All rights reserved.
Soewondo, Pradana; Suyono, Slamet; Sastrosuwignyo, Mpu Kanoko; Harahap, Alida R; Sutrisna, Bambang; Makmun, Lukman H
Aim: to evaluate the role of clinical characteristics, functional markers of vasodilation, inflammatory response, and atherosclerosis in predicting wound healing in diabetic foot ulcer. Methods: a cohort study (February - October 2010) was conducted on 40 subjects with acute diabetic foot ulcer at the clinical ward of Dr. Cipto Mangunkusumo National Central General Hospital, Jakarta, Indonesia. Each subject underwent at least two variable measurements, i.e. during the inflammatory phase and the proliferation phase. The studied variables were clinical characteristics, complete peripheral blood count (CBC) and differential count, levels of HbA1c, ureum, creatinine, lipid profile, fasting blood glucose (FBG), markers of endothelial dysfunction (asymmetric dimethylarginine/ADMA, endothelin-1/ET-1, and flow-mediated dilation/FMD of the brachial artery), and a marker of vascular calcification (osteoprotegerin/OPG). Results: the median time to achieving 50% granulation tissue in our study was 21 days. Nine factors contributed to the development of 50% granulation tissue, i.e. family history of diabetes mellitus (DM), previous history of wound, wound area, duration of existing wound, captopril and simvastatin medications, and levels of ADMA, ET-1, and OPG. Three of the nine factors significantly correlated with wound healing, i.e. wound area, OPG levels, and simvastatin medication. Conclusion: in acute diabetic foot ulcers, wound area and OPG levels had a positive correlation with wound healing, whereas simvastatin medication had a negative correlation with wound healing.
Prevention of Hospital-Acquired Adverse Drug Reactions in Older People Using Screening Tool of Older Persons' Prescriptions and Screening Tool to Alert to Right Treatment Criteria: A Cluster Randomized Controlled Trial.
O'Connor, Marie N; O'Sullivan, David; Gallagher, Paul F; Eustace, Joseph; Byrne, Stephen; O'Mahony, Denis
To determine whether use of the Screening Tool of Older Persons' Prescriptions (STOPP) and Screening Tool to Alert to Right Treatment (START) criteria reduces incident hospital-acquired adverse drug reactions (ADRs), 28-day medication costs, and median length of hospital stay in older adults admitted with acute illness. Single-blind cluster randomized controlled trial (RCT) of unselected older adults hospitalized over a 13-month period. Tertiary referral hospital in southern Ireland. Consecutively admitted individuals aged 65 and older (N = 732). Single time point presentation to attending physicians of potentially inappropriate medications according to the STOPP/START criteria. The primary outcome was the proportion of participants experiencing one or more ADRs during the index hospitalization. Secondary outcomes were median length of stay (LOS) and 28-day total medication cost. One or more ADRs occurred in 78 of the 372 control participants (21.0%; median age 78, interquartile range (IQR) 72-84) and in 42 of the 360 intervention participants (11.7%; median age 80, IQR 73-85) (absolute risk reduction = 9.3%, number needed to treat = 11). The median LOS in the hospital was 8 days (IQR 4-14 days) in both groups. At discharge, median medication cost was significantly lower in the intervention group (€73.16, IQR €38.68-121.72) than in the control group (€90.62, IQR €49.38-162.53) (Wilcoxon rank test Z statistic = -3.274). The intervention thus reduced ADRs and medication costs in hospitalized older adults but did not affect median LOS. © 2016, Copyright the Authors. Journal compilation © 2016, The American Geriatrics Society.
Varan, Hacer Dogan; Bolayir, Basak; Kara, Ozgur; Arik, Gunes; Kizilarslanoglu, Muhammet Cemal; Kilic, Mustafa Kemal; Sumer, Fatih; Kuyumcu, Mehmet Emin; Yesil, Yusuf; Yavuz, Burcu Balam; Halil, Meltem; Cankurtaran, Mustafa
Phase angle (PhA) value determined by bioelectrical impedance analysis (BIA) is an indicator of cell membrane damage and body cell mass. Recent studies have shown that a low PhA value is associated with increased nutritional risk in various groups of patients. However, only a few studies globally have assessed the relationship between nutritional risk and PhA in hospitalized geriatric patients. The aim of the study was to evaluate the predictive value of the PhA for malnutrition risk in hospitalized geriatric patients. One hundred and twenty-two hospitalized geriatric patients were included in this cross-sectional study. Comprehensive geriatric assessment tests and BIA measurements were performed within the first 48 h after admission. The nutritional risk state of the patients was determined with the NRS-2002. Phase angle values of the patients with malnutrition risk were compared with those of the patients without such risk. The independent variables for predicting malnutrition risk were determined. SPSS version 15 was used for the statistical analyses. The patients with malnutrition risk had significantly lower phase angle values than the patients without malnutrition risk (p = 0.003). ROC curve analysis suggested that the optimum PhA cut-off point for malnutrition risk was 4.7° with 79.6% sensitivity, 64.6% specificity, 73.9% positive predictive value, and 73.9% negative predictive value. BMI, prealbumin, PhA, and Mini Mental State Examination Test scores were the independent variables for predicting malnutrition risk. PhA can be a useful, independent indicator for predicting malnutrition risk in hospitalized geriatric patients.
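Cut-off metrics of the kind reported above come from the 2×2 table at each candidate threshold; the optimum is commonly the point maximizing Youden's J (sensitivity + specificity − 1). A sketch of the underlying computation, using hypothetical counts rather than the study's patient data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV, and Youden's J from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return {"sens": sens, "spec": spec, "ppv": ppv, "npv": npv,
            "youden_j": sens + spec - 1}

# Hypothetical counts for illustration only (not the study's data):
# 43 true positives, 23 false positives, 11 false negatives, 42 true negatives
m = diagnostic_metrics(tp=43, fp=23, fn=11, tn=42)
```

In practice one would evaluate these metrics at every observed PhA value and keep the cut-off with the largest Youden's J.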
Background: The impact of sequential vein bypass grafting on clinical outcomes is less well known in off-pump coronary artery bypass grafting (CABG). We aimed to evaluate the effects of sequential vein bypass grafting on clinical outcomes in off-pump CABG. Methods: From October 2009 to September 2013 at the Fuwai Hospital, 127 patients with at least one sequential venous graft were matched with 127 patients with individual venous grafts only, using a propensity score matching method to obtain a risk-adjusted outcome comparison. The in-hospital measurement was a composite outcome of in-hospital death, myocardial infarction (MI), stroke, requirement for intra-aortic balloon pump (IABP) assistance, and prolonged ventilation. Major adverse cardiac events (MACEs: death, MI, or repeat revascularization) and angina recurrence were considered as mid-term endpoints. Results: No significant difference was observed between the groups in baseline characteristics. Intraoperative mean blood flow per vein graft was 40.4 ml in the individual venous grafts group versus 59.5 ml in the sequential venous grafts group (P < 0.001). There were no differences between the individual and sequential venous grafts groups with regard to the composite outcome of in-hospital mortality, MI, stroke, IABP assistance, and prolonged ventilation (11.0% vs. 14.2%, P = 0.45). Individual in-hospital measurements also did not differ significantly between the two groups. At about four years of follow-up, the survival estimates free from MACEs (92.5% vs. 97.3%, P = 0.36) and survival rates free of angina recurrence (80.9% vs. 85.5%, P = 0.48) were similar between the individual and sequential venous grafts groups, with a mean follow-up of 22.5 months. In the Cox regression analysis, sequential vein bypass grafting was not identified as an independent predictor of either MACEs or angina recurrence. Conclusions: Compared with individual vein bypass grafting, sequential vein bypass grafting was not associated with an increase in either in-hospital or mid-term adverse events.
[Interpersonal relationships: perception of the communication, treatment and adverse experiences encountered by users of medical units that belong to the Coordinating Commission of the National Institutes of Health and High Specialty Hospitals (CCINSHAE)].
Arroyo-Valerio, América Guadalupe Guadalupe; Cortés-Poza, David; Aguirre Hernández, Rebeca; Fuentes García, Ruth; Ramírez de la Roche, Omar Fernando; Hamui Sutton, Alicia
User's perception with regard to the attention they received in healthcare units is increasingly being taken into account by health service providers in order to improve the quality of their service. Objective: Describe how users perceive the health services provided by the CCINSHAE with regard to the communication with the physicians, the attention of the staff, and the adverse personal and institutional experiences, and explore their relation with users' demographic characteristics, health condition, physical limitations to carry out daily activities, and service area. A questionnaire was designed to collect information about each user and his or her opinion with regard to the healthcare units, the communication with the physicians, the attention of the staff, and the adverse personal and institutional experiences. The data were analyzed with STATA using sample weights. A total of 2,176 individuals were interviewed after they had received attention, representing a population of 1,457,964 users of the CCINSHAE over 6 months. We then calculated four binary variables that reflect the perception of the users. These four variables were significantly associated with the type of health unit where the user received attention, schooling, limitations to carry out daily activities, facilities provided to the relatives, family income, the use of alternative medicine, and the area of attention. A fundamental aspect of the service provided by the healthcare institutions is the communication between the physicians and the users. We found that the perception of the users with regard to the communication with the physician, the attention of the staff, and the adverse personal and institutional experiences was associated with the type of healthcare unit. The federal reference hospitals produced the most unfavorable perception while the regional hospitals produced the most favorable impression. This study enables decision-making personnel to determine what needs to be modified in the healthcare services in order to improve users' perception of them.
Leontyev, Sergey; Légaré, Jean-Francois; Borger, Michael A; Buth, Karen J; Funkat, Anne K; Gerhard, Jochann; Mohr, Friedrich W
This study evaluated preoperative predictors of in-hospital death for the surgical treatment of patients with acute type A aortic dissection (Type A) and created an easy-to-use scorecard to predict in-hospital death. We retrospectively reviewed all consecutive patients who underwent operations for acute Type A between 1996 and 2011 at 2 tertiary care institutions. A logistic regression model was created to identify independent preoperative predictors of in-hospital death. The results were used to create a scorecard predicting operative risk. Emergency operations were performed in 534 consecutive patients for acute Type A. Mean age was 61 ± 14 years and 36.3% were women. Critical preoperative state was present in 31% of patients and malperfusion of one or more end organs in 36%. Unadjusted in-hospital mortality was 18.7% and not significantly different between institutions. Independent predictors of in-hospital death were age 50 to 70 years (odds ratio [OR], 3.8; p = 0.001), age older than 70 years (OR, 2.8; p = 0.03), critical preoperative state (OR, 3.2), and malperfusion syndrome. A risk score based on these variables was created. The patients were stratified into four risk categories predicting in-hospital death: less than 10%, 10% to 25%, 25% to 50%, and more than 50%. This represents one of the largest series of patients with Type A in which a risk model was created. Using our approach, we have shown that age, critical preoperative state, and malperfusion syndrome were strong independent risk factors for early death and could be used for preoperative risk assessment. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Introduction: Our goal was to develop and validate an index to predict in-hospital mortality in older adults after non-traumatic emergency department (ED) intubations. Methods: We used Vizient administrative data from hospitalizations of 22,374 adults ≥75 years who underwent non-traumatic ED intubation from 2008-2015 at nearly 300 U.S. hospitals to develop and validate an index to predict in-hospital mortality. We randomly selected one half of participants for the development cohort and one half for the validation cohort. Considering 25 potential predictors, we developed a multivariable logistic regression model using the least absolute shrinkage and selection operator (LASSO) method to determine factors associated with in-hospital mortality. We calculated risk scores using points derived from the final model's beta coefficients. To evaluate calibration and discrimination of the final model, we used the Hosmer-Lemeshow chi-square test and receiver-operating characteristic analysis and compared mortality by risk groups in the development and validation cohorts. Results: Death during the index hospitalization occurred in 40% of cases. The final model included six variables: history of myocardial infarction, history of cerebrovascular disease, history of metastatic cancer, age, admission diagnosis of sepsis, and admission diagnosis of stroke/intracranial hemorrhage. Those with the highest risk scores (>10) had a 58% risk of in-hospital mortality. The Hosmer-Lemeshow chi-square of the model was 6.47 (p=0.09), and the c-statistic was 0.62 in the validation cohort. Conclusion: The model may be useful in identifying older adults at high risk of death after ED intubation.
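The "risk scores using points derived from the final model's beta coefficients" step described above is a standard scorecard construction: each coefficient is scaled against a reference coefficient and rounded to an integer. A sketch with entirely hypothetical coefficients (the variable names mirror the six predictors, but the values are illustrative only, not the study's):

```python
def betas_to_points(betas, base=None):
    """Convert logistic-regression coefficients to integer risk points by
    scaling against the smallest absolute coefficient (a common scorecard trick)."""
    if base is None:
        base = min(abs(b) for b in betas.values())
    return {name: round(b / base) for name, b in betas.items()}

def score(patient, points):
    """Sum the points for every risk factor the patient has."""
    return sum(p for name, p in points.items() if patient.get(name))

# Hypothetical coefficients, for illustration only
betas = {"metastatic_cancer": 1.2, "sepsis": 0.9, "stroke_ich": 0.8,
         "prior_mi": 0.4, "cerebrovascular_disease": 0.4, "age_85_plus": 0.7}
points = betas_to_points(betas)
risk_points = score({"sepsis": True, "age_85_plus": True}, points)
```

Patients are then binned into risk categories by total points, which is how cut-offs such as ">10" arise.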
Jerrett K Lau
Pulmonary embolism continues to be a significant cause of death. The aim was to derive and validate a risk prediction model for in-hospital death after acute pulmonary embolism, to identify low-risk patients suitable for outpatient management. A confirmed acute pulmonary embolism database of 1,426 consecutive patients admitted to a tertiary center (2000-2012) was analyzed, with odd and even years as derivation and validation cohorts, respectively. Risk stratification for in-hospital death was performed using multivariable logistic regression modelling. Models were compared using receiver-operating characteristic curve and decision curve analyses. In-hospital mortality was 3.6% in the derivation cohort (n = 693). Adding day-1 sodium and bicarbonate to the simplified Pulmonary Embolism Severity Index (sPESI) significantly increased the C-statistic for predicting in-hospital death (0.71 to 0.86, P = 0.001). The validation cohort yielded similar results (n = 733, C-statistic 0.85). The new model was associated with a net reclassification improvement of 0.613 and an integrated discrimination improvement of 0.067. The new model also increased the C-statistic for predicting 30-day mortality compared to sPESI alone (0.74 to 0.83, P = 0.002). Decision curve analysis demonstrated superior clinical benefit with the use of the new model to guide admission for pulmonary embolism, resulting in 43 fewer admissions per 100 presentations based on a risk threshold for admission of 2%. A risk model incorporating sodium, bicarbonate, and the sPESI provides accurate risk prediction of acute in-hospital mortality after pulmonary embolism. Our novel model identifies patients with pulmonary embolism who are at low risk and who may be suitable for outpatient management.
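The C-statistic compared throughout these abstracts is the probability that a randomly chosen patient who died received a higher predicted risk than a randomly chosen survivor. A small rank-based sketch of that definition (toy scores, not the study's data):

```python
def c_statistic(scores_events, scores_nonevents):
    """C-statistic (ROC AUC) via pairwise comparison: the probability that a
    randomly chosen event case scores higher than a non-event case; ties count half."""
    wins = 0.0
    for e in scores_events:
        for n in scores_nonevents:
            if e > n:
                wins += 1.0
            elif e == n:
                wins += 0.5
    return wins / (len(scores_events) * len(scores_nonevents))

# Toy predicted risks, for illustration only
auc = c_statistic([0.9, 0.3], [0.5, 0.1])
print(auc)  # 0.75
```

A model with no discrimination scores 0.5 on this measure; perfect separation scores 1.0.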
Linnen, Daniel T; Kornak, John; Stephens, Caroline
Evidence suggests an association between rurality and decreased life expectancy. To determine whether rural hospitals have higher hospital mortality, given that very sick patients may be transferred to regional hospitals. In this ecologic study, we combined Medicare hospital mortality ratings (N = 1267) with US census data, critical access hospital classification, and National Center for Health Statistics urban-rural county classifications. Ratings included mortality for coronary artery bypass grafting, stroke, chronic obstructive pulmonary disease, heart attack, heart failure, and pneumonia across 277 California hospitals between July 2011 and June 2014. We used generalized estimating equations to evaluate the association of urban-rural county classifications with mortality ratings. The outcome was an unfavorable Medicare hospital mortality rating ("worse than the national rate") compared with "better" or "same." Compared with large central metropolitan ("metro") counties, hospitals in medium-sized metro counties had 6.4 times the odds of rating "worse than the national rate" for hospital mortality (95% confidence interval = 2.8-14.8). Transfer of very sick patients to regional centers may contribute to these results, a potential factor that future research should examine.
Tran, Mark W; Weiland, Tracey J; Phillips, Georgina A
Psychosocial factors such as marital status (odds ratio, 3.52; 95% confidence interval, 1.43-8.69; P = .006) and nonclinical factors such as outpatient nonattendances (odds ratio, 2.52; 95% confidence interval, 1.22-5.23; P = .013) and referrals made (odds ratio, 1.20; 95% confidence interval, 1.06-1.35; P = .003) predict hospital utilization for patients in a chronic disease management program. Along with optimizing patients' clinical condition by prescribed medical guidelines and supporting patient self-management, addressing psychosocial and nonclinical issues are important in attempting to avoid hospital utilization for people with chronic illnesses.
Ghasemzadeh, Nima; Hritani, Abdul Wahab; De Staercke, Christine; Eapen, Danny J; Veledar, Emir; Al Kassem, Hatem; Khayata, Mohamed; Zafari, A Maziar; Sperling, Laurence; Hooper, Craig; Vaccarino, Viola; Mavromatis, Kreton; Quyyumi, Arshed A
Stromal derived factor-1α/CXCL12 is a chemoattractant responsible for homing of progenitor cells to ischemic tissues. We aimed to investigate the association of plasma CXCL12 with long-term cardiovascular outcomes in patients with coronary artery disease (CAD). 785 patients (aged 63 ± 12 years) undergoing coronary angiography were independently enrolled into discovery (N = 186) and replication (N = 599) cohorts. Baseline levels of plasma CXCL12 were measured using the Quantikine CXCL12 ELISA assay (R&D Systems). Patients were followed for cardiovascular death and/or myocardial infarction (MI) for a mean of 2.6 years. Cox proportional hazards regression was used to determine independent predictors of cardiovascular death/MI. The incidence of cardiovascular death/MI was 13% (N = 99). High CXCL12 level, based on the best discriminatory threshold derived from the ROC analysis, predicted risk of cardiovascular death/MI (HR = 4.81, p = 1 × 10^-6) independent of traditional risk factors in the pooled cohort. Addition of CXCL12 to a baseline model was associated with a significant improvement in the c-statistic (AUC: 0.67-0.73, p = 0.03). Addition of CXCL12 was associated with correct risk reclassification of 40% of events and 10.5% of non-events. Similarly, for the outcome of cardiovascular death, the addition of CXCL12 to the baseline model was associated with correct reclassification of 20.7% of events and 9% of non-events. These results were replicated in two independent cohorts. Plasma CXCL12 level is a strong independent predictor of adverse cardiovascular outcomes in patients with CAD and improves risk reclassification. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
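The reclassification percentages reported above feed into the categorical net reclassification improvement (NRI), which credits a new model for moving events up and non-events down in risk category. A sketch with toy data (the study's figures cannot be reproduced without patient-level data):

```python
def nri(old_cat_events, new_cat_events, old_cat_nonevents, new_cat_nonevents):
    """Categorical net reclassification improvement: net proportion of events
    moving up in risk category minus net proportion of non-events moving up."""
    def net_up(old, new):
        up = sum(n > o for o, n in zip(old, new))      # moved to a higher category
        down = sum(n < o for o, n in zip(old, new))    # moved to a lower category
        return (up - down) / len(old)
    return net_up(old_cat_events, new_cat_events) - net_up(old_cat_nonevents, new_cat_nonevents)

# Toy risk categories (0 = low, 1 = intermediate, 2 = high), illustration only
events_old, events_new = [0, 1, 1, 2], [1, 2, 1, 2]  # 2 of 4 events move up
non_old, non_new = [1, 1, 0, 2], [0, 1, 0, 1]        # 2 of 4 non-events move down
print(nri(events_old, events_new, non_old, non_new))  # 1.0
```

NRI ranges from -2 to 2; here both components contribute +0.5, giving 1.0.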
Oud, Frederike M M; de Rooij, Sophia E J A; Schuurman, Truus; Duijvelaar, Karlijn M; van Munster, Barbara C
To determine the predictive value of the safety management system (VMS) screening questions for falling, delirium, and mortality, as set out in the VMS theme 'Frail elderly'. Retrospective observational study. We selected all patients ≥ 70 years who were admitted to non-ICU wards at the Deventer Hospital, the Netherlands, for at least 24 hours between 28 March 2011 and 10 June 2011. On admission, patients were screened with the VMS instrument by a researcher. Delirium and falls were recorded during hospitalisation. Six months after hospitalisation, data on mortality were collected. We included 688 patients with a median age of 78.7 (range: 70.0-97.1); 50.7% were male. The sensitivity of the screening for delirium risk was 82%, the specificity 62%. The sensitivity of the screening for risk of falling was 63%, the specificity 65%. Independent predictors of mortality within 6 months were delirium risk (odds ratio (OR): 2.3; 95% CI 1.1-3.2), malnutrition (OR: 2.1; 95% CI 1.3-3.5), admission to a non-surgical ward (OR: 3.0; 95% CI 1.8-5.1), and older age (OR: 1.1; 95% CI 1.0-1.1). Patients classified by the VMS theme 'Frail elderly' as having more risk factors had a higher risk of dying. The VMS is a suitable instrument for identifying elderly people with a high risk of developing delirium; its sensitivity for fall risk is moderate. The number of positive VMS risk factors correlates with mortality and may therefore be regarded as a measure of frailty.
LaBute, Montiago X; Zhang, Xiaohua; Lenderman, Jason; Bennion, Brian J; Wong, Sergio E; Lightstone, Felice C
Late-stage or post-market identification of adverse drug reactions (ADRs) is a significant public health issue and a source of major economic liability for drug development. Thus, reliable in silico screening of drug candidates for possible ADRs would be advantageous. In this work, we introduce a computational approach that predicts ADRs by combining the results of molecular docking and leverages known ADR information from DrugBank and SIDER. We employed a recently parallelized version of AutoDock Vina (VinaLC) to dock 906 small molecule drugs to a virtual panel of 409 DrugBank protein targets. L1-regularized logistic regression models were trained on the resulting docking scores of a 560 compound subset from the initial 906 compounds to predict 85 side effects, grouped into 10 ADR phenotype groups. Only 21% (87 out of 409) of the drug-protein binding features involve known targets of the drug subset, providing a significant probe of off-target effects. As a control, associations of this drug subset with the 555 annotated targets of these compounds, as reported in DrugBank, were used as features to train a separate group of models. The Vina off-target models and the DrugBank on-target models yielded comparable median area-under-the-receiver-operating-characteristic-curves (AUCs) during 10-fold cross-validation (0.60-0.69 and 0.61-0.74, respectively). Evidence was found in the PubMed literature to support several putative ADR-protein associations identified by our analysis. Among them, several associations between neoplasm-related ADRs and known tumor suppressor and tumor invasiveness marker proteins were found. A dual role for interstitial collagenase in both neoplasms and aneurysm formation was also identified. These associations all involve off-target proteins and could not have been found using available drug/on-target interaction data. This study illustrates a path forward to comprehensive ADR virtual screening that can potentially scale with an increasing number of compounds and protein targets.
Mattos Luiz Alberto
OBJECTIVE: To verify the results of primary coronary angioplasty performed in Brazil over the last 4 years. METHODS: During the first 24 hours of acute myocardial infarction onset, 9,434 (12.2%) patients underwent primary PTCA. We analyzed the success and occurrence of major in-hospital events, comparing them over the 4-year period. RESULTS: Primary PTCA use increased compared with that of all percutaneous interventions (1996=10.6% vs. 2000=13.1%; p<0.001). Coronary stent implantation increased (1996=20% vs. 2000=71.9%; p<0.001). Success was greater (1998=89.5% vs. 1999=92.5%; p<0.001). Reinfarction decreased (1998=3.9% vs. 1999=2.4% vs. 2000=1.5%; p<0.001), as did emergency bypass surgery (1996=0.5% vs. 2000=0.2%; p=0.01). In-hospital deaths remained unchanged (1996=5.7% vs. 2000=5.1%, p=0.53). Balloon PTCA was one of the independent predictors of a higher rate of unsuccessful procedures (odds ratio 12.01 [95% CI 1.58-22.94]), and stent implantation of lower mortality rates (odds ratio 4.62 [95% CI 3.19-6.08]). CONCLUSION: The success rate has become progressively higher with a significant reduction in reinfarction and urgent bypass surgery, but in-hospital death remains nearly unchanged. Coronary stenting was a predictor of a lower death rate, and balloon PTCA was associated with greater procedural failure.
Does Hospitalization Predict the Disease Course in Ulcerative Colitis? Prevalence and Predictors of Hospitalization and Re-Hospitalization in Ulcerative Colitis in a Population-based Inception Cohort (2000-2012).
Golovics, Petra A; Lakatos, Laszlo; Mandel, Michael D; Lovasz, Barbara D; Vegh, Zsuzsanna; Kurti, Zsuzsanna; Szita, Istvan; Kiss, Lajos S; Balogh, Mihaly; Pandur, Tunde; Lakatos, Peter L
Limited data are available on the hospitalization rates in population-based studies. Since this is a very important outcome measure, the aim of this study was to analyze prospectively if early hospitalization is associated with the later disease course as well as to determine the prevalence and predictors of hospitalization and re-hospitalization in the population-based ulcerative colitis (UC) inception cohort in the Veszprem province database between 2000 and 2012. Data of 347 incident UC patients diagnosed between January 1, 2000 and December 31, 2010 were analyzed (M/F: 200/147, median age at diagnosis: 36, IQR: 26-50 years, follow-up duration: 7, IQR 4-10 years). Both in- and outpatient records were collected and comprehensively reviewed. Probabilities of first UC-related hospitalization were 28.6%, 53.7% and 66.2% and of first re-hospitalization were 23.7%, 55.8% and 74.6% after 1-, 5- and 10- years of follow-up, respectively. Main UC-related causes for first hospitalization were diagnostic procedures (26.7%), disease activity (22.4%) or UC-related surgery (4.8%), but a significant percentage was unrelated to IBD (44.8%). In Kaplan-Meier and Cox-regression analysis disease extent at diagnosis (HR extensive: 1.79, p=0.02) or at last follow-up (HR: 1.56, p=0.001), need for steroids (HR: 1.98, p<0.001), azathioprine (HR: 1.55, p=0.038) and anti-TNF (HR: 2.28, p<0.001) were associated with the risk of UC-related hospitalization. Early hospitalization was not associated with a specific disease phenotype or outcome; however, 46.2% of all colectomies were performed in the year of diagnosis. Hospitalization and re-hospitalization rates were relatively high in this population-based UC cohort. Early hospitalization was not predictive for the later disease course.
Ritter, Anne C; Wagner, Amy K; Szaflarski, Jerzy P; Brooks, Maria M; Zafonte, Ross D; Pugh, Mary Jo V; Fabio, Anthony; Hammond, Flora M; Dreer, Laura E; Bushnik, Tamara; Walker, William C; Brown, Allen W; Johnson-Greene, Doug; Shea, Timothy; Krellman, Jason W; Rosenthal, Joseph A
Posttraumatic seizures (PTS) are well-recognized acute and chronic complications of traumatic brain injury (TBI). Risk factors have been identified, but considerable variability in who develops PTS remains. Existing PTS prognostic models are not widely adopted for clinical use and do not reflect current trends in injury, diagnosis, or care. We aimed to develop and internally validate preliminary prognostic regression models to predict PTS during acute care hospitalization, and at year 1 and year 2 postinjury. Prognostic models predicting PTS during acute care hospitalization and year 1 and year 2 post-injury were developed using a recent (2011-2014) cohort from the TBI Model Systems National Database. Potential PTS predictors were selected based on previous literature and biologic plausibility. Bivariable logistic regression identified candidate variables whose p-values met the threshold for entry into the multivariable models. Multivariable logistic regression modeling with backward-stepwise elimination was used to determine reduced prognostic models and to internally validate using 1,000 bootstrap samples. Fit statistics were calculated, correcting for overfitting (optimism). The prognostic models identified sex, craniotomy, contusion load, and pre-injury limitation in learning/remembering/concentrating as significant PTS predictors during acute hospitalization. Significant predictors of PTS at year 1 were subdural hematoma (SDH), contusion load, craniotomy, craniectomy, seizure during acute hospitalization, duration of posttraumatic amnesia, preinjury mental health treatment/psychiatric hospitalization, and preinjury incarceration. Year 2 significant predictors were similar to those of year 1: SDH, intraparenchymal fragment, craniotomy, craniectomy, seizure during acute hospitalization, and preinjury incarceration. Corrected concordance (C) statistics were 0.599, 0.747, and 0.716 for acute hospitalization, year 1, and year 2 models, respectively. The prognostic model for PTS during acute hospitalization did not discriminate well, whereas the year 1 and year 2 models showed acceptable discrimination.
Short-term effects of air pollution, markers of endothelial activation, and coagulation to predict major adverse cardiovascular events in patients with acute coronary syndrome: insights from AIRACOS study.
Dominguez-Rodriguez, Alberto; Abreu-Gonzalez, Pedro; Rodríguez, Sergio; Avanzas, Pablo; Juarez-Prera, Ruben A
The aim of this study was to determine whether markers of inflammation and coagulation are associated with short-term particulate matter exposure and predict major adverse cardiovascular events at 360 d in patients with acute coronary syndrome (ACS). We included 307 consecutive patients, and assessed the average concentrations of data on atmospheric pollution in ambient air and meteorological variables from 1 d up to 7 d prior to admission. In patients with ACS, the markers of endothelial activation and coagulation, but not black carbon exposure, are associated with major adverse cardiovascular events at one-year follow-up.
Jamaluddin, M.; Hussain, S.M.A.; Ahmad, H.
Objective: To study the role of hyperbilirubinaemia as a predictive factor for appendiceal perforation in acute appendicitis. Methods: The prospective, descriptive study was conducted at the Abbasi Shaheed Hospital and the Karachi Medical and Dental College, Karachi, from January 2010 to June 2012. It comprised all patients coming to the surgical outpatient department and emergency department with pain in the right iliac fossa of less than seven days' duration. They were clinically assessed for signs and symptoms of acute appendicitis, and relevant tests were conducted. Patients were diagnosed as cases of acute appendicitis on the basis of clinical and ultrasound findings, and were prepared for appendicectomy. Per-operative findings were recorded and specimens were sent for histopathology to confirm the diagnosis. SPSS version 10 was used to analyse the data. Results: Of the 71 patients, 37 (52.10%) were male and 34 (47.90%) were female. The age range was 3-57 years, and most of the patients (n=33; 46.5%) were between 11 and 20 years. Besides, 63 (89%) patients had pain in the right iliac fossa of less than four days' duration, while 8 (11%) had pain of longer duration. Total leukocyte count was found to be elevated in 33 (46.5%) patients, while total serum bilirubin was elevated in 41 (57.70%). Ultrasound of the abdomen showed 9 (12.70%) patients having a normal appearance of the appendix and 59 (83.30%) an inflamed appendix. Four (5.60%) patients had no signs of inflammation on naked-eye appearance per-operatively. Histopathology of the appendix showed 10 (14.10%) patients had a non-inflammatory appendix. Conclusion: Patients with signs and symptoms of acute appendicitis and a raised total serum bilirubin level indicated a complication of acute appendicitis requiring early intervention to prevent peritonitis and septicaemia. A raised serum bilirubin level is a good indicator of complicated acute appendicitis and should be included in the assessment of patients with suspected acute appendicitis.
Aliberti, Stefano; Bellelli, Giuseppe; Belotti, Mauro; Morandi, Alessandro; Messinesi, Grazia; Annoni, Giorgio; Pesci, Alberto
Delirium is common in critically ill patients and impacts in-hospital mortality in patients with pneumonia. The aim of the study was to evaluate the prevalence of delirium symptoms during hospitalization in patients with severe pneumonia and their impact on one-year mortality. This was an observational, retrospective, cohort study of consecutive patients admitted to the respiratory high dependency unit of the San Gerardo University Hospital, Monza, Italy, between January 2009 and December 2012 with a diagnosis of severe pneumonia. A search through the charts looking for ten key words associated with delirium (confusion, disorientation, altered mental status, delirium, agitation, inappropriate behavior, mental status change, inattention, hallucination, lethargy) was performed by a multidisciplinary team. The primary endpoint was mortality at one-year follow-up. The secondary endpoint was in-hospital mortality. A total of 172 patients were enrolled (78% males; median age 75 years). At least one delirium symptom was detected in 53 patients (31%) during hospitalization. The prevalence of delirium symptoms was higher among those who died during hospitalization vs. those who survived (44 vs. 27%, p = 0.049, respectively). Seventy-one patients (46%) died during the one-year follow-up. The prevalence of at least one delirium symptom was higher among those who died than those who survived during the one-year follow-up (39 vs. 21%, p = 0.014, respectively). At the multivariable logistic regression analysis, after adjustment for age, comorbidities, and severe sepsis, the presence of at least one delirium symptom during hospitalization was an independent predictor of one-year mortality (OR 2.35; 95% CI 1.13-4.90; p = 0.023). Delirium symptoms are independent predictors of one-year mortality in hospitalized patients with severe pneumonia. Further studies should confirm our results using prospective methods of collecting data.
Samad Shams Vahdati
Introduction: A high-grade burn is one of the most devastating injuries, with serious medical, social, economic, and psychological effects. These injuries are the most common cause of accidental death after traffic injuries in both developed and developing countries. This study therefore aimed to determine the demographic characteristics of patients with burn injury admitted to the emergency department and to identify predictive factors for hospitalization. Methods: This cross-sectional descriptive study was conducted from 20 March to 20 September 2011 in the emergency department of Sina Hospital, Tabriz, Iran. Patient information, including demographic characteristics, cause of burn, place of accident, anatomical areas burned, grade and percentage of burn, and disposition, was gathered and analyzed using SPSS version 18.0. Stepwise multivariate regression analysis was used to identify independent predictive factors for hospitalization in burned patients. Results: One hundred and sixty patients were enrolled (54.4% female). Their average age was 20.47±13.5 years. The prevalence of burns was significantly higher at ages under 20 years (p<0.001). The lower limb (37.5%), head and neck (21.25%), and upper limb (17.5%) were the three most frequent sites of burn. The most common cause of burns was boiling-water scalding (34.4%). Home-related burns were significantly more frequent than burns in other places (p<0.001). The most frequent burn extent was <5% (46.25%). Finally, 50 cases (31.25%) were hospitalized. Univariate analysis demonstrated that age under 20 years (p=0.02), female gender (p=0.02), burn site (p=0.002), cause (p=0.005), place (p<0.001), grade (p<0.001), and percentage (p<0.001) were related to patient disposition. Stepwise multiple logistic regression showed female gender (OR=3.52; 95% CI: 1.57-7.88; p=0.002), work-related burning (OR=1.78; 95% CI: 1.26-2.52; p=0.001), and burning over 5 percent (OR=2.15; 95% CI: 1.35-3.41; p=0.001) as independent predictive factors for hospitalization.
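The odds ratios with confidence intervals reported by such stepwise logistic regressions come directly from exponentiating the model coefficients and their Wald interval endpoints. A minimal sketch, assuming an illustrative coefficient and standard error (the study's actual estimates are not reproduced here):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Exponentiate a logistic-regression coefficient and its Wald
    interval endpoints to get an odds ratio with a 95% CI."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical inputs chosen to roughly reproduce the reported
# female-gender estimate (OR = 3.52; 95% CI: 1.57-7.88).
or_, lo, hi = odds_ratio_ci(beta=math.log(3.52), se=0.411)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 3.52 1.57 7.88
```

The same transformation applies to any of the reported predictors; only the coefficient and its standard error change.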
Shi, Shu Jing; Li, Hui; Liu, Meng; Liu, Ying Mei; Zhou, Fei; Liu, Bo; Qu, Jiu Xin; Cao, Bin
Community-acquired pneumonia (CAP) severity scores perform well in predicting mortality in CAP patients, but they perform poorly in influenza pneumonia. The aim of our research was to test the efficiency of PO2/FiO2 and CAP severity scores in predicting mortality and intensive care unit (ICU) admission in influenza pneumonia patients. We reviewed all patients with positive influenza virus RNA detection in Beijing Chao-Yang Hospital during the 2009-2014 influenza seasons. Outpatients, inpatients without pneumonia, and patients with incomplete data were excluded. We used receiver operating characteristic (ROC) curves to verify the accuracy of severity scores or indices as mortality predictors in the study patients. Among 170 hospitalized patients with influenza pneumonia, 30 (17.6%) died. Among those who were classified as low-risk (predicted mortality 0.1%-2.1%) by the pneumonia severity index (PSI) or by confusion, urea, respiratory rate, blood pressure, age ≥65 years (CURB-65), the actual mortality ranged from 5.9% to 22.1%. Multivariate logistic regression indicated that hypoxia (PO2/FiO2 ≤ 250) and lymphopenia (low peripheral blood lymphocyte count) predicted mortality; influenza pneumonia confirmed a similar pattern, and PO2/FiO2 combined with lymphocyte count was also the best predictor of ICU admission. In conclusion, we found that PO2/FiO2 combined with lymphocyte count is a simple and reliable predictor of mortality and ICU admission in hospitalized patients with influenza pneumonia. © 2015 The Authors. The Clinical Respiratory Journal published by John Wiley & Sons Ltd.
The Logistic Organ Dysfunction (LOD) score is an organ dysfunction score that can predict hospital mortality. The aim of this study was to validate the performance of the LOD score against the Acute Physiology and Chronic Health Evaluation II (APACHE II) score in a mixed intensive care unit (ICU) at a tertiary referral university hospital in Thailand. Data were collected prospectively on consecutive ICU admissions over a 24-month period from July 1, 2004 until June 30, 2006. Discrimination was evaluated by the area under the receiver operating characteristic curve (AUROC). Calibration was assessed by the Hosmer-Lemeshow goodness-of-fit H statistic. The overall fit of each model was evaluated by the Brier score. Overall, 1,429 patients were enrolled during the study period. Mortality was 20.9% in the ICU and 27.9% in the hospital. The median ICU and hospital lengths of stay were 3 and 18 days, respectively, for all patients. Both models showed excellent discrimination. The AUROC for the LOD and APACHE II were 0.860 [95% confidence interval (CI) = 0.838-0.882] and 0.898 (95% CI = 0.879-0.917), respectively. The LOD score had good calibration, with a Hosmer-Lemeshow goodness-of-fit H chi-squared of 10 (p = 0.44), whereas the APACHE II had poor calibration, with a Hosmer-Lemeshow goodness-of-fit H chi-squared of 75.69 (p < 0.001). The Brier scores for overall fit were 0.123 (95% CI = 0.107-0.141) for the LOD and 0.114 (95% CI = 0.098-0.132) for the APACHE II. Thus, the LOD score was found to be accurate for predicting hospital mortality in general critically ill patients in Thailand.
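The two headline metrics in such validation studies, discrimination (AUROC) and overall fit (Brier score), are simple to compute. A small self-contained sketch on made-up outcomes and predicted risks (not the study's data):

```python
def auroc(y_true, y_prob):
    """AUROC via its Mann-Whitney interpretation: the probability that
    a randomly chosen death is assigned a higher predicted risk than a
    randomly chosen survivor (ties count one half)."""
    pos = [p for y, p in zip(y_true, y_prob) if y == 1]
    neg = [p for y, p in zip(y_true, y_prob) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def brier(y_true, y_prob):
    """Brier score: mean squared gap between predicted risk and
    observed outcome; 0 is perfect, lower is better."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

y = [0, 0, 0, 1, 0, 1, 1, 0]                  # 1 = died in hospital
p = [0.1, 0.2, 0.3, 0.8, 0.2, 0.6, 0.4, 0.5]  # model-predicted risks
print(auroc(y, p), brier(y, p))
```

Calibration (the Hosmer-Lemeshow statistic) additionally requires binning patients by predicted risk and comparing observed to expected deaths per bin.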
Hannan, Edward L; Farrell, Louise Szypulski; Walford, Gary; Jacobs, Alice K; Berger, Peter B; Holmes, David R; Stamato, Nicholas J; Sharma, Samin; King, Spencer B
This study sought to develop a percutaneous coronary intervention (PCI) risk score for in-hospital/30-day mortality. Risk scores are simplified linear scores that provide clinicians with quick estimates of patients' short-term mortality rates for informed consent and to determine the appropriate intervention. Earlier PCI risk scores were based on in-hospital mortality. However, for PCI, a substantial percentage of patients die within 30 days of the procedure after discharge. New York's Percutaneous Coronary Interventions Reporting System was used to develop an in-hospital/30-day logistic regression model for patients undergoing PCI in 2010, and this model was converted into a simple linear risk score that estimates mortality rates. The score was validated by applying it to 2009 New York PCI data. Subsequent analyses evaluated the ability of the score to predict complications and length of stay. A total of 54,223 patients were used to develop the risk score. There are 11 risk factors that make up the score, with risk factor scores ranging from 1 to 9, and the highest total score is 34. The score was validated based on patients undergoing PCI in the previous year, and accurately predicted mortality for all patients as well as patients who recently suffered a myocardial infarction (MI). The PCI risk score developed here enables clinicians to estimate in-hospital/30-day mortality very quickly and quite accurately. It accurately predicts mortality for patients undergoing PCI in the previous year and for MI patients, and is also moderately related to perioperative complications and length of stay. Copyright © 2013 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
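Converting a logistic model into a linear point score of this kind typically scales each coefficient by the smallest effect in the model, rounds to the nearest integer, and maps the total score back to a probability through the logistic function. A sketch with hypothetical coefficients (the New York model's 11 actual factors and weights are not reproduced here):

```python
import math

# Hypothetical log-odds coefficients and intercept for illustration.
coeffs = {"shock": 1.80, "age_ge_80": 0.90, "egfr_lt_30": 1.10, "prior_mi": 0.45}
intercept = -5.0

base = min(coeffs.values())  # smallest effect defines 1 point
points = {k: round(b / base) for k, b in coeffs.items()}

def predicted_mortality(factors):
    """Map a patient's total point score back to an approximate
    probability through the logistic function."""
    logit = intercept + base * sum(points[f] for f in factors)
    return 1 / (1 + math.exp(-logit))

print(points)
print(round(predicted_mortality({"shock", "egfr_lt_30"}), 3))
```

The rounding step is what makes the score quick to compute at the bedside, at a small cost in accuracy relative to the full regression.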
Somsak Tiamkao,1,2 Sineenard Pranboon,3 Kaewjai Thepsuthammarat,4 Kittisak Sawanyawisuth1,5,6 1Department of Medicine, Faculty of Medicine, 2The Neuroscience Research and Development Group, 3Nursing Division, Srinagarind Hospital, 4Clinical Epidemiology Unit, Faculty of Medicine, 5Research Center in Back, Neck, Other Joint Pain and Human Performance (BNOJPH), 6Ambulatory Medicine Research Group, Khon Kaen University, Khon Kaen, Thailand Background: Myasthenia gravis (MG) in elderly populations is increasing. This study aimed to evaluate predictors of treatment outcomes in elderly hospitalized MG patients using the national database. Methods: We collected data on elderly hospitalized MG patients from the National Health Security Office from October 2009 to September 2010 and examined predictors of treatment outcomes. Results: During the study period, 1,948 identified MG patients were admitted to hospitals throughout Thailand. Of those, 441 patients (22.64%) were aged ≥60 years, and 66 patients (14.97%) had poor outcomes. Only three factors remained significant in the final model: presence of pneumonia, use of mechanical ventilation, and septicemia, with adjusted odds ratios (95% confidence intervals) of 2.83 (1.03, 7.75), 5.33 (2.24, 12.72), and 4.47 (1.86, 10.75), respectively. Conclusion: Pneumonia, mechanical ventilation, and septicemia were independent factors associated with poor treatment outcomes in elderly hospitalized MG patients according to national data. Keywords: pneumonia, ventilator, mortality, predictor
Ramiarina, Robert; Almeida, Renan Mvr; Pereira, Wagner Ca
The present work analyzed the association between hospital costs and patient admission characteristics in a general public hospital in the city of Rio de Janeiro, Brazil. The unit-costs method was used to estimate inpatient-day costs for specific hospital clinics. To this end, three "cost centers" were defined to group direct and indirect expenses pertaining to the clinics. After the costs were estimated, a standard linear regression model was used to correlate cost units with their putative predictors: the patient's gender and age, admission type (urgency/elective), ICU admission (yes/no), blood transfusion (yes/no), admission outcome (death/no death), the complexity of the medical procedures performed, and a risk-adjustment index. Data were collected for 3,100 patients from January 2001 to January 2003. Average inpatient costs across clinics ranged from US$1,135 (Orthopedics) to US$3,101 (Cardiology). Costs increased with the risk-adjustment index in all clinics, and the index was statistically significant in all clinics except Urology, General surgery, and Clinical medicine. The occupation rate was inversely correlated with costs, and age had no association with costs. The adjusted percentage of explained variance varied between 36.3% (Clinical medicine) and 55.1% (Thoracic surgery). The estimates are an important step toward standardizing hospital cost calculation, especially for countries that lack formal hospital accounting systems.
BACKGROUND: Since most published articles comparing the performance of artificial neural network (ANN) models and logistic regression (LR) models for predicting hepatocellular carcinoma (HCC) outcomes used only a single dataset, the essential issue of internal validity (reproducibility) of the models has not been addressed. This study aims to validate the use of an ANN model for predicting in-hospital mortality in HCC surgery patients in Taiwan and to compare its predictive accuracy with that of an LR model. METHODOLOGY/PRINCIPAL FINDINGS: Patients who underwent HCC surgery during the period from 1998 to 2009 were included in the study. This study retrospectively compared 1,000 pairs of LR and ANN models based on initial clinical data for 22,926 HCC surgery patients. For each pair of ANN and LR models, the area under the receiver operating characteristic curve (AUROC), Hosmer-Lemeshow (H-L) statistic, and accuracy rate were calculated and compared using paired t-tests. A global sensitivity analysis was also performed to assess the relative significance of the input parameters in the system model and the relative importance of the variables. Compared to the LR models, the ANN models had a better accuracy rate in 97.28% of cases, a better H-L statistic in 41.18% of cases, and a better AUROC in 84.67% of cases. Surgeon volume was the most influential (sensitive) parameter affecting in-hospital mortality, followed by age and length of stay. CONCLUSIONS/SIGNIFICANCE: In comparison with the conventional LR model, the ANN model in this study was more accurate in predicting in-hospital mortality and had higher overall performance indices. Further studies of this model may consider the effect of a more detailed database that includes complications and clinical examination findings as well as more detailed outcome data.
Takahashi, Paul Y; Heien, Herbert C; Sangaralingham, Lindsey R; Shah, Nilay D; Naessens, James M
With the advent of healthcare payment reform, identifying high-risk populations has become more important to providers. Existing risk-prediction models often focus on chronic conditions. This study sought to better understand other factors to improve identification of the highest-risk population. We conducted a retrospective cohort study of a paneled primary care population, using 2010 data to calibrate a risk-prediction model of hospital and emergency department (ED) use in 2011. Data were randomly split into development and validation datasets. We compared the enhanced model containing the additional risk predictors with the Minnesota medical tiering model. The study was conducted in the primary care practice of an integrated delivery system at an academic medical center in Rochester, Minnesota. The study focused on primary care medical home patients in 2010 and 2011 (n = 84,752), with the primary outcome of subsequent hospitalization or ED visit. A total of 42,384 individuals were used to derive the enhanced risk-prediction model and 42,368 to validate it. Predictors included Adjusted Clinical Groups-based Minnesota medical tiering, patient demographics, insurance status, and prior-year healthcare utilization. Additional variables included specific mental and medical conditions, use of high-risk medications, and body mass index. The area under the curve was 0.705 (95% CI, 0.698-0.712) in the enhanced model compared with 0.662 (95% CI, 0.656-0.669) in the Minnesota medical tiering-only model. New high-risk patients in the enhanced model were more likely to lack health insurance or to have Medicaid coverage, diagnosed depression, or prior ED utilization. An enhanced model including additional healthcare-related factors improved the prediction of risk of hospitalization or ED visit.
Lee, Kyun Jick
Data mining (DM) models are an alternative to traditional statistical methods for examining whether higher customer satisfaction leads to higher revisit intention. This study used satisfaction data from 906 outpatients, collected in a 1998 nationwide South Korean survey conducted face-to-face by professional interviewers. Analyses showed that the relationship between overall satisfaction with hospital services and outpatients' revisit intention, with word-of-mouth recommendation as an intermediate variable, was nonlinear. The five strongest predictors of revisit intention were overall satisfaction, intention to recommend to others, awareness of hospital promotion, satisfaction with the physician's kindness, and satisfaction with treatment level.
Verbakel, Jan Y; MacFaul, Roderick; Aertgeerts, Bert; Buntinx, Frank; Thompson, Matthew
Feverish illness is a common presentation to acute pediatric services. Clinical staff faces the challenge of differentiating the few children with meningitis or sepsis from the majority with self-limiting illness. We aimed to determine the diagnostic value of clinical features and their prediction rules (CPR) for identifying children with sepsis or meningitis among those children admitted to a District General Hospital with acute febrile illness. Acutely ill children admitted to a District General Hospital in England were included in this case-control study between 2000 and 2005. We examined the diagnostic accuracy of individual clinical signs and 6 CPRs, including the National Institute for Clinical Excellence "traffic light" system, to determine clinical utility in identifying children with a diagnosis of sepsis or meningitis. Loss of consciousness, prolonged capillary refill, decreased alertness, respiratory effort, and the physician's illness assessment had high positive likelihood ratios (9-114), although with wide confidence intervals, to rule in sepsis or meningitis. The National Institute for Clinical Excellence traffic light system, the modified Yale Observation Scale, and the Pediatric Advanced Warning Score performed poorly with positive likelihood ratios ranging from 1 to 3. The pediatrician's overall illness assessment was the most useful feature to rule in sepsis or meningitis in these hospitalized children. Clinical prediction rules did not effectively rule in sepsis or meningitis. The modified Yale Observation Scale should be used with caution. Single clinical signs could complement these scores to rule in sepsis or meningitis. Further research is needed to validate these CPRs.
Stalenhoef, Janneke E; van der Starre, Willize E; Vollaard, Albert M; Steyerberg, Ewout W; Delfos, Nathalie M; Leyten, Eliane M S; Koster, Ted; Ablij, Hans C; Van't Wout, Jan W; van Dissel, Jaap T; van Nieuwkoop, Cees
There is a lack of severity assessment tools to identify adults presenting with febrile urinary tract infection (FUTI) who are at risk for a complicated outcome and to guide admission policy. We aimed to validate the Prediction Rule for Admission policy in Complicated urinary Tract InfeCtion LEiden (PRACTICE), a modified form of the pneumonia severity index, and to subsequently assess its use in clinical practice. We conducted a prospective observational multicenter study for model validation (2004-2009), followed by a multicenter controlled clinical trial with stepped-wedge cluster randomization for impact assessment (2010-2014), with a follow-up of 3 months. Participants were 1,157 consecutive patients with a presumptive diagnosis of acute febrile UTI (787 in the validation cohort and 370 in the randomized trial), enrolled at emergency departments of 7 hospitals and 35 primary care centers in the Netherlands. The clinical prediction rule contained 12 predictors of a complicated course. In the randomized trial the PRACTICE included guidance on hospitalization for high-risk (>100 points) and home discharge for low-risk patients. In febrile urinary tract infection, further improvement is necessary to reduce the occurrence of secondary hospital admissions. NTR4480, http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=4480, registered retrospectively 25 March 2014 (during enrollment of subjects).
Groh, K.J.; Carvalho, R.N.; Chipman, J.K.; Denslow, N.D.; Halder, M.; Murphy, C.A.; Roelofs, D.; Rolaki, A.; Schirmer, K.; Watanabe, K.H.
Adverse outcome pathways (AOPs) organize knowledge on the progression of toxicity through levels of biological organization. By determining the linkages between toxicity events at different levels, AOPs lay the foundation for mechanism-based alternative testing approaches to hazard assessment. Here,
Nilsen, W.; Skipstein, A.; Demerouti, E.
Background The long-term consequence of experiencing mental health problems may lead to several adverse outcomes. The current study aims to validate previous identified trajectories of mental health problems from 1993 to 2006 in women by examining their implications on subsequent work and
Prognostic value of combined CT angiography and myocardial perfusion imaging versus invasive coronary angiography and nuclear stress perfusion imaging in the prediction of major adverse cardiovascular events
Chen, Marcus Y.; Rochitte, Carlos E.; Arbab-Zadeh, Armin
Purpose: To compare the prognostic importance (time to major adverse cardiovascular event [MACE]) of combined computed tomography (CT) angiography and CT myocardial stress perfusion imaging with that of combined invasive coronary angiography (ICA) and stress single photon emission CT myocardial p...
Sakhnini, Ali; Saliba, Walid; Schwartz, Naama; Bisharat, Naiel
Limited information is available about clinical predictors of in-hospital mortality in acute unselected medical admissions. Such information could assist medical decision-making. We aimed to develop a clinical model for predicting in-hospital mortality in unselected acute medical admissions and to test the impact of secondary conditions on hospital mortality. This is an analysis of the medical records of patients admitted to internal medicine wards at one university-affiliated hospital. Data from the years 2013 to 2014 were used as a derivation dataset for creating the prediction model, while data from 2015 were used as a validation dataset to test its performance. For each admission, a set of clinical and epidemiological variables was obtained. The main diagnosis at hospitalization was recorded, and all additional conditions that coexisted at hospital admission or that developed during the hospital stay were considered secondary conditions. The derivation and validation datasets included 7,268 and 7,843 patients, respectively. The in-hospital mortality rate averaged 7.2%. The following variables entered the final model: age, body mass index, mean arterial pressure on admission, prior admission within 3 months, background morbidity of heart failure and active malignancy, and chronic use of statins and antiplatelet agents. The c-statistic (ROC-AUC) of the prediction model was 80.5% without adjustment for main or secondary conditions, 84.5% with adjustment for the main diagnosis, and 89.5% with adjustment for the main diagnosis and secondary conditions. The accuracy of the predictive model reached 81% on the validation dataset. A prediction model based on clinical data with adjustment for secondary conditions exhibited a high degree of prediction accuracy. We provide a proof of concept that there is added value in incorporating secondary conditions when predicting probabilities of in-hospital mortality.
Middleton, Renee Annette
Older persons (55 years and older) with cardiovascular disease are at increased risk for hospital readmission when compared to other subgroups of our population. This issue presents an economic problem, a concern for the quality and type of care provided, and an urgent need to implement innovative strategies designed to reduce the rising cost of…
Ong, K. H.; Tan, H. L.; Lai, H. C.; Kuperan, P.
INTRODUCTION: Iron parameters like serum ferritin and iron saturation are routinely used in diagnosing iron deficiency. However, these tests are influenced by many factors. We aimed to review the accuracy of iron parameters among inpatients in an acute care hospital. MATERIALS AND METHODS: From
van den Bosch, W.F.; Kelder, J.C.; Wagner, C.
Background: Casemix adjusted in-hospital mortality is one of the measures used to improve quality of care. The adjustment currently used does not take into account the effects of readmission, because reliable data on readmission is not readily available through routinely collected databases. We have
Reissigová, Jindra; Monhart, Z.; Zvárová, Jana; Hanzlíček, Petr; Grünfeldová, H.; Janský, P.; Vojáček, J.; Widimský, P.
Vol. 9, No. 1 (2013), pp. 11-17. ISSN 1801-5603. Institutional support: RVO:67985807. Keywords: multilevel logistic regression; acute coronary syndromes; risk factors; in-hospital death. Subject RIV: IN - Informatics, Computer Science. http://www.ejbi.org/img/ejbi/2013/1/Reissigova_en.pdf
Non-O blood groups can be a prognostic marker of in-hospital and long-term major adverse cardiovascular events in patients with ST elevation myocardial infarction undergoing primary percutaneous coronary intervention.
Cetin, Mehmet Serkan; Ozcan Cetin, Elif Hande; Aras, Dursun; Topaloglu, Serkan; Temizhan, Ahmet; Kisacik, Halil Lutfi; Aydogdu, Sinan
Recent studies have suggested the ABO blood type locus as an inherited predictor of thrombosis, cardiovascular risk factors, and myocardial infarction. However, data are scarce on the impact of non-O blood groups on prognosis in patients with ST-elevation myocardial infarction (STEMI). Therefore, we aimed to evaluate the prognostic importance of non-O blood groups in patients with STEMI undergoing primary percutaneous coronary intervention (pPCI). METHODS: 1,835 consecutive patients who were admitted with acute STEMI between 2010 and 2015 were included and followed up for a median of 35.6 months. The prevalence of hyperlipidemia, total cholesterol, LDL, peak CK-MB, and no-reflow, as well as hospitalization duration, were higher in patients with non-O blood groups. The Gensini score did not differ between groups. During the in-hospital and long-term follow-up periods, MACE, stent thrombosis, non-fatal MI, and mortality were higher in non-O blood groups. In multivariate logistic regression analysis, non-O blood groups were demonstrated to be independent predictors of in-hospital MACE (OR: 2.085, 95% CI: 1.328-3.274, p=0.001) and long-term MACE (OR: 2.257, 95% CI: 1.325-3.759) compared with the O blood group. Non-O blood groups were determined to be significant prognostic indicators of short- and long-term cardiovascular adverse events and mortality in patients with STEMI undergoing pPCI. In conjunction with other prognostic factors, evaluation of this parameter may improve risk categorization and help tailor individual therapy and follow-up in the STEMI patient population. Copyright © 2015 Elsevier Ltd. All rights reserved.
A time series analysis of the effects of financial incentives and mandatory clinical applications as interventions to improve spontaneous adverse drug reaction reporting by hospital medical staff in China.
Chang, Feng; Xi, Yue; Zhao, Jie; Zhang, Xiaojian; Lu, Yun
Adverse drug reactions (ADRs) in hospitals are often under-reported through spontaneous reporting systems, which may lead to problems in patient management. This study aimed to assess the effectiveness of a financial intervention, based on a fine and a bonus, for improving spontaneous reporting of ADRs by physicians in a hospital setting. The study was conducted at the First Affiliated Hospital of Zhengzhou University (China). Starting in 2009, a bonus of 20 RMB (Chinese currency) was given for each spontaneous ADR report, and a fine of 50 RMB was imposed for any withheld ADR report. A time series analysis using autoregressive integrated moving average (ARIMA) models was performed to assess the changes in the total number of spontaneous ADR reports between the preintervention period (2006-2008) and the first (2009-2011) and second (2012-2014) intervention periods. The median number of reported ADRs per year increased from 29 (range 27-72) in the preintervention period to 277 (range 199-284) in the first intervention period and to 666 (range 644-691) in the second. The monthly number of reported ADRs was stable within each of the 3 periods: 3.56 ± 3.60/month (95% confidence interval (CI), 2.42-4.75) during the preintervention period, 21 ± 13/month (95% CI, 16.97-25.80) in the first intervention period, and 56 ± 20/month (95% CI, 48.81-62.17) in the second intervention period. The financial incentive and ADR management regulations had a significant effect on increasing reported ADRs. © 2017 John Wiley & Sons, Ltd.
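As a much simplified stand-in for the ARIMA interrupted time-series analysis, the level shift between periods can be illustrated by comparing segment means on hypothetical monthly counts (the study's actual series is not reproduced):

```python
# Hypothetical monthly ADR report counts sampled from each period.
pre    = [3, 2, 5, 4, 3, 4]        # 2006-2008, preintervention
first  = [18, 22, 25, 19, 21, 23]  # 2009-2011, after bonus/fine
second = [52, 58, 61, 55, 57, 54]  # 2012-2014, after regulations

def mean(xs):
    return sum(xs) / len(xs)

def level_shift(before, after):
    """Crude estimate of the step change in monthly reporting level
    between two segments. A real interrupted time-series analysis
    would also model trend, seasonality, and autocorrelation, which
    is what the ARIMA specification provides."""
    return mean(after) - mean(before)

print(round(level_shift(pre, first), 1))
print(round(level_shift(first, second), 1))
```

The point of the ARIMA machinery is precisely that a raw difference in means like this can be confounded by pre-existing trends and serial correlation in monthly counts.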
Ismail, Abdussalaam Iyanda; Abdul Majid, Abdul Halim; Zakaria, Mohd Normani; Abdullah, Nor Azimah Chew; Hamzah, Sulaiman; Mukari, Siti Zamratol-Mai Sarah
The current study aims to examine the effects of human resources (measured by health workers' perceptions of UNHS), screening equipment, program layout, and screening techniques on healthcare practitioners' awareness (measured by knowledge) of universal newborn hearing screening (UNHS) in Malaysian non-public hospitals. Using a cross-sectional approach, the study collected data with a validated questionnaire to obtain information on awareness of the UNHS program among health practitioners and to test the formulated hypotheses. Of the 63 questionnaires distributed to health professionals, 51 (an 81% response rate) were returned and usable for statistical analysis. The survey instruments, covering healthcare practitioners' awareness, human resources, program layout, screening instrument, and screening techniques, were adapted and scaled with a 7-point Likert scale ranging from 1 (little) to 7 (many). Partial Least Squares (PLS) algorithm and bootstrapping techniques were employed to test the hypotheses of the study. The results, involving beta values, t-values, and p-values (e.g., β=0.478, t=1.904), supported significant effects on awareness among health practitioners. Together, program layout, human resources, screening technique, and screening instrument explain 71% of the variance in health practitioners' awareness. Program layout, human resources, and screening instrument had effect sizes (f2) of 0.065, 0.621, and 0.211, respectively, indicating small, large, and medium effects on health practitioners' awareness. However, screening technique had no effect on health practitioners' awareness, which is why its t-statistic was not significant. Having started the UNHS program in 2003, non-public hospitals have more experienced and well-trained employees dealing with the screening tools and instruments, and the program layout is well
Korsten, Koos; Blanken, Maarten O; Nibbelke, Elisabeth E; Moons, Karel G M; Bont, Louis
BACKGROUND: New vaccines and RSV therapeutics have been developed in the past decade. With approval of these new pharmaceuticals on the horizon, new challenges lie ahead in selecting the appropriate target population. We aimed to improve a previously published prediction model for prediction of
Ross, Thomas; Querengässer, Jan; Fontao, María Isabel; Hoffmann, Klaus
In Germany, both the number of patients treated in forensic psychiatric hospitals and the average inpatient treatment period have been increasing for over thirty years. Biographical and clinical factors, e.g., the number of prior offences, type of offence, and psychiatric diagnosis, count among the factors that influence the treatment duration and the likelihood of discharge. The aims of the current study were threefold: (1) to provide an estimate of the German forensic psychiatric patient population with a low likelihood of discharge, (2) to replicate a set of personal variables that predict a relatively high, as opposed to a low, likelihood of discharge from forensic psychiatric hospitals, and (3) to describe a group of other factors that are likely to add to the existing body of knowledge. Based on a sample of 899 patients, we applied a battery of primarily biographical and other personal variables to two subgroups of patients. The first subgroup of patients had been treated in a forensic psychiatric hospital according to section 63 of the German legal code for at least ten years (long-stay patients, n=137), whereas the second subgroup had been released after a maximum treatment period of four years (short-stay patients, n=67). The resulting logistic regression model had a high goodness of fit, with more than 85% of the patients correctly classified into the groups. In accordance with earlier studies, we found a series of personal variables, including age at first admission and type of offence, to be predictive of a short or long-stay. Other findings, such as the high number of immigrants among the short-stay patients and the significance of a patient's work time before admission to a forensic psychiatric hospital, are more clearly represented than has been observed in previous research. Copyright © 2012 Elsevier Ltd. All rights reserved.
de Man-van Ginkel, Janneke M; Hafsteinsdóttir, Thóra B; Lindeman, Eline; Ettema, Roelof G A; Grobbee, Diederick E; Schuurmans, Marieke J
The timely detection of post-stroke depression is complicated by decreasing lengths of hospital stay. Therefore, the Post-stroke Depression Prediction Scale was developed and validated. The Post-stroke Depression Prediction Scale is a clinical prediction model for the early identification of stroke patients at increased risk for post-stroke depression. The study included 410 consecutive stroke patients who were able to communicate adequately. Predictors were collected within the first week after stroke. Between 6 and 8 weeks after stroke, major depressive disorder was diagnosed using the Composite International Diagnostic Interview. Multivariable logistic regression models were fitted. A bootstrap-backward selection process resulted in a reduced model. Performance of the model was expressed by discrimination, calibration, and accuracy. The model included a medical history of depression or other psychiatric disorders, hypertension, angina pectoris, and the Barthel Index item dressing. The model had acceptable discrimination, based on an area under the receiver operating characteristic curve of 0.78 (0.72-0.85), and calibration (P value of the U-statistic, 0.96). After transforming the model into an easy-to-use risk-assessment table, the predicted risk of depression increased across risk categories, reaching 82% in the highest category (sum score >21). The clinical prediction model enables clinicians to estimate the degree of depression risk for an individual patient within the first week after stroke.
Mathioudakis, Nestoras Nicolas; Everett, Estelle; Routh, Shuvodra; Pronovost, Peter J; Yeh, Hsin-Chieh; Golden, Sherita Hill; Saria, Suchi
To develop and validate a multivariable prediction model for insulin-associated hypoglycemia in non-critically ill hospitalized adults. We collected pharmacologic, demographic, laboratory, and diagnostic data from 128 657 inpatient days in which at least 1 unit of subcutaneous insulin was administered in the absence of intravenous insulin, total parenteral nutrition, or insulin pump use (index days). These data were used to develop multivariable prediction models for biochemical and clinically significant hypoglycemia (blood glucose (BG) of ≤70 mg/dL and a lower, clinically significant threshold, respectively), with separate subsets used for model development and validation. Using predictors of age, weight, admitting service, insulin doses, mean BG, nadir BG, BG coefficient of variation (CV-BG), diet status, type 1 diabetes, type 2 diabetes, acute kidney injury, chronic kidney disease (CKD), liver disease, and digestive disease, our model achieved a c-statistic of 0.77 (95% CI 0.75 to 0.78), positive likelihood ratio (+LR) of 3.5 (95% CI 3.4 to 3.6) and negative likelihood ratio (-LR) of 0.32 (95% CI 0.30 to 0.35) for prediction of biochemical hypoglycemia. Using predictors of sex, weight, insulin doses, mean BG, nadir BG, CV-BG, diet status, type 1 diabetes, type 2 diabetes, CKD stage, and steroid use, our model achieved a c-statistic of 0.80 (95% CI 0.78 to 0.82), +LR of 3.8 (95% CI 3.7 to 4.0) and -LR of 0.2 (95% CI 0.2 to 0.3) for prediction of clinically significant hypoglycemia. Hospitalized patients at risk of insulin-associated hypoglycemia can be identified using validated prediction models, which may support the development of real-time preventive interventions.
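The likelihood ratios quoted alongside the c-statistics above are simple functions of sensitivity and specificity. As a reminder of that relationship, a minimal sketch (the sensitivity and specificity values below are illustrative only, not this study's operating point):

```python
# Positive and negative likelihood ratios from sensitivity and specificity.
# Input values are made up for illustration, not taken from the study above.
def likelihood_ratios(sensitivity, specificity):
    positive_lr = sensitivity / (1.0 - specificity)   # how much a positive result raises the odds
    negative_lr = (1.0 - sensitivity) / specificity   # how much a negative result lowers the odds
    return positive_lr, negative_lr

plr, nlr = likelihood_ratios(0.70, 0.80)
print(f"{plr:.2f} {nlr:.3f}")  # 3.50 0.375
```

A +LR of 3.5, as reported for the biochemical-hypoglycemia model, means a positive prediction multiplies the pre-test odds of hypoglycemia by 3.5.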
Background: Percutaneous endoscopic gastrostomy (PEG) is an established procedure for long-term nutrition. However, studies have underlined the importance of proper patient selection, as mortality has been shown to be relatively high in acute illness and in certain patient groups, among them geriatric patients. The objective of the study was to gather information about geriatric patients receiving PEG and to identify risk factors associated with in-hospital mortality after PEG placement. Methods: All patients in the GEMIDAS database undergoing percutaneous endoscopic gastrostomy in acute geriatric wards from 2006 to 2010 were included in a retrospective database analysis. Data on age, gender, main diagnosis leading to hospital admission, death in hospital, care level, and legal incapacitation were extracted from the main database of the Geriatric Minimum Data Set. Self-care capacity was assessed by the Barthel index, and cognitive status was rated with the Mini Mental State Examination or subjectively judged by the clinician. Descriptive statistics and group comparisons were chosen according to data distribution and scale of measurement; logistic regression analysis was performed to examine the influence of various factors on hospital mortality. Results: A total of 1232 patients (60.4% women) with a median age of 82 years (range 60 to 99 years) were included. The mean Barthel index at admission was 9.5 ± 14.0 points. Assessment of cognitive status was available in about half of the patients (n = 664), with 20% being mildly impaired and almost 70% being moderately to severely impaired. Stroke was the most common main diagnosis (55.2%). In-hospital mortality was 12.8%. In a logistic regression analysis, old age (odds ratio (OR) 1.030, 95% confidence interval (CI) 1.003-1.056), male sex (OR 1.741, 95% CI 1.216-2.493), and pneumonia (OR 2.641, 95% CI 1.457-4.792) or the diagnosis group 'miscellaneous disease' (OR 1.864, 95% CI 1
Pérez-Topete, S E; Miranda-Aquino, T; Hernández-Portales, J A
Clostridium difficile (C. difficile) is a Gram-positive bacillus that is a common cause of diarrhea in the hospital environment, with a documented incidence of up to 10%. There are different methods to detect it, but a widely used test in our environment is the immunoassay for toxins A and B. The aim of our study was to 1) estimate the positive predictive value of the immunoassay for the detection of the C. difficile toxins A and B, 2) to establish the incidence of C. difficile-associated diarrhea in the hospital, and 3) to know the most common associated factors. A diagnostic test accuracy study was conducted within the time frame of January 2010 to August 2013 at the Hospital Christus Muguerza® Alta Especialidad on patients with symptoms suggestive of C. difficile-associated diarrhea that had a positive immunoassay test and confirmation of C. difficile through colon biopsy and stool culture. The immunoassay for toxins A and B was performed in 360 patients. Fifty-five of the cases had positive results, 35 of which showed the presence of C. difficile. Incidence was 10.2% and the positive predictive value of the test for C. difficile toxins A and B was 0.64 (95% CI, 0.51-0.76). Previous antibiotic therapy (n=29) and proton pump inhibitor use (n=19) were the most common associated factors. C. difficile incidence in our environment is similar to that found in the literature reviewed, but the positive predictive value of the test for toxin A and B detection was low. Copyright © 2016 Asociación Mexicana de Gastroenterología. Publicado por Masson Doyma México S.A. All rights reserved.
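The reported positive predictive value of 0.64 (95% CI 0.51-0.76) follows directly from the counts given (35 confirmed cases out of 55 positive immunoassays). A sketch using the normal-approximation (Wald) interval for a proportion, which reproduces the reported figures:

```python
import math

def ppv_with_wald_ci(true_positives, total_positives, z=1.96):
    """Positive predictive value with a normal-approximation 95% CI."""
    p = true_positives / total_positives
    se = math.sqrt(p * (1 - p) / total_positives)  # standard error of a proportion
    return p, p - z * se, p + z * se

# Counts from the abstract: 35 confirmed C. difficile cases among 55 positive tests.
ppv, lo, hi = ppv_with_wald_ci(35, 55)
print(round(ppv, 2), round(lo, 2), round(hi, 2))  # 0.64 0.51 0.76
```

For small samples or proportions near 0 or 1, a Wilson interval would be preferable, but the Wald form matches the interval quoted in this abstract.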
Lee, Christine K; Hofer, Ira; Gabel, Eilon; Baldi, Pierre; Cannesson, Maxime
The authors tested the hypothesis that deep neural networks trained on intraoperative features can predict postoperative in-hospital mortality. The data used to train and validate the algorithm consists of 59,985 patients with 87 features extracted at the end of surgery. Feed-forward networks with a logistic output were trained using stochastic gradient descent with momentum. The deep neural networks were trained on 80% of the data, with 20% reserved for testing. The authors assessed improvement of the deep neural network by adding American Society of Anesthesiologists (ASA) Physical Status Classification and robustness of the deep neural network to a reduced feature set. The networks were then compared to ASA Physical Status, logistic regression, and other published clinical scores including the Surgical Apgar, Preoperative Score to Predict Postoperative Mortality, Risk Quantification Index, and the Risk Stratification Index. In-hospital mortality in the training and test sets were 0.81% and 0.73%. The deep neural network with a reduced feature set and ASA Physical Status classification had the highest area under the receiver operating characteristics curve, 0.91 (95% CI, 0.88 to 0.93). The highest logistic regression area under the curve was found with a reduced feature set and ASA Physical Status (0.90, 95% CI, 0.87 to 0.93). The Risk Stratification Index had the highest area under the receiver operating characteristics curve, at 0.97 (95% CI, 0.94 to 0.99). Deep neural networks can predict in-hospital mortality based on automatically extractable intraoperative data, but are not (yet) superior to existing methods.
Katharina Isabel von Auenmueller
Context: Sudden cardiac death is one of the leading causes of death in Europe, and early prognostication remains challenging. There is a lack of valid parameters for the prediction of survival after cardiac arrest. Aims: This study aims to investigate whether arterial blood gas parameters correlate with mortality of patients after out-of-hospital cardiac arrest. Materials and Methods: All patients who were admitted to our hospital after resuscitation following out-of-hospital cardiac arrest between January 1, 2008, and December 31, 2013, were included in this retrospective study. The patient's survival 5 days after resuscitation defined the study end-point. For the statistical analysis, the mean, standard deviation, Student's t-test, Chi-square test, and logistic regression analyses were used (level of significance P < 0.05). Results: Arterial blood gas samples were taken from 170 patients. In particular, pH < 7.0 (odds ratio [OR]: 7.20; 95% confidence interval [CI]: 3.11–16.69; P < 0.001) and lactate ≥ 5.0 mmol/L (OR: 6.79; 95% CI: 2.77–16.66; P < 0.001) showed strong and independent correlations with mortality within the first 5 days after hospital admission. Conclusion: Our study results indicate that several arterial blood gas parameters correlate with mortality of patients after out-of-hospital resuscitation. The most relevant parameters are pH and lactate, because they are strongly and independently associated with mortality within the first 5 days after resuscitation. Despite this correlation, none of these parameters on its own is strong enough to allow early prognostication. Still, these parameters can contribute as part of a multimodal approach to assessing the patient's prognosis.
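Odds ratios like those reported here can be computed from a 2×2 exposure-by-outcome table, with the confidence interval formed on the log-odds scale. A minimal sketch using made-up counts (not the study's data, which the abstract does not report):

```python
import math

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]]
    (rows: exposed/unexposed; columns: event/no event),
    with a 95% CI computed on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 30 died / 10 survived with pH < 7.0,
# 40 died / 90 survived with pH >= 7.0.
or_, lo, hi = odds_ratio_with_ci(30, 10, 40, 90)
print(round(or_, 2))  # 6.75
```

The multivariable ORs in the abstract come from logistic regression rather than a raw 2×2 table, but the interpretation of the ratio and its log-scale interval is the same.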
Maali, Yashar; Perez-Concha, Oscar; Coiera, Enrico; Roffe, David; Day, Richard O; Gallego, Blanca
The identification of patients at high risk of unplanned readmission is an important component of discharge planning strategies aimed at preventing unwanted returns to hospital. The aim of this study was to investigate the factors associated with unplanned readmission in a Sydney hospital. We developed and compared validated readmission risk scores using routinely collected hospital data to predict 7-day, 30-day and 60-day all-cause unplanned readmission. A combination of gradient boosted tree algorithms for variable selection and logistic regression models was used to build and validate readmission risk scores using medical records from 62,235 live discharges from a metropolitan hospital in Sydney, Australia. The scores had good calibration and fair discriminative performance with c-statistic of 0.71 for 7-day and for 30-day readmission, and 0.74 for 60-day. Previous history of healthcare utilization, urgency of the index admission, old age, comorbidities related to cancer, psychosis, and drug-abuse, abnormal pathology results at discharge, and being unmarried and a public patient were found to be important predictors in all models. Unplanned readmissions beyond 7 days were more strongly associated with longer hospital stays and older patients with higher number of comorbidities and higher use of acute care in the past year. This study demonstrates similar predictors and performance to previous risk scores of 30-day unplanned readmission. Shorter-term readmissions may have different causal pathways than 30-day readmission, and may, therefore, require different screening tools and interventions. This study also re-iterates the need to include more informative data elements to ensure the appropriateness of these risk scores in clinical practice.
Murray, Andrea L; Scratch, Shannon E; Thompson, Deanne K; Inder, Terrie E; Doyle, Lex W; Anderson, Jacqueline F. I.; Anderson, Peter J
Objective This study aimed to examine attention and processing speed outcomes in very preterm/very low birthweight (VPT/VLBW) children in relation to neonatal white matter, deep gray matter, and cerebellar abnormalities. Attention and processing speed were assessed at 7 years using standardized neuropsychological tests. Group differences were tested in attention and processing speed, and the relationships between these cognitive domains and brain abnormalities at birth were investigated. Results At 7 years of age, the VPT/VLBW group performed significantly poorer than term controls on all attention and processing speed outcomes. Associations between adverse attention and processing speed performances at 7 years and higher neonatal brain abnormality scores were found; in particular, white matter and deep gray matter abnormalities were reasonable predictors of long-term cognitive outcomes. Conclusion Attention and processing speed are significant areas of concern in VPT/VLBW children. This is the first study to show that adverse attention and processing speed outcomes at 7 years are associated with neonatal brain pathology. PMID:24708047
van Werkhoven, C. H.; van der Tempel, J.; Jajou, R.; Thijsen, S. F T; Diepersloot, R. J A; Bonten, M. J M; Postma, D. F.; Oosterheert, J. J.
To develop and validate a prediction model for Clostridium difficile infection (CDI) in hospitalized patients treated with systemic antibiotics, we performed a case-cohort study in a tertiary (derivation) and secondary care hospital (validation). Cases had a positive Clostridium test and were
Wynd, Christine A; Ryan-Wenger, Nancy A
This study identified health-risk and health-promoting behaviors in military and civilian personnel employed in hospitals. Intrinsic self-motivation and extrinsic organizational workplace factors were examined as predictors of health behaviors. Because reservists represent a blend of military and civilian lifestyles, descriptive analyses focused on comparing Army Reserve personnel (n = 199) with active duty Army (n = 218) and civilian employees (n = 193), for a total sample of 610. Self-motivation and social support were significant factors contributing to the adoption of health-promoting behaviors; however, organizational workplace cultures were inconsistent predictors of health among the three groups. Only the active Army subgroup identified a hierarchical culture as having an influence on health promotion, possibly because of the Army's mandatory physical fitness and weight control standards. Social support and self-motivation are essential to promoting health among employees, thus hospital commanders and chief executive officers should encourage strategies that enhance and reward these behaviors.
Carlé, Allan; Pedersen, Inge Bülow; Perrild, Hans
Background: Hospital-based studies may be hampered by referral bias. We investigated how the phenomenon may influence studies of hyperthyroid patients. Methods: By means of a computer-based linkage to the laboratory database and subsequent detailed evaluation of subjects with abnormal test results, we prospectively identified all 1,148 patients diagnosed with overt hyperthyroidism in a four-year period in and around Aalborg City, Denmark. Each patient was classified according to nosological type of hyperthyroidism. We studied the referral pattern of patients to local hospital units, and analyzed how referral depended on subtype of disease, sex, age, and degree of biochemical hyperthyroidism. Results: In a 4-year period, 1,032 hyperthyroid patients were diagnosed at primary care offices, and 435 of these (42.2%) were referred to specialized units; 92 patients had hyperthyroidism diagnosed...
Mautner, Dawn; Peterson, Bridget; Cunningham, Amy; Ku, Bon; Scott, Kevin; LaNoue, Marianna
Health locus of control may be an important predictor of health care utilization. We analyzed associations between health locus of control and frequency of emergency department visits and hospital admissions, and investigated self-rated health as a potential mediator. Overall, 863 patients in an urban emergency department completed the Multidimensional Health Locus of Control instrument, and self-reported emergency department use and hospital admissions in the last year. We found small but significant associations between Multidimensional Health Locus of Control and utilization, all of which were mediated by self-rated health. We conclude that interventions to shift health locus of control may change patients' perceptions of their own health, thereby impacting utilization.
Bengtson, May-Bente; Martin, Christopher F; Aamodt, Geir; Vatn, Morten H; Mahadevan, Uma
Malnutrition and weight loss are common features of patients with inflammatory bowel disease (IBD). To explore the impact of inadequate gestational weight gain (GWG) on adverse outcomes among IBD mothers in the prospective US pregnancy in Inflammatory Bowel Disease and Neonatal Outcomes (PIANO) cohort. The PIANO cohort comprises 559 and 363 pregnant mothers with Crohn's disease (CD) and ulcerative colitis (UC), respectively, enrolled between 2006 and 2014. The mothers were followed during and after pregnancy to ascertain medication, measurement of disease activity and complications during pregnancy and at delivery. Inadequate GWG was based on US Institute of Medicine recommendations. The associations between inadequate GWG and adverse pregnancy outcomes in maternal IBD were analyzed, adjusted for diabetes, hypertension, smoking, maternal age, education, and disease activity. Maternal CD and UC with inadequate GWG had a 2.5-fold increased risk of preterm birth (OR 2.5, CI 1.3, 4.9 and OR 2.5, CI 1.2, 5.6). Furthermore, an increased risk of intrauterine growth restriction and a trend for small for gestational age were demonstrated in CD but not in UC (OR 3.3, CI 1.1, 10.0, OR 4.5, CI 0.8, 24.3, p = 0.08). Flares increased risk of inadequate GWG (OR 1.6, CI 1.2, 2.3, p = 0.002) but did not change the associations between inadequate GWG and adverse pregnancy outcomes in our models. The US PIANO cohort demonstrated that inadequate GWG was a strong independent predictor of adverse pregnancy outcomes in IBD mothers.
Wolak, Talya; Shoham-Vardi, Ilana; Sergienko, Ruslan; Sheiner, Eyal
This study aims to examine whether renal function during pregnancy can serve as a surrogate marker for the risk of developing atherosclerotic-related morbidity. A case-control study, including women who gave birth at a tertiary referral medical centre during 2000-2012. This population was divided into cases of women who were subsequently hospitalized for atherosclerotic morbidity during the study period and age-matched controls. From the study population, we retrieved two groups: the creatinine (Cr) group, women who had at least one Cr measurement (4945 women), and the urea group, women who had at least one urea measurement (4932 women) during their pregnancies. In the Cr and urea groups, there were 572 and 571 cases and 4373 and 4361 controls, respectively. The mean follow-up period in the Cr and urea groups was 61.7 ± 37.0 and 57.3 ± 36.0 months, respectively. Cox proportional hazards models (controlling for confounders: gestational hypertension, gestational diabetes, obesity, maternal age, creatinine level (for urea), and gestational week) were used to estimate the adjusted hazard ratios (HR) for hospitalizations. A significant association was documented between renal function during pregnancy and long-term atherosclerotic morbidity. Multivariate analysis showed that a Cr level during the index pregnancy of ≥89 μmol/L was associated with a significantly increased risk of hospitalization due to cardiovascular (CVS) events (adjusted HR = 2.91, CI 1.37-6.19, P = 0.005), and a urea level ≤7 mmol/L was independently associated with a reduced prevalence of CVS hospitalization (adjusted HR = 0.62, CI 0.57-0.86, P = 0.001). Renal function abnormality during pregnancy may reveal an occult predisposition to atherosclerotic morbidity years after childbirth. © 2015 Asian Pacific Society of Nephrology.
Vaziri, Kamyar; Pershing, Suzann; Albini, Thomas A; Moshfeghi, Darius M; Moshfeghi, Andrew A
To identify potential risk factors associated with endogenous endophthalmitis among hospitalized patients with hematogenous infections. Retrospective cross-sectional study. MarketScan Commercial Claims and Encounters, and Medicare Supplemental and Coordination of Benefit inpatient databases from the years 2007-2011 were obtained. Utilizing ICD-9 codes, logistic regression was used to identify potential predictors/comorbidities for developing endophthalmitis in patients with hematogenous infections. Among inpatients with hematogenous infections, the overall incidence rate of presumed endogenous endophthalmitis was 0.05%-0.4% among patients with fungemia and 0.04% among patients with bacteremia. Comorbid human immunodeficiency virus infection/acquired immunodeficiency syndrome (HIV/AIDS) (OR = 4.27; CI, 1.55-11.8; P = .005), tuberculosis (OR = 8.5; CI, 1.2-61.5; P = .03), and endocarditis (OR = 8.3; CI, 4.9-13.9), among other comorbidities, were associated with endogenous endophthalmitis. Patients aged 0-17 years (OR = 2.61; CI, 1.2-5.7; P = .02) and 45-54 years (OR = 3.4; CI, 2.0-5.4) also had higher odds of endogenous endophthalmitis. Endogenous endophthalmitis is rare among hospitalized patients in the United States. Among patients with hematogenous infections, odds of endogenous endophthalmitis were higher for children and middle-aged patients, and for patients with endocarditis, bacterial meningitis, lymphoma/leukemia, HIV/AIDS, internal organ abscess, diabetes with ophthalmic manifestations, skin cellulitis/abscess, pyogenic arthritis, tuberculosis, longer hospital stays, and/or ICU/NICU admission. Published by Elsevier Inc.
Mard, Shan; Nielsen, Finn Erland
To evaluate the positive predictive value (PPV) of a diagnosis of heart failure (HF) in the Danish National Registry of Patients (NRP) among patients admitted to a University Hospital cardiac care unit, and to evaluate the impact of misdiagnosing HF.
Stepped-wedge cluster randomised controlled trial to assess the effectiveness of an electronic medication management system to reduce medication errors, adverse drug events and average length of stay at two paediatric hospitals: a study protocol.
Westbrook, J I; Li, L; Raban, M Z; Baysari, M T; Mumford, V; Prgomet, M; Georgiou, A; Kim, T; Lake, R; McCullagh, C; Dalla-Pozza, L; Karnon, J; O'Brien, T A; Ambler, G; Day, R; Cowell, C T; Gazarian, M; Worthington, R; Lehmann, C U; White, L; Barbaric, D; Gardo, A; Kelly, M; Kennedy, P
Medication errors are the most frequent cause of preventable harm in hospitals. Medication management in paediatric patients is particularly complex and consequently potential for harms are greater than in adults. Electronic medication management (eMM) systems are heralded as a highly effective intervention to reduce adverse drug events (ADEs), yet internationally evidence of their effectiveness in paediatric populations is limited. This study will assess the effectiveness of an eMM system to reduce medication errors, ADEs and length of stay (LOS). The study will also investigate system impact on clinical work processes. A stepped-wedge cluster randomised controlled trial (SWCRCT) will measure changes pre-eMM and post-eMM system implementation in prescribing and medication administration error (MAE) rates, potential and actual ADEs, and average LOS. In stage 1, 8 wards within the first paediatric hospital will be randomised to receive the eMM system 1 week apart. In stage 2, the second paediatric hospital will randomise implementation of a modified eMM and outcomes will be assessed. Prescribing errors will be identified through record reviews, and MAEs through direct observation of nurses and record reviews. Actual and potential severity will be assigned. Outcomes will be assessed at the patient-level using mixed models, taking into account correlation of admissions within wards and multiple admissions for the same patient, with adjustment for potential confounders. Interviews and direct observation of clinicians will investigate the effects of the system on workflow. Data from site 1 will be used to develop improvements in the eMM and implemented at site 2, where the SWCRCT design will be repeated (stage 2). The research has been approved by the Human Research Ethics Committee of the Sydney Children's Hospitals Network and Macquarie University. Results will be reported through academic journals and seminar and conference presentations. Australian New Zealand
Cullaro, Giuseppe; Kim, Grace; Pereira, Marcus R; Brown, Robert S; Verna, Elizabeth C
Neutrophil gelatinase-associated lipocalin (NGAL) is a marker of both tissue injury and infection. Urine NGAL levels strongly predict acute kidney injury and mortality in patients with cirrhosis, but ascites NGAL is not well characterized. We hypothesized that ascites NGAL level is a marker of spontaneous bacterial peritonitis (SBP) and mortality risk in patients with cirrhosis. Hospitalized patients with cirrhosis and ascites undergoing diagnostic paracentesis were prospectively enrolled and followed until death or discharge. Patients with secondary peritonitis, prior transplantation, or active colitis were excluded. NGAL was measured in the ascites and serum. Ascites NGAL level was evaluated as a marker of SBP (defined as an ascites absolute neutrophil count > 250 cells/mm3) and predictor of in-patient mortality. A total of 146 patients were enrolled, and of these, 29 patients (20%) had SBP. Baseline characteristics were similar between subjects with and without SBP. Median (IQR) ascites NGAL was significantly higher in patients with SBP compared to those without SBP (221.3 [145.9-392.9] vs. 139.2 [73.9-237.2]). Ascites NGAL is a marker of spontaneous bacterial peritonitis in hospitalized patients with cirrhosis and an independent predictor of short-term in-hospital mortality, even after controlling for SBP and MELD.
Mendes, J; Alves, P; Amaral, T F
Undernutrition has been associated with an increased length of hospital stay, which may reflect patient prognosis. The aim of this study was to quantify and compare the association between nutritional status and handgrip strength at hospital admission with time to discharge in cancer patients. An observational prospective study was conducted in an oncology center. Patient-Generated Subjective Global Assessment, Nutritional Risk Screening 2002, and handgrip strength were conducted in a probabilistic sample of 130 cancer patients. The association between baseline nutritional status, handgrip strength, and time to discharge was evaluated using survival analysis with discharge alive as the outcome. Nutritional risk ranged from 42.3 to 53.1% depending on the tool used. According to the Patient-Generated Subjective Global Assessment, severe undernutrition was present in 22.3% of the sample. The association between baseline data and time to discharge was stronger in patients with low handgrip strength (adjusted hazard ratio, low handgrip strength: 0.33; 95% confidence interval: 0.19-0.55) than in undernourished patients evaluated by the other tools; Patient-Generated Subjective Global Assessment: (adjusted hazard ratio, severe undernutrition: 0.45; 95% confidence interval: 0.27-0.75) and Nutritional Risk Screening 2002: (adjusted hazard ratio, with nutritional risk: 0.55; 95% confidence interval: 0.37-0.80). An approximate 3-fold decrease in the probability of discharge alive was observed in patients with low handgrip strength. Decreasing handgrip strength tertiles discriminated between patients who would have longer hospital stays, as did undernutrition and nutritional risk assessed by the Patient-Generated Subjective Global Assessment and Nutritional Risk Screening 2002. Copyright © 2013 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
Meshesha, Lidia Z.; Tsui, Judith I.; Liebschutz, Jane M.; Crooks, Denise; Anderson, Bradley J.; Herman, Debra S.; Stein, Michael D.
This study examined associations between substance use behaviors and self-reported health among hospitalized heroin users. Of the 112 participants, 53 (47%) reported good or better health. In multivariable logistic regression models, each day of heroin use in the last month was associated with an 8% lower odds of reporting health as good or better (OR=0.92; 95% CI 0.87, 0.97, p < .05). Neither cocaine, cannabis, cigarette, or alcohol use, nor unintentional overdose or injection drug use, was associated with health status. PMID:24045030
Lone, Nazir I; Lee, Robert; Salisbury, Lisa; Donaghy, Eddie; Ramsay, Pamela; Rattray, Janice; Walsh, Timothy S
Intensive care unit (ICU) survivors experience high levels of morbidity after hospital discharge and are at high risk of unplanned hospital readmission. Identifying those at highest risk before hospital discharge may allow targeting of novel risk reduction strategies. We aimed to identify risk factors for unplanned 90-day readmission, develop a risk prediction model and assess its performance to screen for ICU survivors at highest readmission risk. Population cohort study linking registry data for patients discharged from general ICUs in Scotland (2005-2013). Independent risk factors for 90-day readmission and discriminant ability (c-index) of groups of variables were identified using multivariable logistic regression. Derivation and validation risk prediction models were constructed using a time-based split. Of 55 975 ICU survivors, 24.1% (95%CI 23.7% to 24.4%) had unplanned 90-day readmission. Pre-existing health factors were fair discriminators of readmission (c-index 0.63, 95% CI 0.63 to 0.64) but better than acute illness factors (0.60) or demographics (0.54). In a subgroup of those with no comorbidity, acute illness factors (0.62) were better discriminators than pre-existing health factors (0.56). Overall model performance and calibration in the validation cohort was fair (0.65, 95% CI 0.64 to 0.66) but did not perform sufficiently well as a screening tool, demonstrating high false-positive/false-negative rates at clinically relevant thresholds. Unplanned 90-day hospital readmission is common. Pre-existing illness indices are better predictors of readmission than acute illness factors. Identifying additional patient-centred drivers of readmission may improve risk prediction models. Improved understanding of risk factors that are amenable to intervention could improve the clinical and cost-effectiveness of post-ICU care and rehabilitation. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights
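The c-indices reported above have a simple pairwise interpretation: the probability that the model assigns a higher risk score to a randomly chosen readmitted patient than to a randomly chosen non-readmitted one, with ties counting half. A self-contained sketch of that definition (the scores and labels below are toy data, not the study's):

```python
def c_statistic(scores, labels):
    """Probability that a randomly chosen positive case (label 1) is scored
    higher than a randomly chosen negative case (label 0); ties count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    concordant = sum(1.0 if p > n else 0.5 if p == n else 0.0
                     for p in pos for n in neg)
    return concordant / (len(pos) * len(neg))

# Toy risk scores: readmitted patients (label 1) tend to score higher.
scores = [0.9, 0.8, 0.6, 0.55, 0.4, 0.2]
labels = [1,   1,   0,   1,    0,   0]
print(round(c_statistic(scores, labels), 3))  # 0.889
```

On this reading, the study's overall c-index of 0.65 means the model ranks a readmitted patient above a non-readmitted one only about two-thirds of the time, which is why it performs poorly as a screening tool.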
Agrawal, Swastik; Sharma, Surendra Kumar; Sreenivas, Vishnubhatla; Lakshmy, Ramakrishnan; Mishra, Hemant K
Syndrome Z is the occurrence of metabolic syndrome (MS) with obstructive sleep apnea. Knowledge of its risk factors is useful to screen patients requiring further evaluation for syndrome Z. Consecutive patients referred from the sleep clinic undergoing polysomnography in the Sleep Laboratory of AIIMS Hospital, New Delhi, were screened between June 2008 and May 2010, and 227 patients were recruited. Anthropometry, body composition analysis, blood pressure, fasting blood sugar, and lipid profile were measured. MS was defined using the National Cholesterol Education Program (Adult Treatment Panel III) criteria, with Asian cutoff values for abdominal obesity. Prevalence of MS and syndrome Z was 74% and 65%, respectively. Age, percent body fat, excessive daytime sleepiness (EDS), and ΔSaO2 (defined as the difference between baseline and minimum SaO2 during polysomnography) were independently associated with syndrome Z. Using a cutoff of 15% for level of desaturation, the stepped predictive score using these risk factors had a sensitivity, specificity, positive predictive value, and negative predictive value of 75%, 73%, 84%, and 61%, respectively, for the diagnosis of syndrome Z. It correctly characterized the presence of syndrome Z 75% of the time and obviated the need for detailed evaluation in 42% of the screened subjects. A large proportion of patients presenting to sleep clinics have MS and syndrome Z. Age, percent body fat, EDS, and ΔSaO2 are independent risk factors for syndrome Z. A stepped predictive score using these parameters is cost-effective and useful in diagnosing syndrome Z in resource-limited settings.
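The reported predictive values (PPV 84%, NPV 61%) are consistent with the quoted sensitivity (75%), specificity (73%), and the 65% prevalence of syndrome Z, since PPV and NPV follow from these three quantities by Bayes' rule. A quick check:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """PPV and NPV implied by sensitivity, specificity, and prevalence (Bayes' rule)."""
    tp = sensitivity * prevalence              # true-positive fraction of the population
    fp = (1 - specificity) * (1 - prevalence)  # false-positive fraction
    tn = specificity * (1 - prevalence)        # true-negative fraction
    fn = (1 - sensitivity) * prevalence        # false-negative fraction
    return tp / (tp + fp), tn / (tn + fn)

# Figures from the abstract: sensitivity 75%, specificity 73%, prevalence 65%.
ppv, npv = predictive_values(0.75, 0.73, 0.65)
print(round(ppv, 2), round(npv, 2))  # 0.84 0.61
```

This also illustrates why the NPV is modest despite decent specificity: with a 65% prevalence, a negative score still leaves a substantial residual probability of syndrome Z.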
Background: Drug prescribing errors are frequent in the hospital setting, and pharmacists play an important role in detecting these errors. The objectives of this study are (1) to describe the drug prescribing error rate during the patient's stay, and (2) to find which characteristics of a prescribing error are most predictive of its reproduction the next day despite a pharmacist's alert (i.e., of the alert being overridden). Methods: We prospectively collected all medication order lines and prescribing errors during 18 days in 7 medical wards using computerized physician order entry. We described and modelled the error rate according to the chronology of the hospital stay. We performed a classification and regression tree analysis to find which characteristics of alerts were predictive of their overriding (i.e., of the prescribing error being repeated). Results: 12,533 order lines were reviewed; 117 errors (error rate 0.9%) were observed, and 51% of these errors occurred on the first day of the hospital stay. The risk of a prescribing error decreased over time. 52% of the alerts were overridden (i.e., the error was left uncorrected by prescribers on the following day). Drug omissions were the errors most frequently taken into account by prescribers. The classification and regression tree analysis showed that overriding pharmacists' alerts is related first to the ward of the prescriber, and then to either the Anatomical Therapeutic Chemical class of the drug or the type of error. Conclusions: Since 51% of prescribing errors occurred on the first day of stay, pharmacists should concentrate their analysis of drug prescriptions on this day. The differences in overriding behavior between wards, and according to the drug's Anatomical Therapeutic Chemical class or the type of error, could also guide validation tasks and the programming of electronic alerts.
Benbenishty, Rami; Jedwab, Merav; Chen, Wendy; Glasser, Saralee; Slutzky, Hanna; Siegal, Gil; Lavi-Sahar, Zohar; Lerner-Geva, Liat
This study examines judgments made by hospital-based child protection teams (CPTs) when determining if there is reasonable suspicion that a child has been maltreated, and whether to report the case to a community welfare agency, to child protective services (CPS) and/or to the police. A prospective multi-center study of all 968 consecutive cases referred to CPTs during 2010-2011 in six medical centers in Israel. Centers were purposefully selected to represent the heterogeneity of medical centers in Israel in terms of size, geographical location and population characteristics. A structured questionnaire was designed to capture relevant information and judgments on each child referred to the team. Bivariate associations and multivariate multinomial logistic regressions were conducted to predict whether the decisions would be (a) to close the case, (b) to refer the case to community welfare services, or (c) to report it to CPS and/or the police. Bivariate and multivariate analyses identified a large number of case characteristics associated with higher probability of reporting to CPS/police or of referral to community welfare services. Case characteristics associated with the decisions include socio-demographic (e.g., ethnicity and financial status), parental functioning (e.g., mental health), previous contacts with authorities and hospital, current referral characteristics (e.g., parental referral vs. child referral), physical findings, and suspicious behaviors of child and parent. Most of the findings suggest that decisions of CPTs are based on indices that have strong support in the professional literature. Existing heterogeneity between cases, practitioners and medical centers had an impact on the overall predictability of the decision to report. Attending to collaboration between hospitals and community agencies is suggested to support learning and quality improvement. Copyright © 2013 Elsevier Ltd. All rights reserved.
Baek, Myoung-Ha; Heo, Young-Ran
Malnutrition in the elderly is a serious problem, prevalent in both hospitals and care homes. In the absence of a gold standard for malnutrition, we herein evaluate the efficacy of five nutritional screening tools developed for or used in the elderly. Selected medical records of 141 elderly patients (86 men and 55 women, aged 73.5 ± 5.2 years) hospitalized at a geriatric care hospital were analyzed. Nutritional screening was performed using the following tools: Mini Nutritional Assessment (MNA), Mini Nutritional Assessment-Short Form (MNA-SF), Geriatric Nutritional Risk Index (GNRI), Malnutrition Universal Screening Tool (MUST), and Nutritional Risk Screening 2002 (NRS 2002). A combined index for malnutrition was also calculated as a reference tool. Each patient evaluated as malnourished to any degree or at risk of malnutrition according to at least four of the five aforementioned tools was categorized as malnourished in the combined index classification. According to the combined index, 44.0% of the patients were at risk of malnutrition to some degree, while the prevalence of nutritional risk and/or malnutrition varied greatly depending on the tool applied, ranging from 36.2% (MUST) to 72.3% (MNA-SF). MUST showed good validity (sensitivity 80.6%, specificity 98.7%) and almost perfect agreement (k = 0.81) with the combined index. In contrast, MNA-SF showed poor validity (sensitivity 100%, specificity 49.4%) and only moderate agreement (k = 0.46) with the combined index. MNA-SF was found to overestimate nutritional risk in the elderly. MUST appeared to be the most valid and useful screening tool to predict malnutrition in the elderly at a geriatric care hospital.
Ruiz-Castilla, Mireia; Bosacoma, Pau; Dos Santos, Bruce; Baena, Jacinto; Guilabert, Patricia; Marin-Corral, Judith; Masclans, Joan R; Roca, Oriol; Barret, Juan P
The IL33/ST2 pathway has been implicated in the pathogenesis of different inflammatory diseases. Our aim was to analyze whether plasma levels of biomarkers involved in the IL33/ST2 axis might help to predict mortality in burn patients. Single-center prospective observational cohort pilot study performed at the Burns Unit of the Plastic and Reconstructive Surgery Department of the Vall d'Hebron University Hospital (Barcelona). All patients aged ≥18 years with second- or third-degree burns requiring admission to the Burns Unit were considered for inclusion. Blood samples were taken to measure levels of interleukin (IL) 6, IL8, IL33, and soluble suppression of tumorigenicity-2 (sST2) within 24 h of admission to the Burns Unit and at day 3. Results are expressed as medians and interquartile ranges or as frequencies and percentages. Sixty-nine patients (58 [84.1%] male, median age 52 [35-63] years, total body surface area burned 21% [13%-30%], Abbreviated Burn Severity Index 6 [4-8]) were included. Thirteen (18.8%) died in the Burns Unit. Plasma levels of sST2 measured at day 3 after admission demonstrated the best prediction accuracy for survival (area under the ROC curve 0.85 [0.71-0.99]; P < 0.001), with a best cutoff point estimated at 2,561. In the Cox proportional hazards model, after adjusting for potential confounders, a plasma sST2 level ≥2,561 measured at day 3 was significantly associated with mortality (HR 6.94 [1.73-27.74]; P = 0.006). Plasma sST2 at day 3 predicts hospital mortality in burn patients.
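One common way a "best cutoff point" like the sST2 threshold above is derived from a ROC analysis is by maximizing Youden's J statistic (sensitivity + specificity - 1) over candidate thresholds. The abstract does not state which criterion the authors used, and the values below are invented, so this is only an illustrative sketch:

```python
# Sketch: biomarker cutoff selection by maximising Youden's J.
# Hypothetical sST2 values and outcomes, not the study's data.

def best_cutoff(values, died):
    best = None
    for cut in sorted(set(values)):
        tp = sum(1 for v, d in zip(values, died) if v >= cut and d)
        fn = sum(1 for v, d in zip(values, died) if v < cut and d)
        tn = sum(1 for v, d in zip(values, died) if v < cut and not d)
        fp = sum(1 for v, d in zip(values, died) if v >= cut and not d)
        j = tp / (tp + fn) + tn / (tn + fp) - 1   # Youden's J
        if best is None or j > best[1]:
            best = (cut, j)
    return best

sst2 = [900, 1500, 2100, 2600, 3100, 3900, 800, 1200]
died = [0,   0,    0,    1,    1,    1,    0,   0]
print(best_cutoff(sst2, died))  # → (2600, 1.0)
```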
Rosenman, Stephen; Rodgers, Bryan
To explore how recalled childhood adversity affects trait measures of personality in three age cohorts of an Australian adult population and to examine the effects of particular adversities on adult personality traits. A total of 7485 randomly selected subjects in the age bands of 20-24, 40-44 and 60-64 years were interviewed at the outset of a longitudinal community study of psychological health in the Canberra region of Australia. In the initial interview, subjects answered 17 questions about domestic adversity and three questions on positive aspects of upbringing to age 16 years. Personality traits were measured by Eysenck Personality Questionnaire, Behavioural Activation and Inhibition Scales, Positive and Negative Affect Scales and a measure of dissocial behaviours. Higher levels of childhood adversity substantially increase the risk of high neuroticism (OR = 2.6) and negative affect (OR = 2.6), less for behavioural inhibition (OR = 1.7) and for dissocial behaviour (OR = 1.7). No significant effect is seen for extraversion, psychoticism or behavioural activation. Age and gender had little effect on the pattern of risk. Maternal depression has significant and substantial independent effects on measures of neuroticism and negative affect as well as most other measures of personality. Childhood domestic adversity has substantial associations with clinically important aspects of personality: neuroticism and negative affect. Only small effects are seen on behavioural inhibition and dissocial behaviour, and no significant effect on extraversion and behavioural activation. These unexpected findings contradict clinical belief. Maternal psychological ill-health is pre-eminent among adversities predicting later disadvantageous traits, even for those traits that had only the slightest association with childhood adversity. Consequences of childhood adversity prevail throughout the lifespan in men and women equally. The study underlines the importance of childhood domestic
Ghanizadeh, Ghader; Mirmohamadlou, Ali; Esmaeli, Davoud
Occurrence of Legionella pneumophila can be related to installation age and the presence of heterotrophic plate counts (HPCs). This research describes L. pneumophila contamination of hospital water in relation to installation age and the presence of HPCs. One hundred and fifty samples were collected from hot and cold water systems and cultured on R2A and BCYE agar. L. pneumophila identification was done via specific biochemical tests. HPCs and L. pneumophila were detected in 96% and 37.3% of the samples, respectively. The mean HPC density was 947 ± 998 CFU/ml; 52% of the samples had densities higher than 500 CFU/ml. High densities of HPCs (>500 CFU/ml) led to colonization by L. pneumophila (≥1000 CFU/ml), mainly observed in cooling systems and the gynecological, sonography, and NICU wards. The chi-squared test demonstrated that higher HPC densities (>500 CFU/ml) and L. pneumophila contamination were more frequent in cold water than in hot water (OR: 2.3 and 1.49, respectively). Univariate regression indicated a significant association between HPC density and installation age in L. pneumophila-positive versus -negative samples (OR = 1.1), a significant effect of installation age on L. pneumophila occurrence, and a significant correlation of contamination with installation age (r_s = 0.33). Thus, both HPCs and installation age are relevant; plumbing system renovation with appropriate materials and promotion of effective measures for hospital water quality assurance are highly recommended.
Background: In addition to close contact between patients and medical staff, the contamination of surfaces plays an important role in the transmission of pathogens such as vancomycin-resistant enterococci (VRE). Mathematical modeling is a very convenient tool for hospital infection control, as it allows quantitative prediction of the effects of specific hygiene and control interventions. Methods: We present a compartmental model which describes the dynamics of transmission from patient to patient, also taking into account the interaction with medical staff and environmental contamination. Empirical data from a VRE outbreak in the onco-haematological unit at the University Medical Center Freiburg (Germany) were collected, with 100 consecutive admissions being followed up for 90 days. Stochastic simulations were used to predict the prevalence of patients colonised with VRE when at least one of the following interventions was introduced: hand hygiene, disinfection of surfaces, cohorting, screening, and antibiotic reduction. Results: Graphical figures show the temporal dynamics of several simulation scenarios. If no prevention or intervention is present, simulations based on transmission models predict an expected endemic prevalence per ward of 0.83 (95% CI: 0.66, 1.00) after the first infected person enters the unit. Interventions may reduce this prevalence, but only the combination of several interventions can control a VRE outbreak. Conclusions: The model predicts that only the combination of several interventions can control a VRE outbreak in this setting. The inclusion of environmental contamination improves the compartmental model and allows a prediction of the efficacy of surface disinfection. These results can be applied to other settings and will therefore help to understand and control the spread of nosocomial pathogens.
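The kind of stochastic compartmental simulation described can be caricatured in a few lines: colonised patients raise the daily acquisition probability of the others, and an intervention bundle is represented simply as a reduction of the transmission parameter. This is a deliberately crude sketch, not the paper's model: the staff and surface compartments are lumped into a single transmission rate, and every parameter value is invented.

```python
# Toy stochastic ward model of VRE colonisation prevalence over 90 days.
# All rates (beta, discharge) are hypothetical, not the paper's estimates.
import random

def simulate(days=90, beds=20, beta=0.08, discharge=0.1, seed=1):
    random.seed(seed)
    colonised = 1                        # index case enters the unit
    history = []
    for _ in range(days):
        frac = colonised / beds
        for _ in range(beds - colonised):          # daily acquisitions
            if random.random() < beta * frac:
                colonised += 1
        for _ in range(colonised):                 # discharges, replaced
            if random.random() < discharge and colonised > 1:
                colonised -= 1                     # by uncolonised admissions
        history.append(colonised / beds)
    return history

baseline = simulate(beta=0.08)
with_bundle = simulate(beta=0.02)  # e.g. hand hygiene + surface disinfection
print(round(baseline[-1], 2), round(with_bundle[-1], 2))
```

In the same spirit as the paper, the effect of an intervention is read off as the change in the simulated prevalence trajectory rather than from a closed-form expression.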
Cook, John T; Black, Maureen; Chilton, Mariana; Cutts, Diana; Ettinger de Cuba, Stephanie; Heeren, Timothy C; Rose-Jacobs, Ruth; Sandel, Megan; Casey, Patrick H; Coleman, Sharon; Weiss, Ingrid; Frank, Deborah A
This review addresses epidemiological, public health, and social policy implications of categorizing young children and their adult female caregivers in the United States as food secure when they live in households with "marginal food security," as indicated by the U.S. Household Food Security Survey Module. Existing literature shows that households in the US with marginal food security are more like food-insecure households than food-secure households. Similarities include socio-demographic characteristics, psychosocial profiles, and patterns of disease and health risk. Building on existing knowledge, we present new research on associations of marginal food security with health and developmental risks in young children and their caregivers. Marginal food security is positively associated with adverse health outcomes compared with food security, but the strength of the associations is weaker than that for food insecurity as usually defined in the US. Nonoverlapping CIs, when comparing odds of marginally food-secure children's fair/poor health and developmental risk and caregivers' depressive symptoms and fair/poor health with those in food-secure and -insecure families, indicate that associations of marginal food security are significantly and distinctly intermediate between those of food security and food insecurity. Evidence from the reviewed research and the new research presented indicates that households with marginal food security should not be classified as food secure, as is current practice, but should be reported in a separate discrete category. These findings highlight the potential underestimation of the prevalence of adverse health outcomes associated with exposure to lack of enough food for an active, healthy life in the US, and indicate an even greater need for preventive action and policies to limit and reduce exposure among children and mothers.
Muhammad Imran Hasan Khan
Introduction: Dengue virus (DENV) affects over half the world's population across 112 countries, and dengue fever (DF) is the second largest arthropod-borne infectious global hazard after malaria, with complications such as Dengue Hemorrhagic Fever (DHF) and Dengue Shock Syndrome (DSS) accounting for significant morbidity and mortality worldwide. Pakistan is significantly affected by DENV infection, and to date no study identifying risk factors associated with complications of DF has been done. Methods: 997 confirmed cases of DF were collected in a tertiary care hospital in Lahore, Pakistan, and their clinical and biochemical data were recorded. Univariate, multivariate, and logistic regression analyses were performed to identify risk factors associated with the development of DHF and DSS. Results: Bleeding (OR 70.7, CI 38.4-129.9), deranged liver function tests (OR 1.9, CI 0.97-0.99), platelet count on admission less than 50,000 × 10⁹/L (OR 0.16, CI 0.13-0.19), presence of urinary red blood cells (OR 1.4, CI 0.179-0.900), and presence of urinary protein (OR 1.1, CI 0.191-0.974) were related to the development of DHF and DSS.
Park, Juyoung; Kang, Kyungtae
Telecardiology provides mobility for patients who require constant electrocardiogram (ECG) monitoring. However, its safety is dependent on the predictability and robustness of data delivery, which must overcome errors in the wireless channel through which the ECG data are transmitted. We report here a framework that can be used to gauge the applicability of IEEE 802.11 wireless local area network (WLAN) technology to ECG monitoring systems in terms of delay constraints and transmission reliability. For this purpose, a medical-grade WLAN architecture achieved predictable delay through the combination of a medium access control mechanism based on the point coordination function provided by IEEE 802.11 and an error control scheme based on Reed-Solomon coding and block interleaving. The size of the jitter buffer needed was determined by this architecture to avoid service dropout caused by buffer underrun, through analysis of variations in transmission delay. Finally, we assessed this architecture in terms of service latency and reliability by modeling the transmission of uncompressed two-lead electrocardiogram data from the MIT-BIH Arrhythmia Database and highlight the applicability of this wireless technology to telecardiology.
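The role of block interleaving in the error-control scheme above is to spread a burst of channel errors across several Reed-Solomon codewords, so that each codeword sees only the small number of symbol errors it can correct. A toy row/column interleaver illustrates the mechanism; the matrix dimensions and the 3-symbol burst are illustrative, not the parameters of the described architecture:

```python
# Sketch: block interleaving. Symbols are written row-wise and read
# column-wise; the inverse permutation restores order at the receiver.

def interleave(symbols, rows, cols):
    assert len(symbols) == rows * cols
    # write row-wise, read column-wise
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    # the inverse permutation: swap the roles of rows and columns
    return interleave(symbols, cols, rows)

data = list(range(12))                 # 12 ECG data symbols, 3 codewords of 4
sent = interleave(data, rows=3, cols=4)
# a 3-symbol burst error hits the channel...
corrupted = sent[:4] + ["X"] * 3 + sent[7:]
received = deinterleave(corrupted, rows=3, cols=4)
# ...but after deinterleaving the errors land in 3 different codewords
print(received)  # → [0, 1, 'X', 3, 4, 'X', 6, 7, 8, 'X', 10, 11]
```

Each row of four symbols (one codeword) now contains a single error, which a Reed-Solomon code with modest redundancy can correct; without interleaving, the burst would have wiped out three symbols of one codeword.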
Product Description: As a means to increase the efficiency of chemical safety assessment, there is an interest in using data from molecular and cellular bioassays, conducted in a highly automated fashion using modern robotics, to predict toxicity in humans and wildlife. The prese...
Walby, Fredrik A; Odegaard, Erik; Mehlum, Lars
To investigate the differential impact of DSM-IV axis-I and axis-II disorders on completed suicide, and to study whether psychiatric comorbidity increases the risk of suicide in currently and previously hospitalized psychiatric patients. A nested case-control design based on case notes from 136 suicides and 166 matched controls. All cases and controls were rediagnosed using the SCID-CV for axis-I and the DSM-IV criteria for axis-II disorders, and inter-rater reliability was satisfactory. Raters were blind to case and control status and to the original hospital diagnoses. Depressive disorders and bipolar disorders were associated with an increased risk of suicide. No such effect was found for comorbidity between axis-I disorders or for comorbidity between axis-I and axis-II disorders. Psychiatric diagnoses, although made using a structured and criteria-based approach, were based on information recorded in case notes. Axis-II comorbidity could only be investigated at an aggregated level. Psychiatric comorbidity did not predict suicide in this sample. Mood disorders did, however, increase the risk significantly, independent of a history of previous suicide attempts. Both findings can inform the identification and treatment of patients at high risk for completed suicide.
OBJECTIVES: Although carbon monoxide poisoning is a major medical emergency, the armamentarium of recognized prognostic biomarkers displays unsatisfactory diagnostic performance for predicting cumulative endpoints. METHODS: We performed a retrospective, observational study to identify all patients admitted for carbon monoxide poisoning during a 2-year period. Complete demographic and clinical information, along with laboratory data regarding arterial carboxyhemoglobin, hemoglobin, blood lactate, and total serum bilirubin, was retrieved. RESULTS: The study population consisted of 38 poisoned patients (23 females and 15 males; mean age 39±21 years). Compared with discharged subjects, hospitalized patients displayed significantly higher values of blood lactate and total serum bilirubin, whereas arterial carboxyhemoglobin and hemoglobin did not differ. In univariate analysis, hospitalization was significantly associated with blood lactate and total serum bilirubin, but not with age, sex, hemoglobin, or carboxyhemoglobin. The diagnostic performance obtained after combining the blood lactate and total serum bilirubin results (area under the curve, 0.90; 95% CI, 0.81-0.99; p<0.001) was better than that obtained for either parameter alone. CONCLUSION: Although it remains unclear whether total serum bilirubin acts as an active player or a bystander, we conclude that the systematic assessment of bilirubin may, alongside lactate levels, provide useful information for clinical decision making in carbon monoxide poisoning.
Carr, Daniel F; Pirmohamed, Munir
Adverse drug reactions can be caused by a wide range of therapeutics. They affect many organ systems and vary widely in severity. Milder adverse drug reactions often resolve quickly following withdrawal of the causal drug, or sometimes after dose reduction. Some adverse drug reactions are severe and lead to significant organ/tissue injury, which can be fatal. Adverse drug reactions also represent a financial burden to both healthcare providers and the pharmaceutical industry. Thus, a number of stakeholders would benefit from the development of new, robust biomarkers for the prediction, diagnosis, and prognostication of adverse drug reactions. There has been significant recent progress in identifying predictive genomic biomarkers with the potential to be used in clinical settings to reduce the burden of adverse drug reactions. These have included biomarkers that can be used to alter drug dose (for example, thiopurine methyltransferase (TPMT) and azathioprine dose) and drug choice. The latter have in particular included human leukocyte antigen (HLA) biomarkers, which identify susceptibility to immune-mediated injuries to major organs such as skin, liver, and bone marrow from a variety of drugs. This review covers the current state of the art with regard to genomic adverse drug reaction biomarkers. We also review circulating biomarkers that have the potential to be used for both diagnosis and prognosis, and that have the added advantage of providing mechanistic information. In the future, we will not be relying on single biomarkers (genomic/non-genomic), but on multiple biomarker panels, integrated through the application of different omics technologies, which will provide information on predisposition, early diagnosis, prognosis, and mechanisms. Impact statement: Genetic and circulating biomarkers present significant opportunities to personalize patient therapy to minimize the risk of adverse drug reactions, which are a significant health issue.
Abbasi, Moslem; Sadeghi, Hasan; Pirani, Zabih; Vatandoust, Leyla
Background: Nowadays, the prevalence of addictive behaviors among bipolar patients is considered a serious health threat by the World Health Organization. The aim of this study is to investigate the role of behavioral activation and inhibition systems in predicting addictive behaviors of male patients with bipolar disorder at the Roozbeh Psychiatric Hospital. Materials and Methods: This is a correlational study. The study population consisted of 80 male patients with bipolar disorder referred to the Roozbeh Psychiatric Hospital from psychiatric clinics in Tehran in 2014. To collect data, the international and comprehensive inventory diagnostic interview, the behavioral activation and inhibition systems scale, and the addictive behaviors scale were used. Results: The results showed a positive and significant relationship between behavioral activation systems and addictive behaviors (addictive eating, alcohol addiction, television addiction, cigarette addiction, mobile addiction, etc.). In addition, the correlation between behavioral inhibition systems and addictive behaviors (addictive eating, alcohol addiction, TV addiction, cigarette addiction, mobile addiction) was significantly negative. Finally, regression analysis showed that behavioral activation and inhibition systems could significantly predict 47% of the variance in addictive behaviors in patients with bipolar disorder. Conclusions: It can be said that patients with bipolar disorder use substances and addictive behaviors for enjoyment and as pleasure stimulants; they also use substances to suppress unpleasant stimuli and negative emotions. These results indicate that behavioral activation and inhibition systems have an important role in the incidence and exacerbation of addictive behaviors. Therefore, preventive interventions in this direction seem necessary. PMID:28194203
Background: New scoring systems, including the Rapid Emergency Medicine Score (REMS), the Mortality in Emergency Department Sepsis (MEDS) score, and the confusion, urea nitrogen, respiratory rate, blood pressure, 65 years and older (CURB-65) score, have been developed for emergency department (ED) use in various patient populations. The increasing use of early goal-directed therapy (EGDT) for the emergent treatment of sepsis introduces a growing population of patients in which the accuracy of these scoring systems has not been widely examined. Objectives: To evaluate the ability of the REMS, MEDS score, and CURB-65 score to predict mortality in septic patients treated with modified EGDT. Materials and Methods: Secondary analysis of data from prospectively identified patients treated with modified EGDT in a large tertiary care suburban community hospital with over 85,000 ED visits annually and 700 inpatient beds, from May 2007 through May 2008. We included all patients with severe sepsis or septic shock who were treated with our modified EGDT protocol. Our major outcome was in-hospital mortality. The performance of the scores was compared by the area under the ROC curves (AUCs).