WorldWideScience

Sample records for validated case-definition algorithm

  1. Establishment of Valid Laboratory Case Definition for Human Leptospirosis

    NARCIS (Netherlands)

    M.G.A. Goris (Marga); M.M.G. Leeflang (Mariska); K.R. Boer (Kimberly); M. Goeijenbier (Marco); E.C.M. van Gorp (Eric); J.F.P. Wagenaar (Jiri); R.A. Hartskeerl (Rudy)

    2011-01-01

Laboratory case definition of leptospirosis is scarcely defined by a solid evaluation that determines cut-off values in the tests that are applied. This study describes the process of determining optimal cut-off titers of laboratory tests for leptospirosis for a valid case definition of

  2. Validation of a Syndromic Case Definition for Detecting Emergency Department Visits Potentially Related to Marijuana.

    Science.gov (United States)

    DeYoung, Kathryn; Chen, Yushiuan; Beum, Robert; Askenazi, Michele; Zimmerman, Cali; Davidson, Arthur J

    Reliable methods are needed to monitor the public health impact of changing laws and perceptions about marijuana. Structured and free-text emergency department (ED) visit data offer an opportunity to monitor the impact of these changes in near-real time. Our objectives were to (1) generate and validate a syndromic case definition for ED visits potentially related to marijuana and (2) describe a method for doing so that was less resource intensive than traditional methods. We developed a syndromic case definition for ED visits potentially related to marijuana, applied it to BioSense 2.0 data from 15 hospitals in the Denver, Colorado, metropolitan area for the period September through October 2015, and manually reviewed each case to determine true positives and false positives. We used the number of visits identified by and the positive predictive value (PPV) for each search term and field to refine the definition for the second round of validation on data from February through March 2016. Of 126 646 ED visits during the first period, terms in 524 ED visit records matched ≥1 search term in the initial case definition (PPV, 92.7%). Of 140 932 ED visits during the second period, terms in 698 ED visit records matched ≥1 search term in the revised case definition (PPV, 95.7%). After another revision, the final case definition contained 6 keywords for marijuana or derivatives and 5 diagnosis codes for cannabis use, abuse, dependence, poisoning, and lung disease. Our syndromic case definition and validation method for ED visits potentially related to marijuana could be used by other public health jurisdictions to monitor local trends and for other emerging concerns.
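
The abstract does not list the final six keywords or five diagnosis codes, so the following is only a minimal sketch of how a keyword-plus-diagnosis-code screen and its PPV check against manual review could be implemented; the keywords, codes, and record fields are hypothetical placeholders.

```python
import re

KEYWORDS = ["marijuana", "cannabis", "thc"]          # placeholders, not the study's terms
DX_CODES = {"F12.10", "F12.20", "F12.90"}            # placeholders, not the study's codes

def matches_definition(visit: dict) -> bool:
    """Flag an ED visit if its free text matches a keyword or a diagnosis code matches."""
    text = " ".join([visit.get("chief_complaint", ""), visit.get("triage_notes", "")]).lower()
    keyword_hit = any(re.search(r"\b" + re.escape(k) + r"\b", text) for k in KEYWORDS)
    code_hit = bool(DX_CODES & set(visit.get("dx_codes", [])))
    return keyword_hit or code_hit

def positive_predictive_value(flagged_ids, manual_review) -> float:
    """manual_review maps visit_id -> True (true positive) or False (false positive)."""
    true_pos = sum(manual_review[v] for v in flagged_ids)
    return true_pos / len(flagged_ids) if flagged_ids else float("nan")
```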

  3. Validation and optimisation of an ICD-10-coded case definition for sepsis using administrative health data

    Science.gov (United States)

    Jolley, Rachel J; Jetté, Nathalie; Sawka, Keri Jo; Diep, Lucy; Goliath, Jade; Roberts, Derek J; Yipp, Bryan G; Doig, Christopher J

    2015-01-01

    Objective Administrative health data are important for health services and outcomes research. We optimised and validated in intensive care unit (ICU) patients an International Classification of Disease (ICD)-coded case definition for sepsis, and compared this with an existing definition. We also assessed the definition's performance in non-ICU (ward) patients. Setting and participants All adults (aged ≥18 years) admitted to a multisystem ICU with general medicosurgical ICU care from one of three tertiary care centres in the Calgary region in Alberta, Canada, between 1 January 2009 and 31 December 2012 were included. Research design Patient medical records were randomly selected and linked to the discharge abstract database. In ICU patients, we validated the Canadian Institute for Health Information (CIHI) ICD-10-CA (Canadian Revision)-coded definition for sepsis and severe sepsis against a reference standard medical chart review, and optimised this algorithm through examination of other conditions apparent in sepsis. Measures Sensitivity (Sn), specificity (Sp), positive predictive value (PPV) and negative predictive value (NPV) were calculated. Results Sepsis was present in 604 of 1001 ICU patients (60.4%). The CIHI ICD-10-CA-coded definition for sepsis had Sn (46.4%), Sp (98.7%), PPV (98.2%) and NPV (54.7%); and for severe sepsis had Sn (47.2%), Sp (97.5%), PPV (95.3%) and NPV (63.2%). The optimised ICD-coded algorithm for sepsis increased Sn by 25.5% and NPV by 11.9% with slightly lowered Sp (85.4%) and PPV (88.2%). For severe sepsis both Sn (65.1%) and NPV (70.1%) increased, while Sp (88.2%) and PPV (85.6%) decreased slightly. Conclusions This study demonstrates that sepsis is highly undercoded in administrative data, thus under-ascertaining the true incidence of sepsis. The optimised ICD-coded definition has a higher validity with higher Sn and should be preferentially considered if used for surveillance purposes. PMID:26700284
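
The validity indices reported above (sensitivity, specificity, PPV, NPV) all come from the same 2x2 comparison of the ICD-coded definition against chart review; a minimal sketch of that calculation, assuming paired boolean labels:

```python
def validity_indices(coded: list[bool], chart: list[bool]) -> dict:
    """coded: ICD-based case definition; chart: chart-review reference standard."""
    tp = sum(c and r for c, r in zip(coded, chart))
    fp = sum(c and not r for c, r in zip(coded, chart))
    fn = sum((not c) and r for c, r in zip(coded, chart))
    tn = sum((not c) and (not r) for c, r in zip(coded, chart))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```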

  4. Validation of clinical case definition of acute intussusception in infants in Viet Nam and Australia.

    Science.gov (United States)

    Bines, Julie E; Liem, Nguyen Thanh; Justice, Frances; Son, Tran Ngoc; Carlin, John B; de Campo, Margaret; Jamsen, Kris; Mulholland, Kim; Barnett, Peter; Barnes, Graeme L

    2006-07-01

To test the sensitivity and specificity of a clinical case definition of acute intussusception in infants to assist health-care workers in settings where diagnostic facilities are not available. Prospective studies were conducted at a major paediatric hospital in Viet Nam (the National Hospital of Pediatrics, Hanoi) from November 2002 to December 2003 and in Australia (the Royal Children's Hospital, Melbourne) from March 2002 to March 2004 using a clinical case definition of intussusception. Diagnosis of intussusception was confirmed by air enema or surgery and validated in a subset of participants by an independent clinician who was blinded to the participant's status. Sensitivity of the definition was evaluated in 584 infants with confirmed intussusception, and specificity in infants with clinical features consistent with intussusception but for whom another diagnosis was established (234 infants in Hanoi; 404 in Melbourne). In both locations the definition used was sensitive (96% sensitivity in Hanoi; 98% in Melbourne) and specific (95% specificity in Hanoi; 87% in Melbourne) for intussusception among infants with sufficient data to allow classification (449/533 in Hanoi; 50/51 in Melbourne). Reanalysis of patients with missing data suggests that modifying minor criteria would increase the applicability of the definition while maintaining good sensitivity (96-97%) and specificity (83-89%). The clinical case definition was sensitive and specific for the diagnosis of acute intussusception in infants in both a developing country and a developed country, but minor modifications would enable it to be used more widely.

  5. Validation of a case definition to define hypertension using administrative data.

    Science.gov (United States)

    Quan, Hude; Khan, Nadia; Hemmelgarn, Brenda R; Tu, Karen; Chen, Guanmin; Campbell, Norm; Hill, Michael D; Ghali, William A; McAlister, Finlay A

    2009-12-01

    We validated the accuracy of case definitions for hypertension derived from administrative data across time periods (year 2001 versus 2004) and geographic regions using physician charts. Physician charts were randomly selected in rural and urban areas from Alberta and British Columbia, Canada, during years 2001 and 2004. Physician charts were linked with administrative data through unique personal health number. We reviewed charts of approximately 50 randomly selected patients >35 years of age from each clinic within 48 urban and 16 rural family physician clinics to identify physician diagnoses of hypertension during the years 2001 and 2004. The validity indices were estimated for diagnosed hypertension using 3 years of administrative data for the 8 case-definition combinations. Of the 3,362 patient charts reviewed, the prevalence of hypertension ranged from 18.8% to 33.3%, depending on the year and region studied. The administrative data hypertension definition of "2 claims within 2 years or 1 hospitalization" had the highest validity relative to the other definitions evaluated (sensitivity 75%, specificity 94%, positive predictive value 81%, negative predictive value 92%, and kappa 0.71). After adjustment for age, sex, and comorbid conditions, the sensitivities between regions, years, and provinces were not significantly different, but the positive predictive value varied slightly across geographic regions. These results provide evidence that administrative data can be used as a relatively valid source of data to define cases of hypertension for surveillance and research purposes.
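
A minimal sketch of the best-performing definition above ("2 claims within 2 years or 1 hospitalization") applied to one patient's records; the hypertension code set and record layout are illustrative assumptions, not the study's specification:

```python
from datetime import timedelta

HTN_CODES = {"401", "402", "403", "404", "405", "I10", "I11", "I12", "I13", "I15"}  # illustrative

def is_hypertension_case(claims: list[dict], hospitalizations: list[dict]) -> bool:
    # One hospitalization with a hypertension diagnosis is sufficient.
    if any(set(h["dx_codes"]) & HTN_CODES for h in hospitalizations):
        return True
    # Otherwise require two physician claims within a two-year window.
    claim_dates = sorted(c["service_date"] for c in claims
                         if set(c["dx_codes"]) & HTN_CODES)
    return any(b - a <= timedelta(days=730)
               for a, b in zip(claim_dates, claim_dates[1:]))
```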

  6. Analysis of risk factors for schizophrenia with two different case definitions: a nationwide register-based external validation study.

    Science.gov (United States)

    Sørensen, Holger J; Larsen, Janne T; Mors, Ole; Nordentoft, Merete; Mortensen, Preben B; Petersen, Liselotte

    2015-03-01

    Different case definitions of schizophrenia have been used in register based research. However, no previous study has externally validated two different case definitions of schizophrenia against a wide range of risk factors for schizophrenia. We investigated hazard ratios (HRs) for a wide range of risk factors for ICD-10 DCR schizophrenia using a nationwide Danish sample of 2,772,144 residents born in 1955-1997. We compared one contact only (OCO) (the case definition of schizophrenia used in Danish register based studies) with two or more contacts (TMC) (a case definition of at least 2 inpatient contacts with schizophrenia). During the follow-up, the OCO definition included 15,074 and the TMC 7562 cases; i.e. half as many. The TMC case definition appeared to select for a worse illness course. A wide range of risk factors were uniformly associated with both case definitions and only slightly higher risk estimates were found for the TMC definition. Choosing at least 2 inpatient contacts with schizophrenia (TMC) instead of the currently used case definition would result in almost similar risk estimates for many well-established risk factors. However, this would also introduce selection and include considerably fewer cases and reduce power of e.g. genetic studies based on register-diagnosed cases only. Copyright © 2015 Elsevier B.V. All rights reserved.
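
A minimal sketch of the two register-based definitions compared above, one contact only (OCO) versus at least two inpatient contacts (TMC); the contact record layout is a hypothetical assumption:

```python
def meets_oco(contacts: list[dict]) -> bool:
    """OCO: one or more contacts with a schizophrenia diagnosis."""
    return sum(c["schizophrenia_dx"] for c in contacts) >= 1

def meets_tmc(contacts: list[dict]) -> bool:
    """TMC: at least two inpatient contacts with a schizophrenia diagnosis."""
    return sum(c["schizophrenia_dx"] and c["inpatient"] for c in contacts) >= 2
```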

  7. Validation of two case definitions to identify pressure ulcers using hospital administrative data.

    Science.gov (United States)

    Ho, Chester; Jiang, Jason; Eastwood, Cathy A; Wong, Holly; Weaver, Brittany; Quan, Hude

    2017-08-28

Pressure ulcer development is a quality of care indicator, as pressure ulcers are potentially preventable. Yet pressure ulcers are a leading cause of morbidity, discomfort and additional healthcare costs for inpatients. Methods are lacking for accurate surveillance of pressure ulcers in hospitals to track occurrences and evaluate care improvement strategies. The main study aim was to validate the hospital discharge abstract database (DAD) in recording pressure ulcers against nursing consult reports, and to calculate the prevalence of pressure ulcers in Alberta, Canada in the DAD. We hypothesised that a more inclusive case definition for pressure ulcers would enhance validity of cases identified in administrative data for research and quality improvement purposes. A cohort of patients with pressure ulcers was identified from enterostomal (ET) nursing consult documents at a large university hospital in 2011. There were 1217 patients with pressure ulcers in ET nursing documentation whose records were linked to a corresponding record in the DAD to validate the DAD for correct and accurate identification of pressure ulcer occurrence, using two case definitions for pressure ulcer. Using pressure ulcer definition 1 (7 codes), prevalence was 1.4%, and using definition 2 (29 codes), prevalence was 4.2% after adjusting for misclassifications. The results were lower than expected. Definition 1 sensitivity was 27.7% and specificity was 98.8%, while definition 2 sensitivity was 32.8% and specificity was 95.9%. Pressure ulcer occurrence in both the DAD and ET consultation increased with age, number of comorbidities and length of stay. The DAD underestimates pressure ulcer prevalence. Since various codes are used to record pressure ulcers in the DAD, the case definition with more codes captures more pressure ulcer cases, and may be useful for monitoring facility trends. However, low sensitivity suggests that this data source may not be accurate for determining overall prevalence, and should be cautiously compared with other

  8. Validation of clinical case definition of acute intussusception in infants in Viet Nam and Australia.

    OpenAIRE

    Bines, JE; Liem, NT; Justice, F; Son, TN; Carlin, JB; de Campo, M; Jamsen, K; Mulholland, K; Barnett, P; Barnes, GL

    2006-01-01

    OBJECTIVE: To test the sensitivity and specificity of a clinical case definition of acute intussusception in infants to assist health-care workers in settings where diagnostic facilities are not available. METHODS: Prospective studies were conducted at a major paediatric hospital in Viet Nam (the National Hospital of Pediatrics, Hanoi) from November 2002 to December 2003 and in Australia (the Royal Children's Hospital, Melbourne) from March 2002 to March 2004 using a clinical case definition ...

  9. Empirical Derivation and Validation of a Clinical Case Definition for Neuropsychological Impairment in Children and Adolescents.

    Science.gov (United States)

    Beauchamp, Miriam H; Brooks, Brian L; Barrowman, Nick; Aglipay, Mary; Keightley, Michelle; Anderson, Peter; Yeates, Keith O; Osmond, Martin H; Zemek, Roger

    2015-09-01

    Neuropsychological assessment aims to identify individual performance profiles in multiple domains of cognitive functioning; however, substantial variation exists in how deficits are defined and what cutoffs are used, and there is no universally accepted definition of neuropsychological impairment. The aim of this study was to derive and validate a clinical case definition rule to identify neuropsychological impairment in children and adolescents. An existing normative pediatric sample was used to calculate base rates of abnormal functioning on eight measures covering six domains of neuropsychological functioning. The dataset was analyzed by varying the range of cutoff levels [1, 1.5, and 2 standard deviations (SDs) below the mean] and number of indicators of impairment. The derived rule was evaluated by bootstrap, internal and external clinical validation (orthopedic and traumatic brain injury). Our neuropsychological impairment (NPI) rule was defined as "two or more test scores that fall 1.5 SDs below the mean." The rule identifies 5.1% of the total sample as impaired in the assessment battery and consistently targets between 3 and 7% of the population as impaired even when age, domains, and number of tests are varied. The NPI rate increases in groups known to exhibit cognitive deficits. The NPI rule provides a psychometrically derived method for interpreting performance across multiple tests and may be used in children 6-18 years. The rule may be useful to clinicians and scientists who wish to establish whether specific individuals or clinical populations present within expected norms versus impaired function across a battery of neuropsychological tests.
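
A minimal sketch of the NPI rule as stated above ("two or more test scores that fall 1.5 SDs below the mean"), assuming scores are supplied as z-scores:

```python
def neuropsychological_impairment(z_scores: list[float],
                                  cutoff_sd: float = 1.5,
                                  min_indicators: int = 2) -> bool:
    """True if at least `min_indicators` scores lie `cutoff_sd` SDs or more below the mean."""
    low_scores = sum(z <= -cutoff_sd for z in z_scores)
    return low_scores >= min_indicators

# Example: two of five scores at or below -1.5 SD meet the rule.
print(neuropsychological_impairment([-1.6, 0.2, -2.0, 0.5, 1.1]))  # True
```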

  10. Evaluation of surveillance case definition in the diagnosis of leptospirosis, using the Microscopic Agglutination Test: a validation study.

    Science.gov (United States)

    Dassanayake, Dinesh L B; Wimalaratna, Harith; Agampodi, Suneth B; Liyanapathirana, Veranja C; Piyarathna, Thibbotumunuwe A C L; Goonapienuwala, Bimba L

    2009-04-22

Leptospirosis is endemic in both urban and rural areas of Sri Lanka and there have been many outbreaks in the recent past. This study was aimed at validating the leptospirosis surveillance case definition, using the Microscopic Agglutination Test (MAT). The study population consisted of patients with undiagnosed acute febrile illness who were admitted to the medical wards of the Teaching Hospital Kandy, from 1st July 2007 to 31st July 2008. The subjects were screened to diagnose leptospirosis according to the leptospirosis case definition. MAT was performed on blood samples taken from each patient on the 7th day of fever. The leptospirosis case definition was evaluated with regard to sensitivity, specificity and predictive values, using a MAT titre ≥ 1:800 for confirming leptospirosis. A total of 123 patients were initially recruited, of which 73 had clinical features compatible with the surveillance case definition. Of the 73, only 57 had a positive MAT result (true positives), leaving 16 as false positives. Of the 50 who did not have clinical features compatible with the case definition, 45 had a negative MAT as well (true negatives); therefore 5 were false negatives. The total number of MAT positives was 62 out of 123. According to these results the test sensitivity was 91.94%, specificity 73.77%, and the positive and negative predictive values were 78.08% and 90% respectively. Diagnostic accuracy of the test was 82.93%. This study confirms that the surveillance case definition has a very high sensitivity and negative predictive value with an average specificity in diagnosing leptospirosis, based on a MAT titre of ≥ 1:800.
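
The 2x2 counts given above (57 true positives, 16 false positives, 45 true negatives, 5 false negatives) reproduce the reported estimates; a quick check in Python:

```python
tp, fp, tn, fn = 57, 16, 45, 5
sensitivity = tp / (tp + fn)                 # 57/62   -> 91.94%
specificity = tn / (tn + fp)                 # 45/61   -> 73.77%
ppv = tp / (tp + fp)                         # 57/73   -> 78.08%
npv = tn / (tn + fn)                         # 45/50   -> 90.00%
accuracy = (tp + tn) / (tp + fp + tn + fn)   # 102/123 -> 82.93%
```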

  11. Validation of a published case definition for tuberculosis-associated immune reconstitution inflammatory syndrome.

    Science.gov (United States)

    Haddow, Lewis J; Moosa, Mahomed-Yunus S; Easterbrook, Philippa J

    2010-01-02

    To evaluate the International Network for the Study of HIV-associated IRIS (INSHI) case definitions for tuberculosis (TB)-associated immune reconstitution inflammatory syndrome (IRIS) in a South African cohort. Prospective cohort of 498 adult HIV-infected patients initiating antiretroviral therapy. Patients were followed up for 24 weeks and all clinical events were recorded. Events with TB-IRIS as possible cause were assessed by consensus expert opinion and INSHI case definition. Positive, negative, and chance-corrected agreement (kappa) were calculated, and reasons for disagreement were assessed. One hundred and two (20%) patients were receiving TB therapy at antiretroviral therapy initiation. Three hundred and thirty-three events were evaluated (74 potential paradoxical IRIS, 259 potential unmasking IRIS). Based on expert opinion, there were 18 cases of paradoxical IRIS associated with TB and/or other opportunistic disease. The INSHI criteria for TB-IRIS agreed in 13 paradoxical cases, giving positive agreement of 72.2%, negative agreement in 52/56 non-TB-IRIS events (92.9%), and kappa of 0.66. There were 19 unmasking TB-IRIS cases based on expert opinion, of which 12 were considered IRIS using the INSHI definition (positive agreement 63.2%). There was agreement in all 240 non-TB-IRIS events (negative agreement 100%) and kappa was 0.76. There was good agreement between the INSHI case definition for both paradoxical and unmasking TB-IRIS and consensus expert opinion. These results support the use of this definition in clinical and research practice, with minor caveats in its application.
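
A minimal sketch of the chance-corrected agreement (kappa) reported above for two binary raters, here the INSHI classification versus consensus expert opinion:

```python
def cohens_kappa(rater_a: list[bool], rater_b: list[bool]) -> float:
    """Cohen's kappa for two binary classifications of the same events."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_a, p_b = sum(rater_a) / n, sum(rater_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)
```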

  12. Systematic review of validated case definitions for diabetes in ICD-9-coded and ICD-10-coded data in adult populations.

    Science.gov (United States)

    Khokhar, Bushra; Jette, Nathalie; Metcalfe, Amy; Cunningham, Ceara Tess; Quan, Hude; Kaplan, Gilaad G; Butalia, Sonia; Rabi, Doreen

    2016-08-05

    With steady increases in 'big data' and data analytics over the past two decades, administrative health databases have become more accessible and are now used regularly for diabetes surveillance. The objective of this study is to systematically review validated International Classification of Diseases (ICD)-based case definitions for diabetes in the adult population. Electronic databases, MEDLINE and Embase, were searched for validation studies where an administrative case definition (using ICD codes) for diabetes in adults was validated against a reference and statistical measures of the performance reported. The search yielded 2895 abstracts, and of the 193 potentially relevant studies, 16 met criteria. Diabetes definition for adults varied by data source, including physician claims (sensitivity ranged from 26.9% to 97%, specificity ranged from 94.3% to 99.4%, positive predictive value (PPV) ranged from 71.4% to 96.2%, negative predictive value (NPV) ranged from 95% to 99.6% and κ ranged from 0.8 to 0.9), hospital discharge data (sensitivity ranged from 59.1% to 92.6%, specificity ranged from 95.5% to 99%, PPV ranged from 62.5% to 96%, NPV ranged from 90.8% to 99% and κ ranged from 0.6 to 0.9) and a combination of both (sensitivity ranged from 57% to 95.6%, specificity ranged from 88% to 98.5%, PPV ranged from 54% to 80%, NPV ranged from 98% to 99.6% and κ ranged from 0.7 to 0.8). Overall, administrative health databases are useful for undertaking diabetes surveillance, but an awareness of the variation in performance being affected by case definition is essential. The performance characteristics of these case definitions depend on the variations in the definition of primary diagnosis in ICD-coded discharge data and/or the methodology adopted by the healthcare facility to extract information from patient records. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  13. Validation of a case definition to define chronic dialysis using outpatient administrative data.

    Science.gov (United States)

    Clement, Fiona M; James, Matthew T; Chin, Rick; Klarenbach, Scott W; Manns, Braden J; Quinn, Robert R; Ravani, Pietro; Tonelli, Marcello; Hemmelgarn, Brenda R

    2011-03-01

Administrative health care databases offer an efficient and accessible, though as-yet unvalidated, approach to studying outcomes of patients with chronic kidney disease and end-stage renal disease (ESRD). The objective of this study is to determine the validity of outpatient physician billing derived algorithms for defining chronic dialysis compared to a reference standard ESRD registry. A cohort of incident dialysis patients (Jan. 1-Dec. 31, 2008) and prevalent chronic dialysis patients (Jan. 1, 2008) was selected from a geographically inclusive ESRD registry and administrative database. Four administrative data definitions were considered: at least 1 outpatient claim, at least 2 outpatient claims, at least 2 outpatient claims at least 90 days apart, and continuous outpatient claims at least 90 days apart with no gap in claims greater than 21 days. Measures of agreement of the four administrative data definitions were compared to a reference standard (ESRD registry). Basic patient characteristics are compared between all 5 patient groups. 1,118,097 individuals formed the overall population and 2,227 chronic dialysis patients were included in the ESRD registry. The three definitions requiring at least 2 outpatient claims resulted in kappa statistics between 0.60-0.80, indicating "substantial" agreement. "At least 1 outpatient claim" resulted in "excellent" agreement with a kappa statistic of 0.81. Of the four definitions, the simplest (at least 1 outpatient claim) performed comparably to the other definitions. A limitation of this work is that the billing codes used were developed in Canada; however, other countries use similar billing practices and thus the codes could easily be mapped to other systems. Our reference standard ESRD registry may not capture all dialysis patients, resulting in some misclassification. The registry is linked to on-going care so this is likely to be minimal. The definition utilized will vary with the research objective.
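
A minimal sketch of the four outpatient-claim definitions listed above, applied to one patient's chronic-dialysis claim dates; the input layout (a list of dates) is an assumption:

```python
from datetime import timedelta

def def_at_least_1(dates):                    # at least 1 outpatient claim
    return len(dates) >= 1

def def_at_least_2(dates):                    # at least 2 outpatient claims
    return len(dates) >= 2

def def_2_claims_90d_apart(dates):            # at least 2 claims >= 90 days apart
    dates = sorted(dates)
    return len(dates) >= 2 and (dates[-1] - dates[0]) >= timedelta(days=90)

def def_continuous_90d(dates, max_gap_days=21):
    # claims spanning at least 90 days with no gap between consecutive claims > 21 days
    dates = sorted(dates)
    if len(dates) < 2 or (dates[-1] - dates[0]) < timedelta(days=90):
        return False
    return all((b - a) <= timedelta(days=max_gap_days)
               for a, b in zip(dates, dates[1:]))
```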

  14. Validation of a case definition to define chronic dialysis using outpatient administrative data

    Directory of Open Access Journals (Sweden)

    Klarenbach Scott W

    2011-03-01

Background: Administrative health care databases offer an efficient and accessible, though as-yet unvalidated, approach to studying outcomes of patients with chronic kidney disease and end-stage renal disease (ESRD). The objective of this study is to determine the validity of outpatient physician billing derived algorithms for defining chronic dialysis compared to a reference standard ESRD registry. Methods: A cohort of incident dialysis patients (Jan. 1-Dec. 31, 2008) and prevalent chronic dialysis patients (Jan. 1, 2008) was selected from a geographically inclusive ESRD registry and administrative database. Four administrative data definitions were considered: at least 1 outpatient claim, at least 2 outpatient claims, at least 2 outpatient claims at least 90 days apart, and continuous outpatient claims at least 90 days apart with no gap in claims greater than 21 days. Measures of agreement of the four administrative data definitions were compared to a reference standard (ESRD registry). Basic patient characteristics are compared between all 5 patient groups. Results: 1,118,097 individuals formed the overall population and 2,227 chronic dialysis patients were included in the ESRD registry. The three definitions requiring at least 2 outpatient claims resulted in kappa statistics between 0.60-0.80, indicating "substantial" agreement. "At least 1 outpatient claim" resulted in "excellent" agreement with a kappa statistic of 0.81. Conclusions: Of the four definitions, the simplest (at least 1 outpatient claim) performed comparably to the other definitions. A limitation of this work is that the billing codes used were developed in Canada; however, other countries use similar billing practices and thus the codes could easily be mapped to other systems. Our reference standard ESRD registry may not capture all dialysis patients, resulting in some misclassification. The registry is linked to on-going care so this is likely to be minimal. The definition

  15. Validation of administrative and clinical case definitions for gestational diabetes mellitus against laboratory results.

    Science.gov (United States)

    Bowker, S L; Savu, A; Donovan, L E; Johnson, J A; Kaul, P

    2017-06-01

    To examine the validity of International Classification of Disease, version 10 (ICD-10) codes for gestational diabetes mellitus in administrative databases (outpatient and inpatient), and in a clinical perinatal database (Alberta Perinatal Health Program), using laboratory data as the 'gold standard'. Women aged 12-54 years with in-hospital, singleton deliveries between 1 October 2008 and 31 March 2010 in Alberta, Canada were included in the study. A gestational diabetes diagnosis was defined in the laboratory data as ≥2 abnormal values on a 75-g oral glucose tolerance test or a 50-g glucose screen ≥10.3 mmol/l. Of 58 338 pregnancies, 2085 (3.6%) met gestational diabetes criteria based on laboratory data. The gestational diabetes rates in outpatient only, inpatient only, outpatient or inpatient combined, and Alberta Perinatal Health Program databases were 5.2% (3051), 4.8% (2791), 5.8% (3367) and 4.8% (2825), respectively. Although the outpatient or inpatient combined data achieved the highest sensitivity (92%) and specificity (97%), it was associated with a positive predictive value of only 57%. The majority of the false-positives (78%), however, had one abnormal value on oral glucose tolerance test, corresponding to a diagnosis of impaired glucose tolerance in pregnancy. The ICD-10 codes for gestational diabetes in administrative databases, especially when outpatient and inpatient databases are combined, can be used to reliably estimate the burden of the disease at the population level. Because impaired glucose tolerance in pregnancy and gestational diabetes may be managed similarly in clinical practice, impaired glucose tolerance in pregnancy is often coded as gestational diabetes. © 2016 Diabetes UK.
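
A minimal sketch of the laboratory 'gold standard' described above (two or more abnormal values on a 75-g OGTT, or a 50-g screen of at least 10.3 mmol/l); the per-timepoint OGTT thresholds are not given above, so abnormality is passed in as precomputed flags:

```python
def meets_lab_gdm(ogtt_abnormal_flags: list[bool], screen_50g_mmol_l: float | None) -> bool:
    """Laboratory-defined gestational diabetes per the criteria summarised above."""
    two_plus_abnormal = sum(ogtt_abnormal_flags) >= 2
    screen_positive = screen_50g_mmol_l is not None and screen_50g_mmol_l >= 10.3
    return two_plus_abnormal or screen_positive
```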

  16. Validation of a case definition for leptospirosis diagnosis in patients with acute severe febrile disease admitted in reference hospitals at the State of Pernambuco, Brazil.

    Science.gov (United States)

    Albuquerque Filho, Alfredo Pereira Leite de; Araújo, Jéssica Guido de; Souza, Inacelli Queiroz de; Martins, Luciana Cardoso; Oliveira, Marta Iglis de; Silva, Maria Jesuíta Bezerra da; Montarroyos, Ulisses Ramos; Miranda Filho, Demócrito de Barros

    2011-01-01

Leptospirosis is often mistaken for other acute febrile illnesses because of its nonspecific presentation. Bacteriologic, serologic, and molecular methods have several limitations for early diagnosis: technical complexity, low availability, low sensitivity in early disease, or high cost. This study aimed to validate a case definition, based on simple clinical and laboratory tests, that is intended for bedside diagnosis of leptospirosis among hospitalized patients. Adult patients, admitted to two reference hospitals in Recife, Brazil, with a febrile illness of less than 21 days and with a clinical suspicion of leptospirosis, were included to test a case definition comprising ten clinical and laboratory criteria. Leptospirosis was confirmed or excluded by a composite reference standard (microscopic agglutination test, ELISA, and blood culture). Test properties were determined for each cutoff number of the criteria from the case definition. Ninety-seven patients were included; 75 had confirmed leptospirosis and 22 did not. The mean number of criteria from the case definition that were fulfilled was 7.8±1.2 for confirmed leptospirosis and 5.9±1.5 for non-leptospirosis patients (p<0.0001). The case definition, for a cutoff of at least 7 criteria, reached average sensitivity and specificity, but with a high positive predictive value. Its simplicity and low cost make it useful for rapid bedside leptospirosis diagnosis in Brazilian hospitalized patients with acute severe febrile disease.

  17. Validation of a case definition for leptospirosis diagnosis in patients with acute severe febrile disease admitted in reference hospitals at the State of Pernambuco, Brazil

    Directory of Open Access Journals (Sweden)

    Alfredo Pereira Leite de Albuquerque Filho

    2011-12-01

INTRODUCTION: Leptospirosis is often mistaken for other acute febrile illnesses because of its nonspecific presentation. Bacteriologic, serologic, and molecular methods have several limitations for early diagnosis: technical complexity, low availability, low sensitivity in early disease, or high cost. This study aimed to validate a case definition, based on simple clinical and laboratory tests, that is intended for bedside diagnosis of leptospirosis among hospitalized patients. METHODS: Adult patients, admitted to two reference hospitals in Recife, Brazil, with a febrile illness of less than 21 days and with a clinical suspicion of leptospirosis, were included to test a case definition comprising ten clinical and laboratory criteria. Leptospirosis was confirmed or excluded by a composite reference standard (microscopic agglutination test, ELISA, and blood culture). Test properties were determined for each cutoff number of the criteria from the case definition. RESULTS: Ninety-seven patients were included; 75 had confirmed leptospirosis and 22 did not. The mean number of criteria from the case definition that were fulfilled was 7.8±1.2 for confirmed leptospirosis and 5.9±1.5 for non-leptospirosis patients (p<0.0001). The best sensitivity (85.3%) and specificity (68.2%) combination was found with a cutoff of 7 or more criteria, reaching positive and negative predictive values of 90.1% and 57.7%, respectively; accuracy was 81.4%. CONCLUSIONS: The case definition, for a cutoff of at least 7 criteria, reached average sensitivity and specificity, but with a high positive predictive value. Its simplicity and low cost make it useful for rapid bedside leptospirosis diagnosis in Brazilian hospitalized patients with acute severe febrile disease.
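
A minimal sketch of the bedside rule evaluated in these two records: count how many of the ten clinical and laboratory criteria are fulfilled and call the case positive at the cutoff of 7 or more; the criteria themselves are not listed above and are treated as opaque flags:

```python
def leptospirosis_case(criteria_met: list[bool], cutoff: int = 7) -> bool:
    """True if the number of fulfilled criteria reaches the cutoff."""
    return sum(criteria_met) >= cutoff

# Example: a patient fulfilling 8 of the 10 criteria meets the definition.
print(leptospirosis_case([True] * 8 + [False] * 2))  # True
```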

  18. Development of a validated clinical case definition of generalized tonic-clonic seizures for use by community-based health care providers.

    Science.gov (United States)

    Anand, Krishnan; Jain, Satish; Paul, Eldho; Srivastava, Achal; Sahariah, Sirazul A; Kapoor, Suresh K

    2005-05-01

To develop and test a clinical case definition for identification of generalized tonic-clonic seizures (GTCSs) by community-based health care providers. To identify symptoms that can help identify GTCSs, patients with a history of jerky movements or rigidity in any part of the body ever in life were recruited from three sites: the community, a secondary care hospital, and a tertiary care hospital. These patients were administered a 14-item structured interview schedule focusing on the circumstances surrounding the seizure. Subsequently, a neurologist examined each patient and, based on available investigations, classified them as GTCS or non-GTCS cases. A logistic regression analysis was performed to select symptoms that were to be used for the case definition of GTCSs. Validity parameters for the case definition at different cutoff points were calculated in another set of subjects. In total, 339 patients were enrolled in the first phase of the study. The tertiary care hospital contributed the maximal number of GTCS cases, whereas cases of non-GTCS were mainly from the community. At the end of phase I, the questionnaire was shortened from 14 to eight questions based on statistical association and clinical judgment. After phase II, which was conducted among 170 subjects, three variables were found to be significantly related to the presence of GTCSs by logistic regression: absence of stress (13.1; 4.1-41.3), presence of frothing (13.7; 4.0-47.3), and occurrence in sleep (8.3; 2.0-34.9). As a case definition using only three variables did not provide sufficient specificity, three more variables were added based on univariate analysis of the data (incontinence during the episode and unconsciousness) and review of the literature (injury during episode). A case definition consisting of giving one point to an affirmative answer for each of the six questions was tested. At a cutoff point of four, sensitivity was 56.9 (47.4-66.0) and specificity 96.3 (86.2-99.4). Among the 197 GTCS
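
A minimal sketch of the six-item score described above (one point per affirmative answer, case called at a cutoff of four); the question keys are paraphrases for illustration, not the validated wording:

```python
QUESTIONS = ["no_stress_trigger", "frothing", "occurred_in_sleep",
             "incontinence", "unconsciousness", "injury_during_episode"]

def gtcs_score(answers: dict) -> int:
    """One point for each affirmative answer."""
    return sum(bool(answers.get(q, False)) for q in QUESTIONS)

def probable_gtcs(answers: dict, cutoff: int = 4) -> bool:
    return gtcs_score(answers) >= cutoff
```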

  19. Soil moisture and temperature algorithms and validation

    Science.gov (United States)

    Passive microwave remote sensing of soil moisture has matured over the past decade as a result of the Advanced Microwave Scanning Radiometer (AMSR) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...

  20. Clinical case definition for the diagnosis of acute intussusception.

    Science.gov (United States)

    Bines, Julie E; Ivanoff, Bernard; Justice, Frances; Mulholland, Kim

    2004-11-01

Because of the reported association between intussusception and a rotavirus vaccine, future clinical trials of rotavirus vaccines will need to include intussusception surveillance in the evaluation of vaccine safety. The aim of this study is to develop and validate a clinical case definition for the diagnosis of acute intussusception. A clinical case definition for the diagnosis of acute intussusception was developed by analysis of an extensive literature review that defined the clinical presentation of intussusception in 70 developed and developing countries. The clinical case definition was then assessed for sensitivity and specificity using a retrospective chart review of hospital admissions. Sensitivity of the clinical case definition was assessed in children diagnosed with intussusception over a 6.5-year period. Specificity was assessed in patients who did not have intussusception. The clinical case definition accurately identified 185 of 191 assessable cases as "probable" intussusception and six cases as "possible" intussusception (sensitivity, 97%). No case of radiologic or surgically proven intussusception failed to be identified by the clinical case definition. The specificity of the definition in correctly identifying patients who did not have intussusception ranged from 87% to 91%. The clinical case definition for intussusception may assist in the prompt identification of patients with intussusception and may provide an important tool for future trials of enteric vaccines.

  1. Convergent validity test, construct validity test and external validity test of the David Liberman algorithm

    Directory of Open Access Journals (Sweden)

    David Maldavsky

    2013-08-01

The author first presents a complement to a previous test of convergent validity, then a construct validity test, and finally an external validity test of the David Liberman algorithm (DLA). The first part of the paper focuses on a complementary aspect, the differential sensitivity of the DLA (1) in an external comparison (to other methods) and (2) in an internal comparison (between two ways of using the same method, the DLA). The construct validity test presents the concepts underlying the DLA, their operationalization, and some corrections emerging from several empirical studies we carried out. The external validity test examines the possibility of using the investigation of a single case and its relation with the investigation of a more extended sample.

  2. Quantitative validation of a new coregistration algorithm

    International Nuclear Information System (INIS)

    Pickar, R.D.; Esser, P.D.; Pozniakoff, T.A.; Van Heertum, R.L.; Stoddart, H.A. Jr.

    1995-01-01

A new coregistration software package, Neuro900 Image Coregistration software, has been developed specifically for nuclear medicine. With this algorithm, the correlation coefficient is maximized between volumes generated from sets of transaxial slices. No localization markers or segmented surfaces are needed. The coregistration program was evaluated for translational and rotational registration accuracy. A Tc-99m HM-PAO split-dose study (0.53 mCi low dose, L, and 1.01 mCi high dose, H) was simulated with a Hoffman Brain Phantom with five fiducial markers. Translation error was determined by a shift in image centroid, and rotation error was determined by a simplified two-axis approach. Changes in registration accuracy were measured with respect to: (1) slice spacing, using the four different combinations LL, LH, HL, HH; (2) translational and rotational misalignment before coregistration; and (3) changes in the step size of the iterative parameters. In all cases the algorithm converged with only small differences in translation offset and rotation angles. At 6 mm slice spacing, translational errors ranged from 0.9 to 2.8 mm (system resolution at 100 mm, 6.8 mm). The converged parameters showed little sensitivity to count density. In addition, the correlation coefficient increased with decreasing iterative step size, as expected. From these experiments, the authors found that this algorithm, based on the maximization of the correlation coefficient between studies, was an accurate way to coregister SPECT brain images.
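
A minimal sketch of the objective described above: search over rigid-transform parameters for the values that maximise the correlation coefficient between a fixed and a transformed moving volume. Only the objective is shown; `apply_rigid_transform` is a hypothetical helper and the optimiser is left generic:

```python
import numpy as np

def correlation_coefficient(fixed: np.ndarray, moving: np.ndarray) -> float:
    """Pearson correlation between two equally shaped image volumes."""
    f = fixed.ravel() - fixed.mean()
    m = moving.ravel() - moving.mean()
    return float(f @ m / (np.linalg.norm(f) * np.linalg.norm(m)))

def registration_cost(params, fixed, moving, apply_rigid_transform):
    """Negative correlation, suitable for any generic minimiser."""
    return -correlation_coefficient(fixed, apply_rigid_transform(moving, params))
```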

  3. Using wound care algorithms: a content validation study.

    Science.gov (United States)

    Beitz, J M; van Rijswijk, L

    1999-09-01

Valid and reliable heuristic devices facilitating optimal wound care are lacking. The objectives of this study were to establish content validation data for a set of wound care algorithms, to identify their associated strengths and weaknesses, and to gain insight into the wound care decision-making process. Forty-four registered nurse wound care experts were surveyed and interviewed at national and regional educational meetings. Using a cross-sectional study design and an 83-item, 4-point Likert-type scale, this purposive sample was asked to quantify the degree of validity of the algorithms' decisions and components. Participants' comments were tape-recorded and transcribed, and themes were derived. On a scale of 1 to 4, the mean score of the entire instrument was 3.47 (SD +/- 0.87), the instrument's Content Validity Index was 0.86, and the individual Content Validity Index of 34 of 44 participants was > 0.8. Item scores were lower for those related to packing deep wounds and for items lacking valid and reliable definitions. The wound care algorithms studied proved valid. However, the lack of valid and reliable wound assessment and care definitions hinders optimal use of these instruments. Further research documenting their clinical use is warranted. Research-based practice recommendations should direct the development of future valid and reliable algorithms designed to help nurses provide optimal wound care.
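
A minimal sketch of a Content Validity Index of the kind reported above, taken here as the proportion of all ratings that fall at 3 or 4 on the 4-point scale (other CVI variants exist; this is an illustrative assumption):

```python
def content_validity_index(ratings: list[list[int]]) -> float:
    """ratings[r][i] is rater r's 1-4 relevance score for item i."""
    scores = [score for rater in ratings for score in rater]
    return sum(score >= 3 for score in scores) / len(scores)
```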

  4. Analysis of risk factors for schizophrenia with two different case definitions

    DEFF Research Database (Denmark)

    Sørensen, Holger J; Tidselbak Larsen, Janne; Mors, Ole

    2015-01-01

Different case definitions of schizophrenia have been used in register based research. However, no previous study has externally validated two different case definitions of schizophrenia against a wide range of risk factors for schizophrenia. We investigated hazard ratios (HRs) for a wide range of risk factors for ICD-10 DCR schizophrenia using a nationwide Danish sample of 2,772,144 residents born in 1955-1997. We compared one contact only (OCO) (the case definition of schizophrenia used in Danish register based studies) with two or more contacts (TMC) (a case definition of at least 2 inpatient contacts with schizophrenia). During the follow-up, the OCO definition included 15,074 and the TMC 7562 cases; i.e. half as many. The TMC case definition appeared to select for a worse illness course. A wide range of risk factors were uniformly associated with both case definitions and only slightly higher...

  5. A computer case definition for sudden cardiac death.

    Science.gov (United States)

    Chung, Cecilia P; Murray, Katherine T; Stein, C Michael; Hall, Kathi; Ray, Wayne A

    2010-06-01

To facilitate studies of medications and sudden cardiac death, we developed and validated a computer case definition for these deaths. The study of community-dwelling Tennessee Medicaid enrollees 30-74 years of age utilized a linked database with Medicaid inpatient/outpatient files, state death certificate files, and a state 'all-payers' hospital discharge file. The computerized case definition was developed from a retrospective cohort study of sudden cardiac deaths occurring between 1990 and 1993. Medical records for 926 potential cases had been adjudicated for this study to determine if they met the clinical definition for sudden cardiac death occurring in the community and were likely to be due to ventricular tachyarrhythmias. The computerized case definition included deaths with (1) no evidence of a terminal hospital admission/nursing home stay in any of the data sources; (2) an underlying cause of death code consistent with sudden cardiac death; and (3) no terminal procedures inconsistent with unresuscitated cardiac arrest. This definition was validated in an independent sample of 174 adjudicated deaths occurring between 1994 and 2005. The positive predictive value of the computer case definition was 86.0% in the development sample and 86.8% in the validation sample. The positive predictive value did not vary materially for deaths coded according to the ICD-9 (1994-1998, positive predictive value = 85.1%) or ICD-10 (1999-2005, 87.4%) systems. A computerized Medicaid database, linked with death certificate files and a state hospital discharge database, can be used for a computer case definition of sudden cardiac death. Copyright (c) 2009 John Wiley & Sons, Ltd.
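
A minimal sketch of the three-part computer case definition described above; the record fields and both code/procedure sets are hypothetical placeholders rather than the study's specification:

```python
SCD_CAUSE_CODES = {"I46.1", "I49.0", "427.4", "427.5"}                              # illustrative only
INCONSISTENT_TERMINAL_PROCEDURES = {"organ_transplant", "elective_major_surgery"}   # illustrative only

def computer_case_definition(death_record: dict) -> bool:
    no_terminal_admission = not death_record["terminal_hospital_or_nursing_home_stay"]
    cause_consistent = death_record["underlying_cause_code"] in SCD_CAUSE_CODES
    no_inconsistent_procedures = not (
        set(death_record["terminal_procedures"]) & INCONSISTENT_TERMINAL_PROCEDURES)
    return no_terminal_admission and cause_consistent and no_inconsistent_procedures
```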

  6. Benchmarking protein classification algorithms via supervised cross-validation

    NARCIS (Netherlands)

    Kertész-Farkas, A.; Dhir, S.; Sonego, P.; Pacurar, M.; Netoteia, S.; Nijveen, H.; Kuzniar, A.; Leunissen, J.A.M.; Kocsor, A.; Pongor, S.

    2008-01-01

    Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold,

  7. GCOM-W soil moisture and temperature algorithms and validation

    Science.gov (United States)

    Passive microwave remote sensing of soil moisture has matured over the past decade as a result of the Advanced Microwave Scanning Radiometer (AMSR) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...

  8. HIV lipodystrophy case definition using artificial neural network modelling

    DEFF Research Database (Denmark)

    Ioannidis, John P A; Trikalinos, Thomas A; Law, Matthew

    2003-01-01

OBJECTIVE: A case definition of HIV lipodystrophy has recently been developed from a combination of clinical, metabolic and imaging/body composition variables using logistic regression methods. We aimed to evaluate whether artificial neural networks could improve the diagnostic accuracy. METHODS: The database of the case-control Lipodystrophy Case Definition Study was split into 504 subjects (265 with and 239 without lipodystrophy) used for training and 284 independent subjects (152 with and 132 without lipodystrophy) used for validation. Back-propagation neural networks with one or two middle layers were trained and validated. Results were compared against logistic regression models using the same information. RESULTS: Neural networks using clinical variables only (41 items) achieved consistently superior performance than logistic regression in terms of specificity, overall accuracy and area under...

  9. Fatigue after stroke: the development and evaluation of a case definition.

    Science.gov (United States)

    Lynch, Joanna; Mead, Gillian; Greig, Carolyn; Young, Archie; Lewis, Susan; Sharpe, Michael

    2007-11-01

While fatigue after stroke is a common problem, it has no generally accepted definition. Our aim was to develop a case definition for post-stroke fatigue and to test its psychometric properties. A case definition with face validity and an associated structured interview were constructed. After initial piloting, the feasibility, reliability (test-retest and inter-rater) and concurrent validity (in relation to four fatigue severity scales) were determined in 55 patients with stroke. All participating patients provided satisfactory answers to all the case definition probe questions, demonstrating its feasibility. For test-retest reliability, kappa was 0.78 (95% CI, 0.57-0.94). Patients who met the case definition also had substantially higher fatigue scores on four fatigue severity scales, supporting concurrent validity. The proposed case definition is feasible to administer and reliable in practice, and there is evidence of concurrent validity. It requires further evaluation in different settings.

  10. Construct validation of an interactive digital algorithm for ostomy care.

    Science.gov (United States)

    Beitz, Janice M; Gerlach, Mary A; Schafer, Vickie

    2014-01-01

    The purpose of this study was to evaluate construct validity for a previously face and content validated Ostomy Algorithm using digital real-life clinical scenarios. A cross-sectional, mixed-methods Web-based survey design study was conducted. Two hundred ninety-seven English-speaking RNs completed the study; participants practiced in both acute care and postacute settings, with 1 expert ostomy nurse (WOC nurse) and 2 nonexpert nurses. Following written consent, respondents answered demographic questions and completed a brief algorithm tutorial. Participants were then presented with 7 ostomy-related digital scenarios consisting of real-life photos and pertinent clinical information. Respondents used the 11 assessment components of the digital algorithm to choose management options. Participant written comments about the scenarios and the research process were collected. The mean overall percentage of correct responses was 84.23%. Mean percentage of correct responses for respondents with a self-reported basic ostomy knowledge was 87.7%; for those with a self-reported intermediate ostomy knowledge was 85.88% and those who were self-reported experts in ostomy care achieved 82.77% correct response rate. Five respondents reported having no prior ostomy care knowledge at screening and achieved an overall 45.71% correct response rate. No negative comments regarding the algorithm were recorded by participants. The new standardized Ostomy Algorithm remains the only face, content, and construct validated digital clinical decision instrument currently available. Further research on application at the bedside while tracking patient outcomes is warranted.

  11. Validation of ICD-9-CM coding algorithm for improved identification of hypoglycemia visits

    Directory of Open Access Journals (Sweden)

    Lieberman Rebecca M

    2008-04-01

Background: Accurate identification of hypoglycemia cases by International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes will help to describe epidemiology, monitor trends, and propose interventions for this important complication in patients with diabetes. Prior hypoglycemia studies utilized incomplete search strategies and may be methodologically flawed. We sought to validate a new ICD-9-CM coding algorithm for accurate identification of hypoglycemia visits. Methods: This was a multicenter, retrospective cohort study using a structured medical record review at three academic emergency departments from July 1, 2005 to June 30, 2006. We prospectively derived a coding algorithm to identify hypoglycemia visits using ICD-9-CM codes (250.3, 250.8, 251.0, 251.1, 251.2, 270.3, 775.0, 775.6, and 962.3). We confirmed hypoglycemia cases by chart review of visits identified by candidate ICD-9-CM codes during the study period. The case definition for hypoglycemia was a documented blood glucose ≤3.9 mmol/l or an emergency physician charted diagnosis of hypoglycemia. We evaluated individual components and calculated the positive predictive value. Results: We reviewed 636 charts identified by the candidate ICD-9-CM codes and confirmed 436 (64%) cases of hypoglycemia by chart review. Diabetes with other specified manifestations (250.8), often excluded in prior hypoglycemia analyses, identified 83% of hypoglycemia visits, and unspecified hypoglycemia (251.2) identified 13% of hypoglycemia visits. The absence of any predetermined co-diagnosis codes improved the positive predictive value of code 250.8 from 62% to 92%, while excluding only 10 (2%) true hypoglycemia visits. Although prior analyses included only the first-listed ICD-9 code, more than one-quarter of identified hypoglycemia visits were outside this primary diagnosis field. Overall, the proposed algorithm had 89% positive predictive value (95% confidence interval, 86-92) for
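
A minimal sketch of the candidate-code screen described above, including the refinement for code 250.8 (counted only in the absence of predetermined co-diagnosis codes); the co-diagnosis exclusion set is not listed in the abstract and is left as a placeholder:

```python
HYPOGLYCEMIA_CODES = {"250.3", "250.8", "251.0", "251.1", "251.2",
                      "270.3", "775.0", "775.6", "962.3"}
CO_DX_EXCLUSIONS = {"placeholder_code_1", "placeholder_code_2"}   # not specified above

def flags_hypoglycemia(visit_codes: set[str]) -> bool:
    hits = visit_codes & HYPOGLYCEMIA_CODES
    if hits == {"250.8"} and (visit_codes & CO_DX_EXCLUSIONS):
        return False   # 250.8 alone with an excluded co-diagnosis is not counted
    return bool(hits)
```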

  12. Evaluation of a surveillance case definition for anogenital warts, Kaiser Permanente northwest.

    Science.gov (United States)

    Naleway, Allison L; Weinmann, Sheila; Crane, Brad; Gee, Julianne; Markowitz, Lauri E; Dunne, Eileen F

    2014-08-01

    Most studies of anogenital wart (AGW) epidemiology have used large clinical or administrative databases and unconfirmed case definitions based on combinations of diagnosis and procedure codes. We developed and validated an AGW case definition using a combination of diagnosis codes and other information available in the electronic medical record (provider type, laboratory testing). We calculated the positive predictive value (PPV) of this case definition compared with manual medical record review in a random sample of 250 cases. Using this case definition, we calculated the annual age- and sex-stratified prevalence of AGW among individuals 11 through 30 years of age from 2000 through 2005. We identified 2730 individuals who met the case definition. The PPV of the case definition was 82%, and the average annual prevalence was 4.16 per 1000. Prevalence of AGW was higher in females compared with males in every age group, with the exception of the 27- to 30-year-olds. Among females, prevalence peaked in the 19- to 22-year-olds, and among males, the peak was observed in 23- to 26-year-olds. The case definition developed in this study is the first to be validated with medical record review and has a good PPV for the detection of AGW. The prevalence rates observed in this study were higher than other published rates, but the age- and sex-specific patterns observed were consistent with previous reports.

  13. Validation of MERIS Ocean Color Algorithms in the Mediterranean Sea

    Science.gov (United States)

    Marullo, S.; D'Ortenzio, F.; Ribera D'Alcalà, M.; Ragni, M.; Santoleri, R.; Vellucci, V.; Luttazzi, C.

    2004-05-01

Satellite ocean color measurements can contribute, better than any other source of data, to quantifying the spatial and temporal variability of ocean productivity and, thanks to the success of several satellite missions starting with CZCS up to SeaWiFS, MODIS and MERIS, it is now possible to begin investigating interannual variations and to compare levels of production during different decades ([1],[2]). The interannual variability of ocean productivity at global and regional scale can be correctly measured provided that chlorophyll estimates are based on well-calibrated algorithms, in order to avoid regional biases and instrumental time shifts. The calibration and validation of ocean color data is thus one of the most important tasks of several research projects worldwide ([3], [4]). Algorithms developed to retrieve chlorophyll concentration need a specific effort to define the error ranges associated with the estimates. In particular, empirical algorithms, calculated by regression with in situ data, require independent records to verify the degree of uncertainty associated. In addition, several lines of evidence have demonstrated that regional algorithms can improve the accuracy of satellite chlorophyll estimates [5]. In 2002, Santoleri et al. (SIMBIOS) first showed a significant overestimation of the SeaWiFS-derived chlorophyll concentration in the Mediterranean Sea when the standard global NASA algorithms (OC4v2 and OC4v4) are used. The same authors [6] proposed two preliminary new algorithms for the Mediterranean Sea (L-DORMA and NL-DORMA) on the basis of a bio-optical data set collected in the basin from 1998 to 2000. In 2002, Bricaud et al. [7], analyzing other bio-optical data collected in the Mediterranean, confirmed the overestimation of the chlorophyll concentration in oligotrophic conditions and proposed a new regional algorithm to be used in cases of low concentrations. Recently, the number of in situ observations in the basin has increased, permitting a first

  14. Case definition for progressive multifocal leukoencephalopathy following treatment with monoclonal antibodies.

    Science.gov (United States)

    Mentzer, Dirk; Prestel, Jürgen; Adams, Ortwin; Gold, Ralf; Hartung, Hans-Peter; Hengel, Hartmut; Kieseier, Bernd C; Ludwig, Wolf-Dieter; Keller-Stanislawski, Brigitte

    2012-09-01

    Novel immunosuppressive/modulating therapies with monoclonal antibodies (MABs) have been associated with progressive multifocal leukoencephalopathy (PML), a potentially fatal disease of the brain caused by the JC virus. Taking the complex diagnostic testing and heterogeneous clinical presentation of PML into account, an agreed case definition for PML is a prerequisite for a thorough assessment of PML. A working group was established to develop a standardised case definition for PML which permits data comparability across clinical trials, postauthorisation safety studies and passive postmarketing surveillance. The case definition is designed to define levels of diagnostic certainty of reported PML cases following treatment with MABs. It was subsequently used to categorise retrospectively suspected PML cases from Germany reported to the Paul-Ehrlich-Institute as the responsible national competent authority. The algorithm of the case definition is based on clinical symptoms, PCR for JC virus DNA in cerebrospinal fluid, brain MRI, and brain biopsy/autopsy. The case definition was applied to 119 suspected cases of PML following treatment with MABs and is considered to be helpful for case ascertainment of suspected PML cases for various MABs covering a broad spectrum of indications. Even if the available information is not yet complete, the case definition provides a level of diagnostic certainty. The proposed case definition permits data comparability among different medicinal products and among active as well as passive surveillance settings. It may form a basis for meaningful risk analysis and communication for regulators and healthcare professionals.

  15. Case definition terminology for paratuberculosis (Johne's disease).

    Science.gov (United States)

    Whittington, R J; Begg, D J; de Silva, K; Purdie, A C; Dhand, N K; Plain, K M

    2017-11-09

    Paratuberculosis (Johne's disease) is an economically significant condition caused by Mycobacterium avium subsp. paratuberculosis. However, difficulties in diagnosis and classification of individual animals with the condition have hampered research and impeded efforts to halt its progressive spread in the global livestock industry. Descriptive terms applied to individual animals and herds such as exposed, infected, diseased, clinical, sub-clinical, infectious and resistant need to be defined so that they can be incorporated consistently into well-understood and reproducible case definitions. These allow for consistent classification of individuals in a population for the purposes of analysis based on accurate counts. The outputs might include the incidence of cases, frequency distributions of the number of cases by age class or more sophisticated analyses involving statistical comparisons of immune responses in vaccine development studies, or gene frequencies or expression data from cases and controls in genomic investigations. It is necessary to have agreed definitions in order to be able to make valid comparisons and meta-analyses of experiments conducted over time by a given researcher, in different laboratories, by different researchers, and in different countries. In this paper, terms are applied systematically in an hierarchical flow chart to enable classification of individual animals. We propose descriptive terms for different stages in the pathogenesis of paratuberculosis to enable their use in different types of studies and to enable an independent assessment of the extent to which accepted definitions for stages of disease have been applied consistently in any given study. This will assist in the general interpretation of data between studies, and will facilitate future meta-analyses.

  16. Validation of a numerical algorithm based on transformed equations

    International Nuclear Information System (INIS)

    Xu, H.; Barron, R.M.; Zhang, C.

    2003-01-01

    Generally, a typical equation governing a physical process, such as fluid flow or heat transfer, has three types of terms that involve partial derivatives, namely, the transient term, the convective terms and the diffusion terms. The major difficulty in obtaining numerical solutions of these partial differential equations is the discretization of the convective terms. The transient term is usually discretized using the first-order forward or backward differencing scheme. The diffusion terms are usually discretized using the central differencing scheme and no difficulty arises since these terms involve second-order spatial derivatives of the flow variables. The convective terms are non-linear and contain first-order spatial derivatives. The main difference between various numerical algorithms is the discretization of the convective terms. In the present study, an alternative approach to discretizing the governing equations is presented. In this algorithm, the governing equations are first transformed by introducing an exponential function to eliminate the convective terms in the equations. The proposed algorithm is applied to simulate some fluid flows with exact solutions to validate the proposed algorithm. The fluid flows used in this study are a self-designed quasi-fluid flow problem, stagnation in plane flow (Hiemenz flow), and flow between two concentric cylinders. The comparisons with the power-law scheme indicate that the proposed scheme exhibits better performance. (author)
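    As one hedged illustration of how such a transformation can work (a textbook example, not necessarily the authors' exact formulation), an exponential substitution removes the convective term from the one-dimensional steady convection-diffusion equation:

```latex
% Illustrative only: a standard exponential substitution for the 1-D steady
% convection-diffusion equation; the paper's exact transformation may differ.
\[
  u\,\frac{d\phi}{dx} = \Gamma\,\frac{d^{2}\phi}{dx^{2}},
  \qquad \phi(x) = \psi(x)\,e^{\,ux/(2\Gamma)}
  \;\;\Longrightarrow\;\;
  \Gamma\,\frac{d^{2}\psi}{dx^{2}} - \frac{u^{2}}{4\Gamma}\,\psi = 0 ,
\]
% so the first-derivative (convective) term disappears and central differencing
% can be applied without upwinding.
```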

  17. Using virtual environment for autonomous vehicle algorithm validation

    Science.gov (United States)

    Levinskis, Aleksandrs

    2018-04-01

    This paper describes the possible use of a modern game engine for validating and proving the concept of an algorithm design. As a result, a simple visual odometry algorithm is presented to demonstrate the concept and to go through all workflow stages. Some of the stages involve a Kalman filter, used in such a way that it estimates the optical flow velocity as well as the position of a moving camera located on the vehicle body. In particular, the Unreal Engine 4 game engine is used to generate optical flow patterns and the ground-truth path. For optical flow determination, the Horn and Schunck method is applied. As a result, it is shown that such a method can estimate the position of the camera attached to the vehicle with a certain displacement error with respect to ground truth, depending on the optical flow pattern. The displacement RMS error is calculated between the estimated and actual positions.
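    As a minimal sketch of the evaluation step described above (array and function names are hypothetical, not from the paper), the displacement RMS error between the estimated camera path and the engine-exported ground-truth path can be computed as follows:

```python
import numpy as np

def displacement_rmse(estimated_xy: np.ndarray, ground_truth_xy: np.ndarray) -> float:
    """RMS of the per-frame displacement error between two (N, 2) trajectories."""
    # Per-frame Euclidean distance between estimated and true camera positions.
    errors = np.linalg.norm(estimated_xy - ground_truth_xy, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

# Hypothetical paths sampled at the same frame indices.
est = np.array([[0.0, 0.0], [1.1, 0.2], [2.0, 0.1]])
gt  = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
print(displacement_rmse(est, gt))
```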

  18. The semianalytical cloud retrieval algorithm for SCIAMACHY I. The validation

    Directory of Open Access Journals (Sweden)

    A. A. Kokhanovsky

    2006-01-01

    A recently developed cloud retrieval algorithm for the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY) is briefly presented and validated using independent and well tested cloud retrieval techniques based on the look-up-table approach for MODerate resolution Imaging Spectroradiometer (MODIS) data. The results of the cloud top height retrievals using measurements in the oxygen A-band by an airborne crossed Czerny-Turner spectrograph and the Global Ozone Monitoring Experiment (GOME) instrument are compared with those obtained from airborne dual photography and retrievals using data from the Along Track Scanning Radiometer (ATSR-2), respectively.

  19. Validation of an automated seizure detection algorithm for term neonates

    Science.gov (United States)

    Mathieson, Sean R.; Stevenson, Nathan J.; Low, Evonne; Marnane, William P.; Rennie, Janet M.; Temko, Andrey; Lightbody, Gordon; Boylan, Geraldine B.

    2016-01-01

    Objective The objective of this study was to validate the performance of a seizure detection algorithm (SDA) developed by our group, on previously unseen, prolonged, unedited EEG recordings from 70 babies from 2 centres. Methods EEGs of 70 babies (35 seizure, 35 non-seizure) were annotated for seizures by experts as the gold standard. The SDA was tested on the EEGs at a range of sensitivity settings. Annotations from the expert and SDA were compared using event and epoch based metrics. The effect of seizure duration on SDA performance was also analysed. Results Between sensitivity settings of 0.5 and 0.3, the algorithm achieved seizure detection rates of 52.6–75.0%, with false detection (FD) rates of 0.04–0.36 FD/h for event based analysis, which was deemed to be acceptable in a clinical environment. Time based comparison of expert and SDA annotations using Cohen’s Kappa Index revealed a best performing SDA threshold of 0.4 (Kappa 0.630). The SDA showed improved detection performance with longer seizures. Conclusion The SDA achieved promising performance and warrants further testing in a live clinical evaluation. Significance The SDA has the potential to improve seizure detection and provide a robust tool for comparing treatment regimens. PMID:26055336
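    A minimal sketch of the time-based (epoch) comparison reported above, assuming binary per-epoch annotations; the arrays below are hypothetical, and the actual study used full EEG annotations across several sensitivity thresholds:

```python
import numpy as np

def epoch_kappa(expert: np.ndarray, detector: np.ndarray) -> float:
    """Cohen's kappa between two binary per-epoch annotations (1 = seizure)."""
    assert expert.shape == detector.shape
    po = np.mean(expert == detector)                     # observed agreement
    p_exp1, p_det1 = expert.mean(), detector.mean()      # marginal seizure rates
    pe = p_exp1 * p_det1 + (1 - p_exp1) * (1 - p_det1)   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical epochs: expert annotation vs. SDA output at one threshold.
expert   = np.array([0, 0, 1, 1, 1, 0, 0, 1, 0, 0])
detector = np.array([0, 0, 1, 1, 0, 0, 0, 1, 1, 0])
print(round(epoch_kappa(expert, detector), 3))
```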

  20. Validating module network learning algorithms using simulated data.

    Science.gov (United States)

    Michoel, Tom; Maere, Steven; Bonnet, Eric; Joshi, Anagha; Saeys, Yvan; Van den Bulcke, Tim; Van Leemput, Koenraad; van Remortel, Piet; Kuiper, Martin; Marchal, Kathleen; Van de Peer, Yves

    2007-05-03

    In recent years, several authors have used probabilistic graphical models to learn expression modules and their regulatory programs from gene expression data. Despite the demonstrated success of such algorithms in uncovering biologically relevant regulatory relations, further developments in the area are hampered by a lack of tools to compare the performance of alternative module network learning strategies. Here, we demonstrate the use of the synthetic data generator SynTReN for the purpose of testing and comparing module network learning algorithms. We introduce a software package for learning module networks, called LeMoNe, which incorporates a novel strategy for learning regulatory programs. Novelties include the use of a bottom-up Bayesian hierarchical clustering to construct the regulatory programs, and the use of a conditional entropy measure to assign regulators to the regulation program nodes. Using SynTReN data, we test the performance of LeMoNe in a completely controlled situation and assess the effect of the methodological changes we made with respect to an existing software package, namely Genomica. Additionally, we assess the effect of various parameters, such as the size of the data set and the amount of noise, on the inference performance. Overall, application of Genomica and LeMoNe to simulated data sets gave comparable results. However, LeMoNe offers some advantages, one of them being that the learning process is considerably faster for larger data sets. Additionally, we show that the location of the regulators in the LeMoNe regulation programs and their conditional entropy may be used to prioritize regulators for functional validation, and that the combination of the bottom-up clustering strategy with the conditional entropy-based assignment of regulators improves the handling of missing or hidden regulators. We show that data simulators such as SynTReN are very well suited for the purpose of developing, testing and improving module network

  1. Osteoporosis-related fracture case definitions for population-based administrative data

    Directory of Open Access Journals (Sweden)

    Lix Lisa M

    2012-05-01

    Abstract Background Population-based administrative data have been used to study osteoporosis-related fracture risk factors and outcomes, but there has been limited research about the validity of these data for ascertaining fracture cases. The objectives of this study were to: (a) compare fracture incidence estimates from administrative data with estimates from population-based clinically-validated data, and (b) test for differences in incidence estimates from multiple administrative data case definitions. Methods Thirty-five case definitions for incident fractures of the hip, wrist, humerus, and clinical vertebrae were constructed using diagnosis codes in hospital data and diagnosis and service codes in physician billing data from Manitoba, Canada. Clinically-validated fractures were identified from the Canadian Multicentre Osteoporosis Study (CaMos). Generalized linear models were used to test for differences in incidence estimates. Results For hip fracture, sex-specific differences were observed in the magnitude of under- and over-ascertainment of administrative data case definitions when compared with CaMos data. The length of the fracture-free period to ascertain incident cases had a variable effect on over-ascertainment across fracture sites, as did the use of imaging, fixation, or repair service codes. Case definitions based on hospital data resulted in under-ascertainment of incident clinical vertebral fractures. There were no significant differences in trend estimates for wrist, humerus, and clinical vertebral case definitions. Conclusions The validity of administrative data for estimating fracture incidence depends on the site and features of the case definition.

  2. Validation and Algorithms Comparative Study for Microwave Remote Sensing of Snow Depth over China

    International Nuclear Information System (INIS)

    Bin, C J; Qiu, Y B; Shi, L J

    2014-01-01

    In this study, five snow depth algorithms (the Chang algorithm, the GSFC 96 algorithm, the AMSR-E SWE algorithm, the Improved Tibetan Plateau algorithm and the Savoie algorithm) were selected to validate the accuracy of snow depth retrieval over China. The algorithms were compared against ground measurements using AMSR-E brightness temperature data from February 10-12, 2010. Results showed that the GSFC 96 algorithm was the most suitable in Xinjiang, with RMSEs ranging from 6.85 cm to 7.48 cm, while in Inner Mongolia and Northeast China the Improved Tibetan Plateau algorithm was superior to the other four algorithms, with RMSEs of 5.46 cm∼6.11 cm and 6.21 cm∼7.83 cm, respectively; due to the lack of ground measurements, no valid statistical results could be obtained over the Tibetan Plateau. However, the mean relative error (MRE) of the selected algorithms ranged from 37.95% to 189.13% across the four study areas, which shows that the accuracy of the five snow depth algorithms over China is limited.
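    A brief sketch of the two validation statistics reported above, computed against hypothetical ground measurements (the station values below are illustrative, not the study data):

```python
import numpy as np

def rmse(retrieved_cm: np.ndarray, measured_cm: np.ndarray) -> float:
    """Root-mean-square error of retrieved snow depth against ground measurements."""
    return float(np.sqrt(np.mean((retrieved_cm - measured_cm) ** 2)))

def mean_relative_error(retrieved_cm: np.ndarray, measured_cm: np.ndarray) -> float:
    """Mean relative error (as a percentage) with respect to the ground truth."""
    return float(np.mean(np.abs(retrieved_cm - measured_cm) / measured_cm) * 100.0)

# Hypothetical station data (cm): one algorithm's retrievals vs. in situ depths.
retrieved = np.array([18.0, 25.0, 9.0, 31.0])
measured  = np.array([15.0, 22.0, 12.0, 28.0])
print(rmse(retrieved, measured), mean_relative_error(retrieved, measured))
```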

  3. Aggressive periodontitis: case definition and diagnostic criteria.

    Science.gov (United States)

    Albandar, Jasim M

    2014-06-01

    Aggressive periodontitis is a destructive disease characterized by the following: the involvement of multiple teeth with a distinctive pattern of periodontal tissue loss; a high rate of disease progression; an early age of onset; and the absence of systemic diseases. In some patients periodontal tissue loss may commence before puberty, whereas in most patients the age of onset is during or somewhat after the circumpubertal period. Besides infection with specific microorganisms, a host predisposition seems to play a key role in the pathogenesis of aggressive periodontitis, as evidenced by the familial aggregation of the disease. In this article we review the historical background of the diagnostic criteria of aggressive periodontitis, present a contemporary case definition and describe the clinical parameters of the disease. At present, the diagnosis of aggressive periodontitis is achieved using case history, clinical examination and radiographic evaluation. The data gathered using these methods are prone to relatively high measurement errors. Moreover, this diagnostic approach measures past disease history and may not reliably measure existing disease activity or accurately predict future tissue loss. A diagnosis is often made years after the onset of the disease, partly because current assessment methods detect established disease more readily and reliably than they detect incipient or initial lesions where the tissue loss is minimal and usually below the detection threshold of present examination methods. Future advancements in understanding the pathogenesis of this disease may contribute to an earlier diagnosis. Accordingly, future case definitions may involve the identification of key etiologic and risk factors, combined with high-precision methodologies that enable the early detection of initial lesions. This may significantly enhance the predictive value of these tests and detect cases of aggressive periodontitis before significant tissue loss develops. © 2014

  4. Validation of Varian's AAA algorithm with focus on lung treatments

    International Nuclear Information System (INIS)

    Roende, Heidi S.; Hoffmann, Lone

    2009-01-01

    The objective of this study was to examine the accuracy of the Anisotropic Analytical Algorithm (AAA). A variety of different field configurations in homogeneous and in inhomogeneous media (lung geometry) was tested for the AAA algorithm. It was also tested against the present Pencil Beam Convolution (PBC) algorithm. Materials and methods. Two dimensional (2D) dose distributions were measured for a variety of different field configurations in solid water with a 2D array of ion chambers. The dose distributions of patient specific treatment plans in selected transversal slices were measured in a Thorax lung phantom with Gafchromic dosimetry films. A Farmer ion chamber was used to check point doses in the Thorax phantom. The 2D dose distributions were evaluated with a gamma criterion of 3% in dose and 3 mm distance to agreement (DTA) for the 2D array measurements and for the film measurements. Results. For AAA, all fields tested in homogeneous media fulfilled the criterion, except asymmetric fields with wedges and intensity modulated plans where deviations of 5 and 4%, respectively, were seen. Overall, the measured and calculated 2D dose distributions for AAA in the Thorax phantom showed good agreement - both for 6 and 15 MV photons. More than 80% of the points in the high dose regions met the gamma criterion, though it failed at low doses and at gradients. For the PBC algorithm only 30-70% of the points met the gamma criterion. Conclusion. The AAA algorithm has been shown to be superior to the PBC algorithm in heterogeneous media, especially for 15 MV. For most treatment plans the deviations in the lung and the mediastinum regions are below 3%. However, the algorithm may underestimate the dose to the spinal cord by up to 7%
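    A much-simplified, one-dimensional sketch of the 3%/3 mm gamma evaluation used above; the real analysis is two-dimensional and tied to the array/film geometry, so the profiles and names below are purely illustrative:

```python
import numpy as np

def gamma_1d(x_mm, measured, calculated, dose_tol=0.03, dta_mm=3.0):
    """Simplified 1-D global gamma index: a point passes when gamma <= 1."""
    ref_dose = measured.max()                      # global normalisation dose
    gammas = np.empty_like(measured)
    for i, (xm, dm) in enumerate(zip(x_mm, measured)):
        dose_term = (calculated - dm) / (dose_tol * ref_dose)
        dist_term = (x_mm - xm) / dta_mm
        gammas[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
    return gammas

x = np.arange(0.0, 50.0, 1.0)                      # positions in mm
meas = np.exp(-((x - 25.0) / 12.0) ** 2)           # toy measured profile
calc = np.exp(-((x - 25.5) / 12.0) ** 2) * 1.01    # toy calculated profile
pass_rate = np.mean(gamma_1d(x, meas, calc) <= 1.0) * 100.0
print(f"{pass_rate:.1f}% of points pass the 3%/3 mm criterion")
```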

  5. An analytic parton shower. Algorithms, implementation and validation

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, Sebastian

    2012-06-15

    The realistic simulation of particle collisions is an indispensable tool to interpret the data measured at high-energy colliders, for example the now running Large Hadron Collider at CERN. These collisions at these colliders are usually simulated in the form of exclusive events. This thesis focuses on the perturbative QCD part involved in the simulation of these events, particularly parton showers and the consistent combination of parton showers and matrix elements. We present an existing parton shower algorithm for emissions off final state partons along with some major improvements. Moreover, we present a new parton shower algorithm for emissions off incoming partons. The aim of these particular algorithms, called analytic parton shower algorithms, is to be able to calculate the probabilities for branchings and for whole events after the event has been generated. This allows a reweighting procedure to be applied after the events have been simulated. We show a detailed description of the algorithms, their implementation and the interfaces to the event generator WHIZARD. Moreover we discuss the implementation of a MLM-type matching procedure and an interface to the shower and hadronization routines from PYTHIA. Finally, we compare several predictions by our implementation to experimental measurements at LEP, Tevatron and LHC, as well as to predictions obtained using PYTHIA. (orig.)

  6. An analytic parton shower. Algorithms, implementation and validation

    International Nuclear Information System (INIS)

    Schmidt, Sebastian

    2012-06-01

    The realistic simulation of particle collisions is an indispensable tool to interpret the data measured at high-energy colliders, for example the now running Large Hadron Collider at CERN. These collisions at these colliders are usually simulated in the form of exclusive events. This thesis focuses on the perturbative QCD part involved in the simulation of these events, particularly parton showers and the consistent combination of parton showers and matrix elements. We present an existing parton shower algorithm for emissions off final state partons along with some major improvements. Moreover, we present a new parton shower algorithm for emissions off incoming partons. The aim of these particular algorithms, called analytic parton shower algorithms, is to be able to calculate the probabilities for branchings and for whole events after the event has been generated. This allows a reweighting procedure to be applied after the events have been simulated. We show a detailed description of the algorithms, their implementation and the interfaces to the event generator WHIZARD. Moreover we discuss the implementation of a MLM-type matching procedure and an interface to the shower and hadronization routines from PYTHIA. Finally, we compare several predictions by our implementation to experimental measurements at LEP, Tevatron and LHC, as well as to predictions obtained using PYTHIA. (orig.)

  7. Revised surveillance case definition for HIV infection--United States, 2014.

    Science.gov (United States)

    2014-04-11

    Following extensive consultation and peer review, CDC and the Council of State and Territorial Epidemiologists have revised and combined the surveillance case definitions for human immunodeficiency virus (HIV) infection into a single case definition for persons of all ages (i.e., adults and adolescents aged ≥13 years and children aged <13 years). The laboratory criteria for defining a confirmed case now accommodate new multitest algorithms, including criteria for differentiating between HIV-1 and HIV-2 infection and for recognizing early HIV infection. A confirmed case can be classified in one of five HIV infection stages (0, 1, 2, 3, or unknown); early infection, recognized by a negative HIV test within 6 months of HIV diagnosis, is classified as stage 0, and acquired immunodeficiency syndrome (AIDS) is classified as stage 3. Criteria for stage 3 have been simplified by eliminating the need to differentiate between definitive and presumptive diagnoses of opportunistic illnesses. Clinical (nonlaboratory) criteria for defining a case for surveillance purposes have been made more practical by eliminating the requirement for information about laboratory tests. The surveillance case definition is intended primarily for monitoring the HIV infection burden and planning for prevention and care on a population level, not as a basis for clinical decisions for individual patients. CDC and the Council of State and Territorial Epidemiologists recommend that all states and territories conduct case surveillance of HIV infection using this revised surveillance case definition.

  8. Performance Evaluation of Spectral Clustering Algorithm using Various Clustering Validity Indices

    OpenAIRE

    M. T. Somashekara; D. Manjunatha

    2014-01-01

    In spite of the popularity of the spectral clustering algorithm, its evaluation procedures are still at a developmental stage. In this article, we have taken the benchmark IRIS dataset to perform a comparative study of twelve indices for evaluating the spectral clustering algorithm. The results of the spectral clustering technique were also compared with the k-means algorithm. The validity of the indices was also verified with accuracy and the Normalized Mutual Information (NMI) score. Spectral clustering algo...
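    A hedged sketch of this kind of comparison using scikit-learn (not the authors' implementation or their twelve indices): spectral clustering and k-means on the IRIS data, scored with NMI against the known species labels:

```python
from sklearn.datasets import load_iris
from sklearn.cluster import SpectralClustering, KMeans
from sklearn.metrics import normalized_mutual_info_score

X, y = load_iris(return_X_y=True)

# Cluster the 150 iris samples into 3 groups with each method.
spectral = SpectralClustering(n_clusters=3, affinity="nearest_neighbors",
                              n_neighbors=10, random_state=0).fit_predict(X)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Score each partition against the known species labels.
print("Spectral NMI:", normalized_mutual_info_score(y, spectral))
print("k-means  NMI:", normalized_mutual_info_score(y, kmeans))
```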

  9. Validities of three multislice algorithms for quantitative low-energy transmission electron microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Ming, W.Q.; Chen, J.H., E-mail: jhchen123@hnu.edu.cn

    2013-11-15

    Three different types of multislice algorithms, namely the conventional multislice (CMS) algorithm, the propagator-corrected multislice (PCMS) algorithm and the fully-corrected multislice (FCMS) algorithm, have been evaluated and compared with respect to the accelerating voltage in transmission electron microscopy. Detailed numerical calculations have been performed to test their validities. The results show that the three algorithms are equivalent for accelerating voltage above 100 kV. However, below 100 kV, the CMS algorithm will introduce significant errors, not only for higher-order Laue zone (HOLZ) reflections but also for zero-order Laue zone (ZOLZ) reflections. The differences between the PCMS and FCMS algorithms are negligible and mainly appear in HOLZ reflections. Nonetheless, when the accelerating voltage is further lowered to 20 kV or below, the PCMS algorithm will also yield results deviating from the FCMS results. The present study demonstrates that the propagation of the electron wave from one slice to the next slice is actually cross-correlated with the crystal potential in a complex manner, such that when the accelerating voltage is lowered to 10 kV, the accuracy of the algorithms is dependent on the scattering power of the specimen. - Highlights: • Three multislice algorithms for low-energy transmission electron microscopy are evaluated. • The propagator-corrected algorithm is a good alternative for voltages down to 20 kV. • Below 20 kV, a fully-corrected algorithm has to be employed for quantitative simulations.

  10. Validities of three multislice algorithms for quantitative low-energy transmission electron microscopy

    International Nuclear Information System (INIS)

    Ming, W.Q.; Chen, J.H.

    2013-01-01

    Three different types of multislice algorithms, namely the conventional multislice (CMS) algorithm, the propagator-corrected multislice (PCMS) algorithm and the fully-corrected multislice (FCMS) algorithm, have been evaluated and compared with respect to the accelerating voltage in transmission electron microscopy. Detailed numerical calculations have been performed to test their validities. The results show that the three algorithms are equivalent for accelerating voltage above 100 kV. However, below 100 kV, the CMS algorithm will introduce significant errors, not only for higher-order Laue zone (HOLZ) reflections but also for zero-order Laue zone (ZOLZ) reflections. The differences between the PCMS and FCMS algorithms are negligible and mainly appear in HOLZ reflections. Nonetheless, when the accelerating voltage is further lowered to 20 kV or below, the PCMS algorithm will also yield results deviating from the FCMS results. The present study demonstrates that the propagation of the electron wave from one slice to the next slice is actually cross-correlated with the crystal potential in a complex manner, such that when the accelerating voltage is lowered to 10 kV, the accuracy of the algorithms is dependent on the scattering power of the specimen. - Highlights: • Three multislice algorithms for low-energy transmission electron microscopy are evaluated. • The propagator-corrected algorithm is a good alternative for voltages down to 20 kV. • Below 20 kV, a fully-corrected algorithm has to be employed for quantitative simulations

  11. Validating Machine Learning Algorithms for Twitter Data Against Established Measures of Suicidality.

    Science.gov (United States)

    Braithwaite, Scott R; Giraud-Carrier, Christophe; West, Josh; Barnes, Michael D; Hanson, Carl Lee

    2016-05-16

    One of the leading causes of death in the United States (US) is suicide and new methods of assessment are needed to track its risk in real time. Our objective is to validate the use of machine learning algorithms for Twitter data against empirically validated measures of suicidality in the US population. Using a machine learning algorithm, the Twitter feeds of 135 Mechanical Turk (MTurk) participants were compared with validated, self-report measures of suicide risk. Our findings show that people who are at high suicidal risk can be easily differentiated from those who are not by machine learning algorithms, which accurately identify the clinically significant suicidal rate in 92% of cases (sensitivity: 53%, specificity: 97%, positive predictive value: 75%, negative predictive value: 93%). Machine learning algorithms are efficient in differentiating people who are at a suicidal risk from those who are not. Evidence for suicidality can be measured in nonclinical populations using social media data.
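    As a small illustration of the reported metrics (the labels below are hypothetical, not the MTurk data), sensitivity, specificity, PPV and NPV can be derived from a confusion matrix as follows:

```python
import numpy as np

def diagnostic_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Sensitivity, specificity, PPV and NPV for binary labels (1 = high risk)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical labels: validated self-report measure vs. classifier output.
y_true = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 0, 1, 0, 1, 1, 0])
print(diagnostic_metrics(y_true, y_pred))
```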

  12. Validation and Intercomparison of Ocean Color Algorithms for Estimating Particulate Organic Carbon in the Oceans

    Directory of Open Access Journals (Sweden)

    Hayley Evers-King

    2017-08-01

    Particulate Organic Carbon (POC) plays a vital role in the ocean carbon cycle. Though relatively small compared with other carbon pools, the POC pool is responsible for large fluxes and is linked to many important ocean biogeochemical processes. The satellite ocean-color signal is influenced by particle composition, size, and concentration and provides a way to observe variability in the POC pool at a range of temporal and spatial scales. Providing accurate estimates of POC concentration from satellite ocean color data requires algorithms that are well validated, with uncertainties characterized. Here, a number of algorithms to derive POC using different optical variables are applied to merged satellite ocean color data provided by the Ocean Color Climate Change Initiative (OC-CCI) and validated against the largest database of in situ POC measurements currently available. The results of this validation exercise indicate satisfactory levels of performance from several algorithms (the highest performance was observed for the algorithms of Loisel et al., 2002 and Stramski et al., 2008) and uncertainties that are within the requirements of the user community. Estimates of the standing stock of POC can be made by applying these algorithms, and yield an estimated mixed-layer integrated global stock of between 0.77 and 1.3 Pg C. Performance of the algorithms varies regionally, suggesting that blending of region-specific algorithms may provide the best way forward for generating global POC products.
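    Many satellite POC algorithms of the kind validated here take a band-ratio power-law form; the sketch below uses placeholder coefficients (not the published Loisel or Stramski values) purely to show the shape of such an algorithm:

```python
import numpy as np

def poc_band_ratio(rrs_blue: np.ndarray, rrs_green: np.ndarray,
                   a: float = 200.0, b: float = -1.0) -> np.ndarray:
    """POC (mg m^-3) from a blue-to-green remote-sensing reflectance ratio.

    POC = a * (Rrs_blue / Rrs_green)**b is the typical band-ratio form; the
    coefficients a and b here are placeholders and must be replaced with the
    published, validated values before any real use.
    """
    return a * (rrs_blue / rrs_green) ** b

# Hypothetical reflectances (sr^-1) for three pixels.
print(poc_band_ratio(np.array([0.004, 0.006, 0.010]),
                     np.array([0.003, 0.003, 0.002])))
```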

  13. Using linked electronic data to validate algorithms for health outcomes in administrative databases.

    Science.gov (United States)

    Lee, Wan-Ju; Lee, Todd A; Pickard, Alan Simon; Shoaibi, Azadeh; Schumock, Glen T

    2015-08-01

    The validity of algorithms used to identify health outcomes in claims-based and administrative data is critical to the reliability of findings from observational studies. The traditional approach to algorithm validation, using medical charts, is expensive and time-consuming. An alternative method is to link the claims data to an external, electronic data source that contains information allowing confirmation of the event of interest. In this paper, we describe this external linkage validation method and delineate important considerations to assess the feasibility and appropriateness of validating health outcomes using this approach. This framework can help investigators decide whether to pursue an external linkage validation method for identifying health outcomes in administrative/claims data.

  14. A prediction algorithm for first onset of major depression in the general population: development and validation.

    Science.gov (United States)

    Wang, JianLi; Sareen, Jitender; Patten, Scott; Bolton, James; Schmitz, Norbert; Birney, Arden

    2014-05-01

    Prediction algorithms are useful for making clinical decisions and for population health planning. However, such prediction algorithms for first onset of major depression do not exist. The objective of this study was to develop and validate a prediction algorithm for first onset of major depression in the general population. Longitudinal study design with approximately 3-year follow-up. The study was based on data from a nationally representative sample of the US general population. A total of 28 059 individuals who participated in Waves 1 and 2 of the US National Epidemiologic Survey on Alcohol and Related Conditions and who had not had major depression at Wave 1 were included. The prediction algorithm was developed using logistic regression modelling in 21 813 participants from three census regions. The algorithm was validated in participants from the 4th census region (n=6246). The outcome was major depression occurring since Wave 1 of the National Epidemiologic Survey on Alcohol and Related Conditions, assessed by the Alcohol Use Disorder and Associated Disabilities Interview Schedule-Diagnostic and Statistical Manual of Mental Disorders IV. A prediction algorithm containing 17 unique risk factors was developed. The algorithm had good discriminative power (C statistic=0.7538, 95% CI 0.7378 to 0.7699) and excellent calibration (F-adjusted test=1.00, p=0.448) with the weighted data. In the validation sample, the algorithm had a C statistic of 0.7259 and excellent calibration (Hosmer-Lemeshow χ(2)=3.41, p=0.906). The developed prediction algorithm has good discrimination and calibration capacity. It can be used by clinicians, mental health policy-makers and service planners and the general public to predict future risk of having major depression. The application of the algorithm may lead to increased personalisation of treatment, better clinical decisions and more optimal mental health service planning.
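    A hedged sketch of the develop-then-validate workflow described above, using logistic regression and the C statistic (area under the ROC curve) on simulated data; the variables are stand-ins, not the NESARC risk factors:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 17))                    # 17 candidate risk factors
logit = X[:, 0] * 0.8 + X[:, 1] * 0.5 - 2.0        # toy data-generating model
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))  # first-onset indicator

# "Development" and "validation" samples stand in for the census-region split.
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
c_statistic = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"Validation C statistic: {c_statistic:.3f}")
```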

  15. Validation of differential gene expression algorithms: Application comparing fold-change estimation to hypothesis testing

    Directory of Open Access Journals (Sweden)

    Bickel David R

    2010-01-01

    Abstract Background Sustained research on the problem of determining which genes are differentially expressed on the basis of microarray data has yielded a plethora of statistical algorithms, each justified by theory, simulation, or ad hoc validation and yet differing in practical results from equally justified algorithms. Recently, a concordance method that measures agreement among gene lists has been introduced to assess various aspects of differential gene expression detection. This method has the advantage of basing its assessment solely on the results of real data analyses, but as it requires examining gene lists of given sizes, it may be unstable. Results Two methodologies for assessing predictive error are described: a cross-validation method and a posterior predictive method. As a nonparametric method of estimating prediction error from observed expression levels, cross validation provides an empirical approach to assessing algorithms for detecting differential gene expression that is fully justified for large numbers of biological replicates. Because it leverages the knowledge that only a small portion of genes are differentially expressed, the posterior predictive method is expected to provide more reliable estimates of algorithm performance, allaying concerns about limited biological replication. In practice, the posterior predictive method can assess when its approximations are valid and when they are inaccurate. Under conditions in which its approximations are valid, it corroborates the results of cross validation. Both comparison methodologies are applicable to both single-channel and dual-channel microarrays. For the data sets considered, estimating prediction error by cross validation demonstrates that empirical Bayes methods based on hierarchical models tend to outperform algorithms based on selecting genes by their fold changes or by non-hierarchical model-selection criteria. (The latter two approaches have comparable

  16. Surveillance case definitions for work related upper limb pain syndromes

    OpenAIRE

    Harrington, J. M.; Carter, J. T.; Birrell, L.; Gompertz, D.

    1998-01-01

    OBJECTIVES: To establish consensus case definitions for several common work related upper limb pain syndromes for use in surveillance or studies of the aetiology of these conditions. METHODS: A group of healthcare professionals from the disciplines interested in the prevention and management of upper limb disorders were recruited for a Delphi exercise. A questionnaire was used to establish case definitions from the participants, followed by a consensus conference involving the core grou...

  17. Intrusion-Aware Alert Validation Algorithm for Cooperative Distributed Intrusion Detection Schemes of Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Young-Jae Song

    2009-07-01

    Existing anomaly and intrusion detection schemes of wireless sensor networks have mainly focused on the detection of intrusions. Once an intrusion is detected, an alert or claim will be generated. However, any unidentified malicious nodes in the network could send faulty anomaly and intrusion claims about the legitimate nodes to the other nodes. Verifying the validity of such claims is a critical and challenging issue that is not considered in the existing cooperative-based distributed anomaly and intrusion detection schemes of wireless sensor networks. In this paper, we propose a validation algorithm that addresses this problem. This algorithm utilizes the concept of intrusion-aware reliability that helps to provide adequate reliability at a modest communication cost. We also provide a security resiliency analysis of the proposed intrusion-aware alert validation algorithm.

  18. Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight

    Science.gov (United States)

    Suorsa, Raymond; Sridhar, Banavar

    1991-01-01

    A validation facility being used at the NASA Ames Research Center is described which is aimed at testing vision-based obstacle detection and range estimation algorithms suitable for low-level helicopter flight. The facility is capable of processing hundreds of frames of calibrated multicamera 6 degree-of-freedom motion image sequences, generating calibrated multicamera laboratory images using convenient window-based software, and viewing range estimation results from different algorithms along with truth data using powerful window-based visualization software.

  19. Cloud detection algorithm comparison and validation for operational Landsat data products

    Science.gov (United States)

    Foga, Steven Curtis; Scaramuzza, Pat; Guo, Song; Zhu, Zhe; Dilley, Ronald; Beckmann, Tim; Schmidt, Gail L.; Dwyer, John L.; Hughes, MJ; Laue, Brady

    2017-01-01

    Clouds are a pervasive and unavoidable issue in satellite-borne optical imagery. Accurate, well-documented, and automated cloud detection algorithms are necessary to effectively leverage large collections of remotely sensed data. The Landsat project is uniquely suited for comparative validation of cloud assessment algorithms because the modular architecture of the Landsat ground system allows for quick evaluation of new code, and because Landsat has the most comprehensive manual truth masks of any current satellite data archive. Currently, the Landsat Level-1 Product Generation System (LPGS) uses separate algorithms for determining clouds, cirrus clouds, and snow and/or ice probability on a per-pixel basis. With more bands onboard the Landsat 8 Operational Land Imager (OLI)/Thermal Infrared Sensor (TIRS) satellite, and a greater number of cloud masking algorithms, the U.S. Geological Survey (USGS) is replacing the current cloud masking workflow with a more robust algorithm that is capable of working across multiple Landsat sensors with minimal modification. Because of the inherent error from stray light and intermittent data availability of TIRS, these algorithms need to operate both with and without thermal data. In this study, we created a workflow to evaluate cloud and cloud shadow masking algorithms using cloud validation masks manually derived from both Landsat 7 Enhanced Thematic Mapper Plus (ETM +) and Landsat 8 OLI/TIRS data. We created a new validation dataset consisting of 96 Landsat 8 scenes, representing different biomes and proportions of cloud cover. We evaluated algorithm performance by overall accuracy, omission error, and commission error for both cloud and cloud shadow. We found that CFMask, C code based on the Function of Mask (Fmask) algorithm, and its confidence bands have the best overall accuracy among the many algorithms tested using our validation data. The Artificial Thermal-Automated Cloud Cover Algorithm (AT-ACCA) is the most accurate
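    A small sketch of the per-class scores used above (overall accuracy, omission error, commission error), computed for a binary cloud mask against a manual truth mask; the arrays are illustrative only:

```python
import numpy as np

def mask_scores(truth: np.ndarray, predicted: np.ndarray) -> dict:
    """Per-class scores for a binary cloud mask (1 = cloud), as fractions."""
    tp = np.sum((truth == 1) & (predicted == 1))
    fp = np.sum((truth == 0) & (predicted == 1))
    fn = np.sum((truth == 1) & (predicted == 0))
    tn = np.sum((truth == 0) & (predicted == 0))
    return {
        "overall_accuracy": (tp + tn) / truth.size,
        "omission_error": fn / (tp + fn),      # true cloud missed by the algorithm
        "commission_error": fp / (tp + fp),    # clear pixels flagged as cloud
    }

# Hypothetical 1-D slices of a manual truth mask and an algorithm mask.
truth = np.array([1, 1, 0, 0, 1, 0, 1, 1, 0, 0])
pred  = np.array([1, 0, 0, 1, 1, 0, 1, 1, 0, 0])
print(mask_scores(truth, pred))
```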

  20. Medical chart validation of an algorithm for identifying multiple sclerosis relapse in healthcare claims.

    Science.gov (United States)

    Chastek, Benjamin J; Oleen-Burkey, Merrikay; Lopez-Bresnahan, Maria V

    2010-01-01

    Relapse is a common measure of disease activity in relapsing-remitting multiple sclerosis (MS). The objective of this study was to test the content validity of an operational algorithm for detecting relapse in claims data. A claims-based relapse detection algorithm was tested by comparing its detection rate over a 1-year period with relapses identified based on medical chart review. According to the algorithm, MS patients in a US healthcare claims database who had either (1) a primary claim for MS during hospitalization or (2) a corticosteroid claim following a MS-related outpatient visit were designated as having a relapse. Patient charts were examined for explicit indication of relapse or care suggestive of relapse. Positive and negative predictive values were calculated. Medical charts were reviewed for 300 MS patients, half of whom had a relapse according to the algorithm. The claims-based criteria correctly classified 67.3% of patients with relapses (positive predictive value) and 70.0% of patients without relapses (negative predictive value; kappa 0.373), supporting the content validity of the operational algorithm. Limitations of the algorithm include lack of differentiation between relapsing-remitting MS and other types, and that it does not incorporate measures of function and disability. The claims-based algorithm appeared to successfully detect moderate-to-severe MS relapse. This validated definition can be applied to future claims-based MS studies.
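    A minimal sketch of the two-part claims rule described above; the record structure, field names and the 30-day window are assumptions for illustration, not the study's specification:

```python
from datetime import date, timedelta

# Hypothetical claim records; field names are illustrative, not an actual claims schema.
def is_relapse_candidate(hospitalizations, outpatient_ms_visits, corticosteroid_claims,
                         window_days=30):
    """Flag a patient per the two criteria described above:
    (1) a hospitalization with a primary MS diagnosis, or
    (2) a corticosteroid claim within `window_days` after an MS-related outpatient visit.
    The 30-day window is an assumed value, not taken from the paper."""
    if any(h["primary_dx"] == "MS" for h in hospitalizations):
        return True
    for visit in outpatient_ms_visits:
        for claim in corticosteroid_claims:
            if timedelta(0) <= claim["date"] - visit["date"] <= timedelta(days=window_days):
                return True
    return False

print(is_relapse_candidate(
    hospitalizations=[],
    outpatient_ms_visits=[{"date": date(2009, 3, 1)}],
    corticosteroid_claims=[{"date": date(2009, 3, 10)}],
))
```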

  1. Development and validation of an algorithm for laser application in wound treatment

    Directory of Open Access Journals (Sweden)

    Diequison Rite da Cunha

    2017-12-01

    ABSTRACT Objective: To develop and validate an algorithm for laser wound therapy. Method: Methodological study and literature review. For the development of the algorithm, a review covering the past ten years was performed in the Health Sciences databases. The algorithm evaluation was performed by 24 participants: nurses, physiotherapists, and physicians. For data analysis, the Cronbach's alpha coefficient and the chi-square test for independence were used. The level of significance of the statistical test was established at 5% (p<0.05). Results: The professionals' responses regarding the ease of reading the algorithm indicated: 41.70%, great; 41.70%, good; 16.70%, regular. With regard to the algorithm being sufficient for supporting decisions related to wound evaluation and wound cleaning, 87.5% said yes to both questions. Regarding the participants' opinion that the algorithm contained enough information to support their decision regarding the choice of laser parameters, 91.7% said yes. The questionnaire presented reliability using the Cronbach's alpha coefficient test (α = 0.962). Conclusion: The developed and validated algorithm showed reliability for evaluation, wound cleaning, and use of laser therapy in wounds.
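    Since the questionnaire's reliability was assessed with Cronbach's alpha, a minimal sketch of that computation is shown below (the ratings are hypothetical, not the study data):

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of ratings."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical questionnaire ratings from 6 evaluators on 4 items (1-5 scale).
ratings = np.array([[5, 4, 5, 5],
                    [4, 4, 4, 5],
                    [5, 5, 5, 5],
                    [3, 3, 4, 4],
                    [4, 4, 5, 4],
                    [5, 5, 5, 5]])
print(round(cronbach_alpha(ratings), 3))
```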

  2. An automated database case definition for serious bleeding related to oral anticoagulant use.

    Science.gov (United States)

    Cunningham, Andrew; Stein, C Michael; Chung, Cecilia P; Daugherty, James R; Smalley, Walter E; Ray, Wayne A

    2011-06-01

    Bleeding complications are a serious adverse effect of medications that prevent abnormal blood clotting. To facilitate epidemiologic investigations of bleeding complications, we developed and validated an automated database case definition for bleeding-related hospitalizations. The case definition utilized information from an in-progress retrospective cohort study of warfarin-related bleeding in Tennessee Medicaid enrollees 30 years of age or older. It identified inpatient stays during the study period of January 1990 to December 2005 with diagnoses and/or procedures that indicated a current episode of bleeding. The definition was validated by medical record review for a sample of 236 hospitalizations. We reviewed 186 hospitalizations that had medical records with sufficient information for adjudication. Of these, 165 (89%, 95%CI: 83-92%) were clinically confirmed bleeding-related hospitalizations. An additional 19 hospitalizations (10%, 7-15%) were adjudicated as possibly bleeding-related. Of the 165 clinically confirmed bleeding-related hospitalizations, the automated database and clinical definitions had concordant anatomical sites (gastrointestinal, cerebral, genitourinary, other) for 163 (99%, 96-100%). For those hospitalizations with sufficient information to distinguish between upper/lower gastrointestinal bleeding, the concordance was 89% (76-96%) for upper gastrointestinal sites and 91% (77-97%) for lower gastrointestinal sites. A case definition for bleeding-related hospitalizations suitable for automated databases had a positive predictive value of between 89% and 99% and could distinguish specific bleeding sites. Copyright © 2011 John Wiley & Sons, Ltd.

  3. Crowdsourcing seizure detection: algorithm development and validation on human implanted device recordings.

    Science.gov (United States)

    Baldassano, Steven N; Brinkmann, Benjamin H; Ung, Hoameng; Blevins, Tyler; Conrad, Erin C; Leyde, Kent; Cook, Mark J; Khambhati, Ankit N; Wagenaar, Joost B; Worrell, Gregory A; Litt, Brian

    2017-06-01

    There exist significant clinical and basic research needs for accurate, automated seizure detection algorithms. These algorithms have translational potential in responsive neurostimulation devices and in automatic parsing of continuous intracranial electroencephalography data. An important barrier to developing accurate, validated algorithms for seizure detection is limited access to high-quality, expertly annotated seizure data from prolonged recordings. To overcome this, we hosted a kaggle.com competition to crowdsource the development of seizure detection algorithms using intracranial electroencephalography from canines and humans with epilepsy. The top three performing algorithms from the contest were then validated on out-of-sample patient data including standard clinical data and continuous ambulatory human data obtained over several years using the implantable NeuroVista seizure advisory system. Two hundred teams of data scientists from all over the world participated in the kaggle.com competition. The top performing teams submitted highly accurate algorithms with consistent performance in the out-of-sample validation study. The performance of these seizure detection algorithms, achieved using freely available code and data, sets a new reproducible benchmark for personalized seizure detection. We have also shared a 'plug and play' pipeline to allow other researchers to easily use these algorithms on their own datasets. The success of this competition demonstrates how sharing code and high quality data results in the creation of powerful translational tools with significant potential to impact patient care. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. Validation of the GCOM-W SCA and JAXA soil moisture algorithms

    Science.gov (United States)

    Satellite-based remote sensing of soil moisture has matured over the past decade as a result of the Global Change Observation Mission - Water (GCOM-W) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...

  5. Validation Study of a Predictive Algorithm to Evaluate Opioid Use Disorder in a Primary Care Setting

    Science.gov (United States)

    Sharma, Maneesh; Lee, Chee; Kantorovich, Svetlana; Tedtaotao, Maria; Smith, Gregory A.

    2017-01-01

    Background: Opioid abuse in chronic pain patients is a major public health issue. Primary care providers are frequently the first to prescribe opioids to patients suffering from pain, yet do not always have the time or resources to adequately evaluate the risk of opioid use disorder (OUD). Purpose: This study seeks to determine the predictability of aberrant behavior to opioids using a comprehensive scoring algorithm (“profile”) incorporating phenotypic and, more uniquely, genotypic risk factors. Methods and Results: In a validation study with 452 participants diagnosed with OUD and 1237 controls, the algorithm successfully categorized patients at high and moderate risk of OUD with 91.8% sensitivity. Regardless of changes in the prevalence of OUD, sensitivity of the algorithm remained >90%. Conclusion: The algorithm correctly stratifies primary care patients into low-, moderate-, and high-risk categories to appropriately identify patients in need for additional guidance, monitoring, or treatment changes. PMID:28890908

  6. Validation of neural spike sorting algorithms without ground-truth information.

    Science.gov (United States)

    Barnett, Alex H; Magland, Jeremy F; Greengard, Leslie F

    2016-05-01

    The throughput of electrophysiological recording is growing rapidly, allowing thousands of simultaneous channels, and there is a growing variety of spike sorting algorithms designed to extract neural firing events from such data. This creates an urgent need for standardized, automatic evaluation of the quality of neural units output by such algorithms. We introduce a suite of validation metrics that assess the credibility of a given automatic spike sorting algorithm applied to a given dataset. By rerunning the spike sorter two or more times, the metrics measure stability under various perturbations consistent with variations in the data itself, making no assumptions about the internal workings of the algorithm, and minimal assumptions about the noise. We illustrate the new metrics on standard sorting algorithms applied to both in vivo and ex vivo recordings, including a time series with overlapping spikes. We compare the metrics to existing quality measures, and to ground-truth accuracy in simulated time series. We provide a software implementation. Metrics have until now relied on ground-truth, simulated data, internal algorithm variables (e.g. cluster separation), or refractory violations. By contrast, by standardizing the interface, our metrics assess the reliability of any automatic algorithm without reference to internal variables (e.g. feature space) or physiological criteria. Stability is a prerequisite for reproducibility of results. Such metrics could reduce the significant human labor currently spent on validation, and should form an essential part of large-scale automated spike sorting and systematic benchmarking of algorithms. Copyright © 2016 Elsevier B.V. All rights reserved.
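    A hedged sketch of the underlying idea only (not the authors' metric suite): agreement between two runs of a spike sorter, measured after finding the best one-to-one matching of unit labels:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def run_agreement(labels_run1: np.ndarray, labels_run2: np.ndarray) -> float:
    """Fraction of spikes assigned consistently under the best unit-label matching."""
    units1, inv1 = np.unique(labels_run1, return_inverse=True)
    units2, inv2 = np.unique(labels_run2, return_inverse=True)
    confusion = np.zeros((units1.size, units2.size), dtype=int)
    np.add.at(confusion, (inv1, inv2), 1)           # contingency table of label pairs
    rows, cols = linear_sum_assignment(-confusion)  # maximise matched spike counts
    return confusion[rows, cols].sum() / labels_run1.size

# Hypothetical unit labels for the same spikes from two perturbed sorter runs.
run1 = np.array([0, 0, 1, 1, 2, 2, 2, 0])
run2 = np.array([5, 5, 3, 3, 7, 7, 3, 5])
print(run_agreement(run1, run2))
```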

  7. Validating the LASSO algorithm by unmixing spectral signatures in multicolor phantoms

    Science.gov (United States)

    Samarov, Daniel V.; Clarke, Matthew; Lee, Ji Yoon; Allen, David; Litorja, Maritoni; Hwang, Jeeseong

    2012-03-01

    As hyperspectral imaging (HSI) sees increased implementation into the biological and medical fields it becomes increasingly important that the algorithms being used to analyze the corresponding output be validated. While certainly important under any circumstance, as this technology begins to see a transition from benchtop to bedside ensuring that the measurements being given to medical professionals are accurate and reproducible is critical. In order to address these issues work has been done in generating a collection of datasets which could act as a test bed for algorithms validation. Using a microarray spot printer a collection of three food color dyes, acid red 1 (AR), brilliant blue R (BBR) and erioglaucine (EG) are mixed together at different concentrations in varying proportions at different locations on a microarray chip. With the concentration and mixture proportions known at each location, using HSI an algorithm should in principle, based on estimates of abundances, be able to determine the concentrations and proportions of each dye at each location on the chip. These types of data are particularly important in the context of medical measurements as the resulting estimated abundances will be used to make critical decisions which can have a serious impact on an individual's health. In this paper we present a novel algorithm for processing and analyzing HSI data based on the LASSO algorithm (similar to "basis pursuit"). The LASSO is a statistical method for simultaneously performing model estimation and variable selection. In the context of estimating abundances in an HSI scene these so called "sparse" representations provided by the LASSO are appropriate as not every pixel will be expected to contain every endmember. The algorithm we present takes the general framework of the LASSO algorithm a step further and incorporates the rich spatial information which is available in HSI to further improve the estimates of abundance. We show our algorithm's improvement
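    A minimal sketch of LASSO-based unmixing under stated assumptions (toy endmember spectra, no spatial term), not the authors' spatially-informed algorithm:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_bands = 60
endmembers = np.abs(rng.normal(size=(n_bands, 3)))   # columns: AR, BBR, EG spectra (toy)
true_abundance = np.array([0.6, 0.0, 0.3])           # one dye absent -> sparse solution
measured = endmembers @ true_abundance + rng.normal(scale=0.01, size=n_bands)

# Non-negative, sparse least squares: abundances that reconstruct the spectrum.
model = Lasso(alpha=0.001, positive=True, fit_intercept=False)
model.fit(endmembers, measured)
print("estimated abundances:", np.round(model.coef_, 3))
```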

  8. Bridging Ground Validation and Algorithms: Using Scattering and Integral Tables to Incorporate Observed DSD Correlations into Satellite Algorithms

    Science.gov (United States)

    Williams, C. R.

    2012-12-01

    The NASA Global Precipitation Mission (GPM) raindrop size distribution (DSD) Working Group is composed of NASA PMM Science Team Members and is charged to "investigate the correlations between DSD parameters using Ground Validation (GV) data sets that support, or guide, the assumptions used in satellite retrieval algorithms." Correlations between DSD parameters can be used to constrain the unknowns and reduce the degrees-of-freedom in under-constrained satellite algorithms. Over the past two years, the GPM DSD Working Group has analyzed GV data and has found correlations between the mass-weighted mean raindrop diameter (Dm) and the mass distribution standard deviation (Sm) that follows a power-law relationship. This Dm-Sm power-law relationship appears to be robust and has been observed in surface disdrometer and vertically pointing radar observations. One benefit of a Dm-Sm power-law relationship is that a three parameter DSD can be modeled with just two parameters: Dm and Nw that determines the DSD amplitude. In order to incorporate observed DSD correlations into satellite algorithms, the GPM DSD Working Group is developing scattering and integral tables that can be used by satellite algorithms. Scattering tables describe the interaction of electromagnetic waves on individual particles to generate cross sections of backscattering, extinction, and scattering. Scattering tables are independent of the distribution of particles. Integral tables combine scattering table outputs with DSD parameters and DSD correlations to generate integrated normalized reflectivity, attenuation, scattering, emission, and asymmetry coefficients. Integral tables contain both frequency dependent scattering properties and cloud microphysics. The GPM DSD Working Group has developed scattering tables for raindrops at both Dual Precipitation Radar (DPR) frequencies and at all GMI radiometer frequencies less than 100 GHz. Scattering tables include Mie and T-matrix scattering with H- and V

  9. Case definition and classification of leukodystrophies and leukoencephalopathies

    NARCIS (Netherlands)

    Vanderver, Adeline; Prust, Morgan; Tonduti, Davide; Mochel, Fanny; Hussey, Heather M.; Helman, Guy; Garbern, James; Eichler, Florian; Labauge, Pierre; Aubourg, Patrick; Rodriguez, Diana; Patterson, Marc C.; van Hove, Johan L. K.; Schmidt, Johanna; Wolf, Nicole I.; Boespflug-Tanguy, Odile; Schiffmann, Raphael; van der Knaap, Marjo S.

    2015-01-01

    An approved definition of the term leukodystrophy does not currently exist. The lack of a precise case definition hampers efforts to study the epidemiology and the relevance of genetic white matter disorders to public health. Thirteen experts at multiple institutions participated in iterative

  10. Case definition and classification of leukodystrophies and leukoencephalopathies

    NARCIS (Netherlands)

    Vanderver, A.; Prust, M.; Tonduti, D.; Mochel, F.; Hussey, H.M.; Helman, G.; Garbern, J.; Eichler, F.; Labauge, P.; Aubourg, P.; Rodriguez, D.; Patterson, M.C.; van Hove, J.LK.; Schmidt, J; Wolf, N.I.; Boespflug-Tanguy, O.; Schiffmann, R.; van der Knaap, M.S.

    2015-01-01

    Objective: An approved definition of the term leukodystrophy does not currently exist. The lack of a precise case definition hampers efforts to study the epidemiology and the relevance of genetic white matter disorders to public health. Method: Thirteen experts at multiple institutions participated

  11. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  12. A novel validation algorithm allows for automated cell tracking and the extraction of biologically meaningful parameters.

    Directory of Open Access Journals (Sweden)

    Daniel H Rapoport

    Automated microscopy is currently the only method for non-invasive, label-free observation of complex multi-cellular processes, such as cell migration, cell cycle, and cell differentiation. Extracting biological information from a time-series of micrographs requires each cell to be recognized and followed through sequential microscopic snapshots. Although recent attempts to automate this process have resulted in ever-improving cell detection rates, manual identification of identical cells is still the most reliable technique. However, its tedious and subjective nature prevented tracking from becoming a standardized tool for the investigation of cell cultures. Here, we present a novel method to accomplish automated cell tracking with a reliability comparable to manual tracking. Previously, automated cell tracking could not rival the reliability of manual tracking because, in contrast to the human way of solving this task, none of the algorithms had an independent quality control mechanism; they lacked validation. Thus, instead of trying to improve the cell detection or tracking rates, we proceeded from the idea to automatically inspect the tracking results and accept only those of high trustworthiness, while rejecting all other results. This validation algorithm works independently of the quality of cell detection and tracking through a systematic search for tracking errors. It is based only on very general assumptions about the spatiotemporal contiguity of cell paths. While traditional tracking often aims to yield genealogic information about single cells, the natural outcome of a validated cell tracking algorithm turns out to be a set of complete, but often unconnected cell paths, i.e. records of cells from mitosis to mitosis. This is a consequence of the fact that the validation algorithm takes complete paths as the unit of rejection/acceptance. The resulting set of complete paths can be used to automatically extract important biological parameters

  13. Dosimetric validation of the anisotropic analytical algorithm for photon dose calculation: fundamental characterization in water

    International Nuclear Information System (INIS)

    Fogliata, Antonella; Nicolini, Giorgia; Vanetti, Eugenio; Clivio, Alessandro; Cozzi, Luca

    2006-01-01

    In July 2005 a new algorithm was released by Varian Medical Systems for the Eclipse planning system and installed in our institute. It is the anisotropic analytical algorithm (AAA) for photon dose calculations, a convolution/superposition model for the first time implemented in a Varian planning system. It was therefore necessary to perform validation studies at different levels with a wide investigation approach. To validate the basic performances of the AAA, a detailed analysis of data computed by the AAA configuration algorithm was carried out and data were compared against measurements. To better appraise the performance of AAA and the capability of its configuration to tailor machine-specific characteristics, data obtained from the pencil beam convolution (PBC) algorithm implemented in Eclipse were also added in the comparison. Since the purpose of the paper is to address the basic performances of the AAA and of its configuration procedures, only data relative to measurements in water will be reported. Validation was carried out for three beams: 6 MV and 15 MV from a Clinac 2100C/D and 6 MV from a Clinac 6EX. Generally AAA calculations reproduced very well measured data, and small deviations were observed, on average, for all the quantities investigated for open and wedged fields. In particular, percentage depth-dose curves showed on average differences between calculation and measurement smaller than 1% or 1 mm, and computed profiles in the flattened region matched measurements with deviations smaller than 1% for all beams, field sizes, depths and wedges. Percentage differences in output factors were observed as small as 1% on average (with a range smaller than ±2%) for all conditions. Additional tests were carried out for enhanced dynamic wedges with results comparable to previous results. The basic dosimetric validation of the AAA was therefore considered satisfactory

  14. Computer-assisted expert case definition in electronic health records.

    Science.gov (United States)

    Walker, Alexander M; Zhou, Xiaofeng; Ananthakrishnan, Ashwin N; Weiss, Lisa S; Shen, Rongjun; Sobel, Rachel E; Bate, Andrew; Reynolds, Robert F

    2016-02-01

    To describe how computer-assisted presentation of case data can lead experts to infer machine-implementable rules for case definition in electronic health records. As an illustration the technique has been applied to obtain a definition of acute liver dysfunction (ALD) in persons with inflammatory bowel disease (IBD). The technique consists of repeatedly sampling new batches of case candidates from an enriched pool of persons meeting presumed minimal inclusion criteria, classifying the candidates by a machine-implementable candidate rule and by a human expert, and then updating the rule so that it captures new distinctions introduced by the expert. Iteration continues until an update results in an acceptably small number of changes to form a final case definition. The technique was applied to structured data and terms derived by natural language processing from text records in 29,336 adults with IBD. Over three rounds the technique led to rules with increasing predictive value, as the experts identified exceptions, and increasing sensitivity, as the experts identified missing inclusion criteria. In the final rule inclusion and exclusion terms were often keyed to an ALD onset date. When compared against clinical review in an independent test round, the derived final case definition had a sensitivity of 92% and a positive predictive value of 79%. An iterative technique of machine-supported expert review can yield a case definition that accommodates available data, incorporates pre-existing medical knowledge, is transparent and is open to continuous improvement. The expert updates to rules may be informative in themselves. In this limited setting, the final case definition for ALD performed better than previous, published attempts using expert definitions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
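
    A minimal sketch of the iterative loop described above: sample a batch of candidates, compare the machine-implementable rule with the expert's classification, and stop once an update would change acceptably little. The record layout, the term-matching rule and the propose_terms callable are illustrative assumptions, not the authors' implementation.

        import random

        def rule(record, inclusion, exclusion):
            """Candidate rule: any inclusion term present and no exclusion term."""
            text = record["text"].lower()
            return any(t in text for t in inclusion) and not any(t in text for t in exclusion)

        def refine_definition(pool, expert, propose_terms, inclusion, exclusion,
                              batch_size=50, max_changes=2, max_rounds=10):
            """expert(record) -> bool; propose_terms(disagreements) -> (new_incl, new_excl).
            inclusion/exclusion are sets of lower-case terms."""
            for round_no in range(1, max_rounds + 1):
                batch = random.sample(pool, min(batch_size, len(pool)))
                disagreements = [r for r in batch
                                 if rule(r, inclusion, exclusion) != expert(r)]
                if len(disagreements) <= max_changes:   # acceptably few changes: stop
                    break
                new_incl, new_excl = propose_terms(disagreements)
                inclusion, exclusion = inclusion | new_incl, exclusion | new_excl
            return inclusion, exclusion, round_no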

  15. Validity of administrative database code algorithms to identify vascular access placement, surgical revisions, and secondary patency.

    Science.gov (United States)

    Al-Jaishi, Ahmed A; Moist, Louise M; Oliver, Matthew J; Nash, Danielle M; Fleet, Jamie L; Garg, Amit X; Lok, Charmaine E

    2018-03-01

    We assessed the validity of physician billing codes and hospital admission using International Classification of Diseases 10th revision codes to identify vascular access placement, secondary patency, and surgical revisions in administrative data. We included adults (≥18 years) with a vascular access placed between 1 April 2004 and 31 March 2013 at the University Health Network, Toronto. Our reference standard was a prospective vascular access database (VASPRO) that contains information on vascular access type and dates of placement, dates for failure, and any revisions. We used VASPRO to assess the validity of different administrative coding algorithms by calculating the sensitivity, specificity, and positive predictive values of vascular access events. The sensitivity (95% confidence interval) of the best performing algorithm to identify arteriovenous access placement was 86% (83%, 89%) and specificity was 92% (89%, 93%). The corresponding numbers to identify catheter insertion were 84% (82%, 86%) and 84% (80%, 87%), respectively. The sensitivity of the best performing coding algorithm to identify arteriovenous access surgical revisions was 81% (67%, 90%) and specificity was 89% (87%, 90%). The algorithm capturing arteriovenous access placement and catheter insertion had a positive predictive value greater than 90% and arteriovenous access surgical revisions had a positive predictive value of 20%. The duration of arteriovenous access secondary patency was on average 578 (553, 603) days in VASPRO and 555 (530, 580) days in administrative databases. Administrative data algorithms have fair to good operating characteristics to identify vascular access placement and arteriovenous access secondary patency. Low positive predictive values for surgical revisions algorithm suggest that administrative data should only be used to rule out the occurrence of an event.
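
    For reference, the validation statistics reported here (and in several of the other records) follow from a simple 2x2 comparison of the coding algorithm against the reference standard. The helper below is a generic sketch with made-up counts; it is not tied to the VASPRO data.

        def diagnostic_accuracy(tp, fp, fn, tn):
            """Sensitivity, specificity, PPV and NPV from 2x2 table counts."""
            sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
            specificity = tn / (tn + fp) if (tn + fp) else float("nan")
            ppv = tp / (tp + fp) if (tp + fp) else float("nan")
            npv = tn / (tn + fn) if (tn + fn) else float("nan")
            return {"sensitivity": sensitivity, "specificity": specificity,
                    "ppv": ppv, "npv": npv}

        # Example with made-up counts:
        print(diagnostic_accuracy(tp=86, fp=8, fn=14, tn=92))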

  16. Enhancement of RWSN Lifetime via Firework Clustering Algorithm Validated by ANN

    Directory of Open Access Journals (Sweden)

    Ahmad Ali

    2018-03-01

    Full Text Available Nowadays, wireless power transfer is ubiquitously used in wireless rechargeable sensor networks (WSNs). Currently, energy limitation is a grave concern for WSNs, and lifetime enhancement of sensor networks remains a challenging task. To address this issue, the wireless charging vehicle is an emerging technology for improving overall network efficiency. The present study focuses on enhancing the overall network lifetime of a rechargeable wireless sensor network. To resolve the issues mentioned above, we propose a swarm-intelligence-based hard clustering approach using the fireworks algorithm with an adaptive transfer function (FWA-ATF). In this work, a virtual clustering method that utilizes the fireworks optimization algorithm is applied in the routing process. To date, the FWA-ATF algorithm has not been applied to RWSNs by other researchers. Furthermore, a validation study of the proposed method using the artificial neural network (ANN) backpropagation algorithm is incorporated in the present study. Different algorithms are applied to evaluate the performance of the proposed technique, which gives the best results in this mechanism. Numerical results indicate that our method outperforms existing methods and yields performance gains of up to 80% in terms of energy consumption and vacation time of the wireless charging vehicle.

  17. Pretest probability of a normal echocardiography: validation of a simple and practical algorithm for routine use.

    Science.gov (United States)

    Hammoudi, Nadjib; Duprey, Matthieu; Régnier, Philippe; Achkar, Marc; Boubrit, Lila; Preud'homme, Gisèle; Healy-Brucker, Aude; Vignalou, Jean-Baptiste; Pousset, Françoise; Komajda, Michel; Isnard, Richard

    2014-02-01

    Management of increased referrals for transthoracic echocardiography (TTE) examinations is a challenge. Patients with normal TTE examinations take less time to explore than those with heart abnormalities. A reliable method for assessing pretest probability of a normal TTE may optimize management of requests. To establish and validate, based on requests for examinations, a simple algorithm for defining pretest probability of a normal TTE. In a retrospective phase, factors associated with normality were investigated and an algorithm was designed. In a prospective phase, patients were classified in accordance with the algorithm as being at high or low probability of having a normal TTE. In the retrospective phase, 42% of 618 examinations were normal. In multivariable analysis, age and absence of cardiac history were associated to normality. Low pretest probability of normal TTE was defined by known cardiac history or, in case of doubt about cardiac history, by age>70 years. In the prospective phase, the prevalences of normality were 72% and 25% in high (n=167) and low (n=241) pretest probability of normality groups, respectively. The mean duration of normal examinations was significantly shorter than abnormal examinations (13.8 ± 9.2 min vs 17.6 ± 11.1 min; P=0.0003). A simple algorithm can classify patients referred for TTE as being at high or low pretest probability of having a normal examination. This algorithm might help to optimize management of requests in routine practice. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
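
    The final triage rule quoted above is simple enough to express directly in code. The sketch below assumes a true/false/unknown representation of cardiac history, which is an illustrative choice rather than the authors' exact data model.

        def pretest_probability_of_normal_tte(age, cardiac_history):
            """cardiac_history: True, False, or None when the request form is unclear."""
            if cardiac_history is True:
                return "low"                 # known cardiac history
            if cardiac_history is None and age > 70:
                return "low"                 # doubtful history and age > 70 years
            return "high"

        print(pretest_probability_of_normal_tte(62, cardiac_history=False))  # 'high'
        print(pretest_probability_of_normal_tte(78, cardiac_history=None))   # 'low'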

  18. Development and validation of an online interactive, multimedia wound care algorithms program.

    Science.gov (United States)

    Beitz, Janice M; van Rijswijk, Lia

    2012-01-01

    To provide education based on evidence-based and validated wound care algorithms we designed and implemented an interactive, Web-based learning program for teaching wound care. A mixed methods quantitative pilot study design with qualitative components was used to test and ascertain the ease of use, validity, and reliability of the online program. A convenience sample of 56 RN wound experts (formally educated, certified in wound care, or both) participated. The interactive, online program consists of a user introduction, interactive assessment of 15 acute and chronic wound photos, user feedback about the percentage correct, partially correct, or incorrect algorithm and dressing choices and a user survey. After giving consent, participants accessed the online program, provided answers to the demographic survey, and completed the assessment module and photographic test, along with a posttest survey. The construct validity of the online interactive program was strong. Eighty-five percent (85%) of algorithm and 87% of dressing choices were fully correct even though some programming design issues were identified. Online study results were consistently better than previously conducted comparable paper-pencil study results. Using a 5-point Likert-type scale, participants rated the program's value and ease of use as 3.88 (valuable to very valuable) and 3.97 (easy to very easy), respectively. Similarly the research process was described qualitatively as "enjoyable" and "exciting." This digital program was well received indicating its "perceived benefits" for nonexpert users, which may help reduce barriers to implementing safe, evidence-based care. Ongoing research using larger sample sizes may help refine the program or algorithms while identifying clinician educational needs. Initial design imperfections and programming problems identified also underscored the importance of testing all paper and Web-based programs designed to educate health care professionals or guide

  19. Detecting free-living steps and walking bouts: validating an algorithm for macro gait analysis.

    Science.gov (United States)

    Hickey, Aodhán; Del Din, Silvia; Rochester, Lynn; Godfrey, Alan

    2017-01-01

    Research suggests wearables and not instrumented walkways are better suited to quantify gait outcomes in clinic and free-living environments, providing a more comprehensive overview of walking due to continuous monitoring. Numerous validation studies in controlled settings exist, but few have examined the validity of wearables and associated algorithms for identifying and quantifying step counts and walking bouts in uncontrolled (free-living) environments. Studies which have examined free-living step and bout count validity found limited agreement due to variations in walking speed, changing terrain or task. Here we present a gait segmentation algorithm to define free-living step count and walking bouts from an open-source, high-resolution, accelerometer-based wearable (AX3, Axivity). Ten healthy participants (20-33 years) wore two portable gait measurement systems; a wearable accelerometer on the lower-back and a wearable body-mounted camera (GoPro HERO) on the chest, for 1 h on two separate occasions (24 h apart) during free-living activities. Step count and walking bouts were derived for both measurement systems and compared. For all participants during a total of almost 20 h of uncontrolled and unscripted free-living activity data, excellent relative (rho  ⩾  0.941) and absolute (ICC (2,1)   ⩾  0.975) agreement with no presence of bias were identified for step count compared to the camera (gold standard reference). Walking bout identification showed excellent relative (rho  ⩾  0.909) and absolute agreement (ICC (2,1)   ⩾  0.941) but demonstrated significant bias. The algorithm employed for identifying and quantifying steps and bouts from a single wearable accelerometer worn on the lower-back has been demonstrated to be valid and could be used for pragmatic gait analysis in prolonged uncontrolled free-living environments.

  20. Development of a Gestational Age-Specific Case Definition for Neonatal Necrotizing Enterocolitis.

    Science.gov (United States)

    Battersby, Cheryl; Longford, Nick; Costeloe, Kate; Modi, Neena

    2017-03-01

    Necrotizing enterocolitis (NEC) is a major cause of neonatal morbidity and mortality. Preventive and therapeutic research, surveillance, and quality improvement initiatives are hindered by variations in case definitions. To develop a gestational age (GA)-specific case definition for NEC. We conducted a prospective 34-month population study using clinician-recorded findings from the UK National Neonatal Research Database between December 2011 and September 2014 across all 163 neonatal units in England. We split study data into model development and validation data sets and categorized GA into groups (group 1, less than 26 weeks' GA; group 2, 26 to less than 30 weeks' GA; group 3, 30 to less than 37 weeks' GA; group 4, 37 or more weeks' GA). We entered GA, birth weight z score, and clinical and abdominal radiography findings as candidate variables in a logistic regression model, performed model fitting 1000 times, averaged the predictions, and used estimates from the fitted model to develop an ordinal NEC score and cut points to develop a dichotomous case definition based on the highest area under the receiver operating characteristic curves [AUCs] and positive predictive values [PPVs]. Abdominal radiography performed to investigate clinical concerns. Ordinal NEC likelihood score, dichotomous case definition, and GA-specific probability plots. Of the 3866 infants, the mean (SD) birth weight was 2049.1 (1941.7) g and mean (SD) GA was 32 (5) weeks; 2032 of 3663 (55.5%) were male. The total included 2978 infants (77.0%) without NEC and 888 (23.0%) with NEC. Infants with NEC in group 1 were less likely to present with pneumatosis (31.1% vs 47.2%; P = .01), blood in stool (11.8% vs 29.6%; P definition were 2 or greater for infants in groups 1 and 2, 3 or greater for infants in group 3, and 4 or greater for infants in group 4. The ordinal NEC score and dichotomous case definition discriminated well between infants with (AUC, 87%) and without (AUC, 80%) NEC. The case
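
    The gestational-age-specific cut points quoted above translate into a short decision rule once the ordinal NEC score has been computed from the fitted model; the score itself is taken as given in the sketch below, and only the grouping and thresholds follow the abstract.

        def ga_group(ga_weeks):
            if ga_weeks < 26: return 1
            if ga_weeks < 30: return 2
            if ga_weeks < 37: return 3
            return 4

        CUT_POINTS = {1: 2, 2: 2, 3: 3, 4: 4}   # minimum NEC score for a positive case

        def meets_nec_case_definition(nec_score, ga_weeks):
            return nec_score >= CUT_POINTS[ga_group(ga_weeks)]

        print(meets_nec_case_definition(nec_score=3, ga_weeks=28))  # True
        print(meets_nec_case_definition(nec_score=3, ga_weeks=38))  # False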

  1. Validating hierarchical verbal autopsy expert algorithms in a large data set with known causes of death.

    Science.gov (United States)

    Kalter, Henry D; Perin, Jamie; Black, Robert E

    2016-06-01

    Physician assessment historically has been the most common method of analyzing verbal autopsy (VA) data. Recently, the World Health Organization endorsed two automated methods, Tariff 2.0 and InterVA-4, which promise greater objectivity and lower cost. A disadvantage of the Tariff method is that it requires a training data set from a prior validation study, while InterVA relies on clinically specified conditional probabilities. We undertook to validate the hierarchical expert algorithm analysis of VA data, an automated, intuitive, deterministic method that does not require a training data set. Using Population Health Metrics Research Consortium study hospital source data, we compared the primary causes of 1629 neonatal and 1456 1-59 month-old child deaths from VA expert algorithms arranged in a hierarchy to their reference standard causes. The expert algorithms were held constant, while five prior and one new "compromise" neonatal hierarchy, and three former child hierarchies were tested. For each comparison, the reference standard data were resampled 1000 times within the range of cause-specific mortality fractions (CSMF) for one of three approximated community scenarios in the 2013 WHO global causes of death, plus one random mortality cause proportions scenario. We utilized CSMF accuracy to assess overall population-level validity, and the absolute difference between VA and reference standard CSMFs to examine particular causes. Chance-corrected concordance (CCC) and Cohen's kappa were used to evaluate individual-level cause assignment. Overall CSMF accuracy for the best-performing expert algorithm hierarchy was 0.80 (range 0.57-0.96) for neonatal deaths and 0.76 (0.50-0.97) for child deaths. Performance for particular causes of death varied, with fairly flat estimated CSMF over a range of reference values for several causes. Performance at the individual diagnosis level was also less favorable than that for overall CSMF (neonatal: best CCC = 0.23, range 0
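
    CSMF accuracy, the population-level metric used here, is commonly computed as one minus the summed absolute error in cause-specific mortality fractions, scaled by its maximum possible value. A sketch with illustrative fractions (not data from the study):

        def csmf_accuracy(true_csmf, predicted_csmf):
            """Both arguments map cause -> fraction; each set of fractions sums to 1."""
            causes = set(true_csmf) | set(predicted_csmf)
            abs_error = sum(abs(true_csmf.get(c, 0.0) - predicted_csmf.get(c, 0.0))
                            for c in causes)
            worst_case = 2.0 * (1.0 - min(true_csmf.values()))
            return 1.0 - abs_error / worst_case

        true_fractions = {"pneumonia": 0.35, "diarrhoea": 0.25, "malaria": 0.20, "other": 0.20}
        va_fractions   = {"pneumonia": 0.30, "diarrhoea": 0.30, "malaria": 0.15, "other": 0.25}
        print(round(csmf_accuracy(true_fractions, va_fractions), 3))   # 0.875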

  2. Validation of near infrared satellite based algorithms to relative atmospheric water vapour content over land

    International Nuclear Information System (INIS)

    Serpolla, A.; Bonafoni, S.; Basili, P.; Biondi, R.; Arino, O.

    2009-01-01

    This paper presents the validation results of ENVISAT MERIS and TERRA MODIS retrieval algorithms for atmospheric Water Vapour Content (WVC) estimation in clear-sky conditions over land. The MERIS algorithm exploits the radiance ratio of the absorbing channel at 900 nm with the almost absorption-free reference at 890 nm, while the MODIS one is based on the ratio of measurements centred near 0.905, 0.936, and 0.94 μm with atmospheric window reflectance at 0.865 and 1.24 μm. The first test was performed in the Mediterranean area using WVC provided by both ECMWF and AERONET. As a second step, the performance of the algorithms was tested using WVC computed from radio soundings (RAOBs) in North East Australia. The comparisons with respect to reference WVC values showed an overestimation of WVC by MODIS (root mean square error percentage greater than 20%) and an acceptable performance of the MERIS algorithm (root mean square error percentage around 10%)

  3. Validation of asthma recording in electronic health records: a systematic review

    Directory of Open Access Journals (Sweden)

    Nissen F

    2017-12-01

    Full Text Available Francis Nissen,1 Jennifer K Quint,2 Samantha Wilkinson,1 Hana Mullerova,3 Liam Smeeth,1 Ian J Douglas1 1Department of Non-Communicable Disease Epidemiology, London School of Hygiene and Tropical Medicine, London, UK; 2National Heart and Lung Institute, Imperial College, London, UK; 3RWD & Epidemiology, GSK R&D, Uxbridge, UK Objective: To describe the methods used to validate asthma diagnoses in electronic health records and summarize the results of the validation studies. Background: Electronic health records are increasingly being used for research on asthma to inform health services and health policy. Validation of the recording of asthma diagnoses in electronic health records is essential to use these databases for credible epidemiological asthma research. Methods: We searched EMBASE and MEDLINE databases for studies that validated asthma diagnoses detected in electronic health records up to October 2016. Two reviewers independently assessed the full text against the predetermined inclusion criteria. Key data including author, year, data source, case definitions, reference standard, and validation statistics (including sensitivity, specificity, positive predictive value [PPV], and negative predictive value [NPV]) were summarized in two tables. Results: Thirteen studies met the inclusion criteria. Most studies demonstrated a high validity using at least one case definition (PPV >80%). Ten studies used a manual validation as the reference standard; each had at least one case definition with a PPV of at least 63%, up to 100%. We also found two studies using a second independent database to validate asthma diagnoses. The PPVs of the best performing case definitions ranged from 46% to 58%. We found one study which used a questionnaire as the reference standard to validate a database case definition; the PPV of the case definition algorithm in this study was 89%. Conclusion: Attaining high PPVs (>80%) is possible using each of the discussed validation

  4. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  5. Automation of RELAP5 input calibration and code validation using genetic algorithm

    International Nuclear Information System (INIS)

    Phung, Viet-Anh; Kööp, Kaspar; Grishchenko, Dmitry; Vorobyev, Yury; Kudinov, Pavel

    2016-01-01

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the
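
    The paper's exact fitness function is not given in the abstract; the sketch below only illustrates the general idea of a weighted, normalized discrepancy over several system response quantities (the SRQ names, values and weights are made up).

        def fitness(simulated, measured, weights):
            """simulated/measured: dicts mapping SRQ name -> value; lower is better."""
            total = 0.0
            for srq, weight in weights.items():
                scale = abs(measured[srq]) or 1.0        # one simple normalization choice
                total += weight * abs(simulated[srq] - measured[srq]) / scale
            return total

        measured  = {"max_flow_rate": 0.42, "oscillation_period": 11.3}   # made-up values
        simulated = {"max_flow_rate": 0.39, "oscillation_period": 12.1}
        print(fitness(simulated, measured,
                      weights={"max_flow_rate": 1.0, "oscillation_period": 0.5}))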

  6. Automation of RELAP5 input calibration and code validation using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Phung, Viet-Anh, E-mail: vaphung@kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Kööp, Kaspar, E-mail: kaspar@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Grishchenko, Dmitry, E-mail: dmitry@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Vorobyev, Yury, E-mail: yura3510@gmail.com [National Research Center “Kurchatov Institute”, Kurchatov square 1, Moscow 123182 (Russian Federation); Kudinov, Pavel, E-mail: pavel@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden)

    2016-04-15

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the

  7. Validation of deformable image registration algorithms on CT images of ex vivo porcine bladders with fiducial markers

    NARCIS (Netherlands)

    Wognum, S.; Heethuis, S. E.; Rosario, T.; Hoogeman, M. S.; Bel, A.

    2014-01-01

    The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations.

  8. Algorithm Design and Validation for Adaptive Nonlinear Control Enhancement (ADVANCE) Technology Development for Resilient Flight Control, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — SSCI proposes to develop and test a framework referred to as the ADVANCE (Algorithm Design and Validation for Adaptive Nonlinear Control Enhancement), within which...

  9. Experimental validation of thermo-chemical algorithm for a simulation of pultrusion processes

    Science.gov (United States)

    Barkanov, E.; Akishin, P.; Miazza, N. L.; Galvez, S.; Pantelelis, N.

    2018-04-01

    To provide a better understanding of pultrusion processes with or without temperature control and to support pultrusion tooling design, an algorithm based on a mixed time integration scheme and the nodal control volumes method has been developed. In the present study, its experimental validation is carried out with the developed cure sensors, which measure the electrical resistivity and temperature on the profile surface. Through this verification process, the set of initial data used for a simulation of the pultrusion process with a rod profile has been successfully corrected and finally defined.

  10. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    Science.gov (United States)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of the calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing both efficient means to stage and process an input data set, to override static calibration coefficient look-up-tables (LUT) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm are automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  11. Ecodriver. D23.1: Report on test scenarios for validation of on-line vehicle algorithms

    NARCIS (Netherlands)

    Seewald, P.; Ivens, T.W.T.; Spronkmans, S.

    2014-01-01

    This deliverable provides a description of test scenarios that will be used for validation of WP22’s on-line vehicle algorithms. These algorithms consist of the two modules VE³ (Vehicle Energy and Environment Estimator) and RSG (Reference Signal Generator) and will be tested using the

  12. Sampling algorithms for validation of supervised learning models for Ising-like systems

    Science.gov (United States)

    Portman, Nataliya; Tamblyn, Isaac

    2017-12-01

    In this paper, we build and explore supervised learning models of ferromagnetic system behavior, using Monte-Carlo sampling of the spin configuration space generated by the 2D Ising model. Given the enormous size of the space of all possible Ising model realizations, the question arises as to how to choose a reasonable number of samples that will form physically meaningful and non-intersecting training and testing datasets. Here, we propose a sampling technique called "ID-MH" that uses the Metropolis-Hastings algorithm to create a Markov process across energy levels within the predefined configuration subspace. We show that application of this method retains phase transitions in both training and testing datasets and serves the purpose of validation of a machine learning algorithm. For larger lattice dimensions, ID-MH is not feasible as it requires knowledge of the complete configuration space. As such, we develop a new "block-ID" sampling strategy: it decomposes the given structure into square blocks with lattice dimension N ≤ 5 and uses ID-MH sampling of candidate blocks. Further comparison of the performance of commonly used machine learning methods such as random forests, decision trees, k nearest neighbors and artificial neural networks shows that the PCA-based Decision Tree regressor is the most accurate predictor of magnetizations of the Ising model. For energies, however, the accuracy of prediction is not satisfactory, highlighting the need to consider more algorithmically complex methods (e.g., deep learning).
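
    For context, a minimal single-spin-flip Metropolis sampler for the 2D Ising model is sketched below; this is standard Monte-Carlo sampling, not the ID-MH or block-ID procedure proposed in the paper, and the lattice size, inverse temperature and sweep count are arbitrary.

        import numpy as np

        def metropolis_ising(n=16, beta=0.4, sweeps=200, rng=None):
            rng = rng or np.random.default_rng(0)
            spins = rng.choice([-1, 1], size=(n, n))
            for _ in range(sweeps * n * n):
                i, j = rng.integers(0, n, size=2)
                neighbours = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
                              + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
                delta_e = 2 * spins[i, j] * neighbours          # energy cost of flipping
                if delta_e <= 0 or rng.random() < np.exp(-beta * delta_e):
                    spins[i, j] *= -1
            return spins

        config = metropolis_ising()
        print("magnetization per spin:", config.mean())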

  13. An Automated Defect Prediction Framework using Genetic Algorithms: A Validation of Empirical Studies

    Directory of Open Access Journals (Sweden)

    Juan Murillo-Morera

    2016-05-01

    Full Text Available Today, it is common for software projects to collect measurement data through development processes. With these data, defect prediction software can try to estimate the defect proneness of a software module, with the objective of assisting and guiding software practitioners. With timely and accurate defect predictions, practitioners can focus their limited testing resources on higher risk areas. This paper reports the results of three empirical studies that use an automated genetic defect prediction framework. This framework generates and compares different learning schemes (preprocessing + attribute selection + learning algorithms) and selects the best one using a genetic algorithm, with the objective of estimating the defect proneness of a software module. The first empirical study is a performance comparison of our framework with the most important framework in the literature. The second empirical study is a performance and runtime comparison between our framework and an exhaustive framework. The third empirical study is a sensitivity analysis. The last empirical study is our main contribution in this paper. Performance of the software development defect prediction models (using AUC, Area Under the Curve) was validated using NASA-MDP and PROMISE data sets. Seventeen data sets from NASA-MDP (13) and PROMISE (4) projects were analyzed running an NxM-fold cross-validation. A genetic algorithm was used to select the components of the learning schemes automatically, and to assess and report the results. Our results reported similar performance between frameworks. Our framework reported better runtime than the exhaustive framework. Finally, we reported the best configuration according to the sensitivity analysis.
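
    As a hedged illustration, a single "learning scheme" (preprocessing + attribute selection + learning algorithm) can be assembled and scored with cross-validated AUC as below; the genetic search over scheme components and the NASA-MDP/PROMISE data are not reproduced (synthetic data and scikit-learn components stand in).

        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import Pipeline
        from sklearn.preprocessing import StandardScaler

        X, y = make_classification(n_samples=400, n_features=20, n_informative=5,
                                   random_state=0)

        scheme = Pipeline([
            ("preprocess", StandardScaler()),             # preprocessing step
            ("select", SelectKBest(f_classif, k=8)),      # attribute selection step
            ("learn", LogisticRegression(max_iter=1000))  # learning algorithm
        ])

        auc = cross_val_score(scheme, X, y, cv=5, scoring="roc_auc")
        print("mean AUC: %.3f" % auc.mean())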

  14. Validation of coding algorithms for the identification of patients hospitalized for alcoholic hepatitis using administrative data.

    Science.gov (United States)

    Pang, Jack X Q; Ross, Erin; Borman, Meredith A; Zimmer, Scott; Kaplan, Gilaad G; Heitman, Steven J; Swain, Mark G; Burak, Kelly W; Quan, Hude; Myers, Robert P

    2015-09-11

    Epidemiologic studies of alcoholic hepatitis (AH) have been hindered by the lack of a validated International Classification of Disease (ICD) coding algorithm for use with administrative data. Our objective was to validate coding algorithms for AH using a hospitalization database. The Hospital Discharge Abstract Database (DAD) was used to identify consecutive adults (≥18 years) hospitalized in the Calgary region with a diagnosis code for AH (ICD-10, K70.1) between 01/2008 and 08/2012. Medical records were reviewed to confirm the diagnosis of AH, defined as a history of heavy alcohol consumption, elevated AST and/or ALT (34 μmol/L, and elevated INR. Subgroup analyses were performed according to the diagnosis field in which the code was recorded (primary vs. secondary) and AH severity. Algorithms that incorporated ICD-10 codes for cirrhosis and its complications were also examined. Of 228 potential AH cases, 122 patients had confirmed AH, corresponding to a positive predictive value (PPV) of 54% (95% CI 47-60%). PPV improved when AH was the primary versus a secondary diagnosis (67% vs. 21%; P codes for ascites (PPV 75%; 95% CI 63-86%), cirrhosis (PPV 60%; 47-73%), and gastrointestinal hemorrhage (PPV 62%; 51-73%) had improved performance, however, the prevalence of these diagnoses in confirmed AH cases was low (29-39%). In conclusion the low PPV of the diagnosis code for AH suggests that caution is necessary if this hospitalization database is used in large-scale epidemiologic studies of this condition.

  15. Chiari malformation Type I surgery in pediatric patients. Part 1: validation of an ICD-9-CM code search algorithm.

    Science.gov (United States)

    Ladner, Travis R; Greenberg, Jacob K; Guerrero, Nicole; Olsen, Margaret A; Shannon, Chevis N; Yarbrough, Chester K; Piccirillo, Jay F; Anderson, Richard C E; Feldstein, Neil A; Wellons, John C; Smyth, Matthew D; Park, Tae Sung; Limbrick, David D

    2016-05-01

    OBJECTIVE Administrative billing data may facilitate large-scale assessments of treatment outcomes for pediatric Chiari malformation Type I (CM-I). Validated International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) code algorithms for identifying CM-I surgery are critical prerequisites for such studies but are currently only available for adults. The objective of this study was to validate two ICD-9-CM code algorithms using hospital billing data to identify pediatric patients undergoing CM-I decompression surgery. METHODS The authors retrospectively analyzed the validity of two ICD-9-CM code algorithms for identifying pediatric CM-I decompression surgery performed at 3 academic medical centers between 2001 and 2013. Algorithm 1 included any discharge diagnosis code of 348.4 (CM-I), as well as a procedure code of 01.24 (cranial decompression) or 03.09 (spinal decompression or laminectomy). Algorithm 2 restricted this group to the subset of patients with a primary discharge diagnosis of 348.4. The positive predictive value (PPV) and sensitivity of each algorithm were calculated. RESULTS Among 625 first-time admissions identified by Algorithm 1, the overall PPV for CM-I decompression was 92%. Among the 581 admissions identified by Algorithm 2, the PPV was 97%. The PPV for Algorithm 1 was lower in one center (84%) compared with the other centers (93%-94%), whereas the PPV of Algorithm 2 remained high (96%-98%) across all subgroups. The sensitivity of Algorithms 1 (91%) and 2 (89%) was very good and remained so across subgroups (82%-97%). CONCLUSIONS An ICD-9-CM algorithm requiring a primary diagnosis of CM-I has excellent PPV and very good sensitivity for identifying CM-I decompression surgery in pediatric patients. These results establish a basis for utilizing administrative billing data to assess pediatric CM-I treatment outcomes.
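
    The two algorithms are defined entirely by the codes quoted above, so they can be expressed as simple filters over claims records; the dictionary field names below are illustrative assumptions rather than an actual billing-data schema.

        CM1_DIAGNOSIS = "348.4"
        DECOMPRESSION_PROCEDURES = {"01.24", "03.09"}

        def algorithm_1(admission):
            """Any-position diagnosis of 348.4 plus a qualifying procedure code."""
            return (CM1_DIAGNOSIS in admission["diagnosis_codes"]
                    and bool(DECOMPRESSION_PROCEDURES & set(admission["procedure_codes"])))

        def algorithm_2(admission):
            """Algorithm 1 restricted to admissions with 348.4 as the primary diagnosis."""
            return (algorithm_1(admission)
                    and admission["diagnosis_codes"][0] == CM1_DIAGNOSIS)

        admission = {"diagnosis_codes": ["348.4", "331.4"], "procedure_codes": ["01.24"]}
        print(algorithm_1(admission), algorithm_2(admission))   # True True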

  16. Orbiting Carbon Observatory-2 (OCO-2) cloud screening algorithms: validation against collocated MODIS and CALIOP data

    Science.gov (United States)

    Taylor, Thomas E.; O'Dell, Christopher W.; Frankenberg, Christian; Partain, Philip T.; Cronk, Heather Q.; Savtchenko, Andrey; Nelson, Robert R.; Rosenthal, Emily J.; Chang, Albert Y.; Fisher, Brenden; Osterman, Gregory B.; Pollock, Randy H.; Crisp, David; Eldering, Annmarie; Gunson, Michael R.

    2016-03-01

    The objective of the National Aeronautics and Space Administration's (NASA) Orbiting Carbon Observatory-2 (OCO-2) mission is to retrieve the column-averaged carbon dioxide (CO2) dry air mole fraction (XCO2) from satellite measurements of reflected sunlight in the near-infrared. These estimates can be biased by clouds and aerosols, i.e., contamination, within the instrument's field of view. Screening of the most contaminated soundings minimizes unnecessary calls to the computationally expensive Level 2 (L2) XCO2 retrieval algorithm. Hence, robust cloud screening methods have been an important focus of the OCO-2 algorithm development team. Two distinct, computationally inexpensive cloud screening algorithms have been developed for this application. The A-Band Preprocessor (ABP) retrieves the surface pressure using measurements in the 0.76 µm O2 A band, neglecting scattering by clouds and aerosols, which introduce photon path-length differences that can cause large deviations between the expected and retrieved surface pressure. The Iterative Maximum A Posteriori (IMAP) Differential Optical Absorption Spectroscopy (DOAS) Preprocessor (IDP) retrieves independent estimates of the CO2 and H2O column abundances using observations taken at 1.61 µm (weak CO2 band) and 2.06 µm (strong CO2 band), while neglecting atmospheric scattering. The CO2 and H2O column abundances retrieved in these two spectral regions differ significantly in the presence of cloud and scattering aerosols. The combination of these two algorithms, which are sensitive to different features in the spectra, provides the basis for cloud screening of the OCO-2 data set.To validate the OCO-2 cloud screening approach, collocated measurements from NASA's Moderate Resolution Imaging Spectrometer (MODIS), aboard the Aqua platform, were compared to results from the two OCO-2 cloud screening algorithms. With tuning of algorithmic threshold parameters that allows for processing of ≃ 20-25 % of all OCO-2 soundings

  17. Proposed clinical case definition for cytomegalovirus-immune recovery retinitis.

    Science.gov (United States)

    Ruiz-Cruz, Matilde; Alvarado-de la Barrera, Claudia; Ablanedo-Terrazas, Yuria; Reyes-Terán, Gustavo

    2014-07-15

    Cytomegalovirus (CMV) retinitis has been extensively described in patients with advanced or late human immunodeficiency virus (HIV) disease under ineffective treatment of opportunistic infection and antiretroviral therapy (ART) failure. However, there is limited information about patients who develop active cytomegalovirus retinitis as an immune reconstitution inflammatory syndrome (IRIS) after successful initiation of ART. Therefore, a case definition of cytomegalovirus-immune recovery retinitis (CMV-IRR) is proposed here. We reviewed medical records of 116 HIV-infected patients with CMV retinitis attending our institution during January 2003-June 2012. We retrospectively studied HIV-infected patients who had CMV retinitis on ART initiation or during the subsequent 6 months. Clinical and immunological characteristics of patients with active CMV retinitis were described. Of the 75 patients under successful ART included in the study, 20 had improvement of CMV retinitis. The remaining 55 patients experienced CMV-IRR; 35 of those developed CMV-IRR after ART initiation (unmasking CMV-IRR) and 20 experienced paradoxical clinical worsening of retinitis (paradoxical CMV-IRR). Nineteen patients with CMV-IRR had a CD4 count of ≥50 cells/µL. Six patients with CMV-IRR subsequently developed immune recovery uveitis. There is no case definition for CMV-IRR, although this condition is likely to occur after successful initiation of ART, even in patients with high CD4 T-cell counts. By consequence, we propose the case definitions for paradoxical and unmasking CMV-IRR. We recommend close follow-up of HIV-infected patients following ART initiation. © The Author 2014. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. Validation of an algorithm-based definition of treatment resistance in patients with schizophrenia.

    Science.gov (United States)

    Ajnakina, Olesya; Horsdal, Henriette Thisted; Lally, John; MacCabe, James H; Murray, Robin M; Gasse, Christiane; Wimberley, Theresa

    2018-02-19

    Large-scale pharmacoepidemiological research on treatment resistance relies on accurate identification of people with treatment-resistant schizophrenia (TRS) based on data that are retrievable from administrative registers. This is usually approached by operationalising clinical treatment guidelines by using prescription and hospital admission information. We examined the accuracy of an algorithm-based definition of TRS based on clozapine prescription and/or meeting algorithm-based eligibility criteria for clozapine against a gold standard definition using case notes. We additionally validated a definition entirely based on clozapine prescription. 139 schizophrenia patients aged 18-65 years were followed for a mean of 5 years after first presentation to psychiatric services in South London, UK. The diagnostic accuracy of the algorithm-based measure against the gold standard was measured with sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV). A total of 45 (32.4%) schizophrenia patients met the criteria for the gold standard definition of TRS; applying the algorithm-based definition to the same cohort led to 44 (31.7%) patients fulfilling criteria for TRS with sensitivity, specificity, PPV and NPV of 62.2%, 83.0%, 63.6% and 82.1%, respectively. The definition based on lifetime clozapine prescription had sensitivity, specificity, PPV and NPV of 40.0%, 94.7%, 78.3% and 76.7%, respectively. Although a perfect definition of TRS cannot be derived from available prescription and hospital registers, these results indicate that researchers can confidently use registries to identify individuals with TRS for research and clinical practices. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Validation of the Saskatoon Falls Prevention Consortium's Falls Screening and Referral Algorithm

    Science.gov (United States)

    Lawson, Sara Nicole; Zaluski, Neal; Petrie, Amanda; Arnold, Cathy; Basran, Jenny

    2013-01-01

    ABSTRACT Purpose: To investigate the concurrent validity of the Saskatoon Falls Prevention Consortium's Falls Screening and Referral Algorithm (FSRA). Method: A total of 29 older adults (mean age 77.7 [SD 4.0] y) residing in an independent-living seniors' complex who met inclusion criteria completed a demographic questionnaire and the components of the FSRA and Berg Balance Scale (BBS). The FSRA consists of the Elderly Fall Screening Test (EFST) and the Multi-factor Falls Questionnaire (MFQ); it is designed to categorize individuals into low, moderate, or high fall-risk categories to determine appropriate management pathways. A predictive model for probability of fall risk, based on previous research, was used to determine concurrent validity of the FSRA. Results: The FSRA placed 79% of participants into the low-risk category, whereas the predictive model found the probability of fall risk to range from 0.04 to 0.74, with a mean of 0.35 (SD 0.25). No statistically significant correlation was found between the FSRA and the predictive model for probability of fall risk (Spearman's ρ=0.35, p=0.06). Conclusion: The FSRA lacks concurrent validity relative to a previously established model of fall risk and appears to over-categorize individuals into the low-risk group. Further research on the FSRA as an adequate tool to screen community-dwelling older adults for fall risk is recommended. PMID:24381379
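
    The concurrent-validity check reported here is a rank correlation between the FSRA category and the model-based probability of fall risk. A sketch with toy data using SciPy:

        from scipy.stats import spearmanr

        fsra_category     = [0, 0, 1, 0, 2, 1, 0, 2, 1, 0]   # 0=low, 1=moderate, 2=high
        model_probability = [0.10, 0.22, 0.35, 0.08, 0.60, 0.41, 0.15, 0.74, 0.30, 0.05]

        rho, p_value = spearmanr(fsra_category, model_probability)
        print("rho = %.2f, p = %.3f" % (rho, p_value))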

  20. Orbiting Carbon Observatory-2 (OCO-2) cloud screening algorithms; validation against collocated MODIS and CALIOP data

    Science.gov (United States)

    Taylor, T. E.; O'Dell, C. W.; Frankenberg, C.; Partain, P.; Cronk, H. Q.; Savtchenko, A.; Nelson, R. R.; Rosenthal, E. J.; Chang, A. Y.; Fisher, B.; Osterman, G.; Pollock, R. H.; Crisp, D.; Eldering, A.; Gunson, M. R.

    2015-12-01

    The objective of the National Aeronautics and Space Administration's (NASA) Orbiting Carbon Observatory-2 (OCO-2) mission is to retrieve the column-averaged carbon dioxide (CO2) dry air mole fraction (XCO2) from satellite measurements of reflected sunlight in the near-infrared. These estimates can be biased by clouds and aerosols within the instrument's field of view (FOV). Screening of the most contaminated soundings minimizes unnecessary calls to the computationally expensive Level 2 (L2) XCO2 retrieval algorithm. Hence, robust cloud screening methods have been an important focus of the OCO-2 algorithm development team. Two distinct, computationally inexpensive cloud screening algorithms have been developed for this application. The A-Band Preprocessor (ABP) retrieves the surface pressure using measurements in the 0.76 μm O2 A-band, neglecting scattering by clouds and aerosols, which introduce photon path-length (PPL) differences that can cause large deviations between the expected and retrieved surface pressure. The Iterative Maximum A-Posteriori (IMAP) Differential Optical Absorption Spectroscopy (DOAS) Preprocessor (IDP) retrieves independent estimates of the CO2 and H2O column abundances using observations taken at 1.61 μm (weak CO2 band) and 2.06 μm (strong CO2 band), while neglecting atmospheric scattering. The CO2 and H2O column abundances retrieved in these two spectral regions differ significantly in the presence of cloud and scattering aerosols. The combination of these two algorithms, which key off of different features in the spectra, provides the basis for cloud screening of the OCO-2 data set. To validate the OCO-2 cloud screening approach, collocated measurements from NASA's Moderate Resolution Imaging Spectrometer (MODIS), aboard the Aqua platform, were compared to results from the two OCO-2 cloud screening algorithms. With tuning to allow throughputs of ≃ 30 %, agreement between the OCO-2 and MODIS cloud screening methods is found to be

  1. SU-E-T-516: Dosimetric Validation of AcurosXB Algorithm in Comparison with AAA & CCC Algorithms for VMAT Technique.

    Science.gov (United States)

    Kathirvel, M; Subramanian, V Sai; Arun, G; Thirumalaiswamy, S; Ramalingam, K; Kumar, S Ashok; Jagadeesh, K

    2012-06-01

    To dosimetrically validate the AcurosXB algorithm for Volumetric Modulated Arc Therapy (VMAT) in comparison with the standard clinical Anisotropic Analytical Algorithm (AAA) and Collapsed Cone Convolution (CCC) dose calculation algorithms. The AcurosXB dose calculation algorithm is available with the Varian Eclipse treatment planning system (V10). It uses a grid-based Boltzmann equation solver to predict dose precisely in less time. This study was undertaken to assess the algorithm's ability to predict dose as accurately as it is delivered, for which five clinical cases each of brain, head & neck, thoracic, pelvic and SBRT treatments were taken. Verification plans were created on a multicube phantom with the iMatrixx-2D detector array; dose prediction was then done with the AcurosXB, AAA and CCC (COMPASS system) algorithms, and the same plans were delivered on the CLINAC-iX treatment machine. The delivered dose was captured in the iMatrixx plane for all 25 plans. The measured dose was taken as the reference to quantify the agreement of the AcurosXB calculation algorithm against the previously validated AAA and CCC algorithms. Gamma evaluation was performed with clinical criteria of distance-to-agreement 3 and 2 mm and dose difference 3% and 2% in the OmniPro-I'mRT software. Plans were evaluated in terms of correlation coefficient, quantitative area gamma and average gamma. The study shows good agreement, with mean correlations of 0.9979±0.0012, 0.9984±0.0009 and 0.9979±0.0011 for AAA, CCC and Acuros, respectively. Mean area gamma for the 3 mm/3% criterion was found to be 98.80±1.04, 98.14±2.31 and 98.08±2.01, and for 2 mm/2% was found to be 93.94±3.83, 87.17±10.54 and 92.36±5.46, for AAA, CCC and Acuros, respectively. Mean average gamma for 3 mm/3% was 0.26±0.07, 0.42±0.08 and 0.28±0.09, and for 2 mm/2% was 0.39±0.10, 0.64±0.11 and 0.42±0.13, for AAA, CCC and Acuros, respectively. This study demonstrated that the AcurosXB algorithm has good agreement with AAA and CCC in terms of dose prediction. In conclusion, the AcurosXB algorithm provides a valid, accurate and speedy alternative to AAA
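
    The gamma evaluation above was performed in the OmniPro software; purely to illustrate the metric, a simplified one-dimensional global gamma (dose difference plus distance-to-agreement) calculation is sketched below with toy depth-dose curves.

        import numpy as np

        def gamma_1d(positions, measured, calculated, dose_crit=0.03, dist_crit=3.0):
            """positions in mm; dose_crit as a fraction of the maximum measured dose."""
            dd_norm = dose_crit * measured.max()
            gammas = []
            for x_m, d_m in zip(positions, measured):
                dist2 = ((positions - x_m) / dist_crit) ** 2
                dose2 = ((calculated - d_m) / dd_norm) ** 2
                gammas.append(np.sqrt(dist2 + dose2).min())
            gammas = np.array(gammas)
            return gammas, (gammas <= 1.0).mean() * 100.0   # per-point gamma, pass rate (%)

        x = np.arange(0, 100, 1.0)                  # depth in mm
        measured = np.exp(-x / 120.0)               # toy depth-dose curves
        calculated = np.exp(-x / 118.0)
        _, pass_rate = gamma_1d(x, measured, calculated)
        print("gamma pass rate: %.1f%%" % pass_rate)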

  2. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  3. Molecular phylogenetic trees - On the validity of the Goodman-Moore augmentation algorithm

    Science.gov (United States)

    Holmquist, R.

    1979-01-01

    A response is made to the reply of Nei and Tateno (1979) to the letter of Holmquist (1978) supporting the validity of the augmentation algorithm of Moore (1977) in reconstructions of nucleotide substitutions by means of the maximum parsimony principle. It is argued that the overestimation of the augmented numbers of nucleotide substitutions (augmented distances) found by Tateno and Nei (1978) is due to an unrepresentative data sample and that it is only necessary that evolution be stochastically uniform in different regions of the phylogenetic network for the augmentation method to be useful. The importance of the average value of the true distance over all links is explained, and the relative variances of the true and augmented distances are calculated to be almost identical. The effects of topological changes in the phylogenetic tree on the augmented distance and the question of the correctness of ancestral sequences inferred by the method of parsimony are also clarified.

  4. Validation of algorithm used for location of electrodes in CT images

    International Nuclear Information System (INIS)

    Bustos, J; Graffigna, J P; Isoardi, R; Gómez, M E; Romo, R

    2013-01-01

    A noninvasive technique has been implemented to detect and delineate the focus of electric discharge in patients with mono-focal epilepsy. For the detection of these sources, an electroencephalogram (EEG) with a 128-electrode cap is used. With the EEG data and the electrode positions, it is possible to locate this focus on MR volumes. The technique locates the electrodes on CT volumes using image processing algorithms to obtain descriptors of the electrodes, such as the centroid, which determines their position in space. Finally, these points are transformed into the coordinate space of the MR through a registration, for a better understanding by the physician. Due to the medical implications of this technique, it is of utmost importance to validate the results of the detection of electrode coordinates. To that end, this paper presents a comparison between the actual values measured physically (including electrode size and spatial location) and the values obtained from the processing of CT and MR images.
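
    The centroid descriptor mentioned above can be obtained from a thresholded volume with standard connected-component tools; the sketch below uses synthetic data and SciPy, and does not reproduce the paper's full descriptor set or the CT-to-MR registration step.

        import numpy as np
        from scipy import ndimage

        volume = np.zeros((50, 50, 50))
        volume[10:13, 20:23, 30:33] = 1.0          # two synthetic "electrodes"
        volume[40:43, 5:8, 15:18] = 1.0

        mask = volume > 0.5                         # intensity threshold
        labels, n_objects = ndimage.label(mask)     # connected components
        centroids = ndimage.center_of_mass(mask, labels, range(1, n_objects + 1))
        print(n_objects, centroids)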

  5. On Federated and Proof Of Validation Based Consensus Algorithms In Blockchain

    Science.gov (United States)

    Ambili, K. N.; Sindhu, M.; Sethumadhavan, M.

    2017-08-01

    Almost all real-world activities have been digitized, and various client-server architecture based systems are in place to handle them. These all rely on trust in third parties. There is an active attempt to implement blockchain-based systems which ensure that IT systems are immutable, double spending is avoided and cryptographic strength is provided. A successful implementation of blockchain as the backbone of existing information technology systems is bound to eliminate various types of fraud and ensure quicker delivery of the item on trade. To adapt IT systems to a blockchain architecture, an efficient consensus algorithm needs to be designed. Blockchain based on proof of work first emerged as the backbone of cryptocurrency; since then, several other methods with a variety of interesting features have appeared. In this paper, we conduct a survey of existing attempts to achieve consensus in blockchain, comparing a federated consensus method and a proof of validation method.

  6. Performance of the Tariff Method: validation of a simple additive algorithm for analysis of verbal autopsies

    Directory of Open Access Journals (Sweden)

    Murray Christopher JL

    2011-08-01

    Full Text Available Abstract Background Verbal autopsies provide valuable information for studying mortality patterns in populations that lack reliable vital registration data. Methods for transforming verbal autopsy results into meaningful information for health workers and policymakers, however, are often costly or complicated to use. We present a simple additive algorithm, the Tariff Method (termed Tariff), which can be used for assigning individual cause of death and for determining cause-specific mortality fractions (CSMFs) from verbal autopsy data. Methods Tariff calculates a score, or "tariff," for each cause, for each sign/symptom, across a pool of validated verbal autopsy data. The tariffs are summed for a given response pattern in a verbal autopsy, and this sum (score) provides the basis for predicting the cause of death in a dataset. We implemented this algorithm and evaluated the method's predictive ability, both in terms of chance-corrected concordance at the individual cause assignment level and in terms of CSMF accuracy at the population level. The analysis was conducted separately for adult, child, and neonatal verbal autopsies across 500 pairs of train-test validation verbal autopsy data. Results Tariff is capable of outperforming physician-certified verbal autopsy in most cases. In terms of chance-corrected concordance, the method achieves 44.5% in adults, 39% in children, and 23.9% in neonates. CSMF accuracy was 0.745 in adults, 0.709 in children, and 0.679 in neonates. Conclusions Verbal autopsies can be an efficient means of obtaining cause of death data, and Tariff provides an intuitive, reliable method for generating individual cause assignment and CSMFs. The method is transparent and flexible and can be readily implemented by users without training in statistics or computer science.
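
    The additive scoring step described above can be sketched in a few lines of Python. The tariff values and symptom vector below are invented; the published method derives its tariffs from symptom endorsement patterns in the validated training data.

```python
import numpy as np

def assign_cause(symptom_vector, tariff_matrix, causes):
    """Sum per-cause tariffs over endorsed signs/symptoms and return the
    highest-scoring cause, in the spirit of the Tariff Method."""
    scores = tariff_matrix @ symptom_vector
    return causes[int(np.argmax(scores))], scores

causes = ["cause_A", "cause_B"]                 # illustrative causes
tariffs = np.array([[ 3.0, -1.0, 0.5],          # invented tariff scores
                    [-2.0,  4.0, 1.0]])
print(assign_cause(np.array([1, 0, 1]), tariffs, causes))  # ('cause_A', array([ 3.5, -1. ]))
```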

  7. Indications for spine surgery: validation of an administrative coding algorithm to classify degenerative diagnoses

    Science.gov (United States)

    Lurie, Jon D.; Tosteson, Anna N.A.; Deyo, Richard A.; Tosteson, Tor; Weinstein, James; Mirza, Sohail K.

    2014-01-01

    Study Design Retrospective analysis of Medicare claims linked to a multi-center clinical trial. Objective The Spine Patient Outcomes Research Trial (SPORT) provided a unique opportunity to examine the validity of a claims-based algorithm for grouping patients by surgical indication. SPORT enrolled patients for lumbar disc herniation, spinal stenosis, and degenerative spondylolisthesis. We compared the surgical indication derived from Medicare claims to that provided by SPORT surgeons, the “gold standard”. Summary of Background Data Administrative data are frequently used to report procedure rates, surgical safety outcomes, and costs in the management of spinal surgery. However, the accuracy of using diagnosis codes to classify patients by surgical indication has not been examined. Methods Medicare claims were linked to beneficiaries enrolled in SPORT. The sensitivity and specificity of three claims-based approaches to group patients based on surgical indications were examined: 1) using the first listed diagnosis; 2) using all diagnoses independently; and 3) using a diagnosis hierarchy based on the support for fusion surgery. Results Medicare claims were obtained from 376 SPORT participants, including 21 with disc herniation, 183 with spinal stenosis, and 172 with degenerative spondylolisthesis. The hierarchical coding algorithm was the most accurate approach for classifying patients by surgical indication, with sensitivities of 76.2%, 88.1%, and 84.3% for disc herniation, spinal stenosis, and degenerative spondylolisthesis cohorts, respectively. The specificity was 98.3% for disc herniation, 83.2% for spinal stenosis, and 90.7% for degenerative spondylolisthesis. Misclassifications were primarily due to codes attributing more complex pathology to the case. Conclusion Standardized approaches for using claims data to accurately group patients by surgical indications have widespread interest. We found that a hierarchical coding approach correctly classified over 90
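
    A hierarchical classification of the kind evaluated above can be sketched as follows; the ICD-9 code sets are placeholders, not the study's actual groupings.

```python
# Placeholder ICD-9 groupings; the study's full code lists are not reproduced here.
SPONDYLOLISTHESIS = {"738.4", "756.12"}
STENOSIS = {"724.02"}
HERNIATION = {"722.10"}

def classify_indication(diagnosis_codes):
    """Toy diagnosis hierarchy: check the indication giving the strongest support
    for fusion first, then fall through to the less complex indications."""
    codes = set(diagnosis_codes)
    if codes & SPONDYLOLISTHESIS:
        return "degenerative spondylolisthesis"
    if codes & STENOSIS:
        return "spinal stenosis"
    if codes & HERNIATION:
        return "disc herniation"
    return "other/unclassified"

print(classify_indication(["722.10", "724.02"]))  # stenosis outranks herniation in this hierarchy
```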

  8. [Development and validation of an algorithm to identify cancer recurrences from hospital data bases].

    Science.gov (United States)

    Manzanares-Laya, S; Burón, A; Murta-Nascimento, C; Servitja, S; Castells, X; Macià, F

    2014-01-01

    Hospital cancer registries and hospital databases are valuable and efficient sources of information for research into cancer recurrences. The aim of this study was to develop and validate algorithms for the detection of breast cancer recurrence. A retrospective observational study was conducted on breast cancer cases from the cancer registry of a third-level university hospital diagnosed between 2003 and 2009. Different probable cancer recurrence algorithms were obtained by linking the hospital databases and constructing several operational definitions, with their corresponding sensitivity, specificity, positive predictive value and negative predictive value. A total of 1,523 patients were diagnosed with breast cancer between 2003 and 2009. A request for bone gammagraphy more than 6 months after the first oncological treatment showed the highest sensitivity (53.8%) and negative predictive value (93.8%), and a pathology test more than 6 months after the diagnosis showed the highest specificity (93.8%) and positive predictive value (92.6%). The combination of different definitions increased the specificity and the positive predictive value, but decreased the sensitivity. Several diagnostic algorithms were obtained, and the different definitions could be useful depending on the interest and resources of the researcher. A higher positive predictive value could be of interest for a quick estimation of the number of cases, and a higher negative predictive value for a more exact estimation if more resources are available. It is a versatile tool, adaptable to other types of tumors as well as to the needs of the researcher. Copyright © 2014 SECA. Published by Elsevier España. All rights reserved.
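
    A sketch of how such operational definitions can be combined and checked against a chart-review gold standard is given below; the flags and patient vectors are invented for illustration.

```python
import numpy as np

def validity_metrics(flag, truth):
    """Sensitivity, specificity, PPV and NPV of a boolean recurrence flag
    against a gold standard (both 0/1 arrays)."""
    tp = np.sum((flag == 1) & (truth == 1)); fp = np.sum((flag == 1) & (truth == 0))
    fn = np.sum((flag == 0) & (truth == 1)); tn = np.sum((flag == 0) & (truth == 0))
    return {"sens": tp / (tp + fn), "spec": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

# Invented flags derived from linked hospital databases (1 = criterion met)
bone_scan_after_6m = np.array([1, 0, 1, 0, 1, 0])
pathology_after_6m = np.array([1, 0, 0, 0, 1, 1])
gold_standard      = np.array([1, 0, 1, 0, 1, 0])

combined = bone_scan_after_6m & pathology_after_6m   # AND-combination trades sensitivity for PPV
print(validity_metrics(combined, gold_standard))
```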

  9. Empirical validation of the S-Score algorithm in the analysis of gene expression data

    Directory of Open Access Journals (Sweden)

    Archer Kellie J

    2006-03-01

    Full Text Available Abstract Background Current methods of analyzing Affymetrix GeneChip® microarray data require the estimation of probe set expression summaries, followed by application of statistical tests to determine which genes are differentially expressed. The S-Score algorithm described by Zhang and colleagues is an alternative method that allows tests of hypotheses directly from probe level data. It is based on an error model in which the detected signal is proportional to the probe pair signal for highly expressed genes, but approaches a background level (rather than 0) for genes with low levels of expression. This model is used to calculate relative changes in probe pair intensities that convert probe signals into multiple measurements with equalized errors, which are summed over a probe set to form the S-Score. Assuming no expression differences between chips, the S-Score follows a standard normal distribution, allowing direct tests of hypotheses to be made. Using spike-in and dilution datasets, we validated the S-Score method against comparisons of gene expression utilizing the more recently developed methods RMA, dChip, and MAS5. Results The S-Score showed excellent sensitivity and specificity in detecting low-level gene expression changes. Rank ordering of S-Score values more accurately reflected known fold-change values compared to other algorithms. Conclusion The S-Score method, utilizing probe level data directly, offers significant advantages over comparisons using only probe set expression summaries.
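
    The flavour of the probe-level standardization can be shown with a deliberately simplified sketch; the error model below (and its parameters) is invented and is not the published S-Score formula.

```python
import numpy as np

def s_score_like(probe_pairs_a, probe_pairs_b, background_sd=30.0, alpha=0.05):
    """Simplified S-Score-style statistic: standardize probe-pair differences with an
    error term that grows with signal but never falls below a background level, then
    sum over the probe set and rescale so the total is ~N(0,1) under no change."""
    diff = probe_pairs_a - probe_pairs_b
    err = np.sqrt(alpha * (probe_pairs_a**2 + probe_pairs_b**2) + 2 * background_sd**2)
    z = diff / err
    return z.sum() / np.sqrt(len(z))

chip_a = np.array([1200.0, 900.0, 1500.0, 700.0])   # invented probe-pair intensities
chip_b = np.array([1100.0, 950.0, 1400.0, 720.0])
print(s_score_like(chip_a, chip_b))
```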

  10. Physics and Algorithm Enhancements for a Validated MCNP/X Monte Carlo Simulation Tool, Phase VII

    International Nuclear Information System (INIS)

    McKinney, Gregg W.

    2012-01-01

    Currently the US lacks an end-to-end (i.e., source-to-detector) radiation transport simulation code with predictive capability for the broad range of DHS nuclear material detection applications. For example, gaps in the physics, along with inadequate analysis algorithms, make it difficult for Monte Carlo simulations to provide a comprehensive evaluation, design, and optimization of proposed interrogation systems. With the development and implementation of several key physics and algorithm enhancements, along with needed improvements in evaluated data and benchmark measurements, the MCNP/X Monte Carlo codes will provide designers, operators, and systems analysts with a validated tool for developing state-of-the-art active and passive detection systems. This project is currently in its seventh year (Phase VII). This presentation will review thirty enhancements that have been implemented in MCNPX over the last 3 years and were included in the 2011 release of version 2.7.0. These improvements include 12 physics enhancements, 4 source enhancements, 8 tally enhancements, and 6 other enhancements. Examples and results will be provided for each of these features. The presentation will also discuss the eight enhancements that will be migrated into MCNP6 over the upcoming year.

  11. Influenza outbreak during Sydney World Youth Day 2008: the utility of laboratory testing and case definitions on mass gathering outbreak containment.

    Directory of Open Access Journals (Sweden)

    Sebastiaan J van Hal

    Full Text Available BACKGROUND: Influenza causes annual epidemics and often results in extensive outbreaks in closed communities. To minimize transmission, a range of interventions have been suggested. For these to be effective, an accurate and timely diagnosis of influenza is required. This is confirmed by a positive laboratory test result in an individual whose symptoms are consistent with a predefined clinical case definition. However, the utility of these clinical case definitions and laboratory testing in mass gathering outbreaks remains unknown. METHODS AND RESULTS: An influenza outbreak was identified during World Youth Day 2008 in Sydney. From the data collected on pilgrims presenting to a single clinic, a Markov model was developed and validated against the actual epidemic curve. Simulations were performed to examine the utility of different clinical case definitions and laboratory testing strategies for containment of influenza outbreaks. Clinical case definitions were found to have the greatest impact on averting further cases, with no added benefit when combined with any laboratory test. Although nucleic acid testing (NAT) demonstrated higher utility than indirect immunofluorescence antigen or on-site point-of-care testing, this effect was lost when laboratory NAT turnaround times were included. The main benefit of laboratory confirmation was limited to identification of true influenza cases amenable to interventions such as antiviral therapy. CONCLUSIONS: Continuous re-evaluation of case definitions and laboratory testing strategies is essential for effective management of influenza outbreaks during mass gatherings.

  12. Validation of an algorithm for the nonrigid registration of longitudinal breast MR images using realistic phantoms

    Science.gov (United States)

    Li, Xia; Dawant, Benoit M.; Welch, E. Brian; Chakravarthy, A. Bapsi; Xu, Lei; Mayer, Ingrid; Kelley, Mark; Meszoely, Ingrid; Means-Powell, Julie; Gore, John C.; Yankeelov, Thomas E.

    2010-01-01

    Purpose: The authors present a method to validate coregistration of breast magnetic resonance images obtained at multiple time points during the course of treatment. In performing sequential registration of breast images, the effects of patient repositioning, as well as possible changes in tumor shape and volume, must be considered. The authors accomplish this by extending the adaptive bases algorithm (ABA) to include a tumor-volume preserving constraint in the cost function. In this study, the authors evaluate this approach using a novel validation method that simulates not only the bulk deformation associated with breast MR images obtained at different time points, but also the reduction in tumor volume typically observed as a response to neoadjuvant chemotherapy. Methods: For each of the six patients, high-resolution 3D contrast enhanced T1-weighted images were obtained before treatment, after one cycle of chemotherapy and at the conclusion of chemotherapy. To evaluate the effects of decreasing tumor size during the course of therapy, simulations were run in which the tumor in the original images was contracted by 25%, 50%, 75%, and 95%, respectively. The contracted area was then filled using texture from local healthy appearing tissue. Next, to simulate the post-treatment data, the simulated (i.e., contracted tumor) images were coregistered to the experimentally measured post-treatment images using a surface registration. By comparing the deformations generated by the constrained and unconstrained version of ABA, the authors assessed the accuracy of the registration algorithms. The authors also applied the two algorithms on experimental data to study the tumor volume changes, the value of the constraint, and the smoothness of transformations. Results: For the six patient data sets, the average voxel shift error (mean±standard deviation) for the ABA with constraint was 0.45±0.37, 0.97±0.83, 1.43±0.96, and 1.80±1.17 mm for the 25%, 50%, 75%, and 95

  13. GOCI Yonsei Aerosol Retrieval (YAER) algorithm and validation during the DRAGON-NE Asia 2012 campaign

    Science.gov (United States)

    Choi, Myungje; Kim, Jhoon; Lee, Jaehwa; Kim, Mijin; Park, Young-Je; Jeong, Ukkyo; Kim, Woogyung; Hong, Hyunkee; Holben, Brent; Eck, Thomas F.; Song, Chul H.; Lim, Jae-Hyun; Song, Chang-Keun

    2016-04-01

    The Geostationary Ocean Color Imager (GOCI) onboard the Communication, Ocean, and Meteorological Satellite (COMS) is the first multi-channel ocean color imager in geostationary orbit. Hourly GOCI top-of-atmosphere radiance has been available for the retrieval of aerosol optical properties over East Asia since March 2011. This study presents improvements made to the GOCI Yonsei Aerosol Retrieval (YAER) algorithm together with validation results during the Distributed Regional Aerosol Gridded Observation Networks - Northeast Asia 2012 campaign (DRAGON-NE Asia 2012 campaign). The evaluation during the spring season over East Asia is important because of high aerosol concentrations and diverse types of Asian dust and haze. Optical properties of aerosol are retrieved from the GOCI YAER algorithm including aerosol optical depth (AOD) at 550 nm, fine-mode fraction (FMF) at 550 nm, single-scattering albedo (SSA) at 440 nm, Ångström exponent (AE) between 440 and 860 nm, and aerosol type. The aerosol models are created based on a global analysis of the Aerosol Robotic Network (AERONET) inversion data, and cover a broad range of size distribution and absorptivity, including nonspherical dust properties. The Cox-Munk ocean bidirectional reflectance distribution function (BRDF) model is used over ocean, and an improved minimum reflectance technique is used over land. Because turbid water is persistent over the Yellow Sea, the land algorithm is used for such cases. The aerosol products are evaluated against AERONET observations and MODIS Collection 6 aerosol products retrieved from Dark Target (DT) and Deep Blue (DB) algorithms during the DRAGON-NE Asia 2012 campaign conducted from March to May 2012. Comparison of AOD from GOCI and AERONET resulted in a Pearson correlation coefficient of 0.881 and a linear regression equation with GOCI AOD = 1.083 × AERONET AOD - 0.042. The correlation between GOCI and MODIS AODs is higher over ocean than land. GOCI AOD shows better
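
    The validation statistics quoted above (Pearson correlation and a linear regression of GOCI against AERONET AOD) can be reproduced for any set of collocated pairs with a few lines of numpy; the collocation values below are invented.

```python
import numpy as np

def validate_aod(satellite_aod, aeronet_aod):
    """Pearson r, ordinary least-squares slope/intercept and RMSE of collocated AOD pairs."""
    r = np.corrcoef(satellite_aod, aeronet_aod)[0, 1]
    slope, intercept = np.polyfit(aeronet_aod, satellite_aod, 1)
    rmse = np.sqrt(np.mean((satellite_aod - aeronet_aod) ** 2))
    return r, slope, intercept, rmse

aeronet = np.array([0.15, 0.32, 0.58, 0.71, 1.05])   # invented collocations
goci    = np.array([0.13, 0.36, 0.60, 0.75, 1.10])
print(validate_aod(goci, aeronet))
```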

  15. Clinical validation of a body-fixed 3D accelerometer and algorithm for activity monitoring in orthopaedic patients

    Directory of Open Access Journals (Sweden)

    Matthijs Lipperts

    2017-10-01

    Conclusion: Activity monitoring of orthopaedic patients by counting and timing a large set of relevant daily life events is feasible in a user- and patient-friendly way and with high clinical validity, using a generic three-dimensional accelerometer and algorithms based on empirical and physical methods. The algorithms performed well for healthy individuals as well as for patients recovering after total joint replacement in a challenging validation set-up. With such a simple and transparent method, real-life activity parameters can be collected in orthopaedic practice for diagnostics, treatments, outcome assessment, or biofeedback.

  16. Experimental Validation of Advanced Dispersed Fringe Sensing (ADFS) Algorithm Using Advanced Wavefront Sensing and Correction Testbed (AWCT)

    Science.gov (United States)

    Wang, Xu; Shi, Fang; Sigrist, Norbert; Seo, Byoung-Joon; Tang, Hong; Bikkannavar, Siddarayappa; Basinger, Scott; Lay, Oliver

    2012-01-01

    Large-aperture telescopes commonly feature segmented mirrors, and a coarse phasing step is needed to bring these individual segments into the fine phasing capture range. Dispersed Fringe Sensing (DFS) is a powerful coarse phasing technique, and a variant of it is currently being used for JWST. An Advanced Dispersed Fringe Sensing (ADFS) algorithm was recently developed to improve the performance and robustness of previous DFS algorithms, with better accuracy and a unique solution. The first part of the paper introduces the basic ideas and the essential features of the ADFS algorithm and presents some algorithm sensitivity study results. The second part of the paper describes the full details of the algorithm validation process through the advanced wavefront sensing and correction testbed (AWCT): first, the optimization of the DFS hardware of AWCT to ensure data accuracy and reliability is illustrated. Then, a few carefully designed algorithm validation experiments are implemented, and the corresponding data analysis results are shown. Finally, the fiducial calibration using the Range-Gate-Metrology technique is carried out, and a <10 nm or <1% algorithm accuracy is demonstrated.

  17. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  18. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...

  19. An administrative data validation study of the accuracy of algorithms for identifying rheumatoid arthritis: the influence of the reference standard on algorithm performance.

    Science.gov (United States)

    Widdifield, Jessica; Bombardier, Claire; Bernatsky, Sasha; Paterson, J Michael; Green, Diane; Young, Jacqueline; Ivers, Noah; Butt, Debra A; Jaakkimainen, R Liisa; Thorne, J Carter; Tu, Karen

    2014-06-23

    We have previously validated administrative data algorithms to identify patients with rheumatoid arthritis (RA) using rheumatology clinic records as the reference standard. Here we reassessed the accuracy of the algorithms using primary care records as the reference standard. We performed a retrospective chart abstraction study using a random sample of 7500 adult patients under the care of 83 family physicians contributing to the Electronic Medical Record Administrative data Linked Database (EMRALD) in Ontario, Canada. Using physician-reported diagnoses as the reference standard, we computed and compared the sensitivity, specificity, and predictive values for over 100 administrative data algorithms for RA case ascertainment. We identified 69 patients with RA for a lifetime RA prevalence of 0.9%. All algorithms had excellent specificity (>97%). However, sensitivity varied (75-90%) among physician billing algorithms. Despite the low prevalence of RA, most algorithms had adequate positive predictive value (PPV; 51-83%). The algorithm of "[1 hospitalization RA diagnosis code] or [3 physician RA diagnosis codes with ≥1 by a specialist over 2 years]" had a sensitivity of 78% (95% CI 69-88), specificity of 100% (95% CI 100-100), PPV of 78% (95% CI 69-88) and NPV of 100% (95% CI 100-100). Administrative data algorithms for detecting RA patients achieved a high degree of accuracy amongst the general population. However, results varied slightly from our previous report, which can be attributed to differences in the reference standards with respect to disease prevalence, spectrum of disease, and type of comparator group.
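
    The best-performing rule quoted above lends itself to a direct implementation; the sketch below assumes simple (date, code, specialist-flag) tuples and treats any ICD-9 code starting with 714 as an RA code, which is an assumption rather than the study's exact code list.

```python
from datetime import date

RA_PREFIX = "714"  # assumed ICD-9 prefix for rheumatoid arthritis

def meets_ra_algorithm(hospitalization_dx, physician_claims):
    """'1 hospitalization RA code, OR 3 physician RA codes with at least one by a
    specialist within 2 years' -- the rule described in the record above."""
    if any(code.startswith(RA_PREFIX) for _, code in hospitalization_dx):
        return True
    ra_claims = sorted((d, spec) for d, code, spec in physician_claims
                       if code.startswith(RA_PREFIX))
    for i in range(len(ra_claims)):
        window = [c for c in ra_claims[i:] if (c[0] - ra_claims[i][0]).days <= 730]
        if len(window) >= 3 and any(spec for _, spec in window):
            return True
    return False

claims = [(date(2012, 1, 5), "714.0", False),
          (date(2012, 6, 1), "714.0", True),
          (date(2013, 2, 1), "714.0", False)]
print(meets_ra_algorithm([], claims))  # True
```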

  20. Data Mining: Comparing the Empiric CFS to the Canadian ME/CFS Case Definition

    OpenAIRE

    Jason, Leonard A.; Skendrovic, Beth; Furst, Jacob; Brown, Abigail; Weng, Angela; Bronikowski, Christine

    2011-01-01

    This article contrasts two case definitions for Myalgic Encephalomyelitis/chronic fatigue syndrome (ME/CFS). We compared the empiric CFS case definition (Reeves et al., 2005) and the Canadian ME/CFS Clinical case definition (Carruthers et al., 2003) with a sample of individuals with CFS versus those without. Data mining with decision trees was used to identify the best items to identify patients with CFS. Data mining is a statistical technique that was used to help determine which of the surv...

  1. A Case Definition for Children with Myalgic Encephalomyelitis/Chronic Fatigue Syndrome

    OpenAIRE

    Leonard A. Jason; Nicole Porter; Elizabeth Shelleby; David S. Bell; Charles W. Lapp; Kathy Rowe; Kenny De Meirleir

    2008-01-01

    The case definition for chronic fatigue syndrome was developed for adults (Fukuda et al. 1994), and this case definition may not be appropriate for use with children and adolescents. The lack of application of a consistent pediatric definition for this illness and the lack of a reliable instrument to assess it might lead to studies which lack sensitivity and specificity. In this article, a case definition is presented that has been endorsed by the International Association of ME/CFS.

  2. Validation of deformable image registration algorithms on CT images of ex vivo porcine bladders with fiducial markers.

    Science.gov (United States)

    Wognum, S; Heethuis, S E; Rosario, T; Hoogeman, M S; Bel, A

    2014-07-01

    The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations. Excised porcine bladders, exhibiting similar filling and voiding behavior as human bladders, provide such an environment. The aim of this study was to determine the spatial accuracy of different DIR algorithms on CT images of ex vivo porcine bladders with radiopaque fiducial markers applied to the outer surface, for a range of bladder volumes, using various accuracy metrics. Five excised porcine bladders with a grid of 30-40 radiopaque fiducial markers attached to the outer wall were suspended inside a water-filled phantom. The bladder was filled with a controlled amount of water with added contrast medium for a range of filling volumes (100-400 ml in steps of 50 ml) using a luer lock syringe, and CT scans were acquired at each filling volume. DIR was performed for each data set, with the 100 ml bladder as the reference image. Six intensity-based algorithms (optical flow or demons-based) implemented in the MATLAB platform DIRART, a b-spline algorithm implemented in the commercial software package VelocityAI, and a structure-based algorithm (Symmetric Thin Plate Spline Robust Point Matching) were validated, using adequate parameter settings according to values previously published. The resulting deformation vector field from each registration was applied to the contoured bladder structures and to the marker coordinates for spatial error calculation. The quality of the algorithms was assessed by comparing the different error metrics across the different algorithms, and by comparing the effect of deformation magnitude (bladder volume difference) per algorithm, using the Independent Samples Kruskal-Wallis test. The authors found good structure accuracy without dependency on
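
    The marker-based part of such a validation reduces to comparing propagated and reference fiducial coordinates; a minimal sketch follows, with invented marker positions.

```python
import numpy as np

def marker_registration_error(propagated_markers, reference_markers):
    """Euclidean error between each fiducial marker propagated by the deformation
    vector field and its reference position; summary statistics quantify DIR accuracy."""
    errors = np.linalg.norm(propagated_markers - reference_markers, axis=1)
    return errors.mean(), errors.std(), errors.max()

propagated = np.array([[10.2, 5.1, 3.0], [22.8, 7.9, 4.1], [35.5, 9.7, 5.2]])  # mm, invented
reference  = np.array([[10.0, 5.0, 3.2], [23.0, 8.0, 4.0], [35.0, 10.0, 5.0]])
print(marker_registration_error(propagated, reference))
```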

  5. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple Logo-like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... N0 disks are moved from A to B using C as auxiliary rod. • move_disk (A, C); (N0 + 1)th disk is moved from A to C directly ...
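
    The snippet alludes to the classic recursive Tower of Hanoi scheme; a short Python sketch of that recursion is given below (rod names and disk count are illustrative).

```python
def hanoi(n, source, target, auxiliary, moves):
    """Move n disks from source to target using auxiliary as the spare rod."""
    if n == 0:
        return
    hanoi(n - 1, source, auxiliary, target, moves)   # clear the top n-1 disks out of the way
    moves.append((source, target))                   # move_disk(source, target)
    hanoi(n - 1, auxiliary, target, source, moves)   # stack them back on top of the moved disk

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves), moves)   # 2**3 - 1 = 7 moves
```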

  6. Validation of case-finding algorithms derived from administrative data for identifying adults living with human immunodeficiency virus infection.

    Directory of Open Access Journals (Sweden)

    Tony Antoniou

    Full Text Available OBJECTIVE: We sought to validate a case-finding algorithm for human immunodeficiency virus (HIV) infection using administrative health databases in Ontario, Canada. METHODS: We constructed 48 case-finding algorithms using combinations of physician billing claims, hospital and emergency room separations and prescription drug claims. We determined the test characteristics of each algorithm over various time frames for identifying HIV infection, using data abstracted from the charts of 2,040 randomly selected patients receiving care at two medical practices in Toronto, Ontario as the reference standard. RESULTS: With the exception of algorithms using only a single physician claim, the specificity of all algorithms exceeded 99%. An algorithm consisting of three physician claims over a three-year period had a sensitivity and specificity of 96.2% (95% CI 95.2%-97.9%) and 99.6% (95% CI 99.1%-99.8%), respectively. Application of the algorithm to the province of Ontario identified 12,179 HIV-infected patients in care for the period spanning April 1, 2007 to March 31, 2009. CONCLUSIONS: Case-finding algorithms generated from administrative data can accurately identify adults living with HIV. A relatively simple "3 claims in 3 years" definition can be used for assembling a population-based cohort and facilitating future research examining trends in health service use and outcomes among HIV-infected adults in Ontario.
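
    The "3 claims in 3 years" definition reduces to a rolling-window count; a minimal sketch follows, assuming a pre-filtered list of HIV-coded physician claim dates.

```python
from datetime import date, timedelta

def is_hiv_case(hiv_claim_dates, n_claims=3, window_years=3):
    """True if any n_claims HIV-coded physician claims fall within a rolling window."""
    dates = sorted(hiv_claim_dates)
    window = timedelta(days=365 * window_years)
    return any(dates[i + n_claims - 1] - dates[i] <= window
               for i in range(len(dates) - n_claims + 1))

print(is_hiv_case([date(2007, 5, 1), date(2008, 9, 15), date(2009, 12, 1)]))  # True
```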

  7. The validation index: a new metric for validation of segmentation algorithms using two or more expert outlines with application to radiotherapy planning.

    Science.gov (United States)

    Juneja, Prabhjot; Evans, Philp M; Harris, Emma J

    2013-08-01

    Validation is required to ensure automated segmentation algorithms are suitable for radiotherapy target definition. In the absence of true segmentation, algorithmic segmentation is validated against expert outlining of the region of interest. Multiple experts are used to overcome inter-expert variability. Several approaches have been studied in the literature, but the most appropriate approach to combine the information from multiple expert outlines, to give a single metric for validation, is unclear. None consider a metric that can be tailored to case-specific requirements in radiotherapy planning. The validation index (VI), a new validation metric that uses the experts' level of agreement, was developed. A control parameter was introduced for the validation of segmentations required for different radiotherapy scenarios: for targets close to organs-at-risk and for difficult-to-discern targets, where large variation between experts is expected. VI was evaluated using two simulated idealized cases and data from two clinical studies. VI was compared with the commonly used pair-wise Dice similarity coefficient (DSC) and found to be more sensitive than the pair-wise DSC to changes in agreement between experts. VI was shown to be adaptable to specific radiotherapy planning scenarios.
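
    For comparison, the pair-wise DSC baseline mentioned above is easy to compute against several expert outlines; the sketch below does exactly that (the VI's agreement-weighted formula itself is not reproduced, and the masks are invented).

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mean_pairwise_dice(auto_mask, expert_masks):
    """Mean pair-wise DSC of an automated segmentation against several expert outlines."""
    return float(np.mean([dice(auto_mask, e) for e in expert_masks]))

auto = np.zeros((8, 8), int); auto[2:6, 2:6] = 1     # invented toy masks
exp1 = np.zeros((8, 8), int); exp1[2:6, 2:7] = 1
exp2 = np.zeros((8, 8), int); exp2[1:6, 2:6] = 1
print(mean_pairwise_dice(auto, [exp1, exp2]))
```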

  8. Zero-G experimental validation of a robotics-based inertia identification algorithm

    Science.gov (United States)

    Bruggemann, Jeremy J.; Ferrel, Ivann; Martinez, Gerardo; Xie, Pu; Ma, Ou

    2010-04-01

    The need to efficiently identify the changing inertial properties of on-orbit spacecraft is becoming more critical as satellite on-orbit services, such as refueling and repairing, become increasingly aggressive and complex. This need stems from the fact that a spacecraft's control system relies on the knowledge of the spacecraft's inertia parameters. However, the inertia parameters may change during flight for reasons such as fuel usage, payload deployment or retrieval, and docking/capturing operations. New Mexico State University's Dynamics, Controls, and Robotics Research Group has proposed a robotics-based method of identifying unknown spacecraft inertia properties [1]. Previous methods require firing known thrusts and then measuring the thrust and the resulting velocity and acceleration changes. The new method utilizes the concept of momentum conservation, while employing a robotic device powered by renewable energy to excite the state of the satellite. Thus, it requires no fuel usage or force and acceleration measurements. The method has been well studied in theory and demonstrated by simulation. However, its experimental validation is challenging because a 6-degree-of-freedom motion in a zero-gravity condition is required. This paper presents an on-going effort to test the inertia identification method onboard the NASA zero-G aircraft. The design and capability of the test unit will be discussed in addition to the flight data. This paper also introduces the design and development of an air-bearing-based test used to partially validate the method, in addition to the approach used to obtain reference values for the test system's inertia parameters that can be used for comparison with the algorithm results.

  9. Derivation and validation of the automated search algorithms to identify cognitive impairment and dementia in electronic health records.

    Science.gov (United States)

    Amra, Sakusic; O'Horo, John C; Singh, Tarun D; Wilson, Gregory A; Kashyap, Rahul; Petersen, Ronald; Roberts, Rosebud O; Fryer, John D; Rabinstein, Alejandro A; Gajic, Ognjen

    2017-02-01

    Long-term cognitive impairment is a common and important problem in survivors of critical illness. We developed electronic search algorithms to identify cognitive impairment and dementia from the electronic medical records (EMRs) that provide opportunity for big data analysis. Eligible patients met 2 criteria. First, they had a formal cognitive evaluation by The Mayo Clinic Study of Aging. Second, they were hospitalized in intensive care unit at our institution between 2006 and 2014. The "criterion standard" for diagnosis was formal cognitive evaluation supplemented by input from an expert neurologist. Using all available EMR data, we developed and improved our algorithms in the derivation cohort and validated them in the independent validation cohort. Of 993 participants who underwent formal cognitive testing and were hospitalized in intensive care unit, we selected 151 participants at random to form the derivation and validation cohorts. The automated electronic search algorithm for cognitive impairment was 94.3% sensitive and 93.0% specific. The search algorithms for dementia achieved respective sensitivity and specificity of 97% and 99%. EMR search algorithms significantly outperformed International Classification of Diseases codes. Automated EMR data extractions for cognitive impairment and dementia are reliable and accurate and can serve as acceptable and efficient alternatives to time-consuming manual data review. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Data Retrieval Algorithms for Validating the Optical Transient Detector and the Lightning Imaging Sensor

    Science.gov (United States)

    Koshak, W. J.; Blakeslee, R. J.; Bailey, J. C.

    2000-01-01

    A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures field strength, magnetic bearing, and arrival time of lightning radio emissions. Solutions for the plane (i.e., no earth curvature) are provided that implement all of these measurements. The accuracy of the retrieval method is tested using computer-simulated datasets, and the relative influence of bearing and arrival time data on the outcome of the final solution is formally demonstrated. The algorithm is sufficiently accurate to validate NASA's Optical Transient Detector and Lightning Imaging Sensor. A quadratic planar solution that is useful when only three arrival time measurements are available is also introduced. The algebra of the quadratic root results is examined in detail to clarify what portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. For arbitrary noncollinear network geometries and in the absence of measurement errors, it is shown that the two quadratic roots are equivalent (no source location ambiguity) on the outer sensor baselines. The accuracy of the quadratic planar method is tested with computer-generated datasets, and the results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 deg.
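
    A simplified stand-in for the arrival-time part of the retrieval is a planar least-squares source location; the sketch below uses scipy's generic nonlinear solver rather than the paper's linear-algebraic formulation, and the sensor layout is invented.

```python
import numpy as np
from scipy.optimize import least_squares

C = 3.0e8  # propagation speed, m/s

def locate_source(sensor_xy, arrival_times):
    """Planar source location (x, y, t0) from arrival times at >= 3 sensors."""
    def residuals(p):
        x, y, t0 = p
        dist = np.linalg.norm(sensor_xy - np.array([x, y]), axis=1)
        return arrival_times - (t0 + dist / C)
    guess = np.array([sensor_xy[:, 0].mean(), sensor_xy[:, 1].mean(), arrival_times.min()])
    return least_squares(residuals, guess).x

sensors = np.array([[0.0, 0.0], [50e3, 0.0], [0.0, 50e3], [50e3, 50e3]])  # invented layout, m
true_xy = np.array([12e3, 30e3])
times = np.linalg.norm(sensors - true_xy, axis=1) / C
print(locate_source(sensors, times))  # approximately [12000, 30000, 0]
```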

  11. SU-F-T-431: Dosimetric Validation of Acuros XB Algorithm for Photon Dose Calculation in Water

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, L [Rajiv Gandhi Cancer Institute & Research Center, New Delhi, Delhi (India); Yadav, G; Kishore, V [Bundelkhand Institute of Engineering & Technology, Jhansi, Uttar pradesh (India); Bhushan, M; Samuvel, K; Suhail, M [Rajiv Gandhi Cancer Institute and Research Centre, New Delhi, Delhi (India)

    2016-06-15

    Purpose: To validate the Acuros XB algorithm implemented in the Eclipse treatment planning system version 11 (Varian Medical Systems, Inc., Palo Alto, CA, USA) for photon dose calculation. Methods: Acuros XB is a linear Boltzmann transport equation (LBTE) solver that solves the LBTE explicitly and gives results equivalent to Monte Carlo. A 6 MV photon beam from a Varian Clinac-iX (2300CD) was used for the dosimetric validation of Acuros XB. Percentage depth dose (PDD) and profile (at dmax, 5, 10, 20 and 30 cm) measurements were performed in water for field sizes ranging from 2×2, 4×4, 6×6, 10×10, 20×20, 30×30 and 40×40 cm². Acuros XB results were compared against measurements and against the anisotropic analytical algorithm (AAA). Results: Acuros XB results show good agreement with measurements and were comparable to the AAA algorithm. Results for PDD and profiles show less than one percent difference from measurements, and from the PDD and profiles calculated by the AAA algorithm, for all field sizes. For TPS-calculated gamma error histogram values, the average gamma errors in PDD curves before dmax and after dmax were 0.28 and 0.15 for Acuros XB and 0.24 and 0.17 for AAA, respectively; the average gamma errors in profile curves in the central region, penumbra region and outside-field region were 0.17, 0.21 and 0.42 for Acuros XB and 0.10, 0.22 and 0.35 for AAA, respectively. Conclusion: The dosimetric validation of the Acuros XB algorithm in a water medium was satisfactory. The Acuros XB algorithm has the potential to perform photon dose calculation with high accuracy, which is desirable for the modern radiotherapy environment.

  12. Evaluation of the WHO clinical case definition for pediatric HIV infection in Bloemfontein, South Africa.

    Science.gov (United States)

    van Gend, Christine L; Haadsma, Maaike L; Sauer, Pieter J J; Schoeman, Cornelius J

    2003-06-01

    The WHO clinical case definition for pediatric HIV infection has been designed to be used in countries where diagnostic laboratory resources are limited. We evaluated the WHO case definition to determine whether it is a useful instrument to discriminate between HIV-positive and HIV-negative children. In addition, clinical features not included in this case definition were recorded. We recorded clinical data from 300 consecutively admitted children in a state hospital in Bloemfontein, South Africa, and tested these children for HIV infection. A total of 222 children were included in the study; 69 children (31.1 per cent) were HIV positive. The sensitivity of the WHO case definition in this study was 14.5 per cent, the specificity was 98.6 per cent. Apart from weight loss and generalized dermatitis, the signs of the WHO case definition were significantly more often seen in HIV-positive than in HIV-negative children. Of the clinical signs not included in the WHO case definition, marasmus and hepatosplenomegaly especially occurred more frequently in HIV-positive children. Based on these findings we composed a new case definition consisting of four signs: marasmus, hepatosplenomegaly, oropharyngeal candidiasis, and generalized lymphadenopathy. HIV infection is suspected in a child presenting with at least two of these four signs. The sensitivity of this case definition was 63.2 per cent, the specificity was 96.0 per cent. We conclude that in this study the WHO case definition was not a useful instrument to discriminate between HIV-positive and HIV-negative children, mainly because its sensitivity was strikingly low. The simplified case definition we propose, proved to be more sensitive than the WHO case definition (63.2 vs. 14.5 per cent), whilst its specificity remained high.

  13. Assessment of severe malaria in a multicenter, phase III, RTS, S/AS01 malaria candidate vaccine trial: case definition, standardization of data collection and patient care.

    Science.gov (United States)

    Vekemans, Johan; Marsh, Kevin; Greenwood, Brian; Leach, Amanda; Kabore, William; Soulanoudjingar, Solange; Asante, Kwaku Poku; Ansong, Daniel; Evans, Jennifer; Sacarlal, Jahit; Bejon, Philip; Kamthunzi, Portia; Salim, Nahya; Njuguna, Patricia; Hamel, Mary J; Otieno, Walter; Gesase, Samwel; Schellenberg, David

    2011-08-04

    An effective malaria vaccine, deployed in conjunction with other malaria interventions, is likely to substantially reduce the malaria burden. Efficacy against severe malaria will be a key driver for decisions on implementation. An initial study of an RTS, S vaccine candidate showed promising efficacy against severe malaria in children in Mozambique. Further evidence of its protective efficacy will be gained in a pivotal, multi-centre, phase III study. This paper describes the case definitions of severe malaria used in this study and the programme for standardized assessment of severe malaria according to the case definition. Case definitions of severe malaria were developed from a literature review and a consensus meeting of expert consultants and the RTS, S Clinical Trial Partnership Committee, in collaboration with the World Health Organization and the Malaria Clinical Trials Alliance. The same groups, with input from an Independent Data Monitoring Committee, developed and implemented a programme for standardized data collection. The case definitions developed reflect the typical presentations of severe malaria in African hospitals. Markers of disease severity were chosen on the basis of their association with poor outcome, occurrence in a significant proportion of cases and on an ability to standardize their measurement across research centres. For the primary case definition, one or more clinical and/or laboratory markers of disease severity have to be present, four major co-morbidities (pneumonia, meningitis, bacteraemia or gastroenteritis with severe dehydration) are excluded, and a Plasmodium falciparum parasite density threshold is introduced, in order to maximize the specificity of the case definition. Secondary case definitions allow inclusion of co-morbidities and/or allow for the presence of parasitaemia at any density. The programmatic implementation of standardized case assessment included a clinical algorithm for evaluating seriously sick children

  14. Ultrasonic particle image velocimetry for improved flow gradient imaging: algorithms, methodology and validation

    International Nuclear Information System (INIS)

    Niu Lili; Qian Ming; Yu Wentao; Jin Qiaofeng; Ling Tao; Zheng Hairong; Wan Kun; Gao Shen

    2010-01-01

    This paper presents a new algorithm for ultrasonic particle image velocimetry (Echo PIV) for improving the flow velocity measurement accuracy and efficiency in regions with high velocity gradients. The conventional Echo PIV algorithm has been modified by incorporating a multiple iterative algorithm, sub-pixel method, filter and interpolation method, and spurious vector elimination algorithm. The new algorithm's performance is assessed by analyzing simulated images with known displacements, and ultrasonic B-mode images of in vitro laminar pipe flow, rotational flow and in vivo rat carotid arterial flow. Results of the simulated images show that the new algorithm produces much smaller bias from the known displacements. For laminar flow, the new algorithm results in a 1.1% deviation from the analytically derived value, versus 8.8% for the conventional algorithm. The vector quality evaluation for the rotational flow imaging shows that the new algorithm produces better velocity vectors. For in vivo rat carotid arterial flow imaging, the results from the new algorithm deviate on average by 6.6% from the Doppler-measured peak velocities, compared to 15% for the conventional algorithm. The new Echo PIV algorithm is able to effectively improve the measurement accuracy in imaging flow fields with high velocity gradients.
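
    The core PIV step that the improved algorithm builds on, estimating the displacement between two interrogation windows by cross-correlation, can be sketched as follows; sub-pixel refinement and the other enhancements named above are omitted, and the test image is synthetic.

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Integer-pixel displacement of win_a relative to win_b via FFT cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
    peak = np.unravel_index(np.argmax(np.fft.fftshift(corr)), corr.shape)
    return np.array(peak) - np.array(corr.shape) // 2   # (dy, dx) in pixels

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, shift=(3, -2), axis=(0, 1))
print(window_displacement(shifted, frame))  # approximately [ 3 -2 ]
```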

  15. Using internal evaluation measures to validate the quality of diverse stream clustering algorithms

    NARCIS (Netherlands)

    Hassani, M.; Seidl, T.

    2017-01-01

    Measuring the quality of a clustering algorithm has been shown to be as important as the algorithm itself. It is a crucial part of choosing the clustering algorithm that performs best for a given input. Streaming input data have many features that make them much more challenging than static ones. They

  16. Development and validation of a simple algorithm for initiation of CPAP in neonates with respiratory distress in Malawi.

    Science.gov (United States)

    Hundalani, Shilpa G; Richards-Kortum, Rebecca; Oden, Maria; Kawaza, Kondwani; Gest, Alfred; Molyneux, Elizabeth

    2015-07-01

    Low-cost bubble continuous positive airway pressure (bCPAP) systems have been shown to improve survival in neonates with respiratory distress, in developing countries including Malawi. District hospitals in Malawi implementing CPAP requested simple and reliable guidelines to enable healthcare workers with basic skills and minimal training to determine when treatment with CPAP is necessary. We developed and validated TRY (T: Tone is good, R: Respiratory Distress and Y=Yes) CPAP, a simple algorithm to identify neonates with respiratory distress who would benefit from CPAP. To validate the TRY CPAP algorithm for neonates with respiratory distress in a low-resource setting. We constructed an algorithm using a combination of vital signs, tone and birth weight to determine the need for CPAP in neonates with respiratory distress. Neonates admitted to the neonatal ward of Queen Elizabeth Central Hospital, in Blantyre, Malawi, were assessed in a prospective, cross-sectional study. Nurses and paediatricians-in-training assessed neonates to determine whether they required CPAP using the TRY CPAP algorithm. To establish the accuracy of the TRY CPAP algorithm in evaluating the need for CPAP, their assessment was compared with the decision of a neonatologist blinded to the TRY CPAP algorithm findings. 325 neonates were evaluated over a 2-month period; 13% were deemed to require CPAP by the neonatologist. The inter-rater reliability with the algorithm was 0.90 for nurses and 0.97 for paediatricians-in-training using the neonatologist's assessment as the reference standard. The TRY CPAP algorithm has the potential to be a simple and reliable tool to assist nurses and clinicians in identifying neonates who require treatment with CPAP in low-resource settings. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  17. Hantavirus pulmonary syndrome clinical findings: evaluating a surveillance case definition.

    Science.gov (United States)

    Knust, Barbara; Macneil, Adam; Rollin, Pierre E

    2012-05-01

    Clinical cases of hantavirus pulmonary syndrome (HPS) can be challenging to differentiate from other acute respiratory diseases, which can lead to delays in diagnosis, treatment, and disease reporting. Rapid onset of severe disease occurs, at times before diagnostic test results are available. This study's objective was to examine the clinical characteristics of patients that would indicate HPS to aid in detection and reporting. Test results of blood samples from U.S. patients suspected of having HPS submitted to the Centers for Disease Control and Prevention from 1998-2010 were reviewed. Patient information collected by case report forms was compared between HPS-confirmed and test-negative patients. Diagnostic sensitivity, specificity, predictive values, and likelihood ratios were calculated for individual clinical findings and combinations of variables. Of 567 patients included, 36% were HPS-confirmed. Thrombocytopenia, chest x-rays with suggestive signs, and receiving supplemental oxygenation were highly sensitive (>95%), while elevated hematocrit was highly specific (83%) in detecting HPS. Combinations that maximized sensitivity required the presence of thrombocytopenia. Using a national sample of suspect patients, we found that thrombocytopenia was a highly sensitive indicator of HPS and should be included in surveillance definitions for suspected HPS. Using a sensitive suspect case definition to identify potential HPS patients that are confirmed by highly specific diagnostic testing will ensure accurate reporting of this disease.

  18. Severe versus Moderate Criteria for the New Pediatric Case Definition for ME/CFS

    Science.gov (United States)

    Jason, Leonard; Porter, Nicole; Shelleby, Elizabeth; Till, Lindsay; Bell, David S.; Lapp, Charles W.; Rowe, Kathy; De Meirleir, Kenny

    2009-01-01

    The new diagnostic criteria for pediatric ME/CFS are structurally based on the Canadian Clinical Adult case definition, and have more required specific symptoms than the (Fukuda et al. Ann Intern Med 121:953-959, 1994) adult case definition. Physicians specializing in pediatric ME/CFS referred thirty-three pediatric patients with ME/CFS and 21…

  19. Validation of Case Finding Algorithms for Hepatocellular Cancer From Administrative Data and Electronic Health Records Using Natural Language Processing.

    Science.gov (United States)

    Sada, Yvonne; Hou, Jason; Richardson, Peter; El-Serag, Hashem; Davila, Jessica

    2016-02-01

    Accurate identification of hepatocellular cancer (HCC) cases from automated data is needed for efficient and valid quality improvement initiatives and research. We validated HCC International Classification of Diseases, 9th Revision (ICD-9) codes, and evaluated whether natural language processing by the Automated Retrieval Console (ARC) for document classification improves HCC identification. We identified a cohort of patients with ICD-9 codes for HCC during 2005-2010 from Veterans Affairs administrative data. Pathology and radiology reports were reviewed to confirm HCC. The positive predictive value (PPV), sensitivity, and specificity of ICD-9 codes were calculated. A split validation study of pathology and radiology reports was performed to develop and validate ARC algorithms. Reports were manually classified as diagnostic of HCC or not. ARC generated document classification algorithms using the Clinical Text Analysis and Knowledge Extraction System. ARC performance was compared with manual classification. PPV, sensitivity, and specificity of ARC were calculated. A total of 1138 patients with HCC were identified by ICD-9 codes. On the basis of manual review, 773 had HCC. The HCC ICD-9 code algorithm had a PPV of 0.67, sensitivity of 0.95, and specificity of 0.93. For a random subset of 619 patients, we identified 471 pathology reports for 323 patients and 943 radiology reports for 557 patients. The pathology ARC algorithm had PPV of 0.96, sensitivity of 0.96, and specificity of 0.97. The radiology ARC algorithm had PPV of 0.75, sensitivity of 0.94, and specificity of 0.68. A combined approach of ICD-9 codes and natural language processing of pathology and radiology reports improves HCC case identification in automated data.

  20. Development and Validation of a Portable Platform for Deploying Decision-Support Algorithms in Prehospital Settings

    Science.gov (United States)

    Reisner, A. T.; Khitrov, M. Y.; Chen, L.; Blood, A.; Wilkins, K.; Doyle, W.; Wilcox, S.; Denison, T.; Reifman, J.

    2013-01-01

    Background: Advanced decision-support capabilities for prehospital trauma care may prove effective at improving patient care. Such functionality would be possible if an analysis platform were connected to a transport vital-signs monitor. In practice, there are technical challenges to implementing such a system. Not only must each individual component be reliable, but, in addition, the connectivity between components must be reliable. Objective: We describe the development, validation, and deployment of the Automated Processing of Physiologic Registry for Assessment of Injury Severity (APPRAISE) platform, intended to serve as a test bed to help evaluate the performance of decision-support algorithms in a prehospital environment. Methods: We describe the hardware selected and the software implemented, and the procedures used for laboratory and field testing. Results: The APPRAISE platform met performance goals in both laboratory testing (using a vital-sign data simulator) and initial field testing. After its field testing, the platform has been in use on Boston MedFlight air ambulances since February of 2010. Conclusion: These experiences may prove informative to other technology developers and to healthcare stakeholders seeking to invest in connected electronic systems for prehospital as well as in-hospital use. Our experiences illustrate two sets of important questions: are the individual components reliable (e.g., physical integrity, power, core functionality, and end-user interaction) and is the connectivity between components reliable (e.g., communication protocols and the metadata necessary for data interpretation)? While all potential operational issues cannot be fully anticipated and eliminated during development, thoughtful design and phased testing steps can reduce, if not eliminate, technical surprises. PMID:24155791

  1. Reliability of case definitions for public health surveillance assessed by Round-Robin test methodology

    Directory of Open Access Journals (Sweden)

    Claus Hermann

    2006-05-01

    Background: Case definitions have been recognized to be important elements of public health surveillance systems. They are intended to assure comparability and consistency of surveillance data and have crucial impact on the sensitivity and the positive predictive value of a surveillance system. The reliability of case definitions has rarely been investigated systematically. Methods: We conducted a Round-Robin test by asking all 425 local health departments (LHD) and the 16 state health departments (SHD) in Germany to classify a selection of 68 case examples using case definitions. By multivariate analysis we investigated factors linked to classification agreement with a gold standard, which was defined by an expert panel. Results: A total of 7870 classifications were done by 396 LHD (93%) and all SHD. Reporting sensitivity was 90.0%, positive predictive value 76.6%. Polio case examples had the lowest reporting precision, salmonellosis case examples the highest (OR = 0.008; CI: 0.005–0.013). Case definitions with a check-list format of clinical criteria resulted in higher reporting precision than case definitions with a narrative description (OR = 3.08; CI: 2.47–3.83). Reporting precision was higher among SHD compared to LHD (OR = 1.52; CI: 1.14–2.02). Conclusion: Our findings led to a systematic revision of the German case definitions and built the basis for general recommendations for the creation of case definitions. These include, among others, that testable yes/no criteria in a check-list format are likely to improve reliability, and that software used for data transmission should be designed in strict accordance with the case definitions. The findings of this study are largely applicable to case definitions in many other countries or international networks, as they share the same structural and editorial characteristics of the case definitions evaluated in this study before their revision.

  2. Case definitions for human poisonings postulated to palytoxins exposure.

    Science.gov (United States)

    Tubaro, A; Durando, P; Del Favero, G; Ansaldi, F; Icardi, G; Deeds, J R; Sosa, S

    2011-03-01

    A series of case reports and anecdotal references describe the adverse effects on human health ascribed to the marine toxin palytoxin (PLTX) after different exposure routes. They include poisonings after oral intake of contaminated seafood, but also inhalation and cutaneous/systemic exposures after direct contact with aerosolized seawater during Ostreopsis blooms and/or through maintaining aquaria containing cnidarian zoanthids. The symptoms commonly recorded during PLTX intoxication are general malaise and weakness, associated with myalgia, respiratory effects, impairment of the neuromuscular apparatus and abnormalities in cardiac function. Systemic symptoms are often recorded together with local damages whose intensity varies according to the route and length of exposure. Gastrointestinal malaise or respiratory distress is common for oral and inhalational exposure, respectively. In addition, irritant properties of PLTX probably account for the inflammatory reactions typical of cutaneous and inhalational contact. Unfortunately, the toxin identification and/or quantification are often incomplete or missing and cases of poisoning are indirectly ascribed to PLTXs, according only to symptoms, anamnesis and environmental/epidemiological investigations (i.e. zoanthid handling or ingestion of particular seafood). Based on the available literature, we suggest a "case definition of PLTX poisonings" according to the main exposure routes, and, we propose the main symptoms to be checked, as well as, hemato-clinical analysis to be carried out. We also suggest the performance of specific analyses both on biological specimens of patients, as well as, on the contaminated materials responsible for the poisoning. A standardized protocol for data collection could provide a more rapid and reliable diagnosis of palytoxin-poisoning, but also the collection of necessary data for the risk assessment for this family of toxins. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Development and validation of a prediction algorithm for the onset of common mental disorders in a working population.

    Science.gov (United States)

    Fernandez, Ana; Salvador-Carulla, Luis; Choi, Isabella; Calvo, Rafael; Harvey, Samuel B; Glozier, Nicholas

    2018-01-01

    Common mental disorders are the most common reason for long-term sickness absence in most developed countries. Prediction algorithms for the onset of common mental disorders may help target indicated work-based prevention interventions. We aimed to develop and validate a risk algorithm to predict the onset of common mental disorders at 12 months in a working population. We conducted a secondary analysis of the Household, Income and Labour Dynamics in Australia Survey, a longitudinal, nationally representative household panel in Australia. Data from the 6189 working participants who did not meet the criteria for a common mental disorder at baseline were non-randomly split into training and validation databases, based on state of residence. Common mental disorders were assessed with the mental component score of the 36-Item Short Form Health Survey questionnaire (score ⩽45). Risk algorithms were constructed following recommendations made by the Transparent Reporting of a multivariable prediction model for Prevention Or Diagnosis statement. Different risk factors were identified among women and men for the final risk algorithms. In the training data, the model for women had a C-index of 0.73 and effect size (Hedges' g) of 0.91. In men, the C-index was 0.76 and the effect size was 1.06. In the validation data, the C-index was 0.66 for women and 0.73 for men, with positive predictive values of 0.28 and 0.26, respectively. Conclusion: It is possible to develop an algorithm with good discrimination for the onset of common mental disorders, identifying overall and modifiable risks, among working men. Such models have the potential to change the way that prevention of common mental disorders at the workplace is conducted, but different models may be required for women.
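    The C-index quoted above is equivalent to the area under the ROC curve for a binary outcome. A minimal sketch of how such a discrimination estimate could be obtained on a held-out validation split is shown below; the data are simulated and the model is a plain logistic regression, not the published algorithm.

      # Sketch: estimating discrimination (C-index / AUC) of a risk model on a
      # held-out validation split. Data are simulated for illustration only.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      X_train, X_val = rng.normal(size=(4000, 8)), rng.normal(size=(2000, 8))
      y_train = rng.binomial(1, 1 / (1 + np.exp(-X_train[:, 0])))
      y_val = rng.binomial(1, 1 / (1 + np.exp(-X_val[:, 0])))

      model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
      predicted_risk = model.predict_proba(X_val)[:, 1]
      print("C-index on validation data:", round(roc_auc_score(y_val, predicted_risk), 3))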

  4. Development and validation of a risk prediction algorithm for the recurrence of suicidal ideation among general population with low mood.

    Science.gov (United States)

    Liu, Y; Sareen, J; Bolton, J M; Wang, J L

    2016-03-15

    Suicidal ideation is one of the strongest predictors of recent and future suicide attempt. This study aimed to develop and validate a risk prediction algorithm for the recurrence of suicidal ideation among a population with low mood. 3035 participants from the U.S. National Epidemiologic Survey on Alcohol and Related Conditions with suicidal ideation at their lowest mood at baseline were included. The Alcohol Use Disorder and Associated Disabilities Interview Schedule, based on the DSM-IV criteria, was used. Logistic regression modeling was conducted to derive the algorithm. Discrimination and calibration were assessed in the development and validation cohorts. In the development data, the proportion of recurrent suicidal ideation over 3 years was 19.5% (95% CI: 17.7, 21.5). The developed algorithm consisted of 6 predictors: age, feelings of emptiness, sudden mood changes, self-harm history, depressed mood in the past 4 weeks, and interference with social activities in the past 4 weeks because of physical health or emotional problems; emptiness was the most important risk factor. The model had good discriminative power (C statistic=0.8273, 95% CI: 0.8027, 0.8520). The C statistic was 0.8091 (95% CI: 0.7786, 0.8395) in the external validation dataset and was 0.8193 (95% CI: 0.8001, 0.8385) in the combined dataset. This study does not apply to people with suicidal ideation who are not depressed. The developed risk algorithm for predicting the recurrence of suicidal ideation has good discrimination and excellent calibration. Clinicians can use this algorithm to stratify the risk of recurrence in patients and thus improve personalized treatment approaches, provide advice, and arrange further intensive monitoring. Copyright © 2016 Elsevier B.V. All rights reserved.
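    Because the algorithm above is derived by logistic regression, its output can be expressed as a simple risk score: a weighted sum of the predictors passed through the logistic function. The sketch below shows the general form only; the coefficient values are invented, not those estimated in the study.

      # Sketch: a logistic-regression-style risk score. All weights are hypothetical.
      import math

      INTERCEPT = -2.2                     # assumed value for illustration
      WEIGHTS = {                          # assumed per-predictor log-odds weights
          "age_decades": -0.10,
          "feelings_of_emptiness": 0.90,
          "sudden_mood_changes": 0.45,
          "self_harm_history": 0.60,
          "depressed_mood_past_4wk": 0.50,
          "social_interference_past_4wk": 0.40,
      }

      def recurrence_risk(predictors):
          logit = INTERCEPT + sum(WEIGHTS[name] * value
                                  for name, value in predictors.items())
          return 1.0 / (1.0 + math.exp(-logit))

      example = {"age_decades": 3.5, "feelings_of_emptiness": 1,
                 "sudden_mood_changes": 1, "self_harm_history": 0,
                 "depressed_mood_past_4wk": 1, "social_interference_past_4wk": 0}
      print(f"predicted 3-year recurrence risk: {recurrence_risk(example):.2f}")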

  5. Controlling for Frailty in Pharmacoepidemiologic Studies of Older Adults: Validation of an Existing Medicare Claims-based Algorithm.

    Science.gov (United States)

    Cuthbertson, Carmen C; Kucharska-Newton, Anna; Faurot, Keturah R; Stürmer, Til; Jonsson Funk, Michele; Palta, Priya; Windham, B Gwen; Thai, Sydney; Lund, Jennifer L

    2018-07-01

    Frailty is a geriatric syndrome characterized by weakness and weight loss and is associated with adverse health outcomes. It is often an unmeasured confounder in pharmacoepidemiologic and comparative effectiveness studies using administrative claims data. Among the Atherosclerosis Risk in Communities (ARIC) Study Visit 5 participants (2011-2013; n = 3,146), we conducted a validation study to compare a Medicare claims-based algorithm of dependency in activities of daily living (or dependency) developed as a proxy for frailty with a reference standard measure of phenotypic frailty. We applied the algorithm to the ARIC participants' claims data to generate a predicted probability of dependency. Using the claims-based algorithm, we estimated the C-statistic for predicting phenotypic frailty. We further categorized participants by their predicted probability of dependency (<5%, 5% to <20%, and ≥20%) and estimated associations with difficulties in physical abilities, falls, and mortality. The claims-based algorithm showed good discrimination of phenotypic frailty (C-statistic = 0.71; 95% confidence interval [CI] = 0.67, 0.74). Participants classified with a high predicted probability of dependency (≥20%) had higher prevalence of falls and difficulty in physical ability, and a greater risk of 1-year all-cause mortality (hazard ratio = 5.7 [95% CI = 2.5, 13]) than participants classified with a low predicted probability (<5%). Sensitivity and specificity varied across predicted probability of dependency thresholds. The Medicare claims-based algorithm showed good discrimination of phenotypic frailty and high predictive ability with adverse health outcomes. This algorithm can be used in future Medicare claims analyses to reduce confounding by frailty and improve study validity.
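    In practice, applying such a claims-based proxy amounts to attaching a predicted probability of dependency to each beneficiary and cutting it at the thresholds reported above. A minimal pandas sketch of that stratification step follows; the probabilities are simulated, not derived from Medicare claims.

      # Sketch: stratifying a predicted probability of dependency (<5%, 5-<20%, >=20%).
      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(1)
      cohort = pd.DataFrame({"p_dependency": rng.beta(1.5, 10, size=1000)})  # simulated

      cohort["stratum"] = pd.cut(cohort["p_dependency"],
                                 bins=[0.0, 0.05, 0.20, 1.0],
                                 labels=["<5%", "5% to <20%", ">=20%"],
                                 right=False)
      print(cohort["stratum"].value_counts())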

  6. Validation and Application of the Modified Satellite-Based Priestley-Taylor Algorithm for Mapping Terrestrial Evapotranspiration

    Directory of Open Access Journals (Sweden)

    Yunjun Yao

    2014-01-01

    Satellite-based vegetation indices (VIs) and Apparent Thermal Inertia (ATI) derived from temperature change provide valuable information for estimating evapotranspiration (LE) and detecting the onset and severity of drought. The modified satellite-based Priestley-Taylor (MS-PT) algorithm that we developed earlier, coupling both VI and ATI, is validated based on observed data from 40 flux towers distributed across the world on all continents. The validation results illustrate that the daily LE can be estimated with the Root Mean Square Error (RMSE) varying from 10.7 W/m2 to 87.6 W/m2, and with the square of correlation coefficient (R2) from 0.41 to 0.89 (p < 0.01). Compared with the Priestley-Taylor-based LE (PT-JPL) algorithm, the MS-PT algorithm improves the LE estimates at most flux tower sites. Importantly, the MS-PT algorithm is also satisfactory in reproducing the inter-annual variability at flux tower sites with at least five years of data. The R2 between measured and predicted annual LE anomalies is 0.42 (p = 0.02). The MS-PT algorithm is then applied to detect the variations of long-term terrestrial LE over Three-North Shelter Forest Region of China and to monitor global land surface drought. The MS-PT algorithm described here demonstrates the ability to map regional terrestrial LE and identify global soil moisture stress, without requiring precipitation information.
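    The two validation statistics quoted above (RMSE and R2) are straightforward to reproduce for any pair of estimated and observed daily LE series, as in the short sketch below; the numbers are synthetic stand-ins for flux-tower data.

      # Sketch: RMSE and R^2 between estimated and observed daily LE (W/m^2).
      import numpy as np

      observed = np.array([55.0, 80.0, 120.0, 95.0, 60.0, 140.0])   # synthetic
      estimated = np.array([50.0, 90.0, 110.0, 100.0, 70.0, 130.0]) # synthetic

      rmse = np.sqrt(np.mean((estimated - observed) ** 2))
      r2 = np.corrcoef(observed, estimated)[0, 1] ** 2
      print(f"RMSE = {rmse:.1f} W/m^2, R^2 = {r2:.2f}")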

  7. Optimization of the GSFC TROPOZ DIAL retrieval using synthetic lidar returns and ozonesondes - Part 1: Algorithm validation

    Science.gov (United States)

    Sullivan, J. T.; McGee, T. J.; Leblanc, T.; Sumnicht, G. K.; Twigg, L. W.

    2015-10-01

    The main purpose of the NASA Goddard Space Flight Center TROPospheric OZone DIfferential Absorption Lidar (GSFC TROPOZ DIAL) is to measure the vertical distribution of tropospheric ozone for science investigations. Because of the important health and climate impacts of tropospheric ozone, it is imperative to quantify background photochemical ozone concentrations and ozone layers aloft, especially during air quality episodes. For these reasons, this paper addresses the necessary procedures to validate the TROPOZ retrieval algorithm and confirm that it is properly representing ozone concentrations. This paper is focused on ensuring the TROPOZ algorithm is properly quantifying ozone concentrations, and a following paper will focus on a systematic uncertainty analysis. This methodology begins by simulating synthetic lidar returns from actual TROPOZ lidar return signals in combination with a known ozone profile. From these synthetic signals, it is possible to explicitly determine retrieval algorithm biases from the known profile. This was then systematically performed to identify any areas that need refinement for a new operational version of the TROPOZ retrieval algorithm. One immediate outcome of this exercise was that a bin registration error in the correction for detector saturation within the original retrieval was discovered and was subsequently corrected for. Another noticeable outcome was that the vertical smoothing in the retrieval algorithm was upgraded from a constant vertical resolution to a variable vertical resolution in order to constrain the statistical uncertainty. Overall, this validation exercise was quite successful.

  8. Development and validation of algorithms to differentiate ductal carcinoma in situ from invasive breast cancer within administrative claims data.

    Science.gov (United States)

    Hirth, Jacqueline M; Hatch, Sandra S; Lin, Yu-Li; Giordano, Sharon H; Silva, H Colleen; Kuo, Yong-Fang

    2018-04-18

    Overtreatment is a common concern for patients with ductal carcinoma in situ (DCIS), but this entity is difficult to distinguish from invasive breast cancers in administrative claims data sets because DCIS often is coded as invasive breast cancer. Therefore, the authors developed and validated algorithms to select DCIS cases from administrative claims data to enable outcomes research in this type of data. This retrospective cohort using invasive breast cancer and DCIS cases included women aged 66 to 70 years in the 2004 through 2011 Texas Cancer Registry (TCR) data linked to Medicare administrative claims data. TCR records were used as "gold" standards to evaluate the sensitivity, specificity, and positive predictive value (PPV) of 2 algorithms. Women with a biopsy enrolled in Medicare parts A and B at 12 months before and 6 months after their first biopsy without a second incident diagnosis of DCIS or invasive breast cancer within 12 months in the TCR were included. Women in 2010 Medicare data were selected to test the algorithms in a general sample. In the TCR data set, a total of 6907 cases met inclusion criteria, with 1244 DCIS cases. The first algorithm had a sensitivity of 79%, a specificity of 89%, and a PPV of 62%. The second algorithm had a sensitivity of 50%, a specificity of 97%, and a PPV of 77%. Among women in the general sample, the specificity was high and the sensitivity was similar for both algorithms. However, the PPV was approximately 6% to 7% lower. DCIS frequently is miscoded as invasive breast cancer, and thus the proposed algorithms are useful to examine DCIS outcomes using data sets not linked to cancer registries. Cancer 2018. © 2018 American Cancer Society.

  9. Validation of Material Algorithms for Femur Remodelling Using Medical Image Data

    Directory of Open Access Journals (Sweden)

    Shitong Luo

    2017-01-01

    The aim of this study is the utilization of human medical CT images to quantitatively evaluate two sorts of “error-driven” material algorithms, that is, the isotropic and orthotropic algorithms, for bone remodelling. The bone remodelling simulations were implemented by a combination of the finite element (FE) method and the material algorithms, in which the bone material properties and element axes are determined by both loading amplitudes and daily cycles with different weight factors. The simulation results showed that both algorithms produced realistic distributions in bone amount when compared with the standard from CT data. Moreover, the simulated L-T ratios (the ratio of longitudinal modulus to transverse modulus) by the orthotropic algorithm were close to the reported results. This study suggests a role for “error-driven” algorithms in bone material prediction in abnormal mechanical environments and holds promise for optimizing implant design as well as developing countermeasures against bone loss due to weightlessness. Furthermore, the quantified methods used in this study can enhance bone remodelling models by optimizing model parameters to bridge the discrepancy between the simulation and real data.

  10. Sensitivity and Specificity of Suspected Case Definition Used during West Africa Ebola Epidemic.

    Science.gov (United States)

    Hsu, Christopher H; Champaloux, Steven W; Keïta, Sakoba; Martel, Lise; Bilivogui, Pepe; Knust, Barbara; McCollum, Andrea M

    2018-01-01

    Rapid early detection and control of Ebola virus disease (EVD) is contingent on accurate case definitions. Using an epidemic surveillance dataset from Guinea, we analyzed an EVD case definition developed by the World Health Organization (WHO) and used in Guinea. We used the surveillance dataset (March-October 2014; n = 2,847 persons) to identify patients who satisfied or did not satisfy case definition criteria. Laboratory confirmation determined cases from noncases, and we calculated sensitivity, specificity and predictive values. The sensitivity of the definition was 68.9%, and the specificity of the definition was 49.6%. The presence of epidemiologic risk factors (i.e., recent contact with a known or suspected EVD case-patient) had the highest sensitivity (74.7%), and unexplained deaths had the highest specificity (92.8%). Results for case definition analyses were statistically significant, and inclusion of epidemiologic risk factors in the case definition used in Guinea contributed to improved overall sensitivity and specificity.

  11. Dataset exploited for the development and validation of automated cyanobacteria quantification algorithm, ACQUA

    Directory of Open Access Journals (Sweden)

    Emanuele Gandola

    2016-09-01

    The estimation and quantification of potentially toxic cyanobacteria in lakes and reservoirs are often used as a proxy of risk for water intended for human consumption and recreational activities. Here, we present data sets collected from three volcanic Italian lakes (Albano, Vico, Nemi) that present filamentous cyanobacteria strains in different environments. The presented data sets were used to estimate abundance and morphometric characteristics of potentially toxic cyanobacteria, comparing manual vs. automated estimation performed by ACQUA (“ACQUA: Automated Cyanobacterial Quantification Algorithm for toxic filamentous genera using spline curves, pattern recognition and machine learning”, Gandola et al., 2016 [1]). This strategy was used to assess the algorithm performance and to set up the denoising algorithm. Abundance and total length estimations were used for software development; to this aim we evaluated the efficiency of statistical tools and mathematical algorithms, here described. The image convolution with the Sobel filter was chosen to denoise input images from background signals, then spline curves and the least squares method were used to parameterize detected filaments and to recombine crossing and interrupted sections, aimed at performing precise abundance estimations and morphometric measurements. Keywords: Comparing data, Filamentous cyanobacteria, Algorithm, Denoising, Natural sample
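    The denoising step described above (a Sobel convolution to suppress background before spline fitting) can be sketched in a few lines with SciPy; the threshold and test image here are placeholders, not the values used by ACQUA.

      # Sketch: Sobel gradient magnitude as a background-suppression step before
      # filament parameterization. Image and threshold are illustrative only.
      import numpy as np
      from scipy import ndimage

      image = np.random.rand(256, 256)            # stand-in for a microscopy frame
      sx = ndimage.sobel(image, axis=0)
      sy = ndimage.sobel(image, axis=1)
      gradient = np.hypot(sx, sy)                 # edge strength

      mask = gradient > gradient.mean() + 2 * gradient.std()
      candidate_pixels = np.argwhere(mask)        # pixels passed on to spline fitting
      print(candidate_pixels.shape)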

  12. Wind turbines and health: An examination of a proposed case definition

    Directory of Open Access Journals (Sweden)

    Robert J McCunney

    2015-01-01

    Renewable energy demands have increased the need for new wind farms. In turn, concerns have been raised about potential adverse health effects on nearby residents. A case definition has been proposed to diagnose "Adverse Health Effects in the Environs of Industrial Wind Turbines" (AHE/IWT); initially in 2011 and then with an update in 2014. The authors invited commentary and in turn, we assessed its scientific merits by quantitatively evaluating its proposed application. We used binomial coefficients to quantitatively assess the potential of obtaining a diagnosis of AHE/IWT. We also reviewed the methodology and process of the development of the case definition by contrasting it with guidelines on case definition criteria of the USA Institute of Medicine. The case definition allows at least 3,264 and up to 400,000 possibilities for meeting second- and third-order criteria, once the limited first-order criteria are met. IOM guidelines for clinical case definitions were not followed. The case definition has virtually no specificity and lacks scientific support from peer-reviewed literature. If applied as proposed, its application will lead to substantial potential for false-positive assessments and missed diagnoses. Virtually any new illness that develops or any prevalent illness that worsens after the installation of wind turbines within 10 km of a residence could be considered AHE/IWT if the patient feels better away from home. The use of this case definition in the absence of a thorough medical evaluation with appropriate diagnostic studies poses risks to patients in that treatable disorders would be overlooked. The case definition has significant potential to mislead patients and its use cannot be recommended for application in any health-care or decision-making setting.

  13. Wind turbines and health: An examination of a proposed case definition.

    Science.gov (United States)

    McCunney, Robert J; Morfeld, Peter; Colby, W David; Mundt, Kenneth A

    2015-01-01

    Renewable energy demands have increased the need for new wind farms. In turn, concerns have been raised about potential adverse health effects on nearby residents. A case definition has been proposed to diagnose "Adverse Health Effects in the Environs of Industrial Wind Turbines" (AHE/IWT); initially in 2011 and then with an update in 2014. The authors invited commentary and in turn, we assessed its scientific merits by quantitatively evaluating its proposed application. We used binomial coefficients to quantitatively assess the potential of obtaining a diagnosis of AHE/IWT. We also reviewed the methodology and process of the development of the case definition by contrasting it with guidelines on case definition criteria of the USA Institute of Medicine. The case definition allows at least 3,264 and up to 400,000 possibilities for meeting second- and third-order criteria, once the limited first-order criteria are met. IOM guidelines for clinical case definitions were not followed. The case definition has virtually no specificity and lacks scientific support from peer-reviewed literature. If applied as proposed, its application will lead to substantial potential for false-positive assessments and missed diagnoses. Virtually any new illness that develops or any prevalent illness that worsens after the installation of wind turbines within 10 km of a residence could be considered AHE/IWT if the patient feels better away from home. The use of this case definition in the absence of a thorough medical evaluation with appropriate diagnostic studies poses risks to patients in that treatable disorders would be overlooked. The case definition has significant potential to mislead patients and its use cannot be recommended for application in any health-care or decision-making setting.

  14. Wind turbines and health: An examination of a proposed case definition

    Science.gov (United States)

    McCunney, Robert J.; Morfeld, Peter; Colby, W. David; Mundt, Kenneth A.

    2015-01-01

    Renewable energy demands have increased the need for new wind farms. In turn, concerns have been raised about potential adverse health effects on nearby residents. A case definition has been proposed to diagnose “Adverse Health Effects in the Environs of Industrial Wind Turbines” (AHE/IWT); initially in 2011 and then with an update in 2014. The authors invited commentary and in turn, we assessed its scientific merits by quantitatively evaluating its proposed application. We used binomial coefficients to quantitatively assess the potential of obtaining a diagnosis of AHE/IWT. We also reviewed the methodology and process of the development of the case definition by contrasting it with guidelines on case definition criteria of the USA Institute of Medicine. The case definition allows at least 3,264 and up to 400,000 possibilities for meeting second- and third-order criteria, once the limited first-order criteria are met. IOM guidelines for clinical case definitions were not followed. The case definition has virtually no specificity and lacks scientific support from peer-reviewed literature. If applied as proposed, its application will lead to substantial potential for false-positive assessments and missed diagnoses. Virtually any new illness that develops or any prevalent illness that worsens after the installation of wind turbines within 10 km of a residence could be considered AHE/IWT if the patient feels better away from home. The use of this case definition in the absence of a thorough medical evaluation with appropriate diagnostic studies poses risks to patients in that treatable disorders would be overlooked. The case definition has significant potential to mislead patients and its use cannot be recommended for application in any health-care or decision-making setting. PMID:26168947
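    The binomial-coefficient argument in the records above amounts to counting how many distinct symptom subsets could satisfy an "at least k of n criteria" rule. A short sketch of that counting, with purely illustrative values of n and k, is given below.

      # Sketch: counting symptom combinations that satisfy "at least k of n" criteria.
      from math import comb

      def combinations_meeting(n, k_min):
          """Number of subsets of n criteria containing at least k_min of them."""
          return sum(comb(n, k) for k in range(k_min, n + 1))

      # Illustrative only: 12 listed symptoms, any 3 or more qualify.
      print(combinations_meeting(n=12, k_min=3))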

  15. On-line experimental validation of a model-based diagnostic algorithm dedicated to a solid oxide fuel cell system

    Science.gov (United States)

    Polverino, Pierpaolo; Esposito, Angelo; Pianese, Cesare; Ludwig, Bastian; Iwanschitz, Boris; Mai, Andreas

    2016-02-01

    In the current energetic scenario, Solid Oxide Fuel Cells (SOFCs) exhibit appealing features which make them suitable for environmental-friendly power production, especially for stationary applications. An example is represented by micro-combined heat and power (μ-CHP) generation units based on SOFC stacks, which are able to produce electric and thermal power with high efficiency and low pollutant and greenhouse gases emissions. However, the main limitations to their diffusion into the mass market consist in high maintenance and production costs and short lifetime. To improve these aspects, the current research activity focuses on the development of robust and generalizable diagnostic techniques, aimed at detecting and isolating faults within the entire system (i.e. SOFC stack and balance of plant). Coupled with appropriate recovery strategies, diagnosis can prevent undesired system shutdowns during faulty conditions, with consequent lifetime increase and maintenance costs reduction. This paper deals with the on-line experimental validation of a model-based diagnostic algorithm applied to a pre-commercial SOFC system. The proposed algorithm exploits a Fault Signature Matrix based on a Fault Tree Analysis and improved through fault simulations. The algorithm is characterized on the considered system and it is validated by means of experimental induction of faulty states in controlled conditions.

  16. A comparative study and validation of state estimation algorithms for Li-ion batteries in battery management systems

    International Nuclear Information System (INIS)

    Klee Barillas, Joaquín; Li, Jiahao; Günther, Clemens; Danzer, Michael A.

    2015-01-01

    Highlights: • Description of state observers for estimating the battery’s SOC. • Implementation of four estimation algorithms in a BMS. • Reliability and performance study of BMS regarding the estimation algorithms. • Analysis of the robustness and code properties of the estimation approaches. • Guide to evaluate estimation algorithms to improve the BMS performance. - Abstract: To increase lifetime, safety, and energy usage battery management systems (BMS) for Li-ion batteries have to be capable of estimating the state of charge (SOC) of the battery cells with a very low estimation error. The accurate SOC estimation and the real time reliability are critical issues for a BMS. In general an increasing complexity of the estimation methods leads to higher accuracy. On the other hand it also leads to a higher computational load and may exceed the BMS limitations or increase its costs. An approach to evaluate and verify estimation algorithms is presented as a requisite prior the release of the battery system. The approach consists of an analysis concerning the SOC estimation accuracy, the code properties, complexity, the computation time, and the memory usage. Furthermore, a study for estimation methods is proposed for their evaluation and validation with respect to convergence behavior, parameter sensitivity, initialization error, and performance. In this work, the introduced analysis is demonstrated with four of the most published model-based estimation algorithms including Luenberger observer, sliding-mode observer, Extended Kalman Filter and Sigma-point Kalman Filter. The experiments under dynamic current conditions are used to verify the real time functionality of the BMS. The results show that a simple estimation method like the sliding-mode observer can compete with the Kalman-based methods presenting less computational time and memory usage. Depending on the battery system’s application the estimation algorithm has to be selected to fulfill the
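    For readers unfamiliar with the observer family compared above, the sketch below shows the simplest member, a Luenberger-style state observer that combines coulomb counting with voltage-error feedback. The battery model, OCV curve, and gain are toy values chosen for illustration, not parameters from the study.

      # Sketch: a minimal Luenberger-style SOC observer (toy single-state model).
      Q_AS = 2.5 * 3600        # assumed cell capacity, ampere-seconds
      R_INT = 0.05             # assumed internal resistance, ohms
      GAIN = 0.002             # observer feedback gain (tuning parameter)

      def ocv(soc):
          """Toy open-circuit-voltage curve (volts) for SOC in [0, 1]."""
          return 3.0 + 1.2 * soc

      def observer_step(soc_hat, current, v_measured, dt):
          """Coulomb counting corrected by the terminal-voltage prediction error."""
          v_predicted = ocv(soc_hat) - current * R_INT
          soc_dot = -current / Q_AS + GAIN * (v_measured - v_predicted)
          return min(max(soc_hat + dt * soc_dot, 0.0), 1.0)

      soc_true, soc_hat = 0.8, 0.5                  # observer starts with a wrong guess
      for _ in range(3600):                         # one hour of 1 Hz data, 1 A discharge
          v_meas = ocv(soc_true) - 1.0 * R_INT
          soc_hat = observer_step(soc_hat, current=1.0, v_measured=v_meas, dt=1.0)
          soc_true -= 1.0 / Q_AS
      print(f"true SOC {soc_true:.2f}, estimated SOC {soc_hat:.2f}")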

  17. Development and validation of a novel algorithm based on the ECG magnet response for rapid identification of any unknown pacemaker.

    Science.gov (United States)

    Squara, Fabien; Chik, William W; Benhayon, Daniel; Maeda, Shingo; Latcu, Decebal Gabriel; Lacaze-Gadonneix, Jonathan; Tibi, Thierry; Thomas, Olivier; Cooper, Joshua M; Duthoit, Guillaume

    2014-08-01

    Pacemaker (PM) interrogation requires correct manufacturer identification. However, an unidentified PM is a frequent occurrence, requiring time-consuming steps to identify the device. The purpose of this study was to develop and validate a novel algorithm for PM manufacturer identification, using the ECG response to magnet application. Data on the magnet responses of all recent PM models (≤15 years) from the 5 major manufacturers were collected. An algorithm based on the ECG response to magnet application to identify the PM manufacturer was subsequently developed. Patients undergoing ECG during magnet application in various clinical situations were prospectively recruited in 7 centers. The algorithm was applied in the analysis of every ECG by a cardiologist blinded to PM information. A second blinded cardiologist analyzed a sample of randomly selected ECGs in order to assess the reproducibility of the results. A total of 250 ECGs were analyzed during magnet application. The algorithm led to the correct single manufacturer choice in 242 ECGs (96.8%), whereas 7 (2.8%) could only be narrowed to either 1 of 2 manufacturer possibilities. Only 2 (0.4%) incorrect manufacturer identifications occurred. The algorithm identified Medtronic and Sorin Group PMs with 100% sensitivity and specificity, Biotronik PMs with 100% sensitivity and 99.5% specificity, and St. Jude and Boston Scientific PMs with 92% sensitivity and 100% specificity. The results were reproducible between the 2 blinded cardiologists with 92% concordant findings. Unknown PM manufacturers can be accurately identified by analyzing the ECG magnet response using this newly developed algorithm. Copyright © 2014 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.

  18. Experimental validation of a distributed algorithm for dynamic spectrum access in local area networks

    DEFF Research Database (Denmark)

    Tonelli, Oscar; Berardinelli, Gilberto; Tavares, Fernando Menezes Leitão

    2013-01-01

    Next generation wireless networks aim at a significant improvement of the spectral efficiency in order to meet the dramatic increase in data service demand. In local area scenarios user-deployed base stations are expected to take place, thus making the centralized planning of frequency resources...... activities with the Autonomous Component Carrier Selection (ACCS) algorithm, a distributed solution for interference management among small neighboring cells. A preliminary evaluation of the algorithm performance is provided considering its live execution on a software defined radio network testbed...

  19. Observational study to calculate addictive risk to opioids: a validation study of a predictive algorithm to evaluate opioid use disorder

    Directory of Open Access Journals (Sweden)

    Brenton A

    2017-05-01

    Background: Opioid abuse in chronic pain patients is a major public health issue, with rapidly increasing addiction rates and deaths from unintentional overdose more than quadrupling since 1999. Purpose: This study seeks to determine the predictability of aberrant behavior to opioids using a comprehensive scoring algorithm incorporating phenotypic risk factors and neuroscience-associated single-nucleotide polymorphisms (SNPs). Patients and methods: The Proove Opioid Risk (POR) algorithm determines the predictability of aberrant behavior to opioids using a comprehensive scoring algorithm incorporating phenotypic risk factors and neuroscience-associated SNPs. In a validation study with 258 subjects with diagnosed opioid use disorder (OUD) and 650 controls who reported using opioids, the POR successfully categorized patients at high and moderate risk of opioid misuse or abuse with 95.7% sensitivity. Regardless of changes in the prevalence of opioid misuse or abuse, the sensitivity of POR remained >95%. Conclusion: The POR correctly stratifies patients into low-, moderate-, and high-risk categories to appropriately identify patients in need of additional guidance, monitoring, or treatment changes. Keywords: opioid use disorder, addiction, personalized medicine, pharmacogenetics, genetic testing, predictive algorithm

  20. Development of a Mobile Robot Test Platform and Methods for Validation of Prognostics-Enabled Decision Making Algorithms

    Directory of Open Access Journals (Sweden)

    Jose R. Celaya

    2013-01-01

    As fault diagnosis and prognosis systems in aerospace applications become more capable, the ability to utilize information supplied by them becomes increasingly important. While certain types of vehicle health data can be effectively processed and acted upon by crew or support personnel, others, due to their complexity or time constraints, require either automated or semi-automated reasoning. Prognostics-enabled Decision Making (PDM) is an emerging research area that aims to integrate prognostic health information and knowledge about the future operating conditions into the process of selecting subsequent actions for the system. The newly developed PDM algorithms require suitable software and hardware platforms for testing under realistic fault scenarios. The paper describes the development of such a platform, based on the K11 planetary rover prototype. A variety of injectable fault modes are being investigated for electrical, mechanical, and power subsystems of the testbed, along with methods for data collection and processing. In addition to the hardware platform, a software simulator with matching capabilities has been developed. The simulator allows for prototyping and initial validation of the algorithms prior to their deployment on the K11. The simulator is also available to the PDM algorithms to assist with the reasoning process. A reference set of diagnostic, prognostic, and decision making algorithms is also described, followed by an overview of the current test scenarios and the results of their execution on the simulator.

  1. Functional segmentation of dynamic PET studies: Open source implementation and validation of a leader-follower-based algorithm.

    Science.gov (United States)

    Mateos-Pérez, José María; Soto-Montenegro, María Luisa; Peña-Zalbidea, Santiago; Desco, Manuel; Vaquero, Juan José

    2016-02-01

    We present a novel segmentation algorithm for dynamic PET studies that groups pixels according to the similarity of their time-activity curves. Sixteen mice bearing a human tumor cell line xenograft (CH-157MN) were imaged with three different (68)Ga-DOTA-peptides (DOTANOC, DOTATATE, DOTATOC) using a small animal PET-CT scanner. Regional activities (input function and tumor) were obtained after manual delineation of regions of interest over the image. The algorithm was implemented under the jClustering framework and used to extract the same regional activities as in the manual approach. The volume of distribution in the tumor was computed using the Logan linear method. A Kruskal-Wallis test was used to investigate significant differences between the manually and automatically obtained volumes of distribution. The algorithm successfully segmented all the studies. No significant differences were found for the same tracer across different segmentation methods. Manual delineation revealed significant differences between DOTANOC and the other two tracers (DOTANOC - DOTATATE, p=0.020; DOTANOC - DOTATOC, p=0.033). Similar differences were found using the leader-follower algorithm. An open implementation of a novel segmentation method for dynamic PET studies is presented and validated in rodent studies. It successfully replicated the manual results obtained in small-animal studies, thus making it a reliable substitute for this task and, potentially, for other dynamic segmentation procedures. Copyright © 2016 Elsevier Ltd. All rights reserved.
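    The leader-follower idea referenced above can be sketched compactly: each pixel's time-activity curve (TAC) joins the first cluster whose running-mean "leader" it resembles closely enough, otherwise it founds a new cluster. The similarity measure and threshold below are assumptions for illustration, not the jClustering implementation.

      # Sketch: leader-follower clustering of time-activity curves by correlation.
      import numpy as np

      def leader_follower(tacs, threshold=0.95):
          leaders, members = [], []
          for tac in tacs:
              for i, leader in enumerate(leaders):
                  if np.corrcoef(tac, leader)[0, 1] >= threshold:
                      members[i].append(tac)
                      leaders[i] = np.mean(members[i], axis=0)   # update the leader
                      break
              else:
                  leaders.append(tac.astype(float))
                  members.append([tac])
          return leaders, members

      rng = np.random.default_rng(2)
      t = np.linspace(0, 60, 20)
      slow, fast = np.exp(-t / 30), np.exp(-t / 5)               # two synthetic kinetics
      tacs = np.vstack([slow + 0.02 * rng.normal(size=20) for _ in range(50)] +
                       [fast + 0.02 * rng.normal(size=20) for _ in range(50)])
      leaders, members = leader_follower(tacs)
      print("clusters found:", len(leaders))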

  2. Validation of a knowledge-based boundary detection algorithm: a multicenter study

    International Nuclear Information System (INIS)

    Groch, M.W.; Erwin, W.D.; Murphy, P.H.; Ali, A.; Moore, W.; Ford, P.; Qian Jianzhong; Barnett, C.A.; Lette, J.

    1996-01-01

    A completely operator-independent boundary detection algorithm for multigated blood pool (MGBP) studies has been evaluated at four medical centers. The knowledge-based boundary detector (KBBD) algorithm is nondeterministic, utilizing a priori domain knowledge in the form of rule sets for the localization of cardiac chambers and image features, providing a case-by-case method for the identification and boundary definition of the left ventricle (LV). The nondeterministic algorithm employs multiple processing pathways, where KBBD rules have been designed for conventional (CONV) imaging geometries (nominal 45° LAO, nonzoom) as well as for highly zoomed and/or caudally tilted (ZOOM) studies. The resultant ejection fractions (LVEF) from the KBBD program have been compared with the standard LVEF calculations in 253 total cases in four institutions, 157 utilizing CONV geometry and 96 utilizing ZOOM geometries. The criterion for success was a KBBD boundary adequately defined over the LV as judged by an experienced observer, and the correlation of KBBD LVEFs to the standard calculation of LVEFs for the institution. The overall success rate for all institutions combined was 99.2%, with an overall correlation coefficient of r=0.95 (P<0.001). The individual success rates and EF correlations (r) for CONV and ZOOM geometries were: 98%, r=0.93 (CONV) and 100%, r=0.95 (ZOOM). The KBBD algorithm can be adapted to varying clinical situations, employing automatic processing using artificial intelligence, with performance close to that of a human operator. (orig.)

  3. Building and Validating a Computerized Algorithm for Surveillance of Ventilator-Associated Events.

    Science.gov (United States)

    Mann, Tal; Ellsworth, Joseph; Huda, Najia; Neelakanta, Anupama; Chevalier, Thomas; Sims, Kristin L; Dhar, Sorabh; Robinson, Mary E; Kaye, Keith S

    2015-09-01

    To develop an automated method for ventilator-associated condition (VAC) surveillance and to compare its accuracy and efficiency with manual VAC surveillance in the intensive care units (ICUs) of 4 hospitals. This study was conducted at Detroit Medical Center, a tertiary care center in metropolitan Detroit. A total of 128 ICU beds in 4 acute care hospitals were included during the study period from August to October 2013. The automated VAC algorithm was implemented and utilized for 1 month by all study hospitals. Simultaneous manual VAC surveillance was conducted by 2 infection preventionists and 1 infection control fellow who were blinded to one another's findings and to the automated VAC algorithm results. The VACs identified by the 2 surveillance processes were compared. During the study period, 110 patients from all the included hospitals were mechanically ventilated and were evaluated for VAC for a total of 992 mechanical ventilation days. The automated VAC algorithm identified 39 VACs with sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of 100%. In comparison, the combined efforts of the IPs and the infection control fellow detected 58.9% of VACs, with 59% sensitivity, 99% specificity, 91% PPV, and 92% NPV. Moreover, the automated VAC algorithm was extremely efficient, requiring only 1 minute to detect VACs over a 1-month period, compared to 60.7 minutes using manual surveillance. The automated VAC algorithm is efficient and accurate and is ready to be used routinely for VAC surveillance. Furthermore, its implementation can optimize the sensitivity and specificity of VAC identification.
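    The abstract does not spell out the rule set behind the automated algorithm, but VAC surveillance is commonly operationalized along NHSN lines: after at least two days of stable or decreasing daily minimum ventilator settings, a sustained rise in the daily minimum FiO2 (>=0.20) or PEEP (>=3 cmH2O). The sketch below encodes that kind of screen as an assumption-laden illustration only.

      # Sketch: NHSN-style screen for a VAC-like event from daily minimum settings.
      # Thresholds and data are assumptions, not the study's validated algorithm.
      def detect_vac(daily_min_fio2, daily_min_peep):
          """Return the day index where a qualifying rise begins, or None."""
          n = len(daily_min_fio2)
          for d in range(2, n - 1):
              stable = (daily_min_fio2[d - 1] <= daily_min_fio2[d - 2] or
                        daily_min_peep[d - 1] <= daily_min_peep[d - 2])
              rise_fio2 = all(daily_min_fio2[d + i] - daily_min_fio2[d - 1] >= 0.20
                              for i in (0, 1))
              rise_peep = all(daily_min_peep[d + i] - daily_min_peep[d - 1] >= 3
                              for i in (0, 1))
              if stable and (rise_fio2 or rise_peep):
                  return d
          return None

      fio2 = [0.40, 0.40, 0.35, 0.60, 0.65, 0.60]
      peep = [5, 5, 5, 8, 9, 8]
      print("VAC-like event starting on day index:", detect_vac(fio2, peep))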

  4. Predicting the onset of hazardous alcohol drinking in primary care: development and validation of a simple risk algorithm.

    Science.gov (United States)

    Bellón, Juan Ángel; de Dios Luna, Juan; King, Michael; Nazareth, Irwin; Motrico, Emma; GildeGómez-Barragán, María Josefa; Torres-González, Francisco; Montón-Franco, Carmen; Sánchez-Celaya, Marta; Díaz-Barreiros, Miguel Ángel; Vicens, Catalina; Moreno-Peral, Patricia

    2017-04-01

    Little is known about the risk of progressing to hazardous alcohol use in abstinent or low-risk drinkers. To develop and validate a simple brief risk algorithm for the onset of hazardous alcohol drinking (HAD) over 12 months for use in primary care. Prospective cohort study in 32 health centres from six Spanish provinces, with evaluations at baseline, 6 months, and 12 months. Forty-one risk factors were measured and multilevel logistic regression and inverse probability weighting were used to build the risk algorithm. The outcome was new occurrence of HAD during the study, as measured by the AUDIT. From the lists of 174 GPs, 3954 adult abstinent or low-risk drinkers were recruited. The 'predictAL-10' risk algorithm included just nine variables (10 questions): province, sex, age, cigarette consumption, perception of financial strain, having ever received treatment for an alcohol problem, childhood sexual abuse, AUDIT-C, and interaction AUDIT-C*Age. The c-index was 0.886 (95% CI = 0.854 to 0.918). The optimal cutoff had a sensitivity of 0.83 and specificity of 0.80. Excluding childhood sexual abuse from the model (the 'predictAL-9'), the c-index was 0.880 (95% CI = 0.847 to 0.913), sensitivity 0.79, and specificity 0.81. There was no statistically significant difference between the c-indexes of predictAL-10 and predictAL-9. The predictAL-10/9 is a simple and internally valid risk algorithm to predict the onset of hazardous alcohol drinking over 12 months in primary care attendees; it is a brief tool that is potentially useful for primary prevention of hazardous alcohol drinking. © British Journal of General Practice 2017.
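    The "optimal cutoff" reported above is typically chosen from the ROC curve, for example by maximizing Youden's J (sensitivity + specificity - 1). A minimal sketch of that selection step on simulated risk scores follows; it is not the predictAL derivation itself.

      # Sketch: choosing a risk-score cutoff by maximizing Youden's J on an ROC curve.
      import numpy as np
      from sklearn.metrics import roc_curve

      rng = np.random.default_rng(3)
      risk = rng.uniform(size=3000)                        # simulated predicted risk
      onset = rng.binomial(1, 0.4 * risk)                  # simulated 12-month outcome

      fpr, tpr, thresholds = roc_curve(onset, risk)
      best = np.argmax(tpr - fpr)                          # Youden's J
      print(f"cutoff = {thresholds[best]:.2f}, sensitivity = {tpr[best]:.2f}, "
            f"specificity = {1 - fpr[best]:.2f}")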

  5. Algorithm Development and Validation for Satellite-Derived Distributions of DOC and CDOM in the US Middle Atlantic Bight

    Science.gov (United States)

    Mannino, Antonio; Russ, Mary E.; Hooker, Stanford B.

    2007-01-01

    In coastal ocean waters, distributions of dissolved organic carbon (DOC) and chromophoric dissolved organic matter (CDOM) vary seasonally and interannually due to multiple source inputs and removal processes. We conducted several oceanographic cruises within the continental margin of the U.S. Middle Atlantic Bight (MAB) to collect field measurements in order to develop algorithms to retrieve CDOM and DOC from NASA's MODIS-Aqua and SeaWiFS satellite sensors. In order to develop empirical algorithms for CDOM and DOC, we correlated the CDOM absorption coefficient (a(sub cdom)) with in situ radiometry (remote sensing reflectance, Rrs, band ratios) and then correlated DOC to Rrs band ratios through the CDOM to DOC relationships. Our validation analyses demonstrate successful retrieval of DOC and CDOM from coastal ocean waters using the MODIS-Aqua and SeaWiFS satellite sensors, with low mean absolute percent differences from field measurements for a(sub cdom)(355) and a(sub cdom)(443), and 12% for the CDOM spectral slope. To our knowledge, the algorithms presented here represent the first validated algorithms for satellite retrieval of a(sub cdom), DOC, and CDOM spectral slope in the coastal ocean. The satellite-derived DOC and a(sub cdom) products demonstrate the seasonal net ecosystem production of DOC and photooxidation of CDOM from spring to fall. With accurate satellite retrievals of CDOM and DOC, we will be able to apply satellite observations to investigate interannual and decadal-scale variability in surface CDOM and DOC within continental margins and monitor impacts of climate change and anthropogenic activities on coastal ecosystems.

  6. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs.

    Science.gov (United States)

    Gulshan, Varun; Peng, Lily; Coram, Marc; Stumpe, Martin C; Wu, Derek; Narayanaswamy, Arunachalam; Venugopalan, Subhashini; Widner, Kasumi; Madams, Tom; Cuadros, Jorge; Kim, Ramasamy; Raman, Rajiv; Nelson, Philip C; Mega, Jessica L; Webster, Dale R

    2016-12-13

    Deep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation. To apply deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs. A specific type of neural network optimized for image classification called a deep convolutional neural network was trained using a retrospective development data set of 128 175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists and ophthalmology senior residents between May and December 2015. The resultant algorithm was validated in January and February 2016 using 2 separate data sets, both graded by at least 7 US board-certified ophthalmologists with high intragrader consistency. Deep learning-trained algorithm. The sensitivity and specificity of the algorithm for detecting referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both, were generated based on the reference standard of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2 operating points selected from the development set, one selected for high specificity and another for high sensitivity. The EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0

  7. Optimization of vitamin K antagonist drug dose finding by replacement of the international normalized ratio by a bidirectional factor : validation of a new algorithm

    NARCIS (Netherlands)

    Beinema, M J; van der Meer, F J M; Brouwers, J R B J; Rosendaal, F R

    2016-01-01

    Essentials We developed a new algorithm to optimize vitamin K antagonist dose finding. Validation was by comparing actual dosing to algorithm predictions. Predicted and actual dosing of well performing centers were highly associated. The method is promising and should be tested in a

  8. Robust surface registration using salient anatomical features for image-guided liver surgery: Algorithm and validation

    OpenAIRE

    Clements, Logan W.; Chapman, William C.; Dawant, Benoit M.; Galloway, Robert L.; Miga, Michael I.

    2008-01-01

    A successful surface-based image-to-physical space registration in image-guided liver surgery (IGLS) is critical to provide reliable guidance information to surgeons and pertinent surface displacement data for use in deformation correction algorithms. The current protocol used to perform the image-to-physical space registration involves an initial pose estimation provided by a point based registration of anatomical landmarks identifiable in both the preoperative tomograms and the intraoperati...

  9. Development and Validation of a Diabetic Retinopathy Referral Algorithm Based on Single-Field Fundus Photography.

    Directory of Open Access Journals (Sweden)

    Sangeetha Srinivasan

    To develop a simplified algorithm to identify and refer diabetic retinopathy (DR) from single-field retinal images, specifically (i) to refer sight-threatening diabetic retinopathy for appropriate care and (ii) to determine the agreement and diagnostic accuracy of the algorithm as a pilot study among optometrists versus "gold standard" (retinal specialist grading). The severity of DR was scored based on the colour photo using a colour-coded algorithm, which included the lesions of DR and the number of quadrants involved. A total of 99 participants underwent training followed by evaluation. Data of the 99 participants were analyzed. Fifty posterior pole 45 degree retinal images with all stages of DR were presented. Kappa scores (κ), areas under the receiver operating characteristic curves (AUCs), sensitivity and specificity were determined, with further comparison between working optometrists and optometry students. Mean age of the participants was 22 years (range: 19-43 years), 87% being women. Participants correctly identified 91.5% of images that required immediate referral (κ = 0.696), 62.5% of images as requiring review after 6 months (κ = 0.462), and 51.2% of those requiring review after 1 year (κ = 0.532). The sensitivity and specificity of the optometrists were 91% and 78% for immediate referral, 62% and 84% for review after 6 months, and 51% and 95% for review after 1 year, respectively. The AUC was the highest (0.855) for the immediate referral criterion, second highest (0.824) for review after 1 year, and 0.727 for review after 6 months. Optometry students performed better than the working optometrists for all grades of referral. The diabetic retinopathy algorithm assessed in this work is a simple and fairly accurate method for appropriate referral based on single-field 45 degree posterior pole retinal images.
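    The kappa statistics above measure chance-corrected agreement between each participant's referral grade and the specialist's reference grade; scikit-learn's cohen_kappa_score reproduces the computation, as in this small illustrative sketch.

      # Sketch: Cohen's kappa between reference grades and a grader's decisions.
      from sklearn.metrics import cohen_kappa_score

      reference = ["immediate", "6 months", "1 year", "immediate",
                   "1 year", "6 months", "immediate", "1 year"]
      grader = ["immediate", "1 year", "1 year", "immediate",
                "6 months", "6 months", "immediate", "1 year"]
      print("kappa:", round(cohen_kappa_score(reference, grader), 3))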

  10. Simulating Deformations of MR Brain Images for Validation of Atlas-based Segmentation and Registration Algorithms

    OpenAIRE

    Xue, Zhong; Shen, Dinggang; Karacali, Bilge; Stern, Joshua; Rottenberg, David; Davatzikos, Christos

    2006-01-01

    Simulated deformations and images can act as the gold standard for evaluating various template-based image segmentation and registration algorithms. Traditional deformable simulation methods, such as the use of analytic deformation fields or the displacement of landmarks followed by some form of interpolation, are often unable to construct rich (complex) and/or realistic deformations of anatomical organs. This paper presents new methods aiming to automatically simulate realistic inter- and in...

  11. Accuracy of SIAscopy for pigmented skin lesions encountered in primary care: development and validation of a new diagnostic algorithm.

    Science.gov (United States)

    Emery, Jon D; Hunter, Judith; Hall, Per N; Watson, Anthony J; Moncrieff, Marc; Walter, Fiona M

    2010-09-25

    Diagnosing pigmented skin lesions in general practice is challenging. SIAscopy has been shown to increase diagnostic accuracy for melanoma in referred populations. We aimed to develop and validate a scoring system for SIAscopic diagnosis of pigmented lesions in primary care. This study was conducted in two consecutive settings in the UK and Australia, and occurred in three stages: 1) Development of the primary care scoring algorithm (PCSA) on a sub-set of lesions from the UK sample; 2) Validation of the PCSA on a different sub-set of lesions from the same UK sample; 3) Validation of the PCSA on a new set of lesions from an Australian primary care population. Patients presenting with a pigmented lesion were recruited from 6 general practices in the UK and 2 primary care skin cancer clinics in Australia. The following data were obtained for each lesion: clinical history; SIAscan; digital photograph; and digital dermoscopy. SIAscans were interpreted by an expert and validated against histopathology where possible, or expert clinical review of all available data for each lesion. A total of 858 patients with 1,211 lesions were recruited. Most lesions were benign naevi (64.8%) or seborrhoeic keratoses (22.1%); 1.2% were melanoma. The original SIAscopic diagnostic algorithm did not perform well because of the higher prevalence of seborrhoeic keratoses and haemangiomas seen in primary care. A primary care scoring algorithm (PCSA) was developed to account for this. In the UK sample the PCSA had the following characteristics for the diagnosis of 'suspicious': sensitivity 0.50 (0.18-0.81); specificity 0.84 (0.78-0.88); PPV 0.09 (0.03-0.22); NPV 0.98 (0.95-0.99). In the Australian sample the PCSA had the following characteristics for the diagnosis of 'suspicious': sensitivity 0.44 (0.32-0.58); specificity 0.95 (0.93-0.97); PPV 0.52 (0.38-0.66); NPV 0.95 (0.92-0.96). In an analysis of lesions for which histological diagnosis was available (n = 111), the PCSA had a significantly

  12. A novel algorithm for validating peptide identification from a shotgun proteomics search engine.

    Science.gov (United States)

    Jian, Ling; Niu, Xinnan; Xia, Zhonghang; Samir, Parimal; Sumanasekera, Chiranthani; Mu, Zheng; Jennings, Jennifer L; Hoek, Kristen L; Allos, Tara; Howard, Leigh M; Edwards, Kathryn M; Weil, P Anthony; Link, Andrew J

    2013-03-01

    Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) has revolutionized the proteomics analysis of complexes, cells, and tissues. In a typical proteomic analysis, the tandem mass spectra from a LC-MS/MS experiment are assigned to a peptide by a search engine that compares the experimental MS/MS peptide data to theoretical peptide sequences in a protein database. The peptide-spectrum matches are then used to infer a list of identified proteins in the original sample. However, the search engines often fail to distinguish between correct and incorrect peptide assignments. In this study, we designed and implemented a novel algorithm called De-Noise to reduce the number of incorrect peptide matches and maximize the number of correct peptides at a fixed false discovery rate using a minimal number of scoring outputs from the SEQUEST search engine. The novel algorithm uses a three-step process: data cleaning, data refining through an SVM-based decision function, and a final data refining step based on proteolytic peptide patterns. Using proteomics data generated on different types of mass spectrometers, we optimized the De-Noise algorithm on the basis of the resolution and mass accuracy of the mass spectrometer employed in the LC-MS/MS experiment. Our results demonstrate De-Noise improves peptide identification compared to other methods used to process the peptide sequence matches assigned by SEQUEST. Because De-Noise uses a limited number of scoring attributes, it can be easily implemented with other search engines.
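
    The middle step of the three-step process above is an SVM-based decision function over search-engine scoring outputs. The sketch below is only a generic stand-in for that idea, using scikit-learn and two assumed SEQUEST-style attributes (XCorr and deltaCn) with made-up values; it is not the De-Noise implementation.

      import numpy as np
      from sklearn.svm import SVC

      # columns: XCorr, deltaCn (assumed scoring attributes, fabricated values)
      X_train = np.array([[3.2, 0.35], [2.9, 0.30], [1.1, 0.05], [1.4, 0.08]])
      y_train = np.array([1, 1, 0, 0])            # 1 = correct match, 0 = incorrect

      clf = SVC(kernel="linear").fit(X_train, y_train)

      candidates = np.array([[2.7, 0.28], [1.2, 0.06]])
      keep = clf.predict(candidates) == 1         # retain matches classified as correct
      print(candidates[keep])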

  13. Concurrent validity of an automated algorithm for computing the center of pressure excursion index (CPEI).

    Science.gov (United States)

    Diaz, Michelle A; Gibbons, Mandi W; Song, Jinsup; Hillstrom, Howard J; Choe, Kersti H; Pasquale, Maria R

    2018-01-01

    Center of Pressure Excursion Index (CPEI), a parameter computed from the distribution of plantar pressures during the stance phase of barefoot walking, has been used to assess dynamic foot function. The original custom program developed to calculate CPEI required the oversight of a user who could manually correct for certain exceptions to the computational rules. A new fully automatic program has been developed to calculate CPEI with an algorithm that accounts for these exceptions. The purpose of this paper is to compare the resulting CPEI values computed by these two programs on plantar pressure data from both asymptomatic and pathologic subjects. If comparable, the new program offers significant benefits: reduced potential for variability due to rater discretion and faster CPEI calculation. CPEI values were calculated from barefoot plantar pressure distributions during comfortably paced walking in 61 healthy asymptomatic adults, 19 diabetic adults with moderate hallux valgus, and 13 adults with mild hallux valgus. Right foot data for each subject were analyzed with linear regression and a Bland-Altman plot. The automated algorithm yielded CPEI values that were linearly related to those of the original program (R2 = 0.99; P < 0.001), with close agreement between computation methods. Results of this analysis suggest that the new automated algorithm may be used to calculate CPEI on both healthy and pathologic feet. Copyright © 2017 Elsevier B.V. All rights reserved.
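
    The concurrent-validity analysis above rests on two standard tools, linear regression and Bland-Altman limits of agreement. A minimal sketch of both on fabricated paired CPEI values (the numbers are placeholders, not study data):

      import numpy as np

      cpei_original  = np.array([18.2, 22.5, 15.1, 9.8, 25.3, 20.0])
      cpei_automated = np.array([18.0, 22.9, 15.4, 9.5, 25.1, 20.4])

      slope, intercept = np.polyfit(cpei_original, cpei_automated, 1)
      r = np.corrcoef(cpei_original, cpei_automated)[0, 1]

      diff = cpei_automated - cpei_original
      bias = diff.mean()
      loa = 1.96 * diff.std(ddof=1)               # 95% limits of agreement half-width

      print(f"slope={slope:.3f}, R^2={r**2:.3f}")
      print(f"bias={bias:.3f}, limits of agreement = {bias - loa:.3f} to {bias + loa:.3f}")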

  14. SBUV version 8.6 Retrieval Algorithm: Error Analysis and Validation Technique

    Science.gov (United States)

    Kramarova, N. A.; Bhartia, P. K.; Frith, P. K.; McPeters, S. M.; Labow, R. D.; Taylor, G.; Fisher, S.; DeLand, M.

    2012-01-01

    The SBUV version 8.6 algorithm was used to reprocess data from the Backscatter Ultraviolet (BUV), the Solar Backscatter Ultraviolet (SBUV) and a number of SBUV/2 instruments, which span a 41-year period from 1970 to 2011 (except a 5-year gap in the 1970s) [see Bhartia et al., 2012]. In the new version the Daumont et al. [1992] ozone cross sections were used, and new ozone [McPeters et al., 2007] and cloud climatologies [Joiner and Bhartia, 1995] were implemented. The algorithm uses the Optimum Estimation technique [Rodgers, 2000] to retrieve ozone profiles as ozone layer amounts (partial columns, DU) on 21 pressure layers. The corresponding total ozone values are calculated by summing the ozone columns of the individual layers. The algorithm is optimized to accurately retrieve monthly zonal mean (mzm) profiles rather than individual profiles, since it uses a monthly zonal mean ozone climatology as the a priori. Thus, the SBUV version 8.6 ozone dataset is better suited for long-term trend analysis and monitoring of ozone changes than for studying short-term ozone variability. Here we discuss some characteristics of the SBUV algorithm and sources of error in the SBUV profile and total ozone retrievals. For the first time the Averaging Kernels, smoothing errors and weighting functions (or Jacobians) are included in the SBUV metadata. The Averaging Kernels (AK) represent the sensitivity of the retrieved profile to the true state and contain valuable information about the retrieval algorithm, such as the Vertical Resolution, Degrees of Freedom for Signal (DFS) and Retrieval Efficiency [Rodgers, 2000]. Analysis of the AK for mzm ozone profiles shows that the total number of DFS for ozone profiles varies from 4.4 to 5.5 out of the 6-9 wavelengths used for retrieval. The number of wavelengths in turn depends on the solar zenith angle. Between 25 and 0.5 hPa, where the SBUV vertical resolution is the highest, the DFS for individual layers are about 0.5.
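
    The DFS values quoted above can be read directly off an averaging kernel matrix as its trace (Rodgers, 2000). A minimal worked example with a made-up 3x3 kernel, not an actual SBUV averaging kernel:

      import numpy as np

      # Illustrative 3x3 averaging kernel; each row shows how a retrieved layer
      # responds to perturbations of the true profile.
      A = np.array([[0.6, 0.2, 0.1],
                    [0.2, 0.5, 0.2],
                    [0.1, 0.2, 0.4]])

      dfs = np.trace(A)                 # degrees of freedom for signal
      print(f"DFS = {dfs:.2f}")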

  15. Algorithm Development and Validation of CDOM Properties for Estuarine and Continental Shelf Waters Along the Northeastern U.S. Coast

    Science.gov (United States)

    Mannino, Antonio; Novak, Michael G.; Hooker, Stanford B.; Hyde, Kimberly; Aurin, Dick

    2014-01-01

    An extensive set of field measurements has been collected throughout the continental margin of the northeastern U.S. from 2004 to 2011 to develop and validate ocean color satellite algorithms for the retrieval of the absorption coefficient of chromophoric dissolved organic matter (aCDOM) and CDOM spectral slopes for the 275:295 nm and 300:600 nm spectral ranges (S275:295 and S300:600). Remote sensing reflectance (Rrs) measurements computed from in-water radiometry profiles along with aCDOM(λ) data are applied to develop several types of algorithms for the SeaWiFS and MODIS-Aqua ocean color satellite sensors, which involve least squares linear regression of aCDOM(λ) with (1) Rrs band ratios, (2) quasi-analytical algorithm-based (QAA-based) products of total absorption coefficients, (3) multiple Rrs bands within a multiple linear regression (MLR) analysis, and (4) the diffuse attenuation coefficient (Kd). The relative error (mean absolute percent difference; MAPD) for the MLR retrievals of aCDOM(275), aCDOM(355), aCDOM(380), aCDOM(412) and aCDOM(443) for our study region ranges from 20.4% to 23.9% for MODIS-Aqua and from 27.3% to 30% for SeaWiFS. Because of the narrower range of CDOM spectral slope values, the MAPD for the MLR S275:295 and QAA-based S300:600 algorithms are much lower: 9.9% and 8.3% for SeaWiFS, and 8.7% and 6.3% for MODIS-Aqua, respectively. Seasonal and spatial MODIS-Aqua and SeaWiFS distributions of aCDOM, S275:295 and S300:600 processed with these algorithms are consistent with field measurements and the processes that impact CDOM levels along the continental shelf of the northeastern U.S. Several satellite data processing factors correlate with higher uncertainty in satellite retrievals of aCDOM, S275:295 and S300:600 within the coastal ocean, including solar zenith angle, sensor viewing angle, and atmospheric products applied for atmospheric corrections. Algorithms that include ultraviolet Rrs bands provide a better fit to field measurements than
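
    One of the algorithm families above is a multiple linear regression of aCDOM(λ) on several Rrs bands. The sketch below shows that idea in miniature with entirely fabricated reflectances and aCDOM(412) values; the band set and log-log form are assumptions for illustration, not the authors' published coefficients.

      import numpy as np

      # rows = stations; columns = Rrs at assumed bands (e.g. 412, 443, 555 nm), sr^-1
      rrs = np.array([[0.0020, 0.0025, 0.0021],
                      [0.0031, 0.0036, 0.0028],
                      [0.0012, 0.0016, 0.0017],
                      [0.0042, 0.0047, 0.0033],
                      [0.0026, 0.0030, 0.0024]])
      acdom_412 = np.array([0.35, 0.18, 0.52, 0.12, 0.28])    # field aCDOM(412), 1/m

      # least-squares fit of log10(aCDOM) against log10(Rrs) in all bands plus intercept
      X = np.column_stack([np.ones(len(rrs)), np.log10(rrs)])
      coef, *_ = np.linalg.lstsq(X, np.log10(acdom_412), rcond=None)

      retrieved = 10 ** (X @ coef)
      mapd = 100 * np.mean(np.abs(retrieved - acdom_412) / acdom_412)
      print(f"in-sample MAPD = {mapd:.1f}%")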

  16. Uncertainty analysis of an interfacial area reconstruction algorithm and its application to two group interfacial area transport equation validation

    International Nuclear Information System (INIS)

    Dave, A.J.; Manera, A.; Beyer, M.; Lucas, D.; Prasser, H.-M.

    2016-01-01

    Wire mesh sensors (WMS) are state of the art devices that allow high resolution (in space and time) measurement of 2D void fraction distribution over a wide range of two-phase flow regimes, from bubbly to annular. Data using WMS have been recorded at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) (Lucas et al., 2010; Beyer et al., 2008; Prasser et al., 2003) for a wide combination of superficial gas and liquid velocities, providing an excellent database for advances in two-phase flow modeling. In two-phase flow, the interfacial area plays an integral role in coupling the mass, momentum and energy transport equations of the liquid and gas phase. While current models used in best-estimate thermal-hydraulic codes (e.g. RELAP5, TRACE, TRACG, etc.) are still based on algebraic correlations for the estimation of the interfacial area in different flow regimes, interfacial area transport equations (IATE) have been proposed to predict the dynamic propagation in space and time of interfacial area (Ishii and Hibiki, 2010). IATE models are still under development and the HZDR WMS experimental data provide an excellent basis for the validation and further advance of these models. The current paper is focused on the uncertainty analysis of algorithms used to reconstruct interfacial area densities from the void-fraction voxel data measured using WMS and their application towards validation efforts of two-group IATE models. In previous research efforts, a surface triangularization algorithm has been developed in order to estimate the surface area of individual bubbles recorded with the WMS, and estimate the interfacial area in the given flow condition. In the present paper, synthetically generated bubbles are used to assess the algorithm’s accuracy. As the interfacial area of the synthetic bubbles are defined by user inputs, the error introduced by the algorithm can be quantitatively obtained. The accuracy of interfacial area measurements is characterized for different bubbles

  17. Uncertainty analysis of an interfacial area reconstruction algorithm and its application to two group interfacial area transport equation validation

    Energy Technology Data Exchange (ETDEWEB)

    Dave, A.J., E-mail: akshayjd@umich.edu [Department of Nuclear Engineering and Rad. Sciences, University of Michigan, Ann Arbor, MI 48105 (United States); Manera, A. [Department of Nuclear Engineering and Rad. Sciences, University of Michigan, Ann Arbor, MI 48105 (United States); Beyer, M.; Lucas, D. [Helmholtz-Zentrum Dresden-Rossendorf, Institute of Fluid Dynamics, 01314 Dresden (Germany); Prasser, H.-M. [Department of Mechanical and Process Engineering, ETH Zurich, 8092 Zurich (Switzerland)

    2016-12-15

    Wire mesh sensors (WMS) are state of the art devices that allow high resolution (in space and time) measurement of 2D void fraction distribution over a wide range of two-phase flow regimes, from bubbly to annular. Data using WMS have been recorded at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) (Lucas et al., 2010; Beyer et al., 2008; Prasser et al., 2003) for a wide combination of superficial gas and liquid velocities, providing an excellent database for advances in two-phase flow modeling. In two-phase flow, the interfacial area plays an integral role in coupling the mass, momentum and energy transport equations of the liquid and gas phase. While current models used in best-estimate thermal-hydraulic codes (e.g. RELAP5, TRACE, TRACG, etc.) are still based on algebraic correlations for the estimation of the interfacial area in different flow regimes, interfacial area transport equations (IATE) have been proposed to predict the dynamic propagation in space and time of interfacial area (Ishii and Hibiki, 2010). IATE models are still under development and the HZDR WMS experimental data provide an excellent basis for the validation and further advance of these models. The current paper is focused on the uncertainty analysis of algorithms used to reconstruct interfacial area densities from the void-fraction voxel data measured using WMS and their application towards validation efforts of two-group IATE models. In previous research efforts, a surface triangularization algorithm has been developed in order to estimate the surface area of individual bubbles recorded with the WMS, and estimate the interfacial area in the given flow condition. In the present paper, synthetically generated bubbles are used to assess the algorithm’s accuracy. As the interfacial area of the synthetic bubbles are defined by user inputs, the error introduced by the algorithm can be quantitatively obtained. The accuracy of interfacial area measurements is characterized for different bubbles
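
    The validation idea in the two records above, quantifying the reconstruction error on synthetic bubbles whose interfacial area is known analytically, can be illustrated generically: triangulate a sphere, sum the triangle areas, and compare against 4*pi*r^2. The latitude-longitude mesh below is a simple stand-in, not the authors' WMS surface-triangularization algorithm.

      import numpy as np

      r, n_theta, n_phi = 1.0, 40, 80
      theta = np.linspace(0.0, np.pi, n_theta)
      phi = np.linspace(0.0, 2.0 * np.pi, n_phi)

      def point(t, p):
          # Cartesian coordinates of a point on the synthetic spherical "bubble"
          return np.array([r * np.sin(t) * np.cos(p),
                           r * np.sin(t) * np.sin(p),
                           r * np.cos(t)])

      def tri_area(a, b, c):
          return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

      area = 0.0
      for i in range(n_theta - 1):
          for j in range(n_phi - 1):
              p00, p01 = point(theta[i], phi[j]),     point(theta[i], phi[j + 1])
              p10, p11 = point(theta[i + 1], phi[j]), point(theta[i + 1], phi[j + 1])
              area += tri_area(p00, p01, p11) + tri_area(p00, p11, p10)

      exact = 4.0 * np.pi * r ** 2
      print(f"relative error of the triangulated area = {100 * (area - exact) / exact:.2f}%")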

  18. Algorithms to identify colonic ischemia, complications of constipation and irritable bowel syndrome in medical claims data: development and validation.

    Science.gov (United States)

    Sands, Bruce E; Duh, Mei-Sheng; Cali, Clorinda; Ajene, Anuli; Bohn, Rhonda L; Miller, David; Cole, J Alexander; Cook, Suzanne F; Walker, Alexander M

    2006-01-01

    A challenge in the use of insurance claims databases for epidemiologic research is accurate identification and verification of medical conditions. This report describes the development and validation of claims-based algorithms to identify colonic ischemia, hospitalized complications of constipation, and irritable bowel syndrome (IBS). From the research claims databases of a large healthcare company, we selected at random 120 potential cases of IBS and 59 potential cases each of colonic ischemia and hospitalized complications of constipation. We sought the written medical records and were able to abstract 107, 57, and 51 records, respectively. We established a 'true' case status for each subject by applying standard clinical criteria to the available chart data. Comparing the insurance claims histories to the assigned case status, we iteratively developed, tested, and refined claims-based algorithms that would capture the diagnoses obtained from the medical records. We set goals of high specificity for colonic ischemia and hospitalized complications of constipation, and high sensitivity for IBS. The resulting algorithms substantially improved on the accuracy achievable from a naïve acceptance of the diagnostic codes attached to insurance claims. The specificities for colonic ischemia and serious complications of constipation were 87.2 and 92.7%, respectively, and the sensitivity for IBS was 98.9%. U.S. commercial insurance claims data appear to be usable for the study of colonic ischemia, IBS, and serious complications of constipation. (c) 2005 John Wiley & Sons, Ltd.

  19. Validation of Point Clouds Segmentation Algorithms Through Their Application to Several Case Studies for Indoor Building Modelling

    Science.gov (United States)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    Laser scanners are widely used for the modelling of existing buildings and particularly in the creation process of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps and it is consequently time-consuming and error-prone. Along the path to automation, a three-step segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces, namely floors and rooms, and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply the algorithms to several datasets rather than develop them on a single dataset whose particularities could bias the development. Indoor point clouds of different types of buildings will be used as input for the developed algorithms, ranging from an individual house of almost one hundred square meters to larger buildings of several thousand square meters. The datasets provide various space configurations and present numerous different occluding objects such as desks, computer equipment, home furnishings and even wine barrels. For each dataset, the results will be illustrated. The analysis of the results will provide an insight into the transferability of the developed approach for the indoor modelling of several types of buildings.

  20. Validation of new 3D post processing algorithm for improved maximum intensity projections of MR angiography acquisitions in the brain

    Energy Technology Data Exchange (ETDEWEB)

    Bosmans, H; Verbeeck, R; Vandermeulen, D; Suetens, P; Wilms, G; Maaly, M; Marchal, G; Baert, A L [Louvain Univ. (Belgium)]

    1995-12-01

    The objective of this study was to validate a new post processing algorithm for improved maximum intensity projections (mip) of intracranial MR angiography acquisitions. The core of the post processing procedure is a new brain segmentation algorithm. Two seed areas, background and brain, are automatically detected. A 3D region grower then grows both regions towards each other and this preferentially towards white regions. In this way, the skin gets included into the final 'background region' whereas cortical blood vessels and all brain tissues are included in the 'brain region'. The latter region is then used for mip. The algorithm runs less than 30 minutes on a full dataset on a Unix workstation. Images from different acquisition strategies including multiple overlapping thin slab acquisition, magnetization transfer (MT) MRA, Gd-DTPA enhanced MRA, normal and high resolution acquisitions and acquisitions from mid field and high field systems were filtered. A series of contrast enhanced MRA acquisitions obtained with identical parameters was filtered to study the robustness of the filter parameters. In all cases, only a minimal manual interaction was necessary to segment the brain. The quality of the mip was significantly improved, especially in post Gd-DTPA acquisitions or using MT, due to the absence of high intensity signals of skin, sinuses and eyes that otherwise superimpose on the angiograms. It is concluded that the filter is a robust technique to improve the quality of MR angiograms.

  1. Validation of new 3D post processing algorithm for improved maximum intensity projections of MR angiography acquisitions in the brain

    International Nuclear Information System (INIS)

    Bosmans, H.; Verbeeck, R.; Vandermeulen, D.; Suetens, P.; Wilms, G.; Maaly, M.; Marchal, G.; Baert, A.L.

    1995-01-01

    The objective of this study was to validate a new post processing algorithm for improved maximum intensity projections (mip) of intracranial MR angiography acquisitions. The core of the post processing procedure is a new brain segmentation algorithm. Two seed areas, background and brain, are automatically detected. A 3D region grower then grows both regions towards each other and this preferentially towards white regions. In this way, the skin gets included into the final 'background region' whereas cortical blood vessels and all brain tissues are included in the 'brain region'. The latter region is then used for mip. The algorithm runs less than 30 minutes on a full dataset on a Unix workstation. Images from different acquisition strategies including multiple overlapping thin slab acquisition, magnetization transfer (MT) MRA, Gd-DTPA enhanced MRA, normal and high resolution acquisitions and acquisitions from mid field and high field systems were filtered. A series of contrast enhanced MRA acquisitions obtained with identical parameters was filtered to study the robustness of the filter parameters. In all cases, only a minimal manual interaction was necessary to segment the brain. The quality of the mip was significantly improved, especially in post Gd-DTPA acquisitions or using MT, due to the absence of high intensity signals of skin, sinuses and eyes that otherwise superimpose on the angiograms. It is concluded that the filter is a robust technique to improve the quality of MR angiograms
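
    The final step in both records above, a maximum intensity projection restricted to the segmented brain region, is easy to sketch generically. The volume and mask below are synthetic placeholders and the projection is a plain per-axis maximum; this is not the authors' filter.

      import numpy as np

      volume = np.random.rand(64, 64, 64).astype(np.float32)    # stand-in MRA volume
      brain_mask = np.zeros_like(volume, dtype=bool)
      brain_mask[16:48, 16:48, 16:48] = True                    # stand-in brain segmentation

      masked = np.where(brain_mask, volume, 0.0)    # suppress skin, sinuses, eyes, ...
      mip_axial = masked.max(axis=0)                # project along one axis
      print(mip_axial.shape)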

  2. Case definition for Ebola and Marburg haemorrhagic fevers: a complex challenge for epidemiologists and clinicians.

    Science.gov (United States)

    Pittalis, Silvia; Fusco, Francesco Maria; Lanini, Simone; Nisii, Carla; Puro, Vincenzo; Lauria, Francesco Nicola; Ippolito, Giuseppe

    2009-10-01

    Viral haemorrhagic fevers (VHFs) represent a challenge for public health because of their epidemic potential, and their possible use as bioterrorism agents poses particular concern. In 1999 the World Health Organization (WHO) proposed a case definition for VHFs, subsequently adopted by other international institutions with the aim of early detection of initial cases/outbreaks in western countries. We applied this case definition to reports of Ebola and Marburg virus infections to estimate its sensitivity to detect cases of the disease. We analyzed clinical descriptions of 795 reported cases of Ebola haemorrhagic fever: only 58.5% of patients met the proposed case definition. A similar figure was obtained reviewing 169 cases of Marburg diseases, of which only 64.5% were in accordance with the case definition. In conclusion, the WHO case definition for hemorrhagic fevers is too specific and has poor sensitivity both for case finding during Ebola or Marburg outbreaks, and for early detection of suspected cases in western countries. It can lead to a hazardous number of false negatives and its use should be discouraged for early detection of cases.

  3. Revision of clinical case definitions: influenza-like illness and severe acute respiratory infection

    Science.gov (United States)

    Qasmieh, Saba; Mounts, Anthony Wayne; Alexander, Burmaa; Besselaar, Terry; Briand, Sylvie; Brown, Caroline; Clark, Seth; Dueger, Erica; Gross, Diane; Hauge, Siri; Hirve, Siddhivinayak; Jorgensen, Pernille; Katz, Mark A; Mafi, Ali; Malik, Mamunur; McCarron, Margaret; Meerhoff, Tamara; Mori, Yuichiro; Mott, Joshua; Olivera, Maria Teresa da Costa; Ortiz, Justin R; Palekar, Rakhee; Rebelo-de-Andrade, Helena; Soetens, Loes; Yahaya, Ali Ahmed; Zhang, Wenqing; Vandemaele, Katelijn

    2018-01-01

    Abstract The formulation of accurate clinical case definitions is an integral part of an effective process of public health surveillance. Although such definitions should, ideally, be based on a standardized and fixed collection of defining criteria, they often require revision to reflect new knowledge of the condition involved and improvements in diagnostic testing. Optimal case definitions also need to have a balance of sensitivity and specificity that reflects their intended use. After the 2009–2010 H1N1 influenza pandemic, the World Health Organization (WHO) initiated a technical consultation on global influenza surveillance. This prompted improvements in the sensitivity and specificity of the case definition for influenza – i.e. a respiratory disease that lacks uniquely defining symptomology. The revision process not only modified the definition of influenza-like illness, to include a simplified list of the criteria shown to be most predictive of influenza infection, but also clarified the language used for the definition, to enhance interpretability. To capture severe cases of influenza that required hospitalization, a new case definition was also developed for severe acute respiratory infection in all age groups. The new definitions have been found to capture more cases without compromising specificity. Despite the challenge still posed in the clinical separation of influenza from other respiratory infections, the global use of the new WHO case definitions should help determine global trends in the characteristics and transmission of influenza viruses and the associated disease burden. PMID:29403115

  4. Tuberculous meningitis: a uniform case definition for use in clinical research.

    Science.gov (United States)

    Marais, Suzaan; Thwaites, Guy; Schoeman, Johan F; Török, M Estée; Misra, Usha K; Prasad, Kameshwar; Donald, Peter R; Wilkinson, Robert J; Marais, Ben J

    2010-11-01

    Tuberculous meningitis causes substantial mortality and morbidity in children and adults. More research is urgently needed to better understand the pathogenesis of disease and to improve its clinical management and outcome. A major stumbling block is the absence of standardised diagnostic criteria. The different case definitions used in various studies makes comparison of research findings difficult, prevents the best use of existing data, and limits the management of disease. To address this problem, a 3-day tuberculous meningitis workshop took place in Cape Town, South Africa, and was attended by 41 international participants experienced in the research or management of tuberculous meningitis. During the meeting, diagnostic criteria were assessed and discussed, after which a writing committee was appointed to finalise a consensus case definition for tuberculous meningitis for use in future clinical research. We present the consensus case definition together with the rationale behind the recommendations. This case definition is applicable irrespective of the patient's age, HIV infection status, or the resources available in the research setting. Consistent use of the proposed case definition will aid comparison of studies, improve scientific communication, and ultimately improve care. Copyright © 2010 Elsevier Ltd. All rights reserved.

  5. High-resolution computational algorithms for simulating offshore wind turbines and farms: Model development and validation

    Energy Technology Data Exchange (ETDEWEB)

    Calderer, Antoni [Univ. of Minnesota, Minneapolis, MN (United States); Yang, Xiaolei [Stony Brook Univ., NY (United States); Angelidis, Dionysios [Univ. of Minnesota, Minneapolis, MN (United States); Feist, Chris [Univ. of Minnesota, Minneapolis, MN (United States); Guala, Michele [Univ. of Minnesota, Minneapolis, MN (United States); Ruehl, Kelley [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guo, Xin [Univ. of Minnesota, Minneapolis, MN (United States); Boomsma, Aaron [Univ. of Minnesota, Minneapolis, MN (United States); Shen, Lian [Univ. of Minnesota, Minneapolis, MN (United States); Sotiropoulos, Fotis [Stony Brook Univ., NY (United States)

    2015-10-30

    The present project involves the development of modeling and analysis design tools for assessing offshore wind turbine technologies. The computational tools developed herein are able to resolve the effects of the coupled interaction of atmospheric turbulence and ocean waves on aerodynamic performance and structural stability and reliability of offshore wind turbines and farms. Laboratory scale experiments have been carried out to derive data sets for validating the computational models.

  6. Total ozone column derived from GOME and SCIAMACHY using KNMI retrieval algorithms: Validation against Brewer measurements at the Iberian Peninsula

    Science.gov (United States)

    Antón, M.; Kroon, M.; López, M.; Vilaplana, J. M.; Bañón, M.; van der A, R.; Veefkind, J. P.; Stammes, P.; Alados-Arboledas, L.

    2011-11-01

    This article focuses on the validation of the total ozone column (TOC) data set acquired by the Global Ozone Monitoring Experiment (GOME) and the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) satellite remote sensing instruments using the Total Ozone Retrieval Scheme for the GOME Instrument Based on the Ozone Monitoring Instrument (TOGOMI) and Total Ozone Retrieval Scheme for the SCIAMACHY Instrument Based on the Ozone Monitoring Instrument (TOSOMI) retrieval algorithms developed by the Royal Netherlands Meteorological Institute. In this analysis, spatially colocated, daily averaged ground-based observations performed by five well-calibrated Brewer spectrophotometers at the Iberian Peninsula are used. The period of study runs from January 2004 to December 2009. The agreement between satellite and ground-based TOC data is excellent (R2 higher than 0.94). Nevertheless, the TOC data derived from both satellite instruments underestimate the ground-based data. On average, this underestimation is 1.1% for GOME and 1.3% for SCIAMACHY. The SCIAMACHY-Brewer TOC differences show a significant solar zenith angle (SZA) dependence which causes a systematic seasonal dependence. By contrast, GOME-Brewer TOC differences show no significant SZA dependence and hence no seasonality although processed with exactly the same algorithm. The satellite-Brewer TOC differences for the two satellite instruments show a clear and similar dependence on the viewing zenith angle under cloudy conditions. In addition, both the GOME-Brewer and SCIAMACHY-Brewer TOC differences reveal a very similar behavior with respect to the satellite cloud properties, being cloud fraction and cloud top pressure, which originate from the same cloud algorithm (Fast Retrieval Scheme for Clouds from the Oxygen A-Band (FRESCO+)) in both the TOSOMI and TOGOMI retrieval algorithms.
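
    A comparison of the kind summarized above typically reduces to relative differences between collocated satellite and ground-based TOC values, plus checks for dependences such as the solar zenith angle. A toy sketch with fabricated numbers (not GOME, SCIAMACHY or Brewer data):

      import numpy as np

      brewer_toc = np.array([300.0, 312.0, 295.0, 330.0, 318.0])   # DU
      sat_toc    = np.array([296.0, 308.5, 292.0, 325.0, 314.5])   # DU
      sza        = np.array([30.0, 45.0, 55.0, 60.0, 70.0])        # degrees

      rel_diff = 100.0 * (sat_toc - brewer_toc) / brewer_toc
      print(f"mean relative difference = {rel_diff.mean():.2f}%")

      slope, intercept = np.polyfit(sza, rel_diff, 1)              # crude SZA-dependence check
      print(f"SZA dependence: {slope:.3f} % per degree")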

  7. Can PSA Reflex Algorithm be a valid alternative to other PSA-based prostate cancer screening strategies?

    Science.gov (United States)

    Caldarelli, G; Troiano, G; Rosadini, D; Nante, N

    2017-01-01

    The available laboratory tests for the differential diagnosis of prostate cancer are represented by total PSA, free PSA, and the free/total PSA ratio. In Italy most doctors tend to request both total and free PSA for their patients, even in cases where the total PSA does not justify a further request for free PSA, with a consequent growth of the costs for the National Health System. The aim of our study was to predict the saving in euros (due to reagents) and the reduction in free PSA tests when applying the "PSA Reflex" algorithm. We calculated the number of total PSA and free PSA exams performed in 2014 in the Hospital of Grosseto and, simulating the application of the "PSA Reflex" algorithm in the same year, we calculated the decrease in the number of free PSA requests and tried to predict the savings in reagents obtained from this reduction. In 2014, 25,955 total PSA tests were performed in the Hospital of Grosseto: 3,631 (14%) were greater than 10 ng/ml; 7,686 (29.6%) between 2 and 10 ng/ml; and 14,638 (56.4%) lower than 2 ng/ml. A total of 16,904 free PSA tests were performed. Simulating the use of the "PSA Reflex" algorithm, free PSA tests would be performed only in cases with total PSA values between 2 and 10 ng/ml, with a saving of 54.5% of free PSA exams and of 8,971 euros for reagents alone. Our study showed that the "PSA Reflex" algorithm is a valid alternative leading to a reduction of costs. The estimated intra-laboratory savings due to reagents seem modest; however, they are followed by additional savings in the other diagnostic processes for prostate cancer.
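
    The reflex rule itself is a simple threshold test: free PSA is added only when total PSA lies between 2 and 10 ng/ml. A minimal sketch, with hypothetical results and a placeholder per-test reagent cost:

      def needs_free_psa(total_psa_ng_ml, low=2.0, high=10.0):
          # thresholds taken from the record above
          return low <= total_psa_ng_ml <= high

      total_psa_results = [0.8, 4.3, 12.1, 6.7, 1.5]      # hypothetical results, ng/ml
      reflexed = [v for v in total_psa_results if needs_free_psa(v)]

      cost_per_free_psa = 0.9                             # placeholder reagent cost, EUR per test
      saved_tests = len(total_psa_results) - len(reflexed)
      print(f"free PSA performed: {len(reflexed)}, avoided: {saved_tests}, "
            f"reagent saving approx. {saved_tests * cost_per_free_psa:.2f} EUR")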

  8. Development and validation of an algorithm for identifying urinary retention in a cohort of patients with epilepsy in a large US administrative claims database.

    Science.gov (United States)

    Quinlan, Scott C; Cheng, Wendy Y; Ishihara, Lianna; Irizarry, Michael C; Holick, Crystal N; Duh, Mei Sheng

    2016-04-01

    The aim of this study was to develop and validate an insurance claims-based algorithm for identifying urinary retention (UR) in epilepsy patients receiving antiepileptic drugs to facilitate safety monitoring. Data from the HealthCore Integrated Research Database(SM) in 2008-2011 (retrospective) and 2012-2013 (prospective) were used to identify epilepsy patients with UR. During the retrospective phase, three algorithms identified potential UR: (i) UR diagnosis code with a catheterization procedure code; (ii) UR diagnosis code alone; or (iii) diagnosis with UR-related symptoms. Medical records for 50 randomly selected patients satisfying ≥1 algorithm were reviewed by urologists to ascertain UR status. Positive predictive value (PPV) and 95% confidence intervals (CI) were calculated for the three component algorithms and the overall algorithm (defined as satisfying ≥1 component algorithms). Algorithms were refined using urologist review notes. In the prospective phase, the UR algorithm was refined using medical records for an additional 150 cases. In the retrospective phase, the PPV of the overall algorithm was 72.0% (95%CI: 57.5-83.8%). Algorithm 3 performed poorly and was dropped. Algorithm 1 was unchanged; urinary incontinence and cystitis were added as exclusionary diagnoses to Algorithm 2. The PPV for the modified overall algorithm was 89.2% (74.6-97.0%). In the prospective phase, the PPV for the modified overall algorithm was 76.0% (68.4-82.6%). Upon adding overactive bladder, nocturia and urinary frequency as exclusionary diagnoses, the PPV for the final overall algorithm was 81.9% (73.7-88.4%). The current UR algorithm yielded a PPV > 80% and could be used for more accurate identification of UR among epilepsy patients in a large claims database. Copyright © 2016 John Wiley & Sons, Ltd.
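
    The structure of the final overall algorithm above, match either component algorithm but exclude claims carrying certain competing urinary diagnoses, can be sketched as a set-membership rule. All diagnosis and procedure codes below are placeholders, not the validated code lists from the study.

      UR_DX = {"UR_DX_1"}                        # placeholder urinary retention diagnosis codes
      CATHETER_PROC = {"CATH_PROC_1"}            # placeholder catheterisation procedure codes
      EXCLUSIONS = {"INCONTINENCE", "CYSTITIS", "OVERACTIVE_BLADDER",
                    "NOCTURIA", "URINARY_FREQUENCY"}   # placeholder exclusionary diagnoses

      def potential_ur(dx_codes, proc_codes):
          dx, proc = set(dx_codes), set(proc_codes)
          if dx & EXCLUSIONS:
              return False
          algo1 = bool(dx & UR_DX) and bool(proc & CATHETER_PROC)   # UR diagnosis + catheterisation
          algo2 = bool(dx & UR_DX)                                  # UR diagnosis alone
          return algo1 or algo2

      print(potential_ur(["UR_DX_1"], ["CATH_PROC_1"]))     # True
      print(potential_ur(["UR_DX_1", "CYSTITIS"], []))      # False (exclusionary diagnosis)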

  9. Content validation of a standardized algorithm for ostomy care.

    Science.gov (United States)

    Beitz, Janice; Gerlach, Mary; Ginsburg, Pat; Ho, Marianne; McCann, Eileen; Schafer, Vickie; Scott, Vera; Stallings, Bobbie; Turnbull, Gwen

    2010-10-01

    The number of ostomy care clinician experts is limited and the majority of ostomy care is provided by non-specialized clinicians or unskilled caregivers and family. The purpose of this study was to obtain content validation data for a new standardized algorithm for ostomy care developed by expert wound ostomy continence nurse (WOCN) clinicians. After face validity was established using overall review and suggestions from WOCN experts, 166 WOCNs self-identified as having expertise in ostomy care were surveyed online for 6 weeks in 2009. Using a cross-sectional, mixed methods study design and a 30-item instrument with a 4-point Likert-type scale, the participants were asked to quantify the degree of validity of the Ostomy Algorithm's decisions and components. Participants' open-ended comments also were thematically analyzed. Using a scale of 1 to 4, the mean score of the entire algorithm was 3.8 (4 = relevant/very relevant). The algorithm's content validity index (CVI) was 0.95 (out of 1.0). Individual component mean scores ranged from 3.59 to 3.91. Individual CVIs ranged from 0.90 to 0.98. Qualitative data analysis revealed themes of difficulty associated with algorithm formatting, especially orientation and use of the Studio Alterazioni Cutanee Stomali (Study on Peristomal Skin Lesions [SACS™ Instrument]) and the inability of algorithms to capture all individual patient attributes affecting ostomy care. Positive themes included content thoroughness and the helpful clinical photos. Suggestions were offered for algorithm improvement. Study results support the strong content validity of the algorithm and research to ascertain its construct validity and effect on care outcomes is warranted.

  10. The 183-WSL Fast Rain Rate Retrieval Algorithm. Part II: Validation Using Ground Radar Measurements

    Science.gov (United States)

    Laviola, Sante; Levizzani, Vincenzo

    2014-01-01

    The Water vapour Strong Lines at 183 GHz (183-WSL) algorithm is a method for the retrieval of rain rates and precipitation type classification (convective/stratiform) that makes use of the water vapor absorption lines centered at 183.31 GHz of the Advanced Microwave Sounding Unit module B (AMSU-B) and of the Microwave Humidity Sounder (MHS) flying on the NOAA-15-18 and NOAA-19/MetOp-A satellite series, respectively. The characteristics of this algorithm were described in Part I of this paper together with comparisons against analogous precipitation products. The focus of Part II is the analysis of the performance of the 183-WSL technique based on surface radar measurements. The ground truth dataset consists of 2.5 years of rainfall intensity fields from the NIMROD European radar network, which covers north-western Europe. The investigation of the 183-WSL retrieval performance is based on a twofold approach: 1) the dichotomous statistic is used to evaluate the capabilities of the method to identify rain and no-rain clouds; 2) the accuracy statistic is applied to quantify the errors in the estimation of rain rates. The results reveal that the 183-WSL technique shows good skill in the detection of rain/no-rain areas and in the quantification of rain rate intensities. The categorical analysis shows annual values of the POD, FAR and HK indices varying in the ranges 0.80-0.82, 0.33-0.36 and 0.39-0.46, respectively. The RMSE value is 2.8 millimeters per hour for the whole period, despite an overestimation in the retrieved rain rates. Of note is the distribution of the 183-WSL monthly mean rain rate with respect to radar: the seasonal fluctuations of the average rainfall measured by radar are reproduced by the 183-WSL. However, the retrieval method appears to suffer under winter seasonal conditions, especially when the soil is partially frozen and the surface emissivity drastically changes. This fact is verified by observing the discrepancy distribution diagrams, where the 183-WSL
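
    The dichotomous statistics quoted above (POD, FAR and the Hanssen-Kuipers score) come from a 2x2 rain/no-rain contingency table. A worked example with hypothetical counts:

      # Hypothetical contingency counts (satellite vs radar), not the NIMROD results.
      hits, false_alarms, misses, correct_negatives = 820, 410, 190, 3580

      pod = hits / (hits + misses)                            # probability of detection
      far = false_alarms / (hits + false_alarms)              # false alarm ratio
      pofd = false_alarms / (false_alarms + correct_negatives)
      hk = pod - pofd                                         # Hanssen-Kuipers score

      print(f"POD={pod:.2f}  FAR={far:.2f}  HK={hk:.2f}")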

  11. Flexible job-shop scheduling based on genetic algorithm and simulation validation

    Directory of Open Access Journals (Sweden)

    Zhou Erming

    2017-01-01

    This paper selects the flexible job-shop scheduling problem as the research object and constructs a mathematical model aimed at minimizing the maximum makespan. Taking the transmission reverse-gear production line of a transmission corporation as an example, a genetic algorithm is applied to the flexible job-shop scheduling problem to obtain specific optimal scheduling results with MATLAB. DELMIA/QUEST, based on 3D discrete-event simulation, is applied to construct a physical model of the production workshop. On the basis of the optimal scheduling results, the logical links of the physical model of the production workshop are established and the appropriate process parameters are imported to run a virtual simulation of the production workshop. Finally, analysis of the simulation results shows that the scheduling results are effective and reasonable.
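
    The GA approach above can be sketched generically: a chromosome encodes machine assignments (full formulations also encode operation sequencing), a decoder turns it into a feasible schedule, and an evolutionary loop minimizes the makespan. The sketch below is a heavily simplified toy version, machine assignment only with a fixed dispatching order and a selection-plus-mutation loop; it is not the paper's MATLAB model.

      import random

      # jobs[j][k] = {machine: processing time} for operation k of job j (toy instance)
      jobs = [
          [{0: 3, 1: 5}, {1: 2, 2: 4}],
          [{0: 4, 2: 3}, {0: 2, 1: 3}],
          [{1: 4, 2: 2}, {0: 3, 2: 3}],
      ]
      ops = [(j, k) for j, job in enumerate(jobs) for k in range(len(job))]

      def makespan(assignment):
          # dispatch operations in fixed job-major order; respects precedence and
          # machine capacity, so the resulting schedule is feasible (not necessarily optimal)
          machine_ready, job_ready = {}, {}
          for (j, k), m in zip(ops, assignment):
              start = max(job_ready.get(j, 0), machine_ready.get(m, 0))
              end = start + jobs[j][k][m]
              job_ready[j], machine_ready[m] = end, end
          return max(machine_ready.values())

      def random_individual():
          return [random.choice(list(jobs[j][k])) for j, k in ops]

      def mutate(ind):
          child = ind[:]
          i = random.randrange(len(ops))
          j, k = ops[i]
          child[i] = random.choice(list(jobs[j][k]))
          return child

      population = [random_individual() for _ in range(20)]
      for _ in range(100):
          population.sort(key=makespan)          # selection: keep the fittest half
          elite = population[:10]
          population = elite + [mutate(random.choice(elite)) for _ in range(10)]

      print("best makespan:", makespan(min(population, key=makespan)))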

  12. Development and Validation of a Spike Detection and Classification Algorithm Aimed at Implementation on Hardware Devices

    Directory of Open Access Journals (Sweden)

    E. Biffi

    2010-01-01

    Neurons cultured in vitro on MicroElectrode Array (MEA) devices connect to each other, forming a network. To study electrophysiological activity and long-term plasticity effects, long-period recordings and spike-sorting methods are needed. Therefore, on-line and real-time analysis, optimization of memory use and improvement of the data transmission rate become necessary. We developed an algorithm for amplitude-threshold spike detection, whose performance was verified with (a) statistical analysis on both simulated and real signals and (b) Big O notation. Moreover, we developed a PCA-based hierarchical classifier, evaluated on simulated and real signals. Finally, we proposed a spike-detection hardware design on FPGA, whose feasibility was verified in terms of the number of CLBs, memory occupation and timing requirements; once realized, it will be able to execute on-line detection and real-time waveform analysis, reducing data storage problems.
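
    Amplitude-threshold detection of the kind described above is straightforward to sketch: estimate the noise level, set a multiple of it as the threshold, and enforce a refractory period between detections. The signal, threshold factor and refractory window below are illustrative assumptions, not the paper's FPGA design.

      import numpy as np

      fs = 25_000                                    # sampling rate, Hz (assumed)
      rng = np.random.default_rng(0)
      signal = rng.normal(0.0, 1.0, fs)              # 1 s of synthetic noise
      signal[[5_000, 12_000, 18_000]] -= 8.0         # three injected "spikes"

      threshold = -4.5 * np.median(np.abs(signal)) / 0.6745   # robust noise estimate
      refractory = int(0.001 * fs)                             # 1 ms dead time

      spike_samples, last = [], -refractory
      for i, v in enumerate(signal):
          if v < threshold and i - last >= refractory:
              spike_samples.append(i)
              last = i
      print("detected spike samples:", spike_samples)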

  13. Creating and validating an algorithm to measure AIDS mortality in the adult population using verbal autopsy.

    Directory of Open Access Journals (Sweden)

    Ben A Lopman

    2006-08-01

    Vital registration and cause-of-death reporting are incomplete in the countries in which the HIV epidemic is most severe. A reliable tool that is independent of HIV status is needed for measuring the frequency of AIDS deaths and, ultimately, the impact of antiretroviral therapy on mortality. A verbal autopsy questionnaire was administered to caregivers of 381 adults of known HIV status who died between 1998 and 2003 in Manicaland, eastern Zimbabwe. Individuals who were HIV positive and did not die in an accident or during childbirth (74%; n = 282) were considered to have died of AIDS in the gold standard. Verbal autopsies were randomly allocated to a training dataset (n = 279) to generate classification criteria or to a test dataset (n = 102) to verify the criteria. A rule-based algorithm created to minimise false positives had a specificity of 66% and a sensitivity of 76%. Eight predictors (weight loss, wasting, jaundice, herpes zoster, presence of abscesses or sores, oral candidiasis, acute respiratory tract infections, and vaginal tumours) were included in the algorithm. In the test dataset of verbal autopsies, 69% of deaths were correctly classified as AIDS/non-AIDS, and it was not necessary to invoke a differential diagnosis of tuberculosis. Presence of any one of these criteria gave a post-test probability of AIDS death of 0.84. Analysis of verbal autopsy data in this rural Zimbabwean population revealed a distinct pattern of signs and symptoms associated with AIDS mortality. Using these signs and symptoms, demographic surveillance data on AIDS deaths may allow for the estimation of AIDS mortality and even HIV prevalence.
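
    The published rule reduces to a simple any-of-eight check on reported signs and symptoms. A minimal sketch of that classification step (the example records are invented; sensitivity and specificity would still have to be re-estimated against a gold standard, as done above):

      PREDICTORS = {
          "weight_loss", "wasting", "jaundice", "herpes_zoster",
          "abscesses_or_sores", "oral_candidiasis",
          "acute_respiratory_tract_infection", "vaginal_tumour",
      }

      def classify_aids_death(reported_symptoms):
          # death is coded as AIDS-related if any predictor is present
          return bool(PREDICTORS & set(reported_symptoms))

      print(classify_aids_death({"weight_loss", "fever"}))    # True
      print(classify_aids_death({"fever", "headache"}))       # False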

  14. Development and Validation of the Pediatric Medical Complexity Algorithm (PMCA) Version 3.0.

    Science.gov (United States)

    Simon, Tamara D; Haaland, Wren; Hawley, Katherine; Lambka, Karen; Mangione-Smith, Rita

    2018-02-26

    To modify the Pediatric Medical Complexity Algorithm (PMCA) to include both International Classification of Diseases, Ninth and Tenth Revisions, Clinical Modification (ICD-9/10-CM) codes for classifying children with chronic disease (CD) by level of medical complexity and to assess the sensitivity and specificity of the new PMCA version 3.0 for correctly identifying level of medical complexity. To create version 3.0, PMCA version 2.0 was modified to include ICD-10-CM codes. We applied PMCA version 3.0 to Seattle Children's Hospital data for children with ≥1 emergency department (ED), day surgery, and/or inpatient encounter from January 1, 2016, to June 30, 2017. Starting with the encounter date, up to 3 years of retrospective discharge data were used to classify children as having complex chronic disease (C-CD), noncomplex chronic disease (NC-CD), or no CD. We then selected a random sample of 300 children (100 per CD group). Blinded medical record review was conducted to ascertain the levels of medical complexity for these 300 children. The sensitivity and specificity of PMCA version 3.0 were assessed. PMCA version 3.0 identified children with C-CD with 86% sensitivity and 86% specificity, children with NC-CD with 65% sensitivity and 84% specificity, and children without CD with 77% sensitivity and 93% specificity. PMCA version 3.0 is an updated publicly available algorithm that identifies children with C-CD, who have accessed tertiary hospital emergency department, day surgery, or inpatient care, with very good sensitivity and specificity when applied to hospital discharge data, and with performance comparable to earlier versions of PMCA. Copyright © 2018 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

  15. Algorithm Validation of the Current Profile Reconstruction of EAST Based on Polarimeter/Interferometer

    International Nuclear Information System (INIS)

    Qian Jinping; Ren Qilong; Wan Baonian; Liu Haiqin; Zeng Long; Luo Zhengping; Chen Dalong; Shi Tonghui; Sun Youwen; Shen Biao; Xiao Bingjia; Lao, L. L.; Hanada, K.

    2015-01-01

    The method of plasma current profile reconstruction using the polarimeter/interferometer (POINT) data from a simulated equilibrium is explored and validated. It is shown that the safety factor (q) profile can be generally reconstructed from the external magnetic and POINT data. The reconstructed q profile is found to reasonably agree with the initial equilibria. Comparisons of reconstructed q and density profiles using the magnetic data and the POINT data with 3%, 5% and 10% random errors are investigated. The result shows that the POINT data could be used for a reasonably accurate determination of the q profile. (fusion engineering)

  16. A uniformly valid approximation algorithm for nonlinear ordinary singular perturbation problems with boundary layer solutions.

    Science.gov (United States)

    Cengizci, Süleyman; Atay, Mehmet Tarık; Eryılmaz, Aytekin

    2016-01-01

    This paper is concerned with two-point boundary value problems for singularly perturbed nonlinear ordinary differential equations. The case when the solution only has one boundary layer is examined. An efficient method so called Successive Complementary Expansion Method (SCEM) is used to obtain uniformly valid approximations to this kind of solutions. Four test problems are considered to check the efficiency and accuracy of the proposed method. The numerical results are found in good agreement with exact and existing solutions in literature. The results confirm that SCEM has a superiority over other existing methods in terms of easy-applicability and effectiveness.

  17. Validating the Western Trauma Association algorithm for managing patients with anterior abdominal stab wounds: a Western Trauma Association multicenter trial.

    Science.gov (United States)

    Biffl, Walter L; Kaups, Krista L; Pham, Tam N; Rowell, Susan E; Jurkovich, Gregory J; Burlew, Clay Cothren; Elterman, J; Moore, Ernest E

    2011-12-01

    condition; 17 (45%) of these patients had a NONTHER LAP. Eighteen (23%) patients were D/C'ed from the emergency department. The LOS was no different among patients who had immediate or delayed LAP. Mean LOS after NONTHER LAP was 3.6 days ± 0.8 days. The WTA proposed algorithm is designed for cost-effectiveness. Serial clinical assessments can be performed without the added expense of CT, DPL, or laparoscopy. Patients requiring LAP generally manifest early in their course, and there does not appear to be any morbidity related to a delay to OR. These data validate this approach and should be confirmed in a larger number of patients to more convincingly evaluate the algorithm's safety and cost-effectiveness compared with other approaches.

  18. A 30+ Year AVHRR LAI and FAPAR Climate Data Record: Algorithm Description, Validation, and Case Study

    Science.gov (United States)

    Claverie, Martin; Matthews, Jessica L.; Vermote, Eric F.; Justice, Christopher O.

    2016-01-01

    In land surface models, which are used to evaluate the role of vegetation in the context of global climate change and variability, LAI and FAPAR play a key role, specifically with respect to the carbon and water cycles. The AVHRR-based LAI/FAPAR dataset offers daily temporal resolution, an improvement over previous products. This climate data record is based on a carefully calibrated and corrected land surface reflectance dataset to provide a high-quality, consistent time series suitable for climate studies. It spans from mid-1981 to the present. Further, this operational dataset is available in near real-time, allowing use for monitoring purposes. The algorithm relies on artificial neural networks calibrated using the MODIS LAI/FAPAR dataset. Evaluation based on cross-comparison with MODIS products and in situ data shows the dataset is consistent and reliable with overall uncertainties of 1.03 and 0.15 for LAI and FAPAR, respectively. However, a clear saturation effect is observed in the broadleaf forest biomes with high LAI (greater than 4.5) and FAPAR (greater than 0.8) values.

  19. Towards adaptive radiotherapy for head and neck patients: validation of an in-house deformable registration algorithm

    Science.gov (United States)

    Veiga, C.; McClelland, J.; Moinuddin, S.; Ricketts, K.; Modat, M.; Ourselin, S.; D'Souza, D.; Royle, G.

    2014-03-01

    The purpose of this work is to validate an in-house deformable image registration (DIR) algorithm for adaptive radiotherapy for head and neck patients. We aim to use the registrations to estimate the "dose of the day" and assess the need to replan. NiftyReg is an open-source implementation of the B-splines deformable registration algorithm, developed in our institution. We registered a planning CT to a CBCT acquired midway through treatment for 5 HN patients that required replanning. We investigated 16 different parameter settings that previously showed promising results. To assess the registrations, structures delineated in the CT were warped and compared with contours manually drawn by the same clinical expert on the CBCT. This structure set contained vertebral bodies and soft tissue. Dice similarity coefficient (DSC), overlap index (OI), centroid position and distance between structures' surfaces were calculated for every registration, and a set of parameters that produces good results for all datasets was found. We achieve a median value of 0.845 in DSC, 0.889 in OI, error smaller than 2 mm in centroid position and over 90% of the warped surface pixels are distanced less than 2 mm of the manually drawn ones. By using appropriate DIR parameters, we are able to register the planning geometry (pCT) to the daily geometry (CBCT).
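
    Two of the overlap measures used in this validation, the Dice similarity coefficient and an overlap index, are simple functions of binary masks. A generic sketch on synthetic 2D masks (the OI definition here, the fraction of the manual contour covered by the warped one, is an assumption):

      import numpy as np

      manual = np.zeros((64, 64), dtype=bool); manual[10:40, 10:40] = True
      warped = np.zeros((64, 64), dtype=bool); warped[12:42, 12:42] = True

      intersection = np.logical_and(manual, warped).sum()
      dsc = 2.0 * intersection / (manual.sum() + warped.sum())
      oi = intersection / manual.sum()    # assumed OI: fraction of manual contour covered

      print(f"DSC = {dsc:.3f}, OI = {oi:.3f}")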

  20. A calibration system for measuring 3D ground truth for validation and error analysis of robot vision algorithms

    Science.gov (United States)

    Stolkin, R.; Greig, A.; Gilby, J.

    2006-10-01

    An important task in robot vision is that of determining the position, orientation and trajectory of a moving camera relative to an observed object or scene. Many such visual tracking algorithms have been proposed in the computer vision, artificial intelligence and robotics literature over the past 30 years. However, it is seldom possible to explicitly measure the accuracy of these algorithms, since the ground-truth camera positions and orientations at each frame in a video sequence are not available for comparison with the outputs of the proposed vision systems. A method is presented for generating real visual test data with complete underlying ground truth. The method enables the production of long video sequences, filmed along complicated six-degree-of-freedom trajectories, featuring a variety of objects and scenes, for which complete ground-truth data are known including the camera position and orientation at every image frame, intrinsic camera calibration data, a lens distortion model and models of the viewed objects. This work encounters a fundamental measurement problem—how to evaluate the accuracy of measured ground truth data, which is itself intended for validation of other estimated data. Several approaches for reasoning about these accuracies are described.

  1. Validation of new satellite aerosol optical depth retrieval algorithm using Raman lidar observations at radiative transfer laboratory in Warsaw

    Science.gov (United States)

    Zawadzka, Olga; Stachlewska, Iwona S.; Markowicz, Krzysztof M.; Nemuc, Anca; Stebel, Kerstin

    2018-04-01

    During an exceptionally warm September 2016, unique, stable weather conditions over Poland allowed for extensive testing of the new algorithm developed to improve the Meteosat Second Generation (MSG) Spinning Enhanced Visible and Infrared Imager (SEVIRI) aerosol optical depth (AOD) retrieval. The development was conducted in the frame of the ESA-ESRIN SAMIRA project. The new AOD algorithm aims at providing aerosol optical depth maps over the territory of Poland with a high temporal resolution of 15 minutes. It was tested on the dataset obtained between 11 and 16 September 2016, during which a day with a relatively clean atmospheric background, related to an Arctic air-mass inflow, was surrounded by a few days with markedly increased aerosol load of different origin. On the clean reference day, the AOD forecast available online via the Copernicus Atmosphere Monitoring Service (CAMS) was used to estimate the surface reflectance. The obtained AOD maps were validated against AODs available within the Poland-AOD and AERONET networks, and with AOD values obtained from the PollyXT-UW lidar of the University of Warsaw (UW).

  2. The Development of Several Electromagnetic Monitoring Strategies and Algorithms for Validating Pre-Earthquake Electromagnetic Signals

    Science.gov (United States)

    Bleier, T. E.; Dunson, J. C.; Roth, S.; Mueller, S.; Lindholm, C.; Heraud, J. A.

    2012-12-01

    QuakeFinder, a private research group in California, reports on the development of a 100+ station network consisting of 3-axis induction magnetometers, and air conductivity sensors to collect and characterize pre-seismic electromagnetic (EM) signals. These signals are combined with daily Infra Red signals collected from the GOES weather satellite infrared (IR) instrument to compare and correlate with the ground EM signals, both from actual earthquakes and boulder stressing experiments. This presentation describes the efforts QuakeFinder has undertaken to automatically detect these pulse patterns using their historical data as a reference, and to develop other discriminative algorithms that can be used with air conductivity sensors, and IR instruments from the GOES satellites. The overall big picture results of the QuakeFinder experiment are presented. In 2007, QuakeFinder discovered the occurrence of strong uni-polar pulses in their magnetometer coil data that increased in tempo dramatically prior to the M5.1 earthquake at Alum Rock, California. Suggestions that these pulses might have been lightning or power-line arcing did not fit with the data actually recorded as was reported in Bleier [2009]. Then a second earthquake occurred near the same site on January 7, 2010 as was reported in Dunson [2011], and the pattern of pulse count increases before the earthquake occurred similarly to the 2007 event. There were fewer pulses, and the magnitude of them was decreased, both consistent with the fact that the earthquake was smaller (M4.0 vs M5.4) and farther away (7Km vs 2km). At the same time similar effects were observed at the QuakeFinder Tacna, Peru site before the May 5th, 2010 M6.2 earthquake and a cluster of several M4-5 earthquakes.

  3. SU-E-T-347: Validation of the Condensed History Algorithm of Geant4 Using the Fano Test

    International Nuclear Information System (INIS)

    Lee, H; Mathis, M; Sawakuchi, G

    2014-01-01

    Purpose: To validate the condensed history algorithm and physics of the Geant4 Monte Carlo toolkit for simulations of ionization chambers (ICs). This study is the first step to validate Geant4 for calculations of photon beam quality correction factors under the presence of a strong magnetic field for magnetic resonance guided linac system applications. Methods: The electron transport and boundary crossing algorithms of Geant4 version 9.6.p02 were tested under Fano conditions using the Geant4 example/application FanoCavity. User-defined parameters of the condensed history and multiple scattering algorithms were investigated under Fano test conditions for three scattering models (physics lists): G4UrbanMscModel95 (PhysListEmStandard-option3), G4GoudsmitSaundersonMsc (PhysListEmStandard-GS), and G4WentzelVIModel/G4CoulombScattering (PhysListEmStandard-WVI). Simulations were conducted using monoenergetic photon beams, ranging from 0.5 to 7 MeV and emphasizing energies from 0.8 to 3 MeV. Results: The GS and WVI physics lists provided consistent Fano test results (within ±0.5%) for maximum step sizes under 0.01 mm at 1.25 MeV, with improved performance at 3 MeV (within ±0.25%). The option3 physics list provided consistent Fano test results (within ±0.5%) for maximum step sizes above 1 mm. Optimal parameters for the option3 physics list were 10 km maximum step size with default values for other user-defined parameters: 0.2 dRoverRange, 0.01 mm final range, 0.04 range factor, 2.5 geometrical factor, and 1 skin. Simulations using the option3 physics list were ∼70 – 100 times faster compared to GS and WVI under optimal parameters. Conclusion: This work indicated that the option3 physics list passes the Fano test within ±0.5% when using a maximum step size of 10 km for energies suitable for IC calculations in a 6 MV spectrum without extensive computational times. Optimal user-defined parameters using the option3 physics list will be used in future IC simulations to

  4. Public health implications of using various case definitions in The Netherlands during the worldwide SARS outbreak.

    NARCIS (Netherlands)

    Timen, A.; Doornum, G.J.J. van; Schutten, M.; Conyn-van Spaendonck, M.A.; Meer, J.W.M. van der; Osterhaus, A.D.; Steenbergen, J.E. van

    2006-01-01

    This study analysed the consequences of deviation from the WHO case definition for the assessment of patients with suspected severe acute respiratory syndrome (SARS) in The Netherlands during 2003. Between 17 March and 7 July 2003, as a result of dilemmas in balancing sensitivity and specificity,

  5. Revision of clinical case definitions: influenza-like illness and severe acute respiratory infection.

    NARCIS (Netherlands)

    Fitzner, Julia; Qasmieh, Saba; Mounts, Anthony Wayne; Alexander, Burmaa; Besselaar, Terry; Briand, Sylvie; Brown, Caroline; Clark, Seth; Dueger, Erica; Gross, Diane; Hauge, Siri; Hirve, Siddhivinayak; Jorgensen, Pernille; Katz, Mark A; Mafi, Ali; Malik, Mamunur; McCarron, Margaret; Meerhoff, Tamara; Mori, Yuichiro; Mott, Joshua; Olivera, Maria Teresa da Costa; Ortiz, Justin R; Palekar, Rakhee; Rebelo-de-Andrade, Helena; Soetens, Loes; Yahaya, Ali Ahmed; Zhang, Wenqing; Vandemaele, Katelijn

    2018-01-01

    The formulation of accurate clinical case definitions is an integral part of an effective process of public health surveillance. Although such definitions should, ideally, be based on a standardized and fixed collection of defining criteria, they often require revision to reflect new knowledge of

  6. Course and Outcome of Bacteremia Due to Staphylococcus Aureus: Evaluation of Different Clinical Case Definitions

    NARCIS (Netherlands)

    S. Lautenschlager (Stephan); C. Herzog (Christian); W. Zimmerli (Werner)

    1993-01-01

    textabstractIn a retrospective survey of patients hospitalized in the University Hospital of Basel, Switzerland, the course and outcome of 281 cases of true bacteremia due to Staphylococcus aureus over a 7-year period were analyzed. The main purpose was to evaluate different case definitions. In 78%

  7. Signal validation and failure correction algorithms for PWR steam generator feedwater control

    International Nuclear Information System (INIS)

    Nasrallah, C.N.; Graham, K.F.

    1986-01-01

    A critical contributor to the reliability of a nuclear power plant is the reliability of the control systems which maintain plant operating parameters within desired limits. The most difficult system to control in a PWR nuclear power plant, and the one which causes the most reactor trips, is the control of feedwater flow to the steam generators. The level in the steam generator must be held within relatively narrow limits, with reactor trips set for both too high and too low a level. The steam generator level is inherently unstable in that it is an open integrator of the feedwater flow/steam flow mismatch. The steam generator feedwater control system relies on sensed variables in order to generate the appropriate feedwater valve control signal. In current systems, each of these sensed variables comes from a single sensor, which may be a separate control sensor or one of the redundant protection sensors manually selected by the operator. If this single signal is false, either due to sensor malfunction or due to a test signal being substituted during periodic test and maintenance, the control system will generate a wrong control signal to the feedwater control valve. This will initiate a steam generator level upset. The solution to this problem is for the control system to sense a given variable with more than one redundant sensor. Normally there are three or four sensors for each variable monitored by the reactor protection system. The techniques discussed allow the control system to compare these redundant sensor signals and generate, for each measured variable, a validated signal that is insensitive to false signals.
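    The abstract does not give the exact validation logic, but the approach it describes, comparing three or four redundant sensors and producing a validated value that a single false signal cannot corrupt, can be sketched as a simple median-based vote. This is an illustrative sketch, not the algorithm from the paper; the deviation limit is a placeholder.

```python
import statistics

def validate_signal(readings, max_deviation):
    """Return a validated value from redundant sensor readings, plus the
    readings flagged as suspect. Readings deviating from the median by
    more than `max_deviation` are excluded from the validated average.
    Illustrative sketch only, not the algorithm from the paper."""
    median = statistics.median(readings)
    good = [r for r in readings if abs(r - median) <= max_deviation]
    suspect = [r for r in readings if abs(r - median) > max_deviation]
    return sum(good) / len(good), suspect

# Three redundant steam generator level sensors; one has failed high.
validated, rejected = validate_signal([51.2, 50.8, 87.5], max_deviation=2.0)
print(validated, rejected)  # ~51.0, [87.5]
```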

  8. Acute respiratory infection case definitions for young children: a systematic review of community-based epidemiologic studies in South Asia.

    Science.gov (United States)

    Roth, Daniel E; Gaffey, Michelle F; Smith-Romero, Evelyn; Fitzpatrick, Tiffany; Morris, Shaun K

    2015-12-01

    To explore the variability in childhood acute respiratory infection case definitions for research in low-income settings where there is limited access to laboratory or radiologic investigations. We conducted a systematic review of community-based, longitudinal studies in South Asia published from January 1990 to August 2013, in which childhood acute respiratory infection outcomes were reported. Case definitions were classified by their label (e.g. pneumonia, acute lower respiratory infection) and clinical content 'signatures' (array of clinical features that would be always present, conditionally present or always absent among cases). Case definition heterogeneity was primarily assessed by the number of unique case definitions overall and by label. We also compared case definition-specific acute respiratory infection incidence rates for studies reporting incidence rates for multiple case definitions. In 56 eligible studies, we found 124 acute respiratory infection case definitions. Of 90 case definitions for which clinical content was explicitly defined, 66 (73%) were unique. There was a high degree of content heterogeneity among case definitions with the same label, and some content signatures were assigned multiple labels. Within studies for which incidence rates were reported for multiple case definitions, variation in content was always associated with a change in incidence rate, even when the content differed by a single clinical feature. There has been a wide variability in case definition label and content combinations to define acute upper and lower respiratory infections in children in community-based studies in South Asia over the past two decades. These inconsistencies have important implications for the synthesis and translation of knowledge regarding the prevention and treatment of childhood acute respiratory infection. © 2015 John Wiley & Sons Ltd.

  9. Development and Validation of Case-Finding Algorithms for the Identification of Patients with ANCA-Associated Vasculitis in Large Healthcare Administrative Databases

    Science.gov (United States)

    Sreih, Antoine G.; Annapureddy, Narender; Springer, Jason; Casey, George; Byram, Kevin; Cruz, Andy; Estephan, Maya; Frangiosa, Vince; George, Michael D.; Liu, Mei; Parker, Adam; Sangani, Sapna; Sharim, Rebecca; Merkel, Peter A.

    2016-01-01

    Purpose: To develop and validate case-finding algorithms for granulomatosis with polyangiitis (Wegener’s, GPA), microscopic polyangiitis (MPA), and eosinophilic granulomatosis with polyangiitis (Churg-Strauss, EGPA). Methods: 250 patients per disease were randomly selected from 2 large healthcare systems using the International Classification of Diseases version 9 (ICD9) codes for GPA/EGPA (446.4) and MPA (446.0). 16 case-finding algorithms were constructed using a combination of ICD9 code, encounter type (inpatient or outpatient), physician specialty, use of immunosuppressive medications, and the anti-neutrophil cytoplasmic antibody (ANCA) type. Algorithms with the highest average positive predictive value (PPV) were validated in a third healthcare system. Results: An algorithm excluding patients with eosinophilia or asthma and including the encounter type and physician specialty had the highest PPV for GPA (92.4%). An algorithm including patients with eosinophilia and asthma and the physician specialty had the highest PPV for EGPA (100%). An algorithm including patients with one of the following diagnoses: alveolar hemorrhage, interstitial lung disease, glomerulonephritis, acute or chronic kidney disease, the encounter type, physician specialty, and immunosuppressive medications had the highest PPV for MPA (76.2%). When validated in a third healthcare system, these algorithms had high PPV (85.9% for GPA, 85.7% for EGPA, and 61.5% for MPA). Adding the ANCA type increased the PPV to 94.4%, 100%, and 81.2% for GPA, EGPA, and MPA respectively. Conclusion: Case-finding algorithms accurately identify patients with GPA, EGPA, and MPA in administrative databases. These algorithms can be used to assemble population-based cohorts and facilitate future research in epidemiology, drug safety, and comparative effectiveness. PMID:27804171
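    In spirit, each case-finding algorithm is a Boolean rule over coded encounter data, and its PPV is the fraction of flagged patients confirmed by chart review. The sketch below is a toy illustration with made-up records, field names, and rule logic (the study's databases and exact algorithms are not reproduced here).

```python
# Hypothetical encounter records; the field names and rule below are
# illustrative assumptions, not the study's databases or algorithms.
encounters = [
    {"icd9": "446.4", "specialty": "rheumatology", "anca": "PR3", "chart_gpa": True},
    {"icd9": "446.4", "specialty": "primary care", "anca": None,  "chart_gpa": False},
    {"icd9": "446.4", "specialty": "nephrology",   "anca": "MPO", "chart_gpa": True},
]

def flags_gpa(rec):
    """Toy analogue of a code + specialty + ANCA case-finding rule."""
    return (rec["icd9"] == "446.4"
            and rec["specialty"] in {"rheumatology", "nephrology"}
            and rec["anca"] is not None)

flagged = [r for r in encounters if flags_gpa(r)]
ppv = sum(r["chart_gpa"] for r in flagged) / len(flagged)
print(f"PPV = {ppv:.1%}")  # fraction of flagged patients confirmed on chart review
```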

  10. Natural history of benign prostatic hyperplasia: Appropriate case definition and estimation of its prevalence in the community

    NARCIS (Netherlands)

    J.L.H.R. Bosch (Ruud); W.C.J. Hop (Wim); W.J. Kirkels (Wim); F.H. Schröder (Fritz)

    1995-01-01

    textabstractThere is no consensus about a case definition of benign prostatic hyperplasia (BPH). In the present study, BPH prevalence rates were determined using various case definitions based on a combination of clinical parameters used to describe the properties of BPH: symptoms of prostatism,

  11. Operationalization and Validation of the Stopping Elderly Accidents, Deaths, and Injuries (STEADI) Fall Risk Algorithm in a Nationally Representative Sample

    Science.gov (United States)

    Lohman, Matthew C.; Crow, Rebecca S.; DiMilia, Peter R.; Nicklett, Emily J.; Bruce, Martha L.; Batsis, John A.

    2017-01-01

    Background: Preventing falls and fall-related injuries among older adults is a public health priority. The Stopping Elderly Accidents, Deaths, and Injuries (STEADI) tool was developed to promote fall risk screening and encourage coordination between clinical and community-based fall prevention resources; however, little is known about the tool’s predictive validity or adaptability to survey data. Methods: Data from five annual rounds (2011–2015) of the National Health and Aging Trends Study (NHATS), a representative cohort of adults age 65 and older in the US. Analytic sample respondents (n=7,392) were categorized at baseline as having low, moderate, or high fall risk according to the STEADI algorithm adapted for use with NHATS data. Logistic mixed-effects regression was used to estimate the association between baseline fall risk and subsequent falls and mortality. Analyses incorporated complex sampling and weighting elements to permit inferences at a national level. Results: Participants classified as having moderate and high fall risk had 2.62 (95% CI: 2.29, 2.99) and 4.76 (95% CI: 3.51, 6.47) times greater odds of falling during follow-up compared to those with low risk, respectively, controlling for sociodemographic and health-related risk factors for falls. High fall risk was also associated with greater likelihood of falling multiple times annually but not with greater risk of mortality. Conclusion: The adapted STEADI clinical fall risk screening tool is a valid measure for predicting future fall risk using survey cohort data. Further efforts to standardize screening for fall risk and to coordinate between clinical and community-based fall prevention initiatives are warranted. PMID:28947669

  12. Operationalisation and validation of the Stopping Elderly Accidents, Deaths, and Injuries (STEADI) fall risk algorithm in a nationally representative sample.

    Science.gov (United States)

    Lohman, Matthew C; Crow, Rebecca S; DiMilia, Peter R; Nicklett, Emily J; Bruce, Martha L; Batsis, John A

    2017-12-01

    Preventing falls and fall-related injuries among older adults is a public health priority. The Stopping Elderly Accidents, Deaths, and Injuries (STEADI) tool was developed to promote fall risk screening and encourage coordination between clinical and community-based fall prevention resources; however, little is known about the tool's predictive validity or adaptability to survey data. Data from five annual rounds (2011-2015) of the National Health and Aging Trends Study (NHATS), a representative cohort of adults age 65 years and older in the USA. Analytic sample respondents (n=7392) were categorised at baseline as having low, moderate or high fall risk according to the STEADI algorithm adapted for use with NHATS data. Logistic mixed-effects regression was used to estimate the association between baseline fall risk and subsequent falls and mortality. Analyses incorporated complex sampling and weighting elements to permit inferences at a national level. Participants classified as having moderate and high fall risk had 2.62 (95% CI 2.29 to 2.99) and 4.76 (95% CI 3.51 to 6.47) times greater odds of falling during follow-up compared with those with low risk, respectively, controlling for sociodemographic and health-related risk factors for falls. High fall risk was also associated with greater likelihood of falling multiple times annually but not with greater risk of mortality. The adapted STEADI clinical fall risk screening tool is a valid measure for predicting future fall risk using survey cohort data. Further efforts to standardise screening for fall risk and to coordinate between clinical and community-based fall prevention initiatives are warranted. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  13. Validation of Cloud Parameters Derived from Geostationary Satellites, AVHRR, MODIS, and VIIRS Using SatCORPS Algorithms

    Science.gov (United States)

    Minnis, P.; Sun-Mack, S.; Bedka, K. M.; Yost, C. R.; Trepte, Q. Z.; Smith, W. L., Jr.; Painemal, D.; Chen, Y.; Palikonda, R.; Dong, X.

    2016-01-01

    Validation is a key component of remote sensing that can take many different forms. The NASA LaRC Satellite ClOud and Radiative Property retrieval System (SatCORPS) is applied to many different imager datasets, including those from the geostationary satellites Meteosat, Himawari-8, INSAT-3D, GOES, and MTSAT, as well as from the low-Earth orbiting satellite imagers MODIS, AVHRR, and VIIRS. While each of these imagers has a similar set of channels with wavelengths near 0.65, 3.7, 11, and 12 micrometers, many differences among them can lead to discrepancies in the retrievals. These differences include spatial resolution, spectral response functions, viewing conditions, and calibrations, among others. Even when analyzed with nearly identical algorithms, it is necessary, because of those discrepancies, to validate the results from each imager separately in order to assess the uncertainties in the individual parameters. This paper presents comparisons of various SatCORPS-retrieved cloud parameters with independent measurements and retrievals from a variety of instruments. These include surface and space-based lidar and radar data from CALIPSO and CloudSat, respectively, to assess the cloud fraction, height, base, optical depth, and ice water path; satellite and surface microwave radiometers to evaluate cloud liquid water path; surface-based radiometers to evaluate optical depth and effective particle size; and airborne in-situ data to evaluate ice water content, effective particle size, and other parameters. The results of these comparisons are contrasted, and the factors influencing the differences are discussed.

  14. Derivation and Validation of a Biomarker-Based Clinical Algorithm to Rule Out Sepsis From Noninfectious Systemic Inflammatory Response Syndrome at Emergency Department Admission: A Multicenter Prospective Study.

    Science.gov (United States)

    Mearelli, Filippo; Fiotti, Nicola; Giansante, Carlo; Casarsa, Chiara; Orso, Daniele; De Helmersen, Marco; Altamura, Nicola; Ruscio, Maurizio; Castello, Luigi Mario; Colonetti, Efrem; Marino, Rossella; Barbati, Giulia; Bregnocchi, Andrea; Ronco, Claudio; Lupia, Enrico; Montrucchio, Giuseppe; Muiesan, Maria Lorenza; Di Somma, Salvatore; Avanzi, Gian Carlo; Biolo, Gianni

    2018-05-07

    To derive and validate a predictive algorithm integrating a nomogram-based prediction of the pretest probability of infection with a panel of serum biomarkers, which could robustly differentiate sepsis/septic shock from noninfectious systemic inflammatory response syndrome. Multicenter prospective study. At emergency department admission in five University hospitals. Nine-hundred forty-seven adults in the inception cohort and 185 adults in the validation cohort. Interventions: none. A nomogram, including age, Sequential Organ Failure Assessment score, recent antimicrobial therapy, hyperthermia, leukocytosis, and high C-reactive protein values, was built using data from 716 infected patients and 120 patients with noninfectious systemic inflammatory response syndrome to predict the pretest probability of infection. Then, the best combination of procalcitonin, soluble phospholipase A2 group IIA, presepsin, soluble interleukin-2 receptor α, and soluble triggering receptor expressed on myeloid cell-1 was applied to categorize patients as "likely" or "unlikely" to be infected. The predictive algorithm required only procalcitonin, backed up with soluble phospholipase A2 group IIA determined in 29% of the patients, to rule out sepsis/septic shock with a negative predictive value of 93%. In a validation cohort of 158 patients, the predictive algorithm reached a negative predictive value of 100%, requiring biomarker measurements in 18% of the population. We have developed and validated a high-performing, reproducible, and parsimonious algorithm to assist emergency department physicians in distinguishing sepsis/septic shock from noninfectious systemic inflammatory response syndrome.
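    Conceptually, the published algorithm is a two-stage rule: a nomogram-based pretest probability gates whether biomarkers are needed, and procalcitonin backed up by sPLA2-IIA is then used to rule sepsis out. The sketch below illustrates that structure only; all cutoff values are placeholder assumptions, not those derived in the study.

```python
def rule_out_sepsis(pretest_prob, pct, spla2=None,
                    low_prob=0.35, pct_cutoff=0.5, spla2_cutoff=10.0):
    """Two-step rule-out sketch. All cutoffs are placeholder assumptions,
    not the values derived in the study: a low nomogram-based pretest
    probability plus a low procalcitonin (PCT) rules sepsis out, with
    soluble phospholipase A2 group IIA (sPLA2-IIA) as the backup marker."""
    if pretest_prob >= low_prob:
        return "biomarker rule-out not applicable"
    if pct < pct_cutoff:
        return "sepsis unlikely"
    if spla2 is not None and spla2 < spla2_cutoff:
        return "sepsis unlikely"
    return "sepsis not ruled out"

print(rule_out_sepsis(pretest_prob=0.20, pct=0.3))            # sepsis unlikely
print(rule_out_sepsis(pretest_prob=0.20, pct=0.8, spla2=6.0)) # sepsis unlikely
```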

  15. ALDF Data Retrieval Algorithms for Validating the Optical Transient Detector (OTD) and the Lightning Imaging Sensor (LIS)

    Science.gov (United States)

    Koshak, W. J.; Blakeslee, R. J.; Bailey, J. C.

    1997-01-01

    A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures field strength, magnetic bearing, and arrival time of lightning radio emissions, and solutions for the plane (i.e., no Earth curvature) are provided that implement all of these measurements. The accuracy of the retrieval method is tested using computer-simulated data sets, and the relative influence of bearing and arrival time data on the outcome of the final solution is formally demonstrated. The algorithm is sufficiently accurate to validate NASA's Optical Transient Detector (OTD) and Lightning Imaging Sensor (LIS). We also introduce a quadratic planar solution that is useful when only three arrival time measurements are available. The algebra of the quadratic root results is examined in detail to clarify what portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. For arbitrary noncollinear network geometries and in the absence of measurement errors, it is shown that the two quadratic roots are equivalent (no source location ambiguity) on the outer sensor baselines. The accuracy of the quadratic planar method is tested with computer-generated data sets and the results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 degrees.
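    The retrieval described above locates a stroke on a plane from bearing and arrival-time measurements. A minimal way to reproduce the arrival-time part numerically is an iterative least-squares fit for (x, y, t0), as sketched below; this is an illustration, not the paper's closed-form linear or quadratic planar solutions.

```python
import numpy as np
from scipy.optimize import least_squares

C = 3.0e5  # propagation speed, km/s

def locate_stroke(stations_km, arrival_times_s, guess=(0.0, 0.0, 0.0)):
    """Planar time-of-arrival retrieval of (x, y, t0) by iterative least
    squares; an illustration only, not the paper's closed-form linear or
    quadratic planar solutions, and bearing data are ignored here."""
    stations = np.asarray(stations_km, dtype=float)
    times = np.asarray(arrival_times_s, dtype=float)

    def residuals(p):
        x, y, t0 = p
        dist = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
        return (t0 + dist / C) - times

    return least_squares(residuals, guess).x

# Synthetic check: source at (30, 40) km, t0 = 0, four corner stations.
src = np.array([30.0, 40.0])
stations = [(0, 0), (100, 0), (0, 100), (100, 100)]
times = [np.hypot(*(src - s)) / C for s in stations]
print(locate_stroke(stations, times))  # approximately [30, 40, 0]
```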

  16. Algorithm for computing significance levels using the Kolmogorov-Smirnov statistic and valid for both large and small samples

    Energy Technology Data Exchange (ETDEWEB)

    Kurtz, S.E.; Fields, D.E.

    1983-10-01

    The KSTEST code presented here is designed to perform the Kolmogorov-Smirnov one-sample test. The code may be used as a stand-alone program or the principal subroutines may be excerpted and used to service other programs. The Kolmogorov-Smirnov one-sample test is a nonparametric goodness-of-fit test. A number of codes to perform this test are in existence, but they suffer from the inability to provide meaningful results in the case of small sample sizes (number of values less than or equal to 80). The KSTEST code overcomes this inadequacy by using two distinct algorithms. If the sample size is greater than 80, an asymptotic series developed by Smirnov is evaluated. If the sample size is 80 or less, a table of values generated by Birnbaum is referenced. Valid results can be obtained from KSTEST when the sample contains from 3 to 300 data points. The program was developed on a Digital Equipment Corporation PDP-10 computer using the FORTRAN-10 language. The code size is approximately 450 card images and the typical CPU execution time is 0.19 s.
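    The two-branch structure of KSTEST, an asymptotic Smirnov series for large samples and exact small-sample values otherwise, can be mirrored in a few lines of Python. The series form below uses a common large-sample approximation (with the usual finite-n correction factor) rather than KSTEST's exact coefficients, and the small-sample branch defers to SciPy instead of Birnbaum's table.

```python
import math
from scipy import stats

def ks_significance(d_stat, n, terms=100):
    """Two-branch significance level for the one-sample K-S statistic,
    mirroring KSTEST's structure: an asymptotic Kolmogorov/Smirnov series
    for large n (with a common finite-n correction), and an exact
    small-sample computation (SciPy's kstwo, standing in for Birnbaum's
    table) otherwise. Not the KSTEST code itself."""
    if n > 80:
        lam = (math.sqrt(n) + 0.12 + 0.11 / math.sqrt(n)) * d_stat
        return 2.0 * sum((-1.0) ** (k - 1) * math.exp(-2.0 * k * k * lam * lam)
                         for k in range(1, terms + 1))
    return stats.kstwo.sf(d_stat, n)

# Example: test 200 standard-normal samples against N(0, 1).
sample = stats.norm.rvs(size=200, random_state=1)
d, p_exact = stats.kstest(sample, "norm")
print(d, p_exact, ks_significance(d, 200))  # the two p-values should agree closely
```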

  17. Validation of an Arab name algorithm in the determination of Arab ancestry for use in health research.

    Science.gov (United States)

    El-Sayed, Abdulrahman M; Lauderdale, Diane S; Galea, Sandro

    2010-12-01

    Data about Arab-Americans, a growing ethnic minority, are not routinely collected in vital statistics, registry, or administrative data in the USA. The difficulty in identifying Arab-Americans using publicly available data sources is a barrier to health research about this group. Here, we validate an empirically based probabilistic Arab name algorithm (ANA) for identifying Arab-Americans in health research. We used data from all Michigan birth certificates between 2000 and 2005. Fathers' surnames and mothers' maiden names were coded as Arab or non-Arab according to the ANA. We calculated sensitivity, specificity, and positive (PPV) and negative predictive values (NPV) of Arab ethnicity inferred using the ANA as compared to self-reported Arab ancestry. Statewide, the ANA had a specificity of 98.9%, a sensitivity of 50.3%, a PPV of 57.0%, and an NPV of 98.6%. Both the false-positive and false-negative rates were higher among men than among women. As the concentration of Arab-Americans in a study locality increased, the ANA false-positive rate increased and false-negative rate decreased. The ANA is highly specific but only moderately sensitive as a means of detecting Arab ancestry. Future research should compare health characteristics among Arab-American populations defined by Arab ancestry and those defined by the ANA.
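    The reported sensitivity, specificity, PPV and NPV are standard two-by-two quantities. For readers reproducing the arithmetic, a minimal helper is shown below; the counts in the example are made up, not the Michigan birth-certificate data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 validation metrics for an algorithm vs. a gold standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Made-up counts for illustration (not the Michigan birth-certificate data).
print(diagnostic_metrics(tp=503, fp=380, fn=497, tn=33_620))
```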

  18. Validation of an Arab names algorithm in the determination of Arab ancestry for use in health research

    Science.gov (United States)

    El-Sayed, Abdulrahman M.; Lauderdale, Diane S.; Galea, Sandro

    2010-01-01

    Objective: Data about Arab-Americans, a growing ethnic minority, are not routinely collected in vital statistics, registry, or administrative data in the US. The difficulty in identifying Arab-Americans using publicly available data sources is a barrier to health research about this group. Here, we validate an empirically-based, probabilistic Arab name algorithm (ANA) for identifying Arab-Americans in health research. Design: We used data from all Michigan birth certificates between 2000 and 2005. Fathers’ surnames and mothers’ maiden names were coded as Arab or non-Arab according to the ANA. We calculated sensitivity, specificity, and positive (PPV) and negative predictive values (NPV) of Arab ethnicity inferred using the ANA as compared to self-reported Arab ancestry. Results: State-wide, the ANA had a specificity of 98.9%, a sensitivity of 50.3%, a PPV of 57.0%, and an NPV of 98.6%. Both the false-positive and false-negative rates were higher among men than among women. As the concentration of Arab-Americans in a study locality increased, the ANA false-positive rate increased and false-negative rate decreased. Conclusion: The ANA is highly specific but only moderately sensitive as a means of detecting Arab ancestry. Future research should compare health characteristics among Arab-American populations defined by Arab ancestry and those defined by the ANA. PMID:20845117

  19. Accuracy of Zika virus disease case definition during simultaneous Dengue and Chikungunya epidemics.

    Science.gov (United States)

    Braga, José Ueleres; Bressan, Clarisse; Dalvi, Ana Paula Razal; Calvet, Guilherme Amaral; Daumas, Regina Paiva; Rodrigues, Nadia; Wakimoto, Mayumi; Nogueira, Rita Maria Ribeiro; Nielsen-Saines, Karin; Brito, Carlos; Bispo de Filippis, Ana Maria; Brasil, Patrícia

    2017-01-01

    Zika is a new disease in the American continent and its surveillance is of utmost importance, especially because of its ability to cause neurological manifestations as Guillain-Barré syndrome and serious congenital malformations through vertical transmission. The detection of suspected cases by the surveillance system depends on the case definition adopted. As the laboratory diagnosis of Zika infection still relies on the use of expensive and complex molecular techniques with low sensitivity due to a narrow window of detection, most suspected cases are not confirmed by laboratory tests, mainly reserved for pregnant women and newborns. In this context, an accurate definition of a suspected Zika case is crucial in order for the surveillance system to gauge the magnitude of an epidemic. We evaluated the accuracy of various Zika case definitions in a scenario where Dengue and Chikungunya viruses co-circulate. Signs and symptoms that best discriminated PCR confirmed Zika from other laboratory confirmed febrile or exanthematic diseases were identified to propose and test predictive models for Zika infection based on these clinical features. Our derived score prediction model had the best performance because it demonstrated the highest sensitivity and specificity, 86·6% and 78·3%, respectively. This Zika case definition also had the highest values for auROC (0·903) and R2 (0·417), and the lowest Brier score 0·096. In areas where multiple arboviruses circulate, the presence of rash with pruritus or conjunctival hyperemia, without any other general clinical manifestations such as fever, petechia or anorexia is the best Zika case definition.

  20. Evaluation of the WHO clinical case definition of AIDS among children in India.

    Science.gov (United States)

    Gurprit, Grover; Tripti, Pensi; Gadpayle, A K; Tanushree, Banerjee

    2008-03-01

    The need for a clinical case definition (CCD) of Acquired Immunodeficiency Syndrome (AIDS) was felt by public health agencies to monitor diseases resulting from human immunodeficiency virus (HIV) infection. To test the statistical significance of the existing World Health Organization (WHO) CCD for the diagnosis of AIDS in areas of India where diagnostic resources are limited, a prospective study was conducted in the Paediatrics department at Dr. Ram Manohar Lohia Hospital, New Delhi. 360 cases between 18 months and 12 years of age satisfying the WHO case definition of AIDS were included in the study group. Informed consent was obtained from the parents. The serum of patients was subjected to ELISA to confirm the diagnosis of HIV infection. Our study detected an HIV prevalence of 16.66% (60 children) among children visiting the paediatric outpatient clinic. 20% of cases manifested 3 major and 2 minor signs. This definition had a sensitivity of 73.33%, specificity of 90.66%, positive predictive value (PPV) of 61.11% and negative predictive value (NPV) of 94.44%. On using stepwise logistic regression analysis, weight loss, chronic fever > 1 month and total lymphocyte count of less than 1500 cells/mm3 emerged as important predictors. Cases showing 2 major and 2 minor signs numbered 86 (23.89%), with a sensitivity and specificity of 86.66% and 88.66% respectively. Based on these findings, we propose a clinical case definition based on 13 clinical signs and symptoms for paediatric AIDS in India with better sensitivity and PPV than the WHO case definition but with almost similar specificity. Multicentric studies are therefore required to further refine these criteria in the Indian setting.

  1. Accuracy of Zika virus disease case definition during simultaneous Dengue and Chikungunya epidemics.

    Directory of Open Access Journals (Sweden)

    José Ueleres Braga

    Zika is a new disease in the American continent and its surveillance is of utmost importance, especially because of its ability to cause neurological manifestations such as Guillain-Barré syndrome and serious congenital malformations through vertical transmission. The detection of suspected cases by the surveillance system depends on the case definition adopted. As the laboratory diagnosis of Zika infection still relies on the use of expensive and complex molecular techniques with low sensitivity due to a narrow window of detection, most suspected cases are not confirmed by laboratory tests, which are mainly reserved for pregnant women and newborns. In this context, an accurate definition of a suspected Zika case is crucial in order for the surveillance system to gauge the magnitude of an epidemic. We evaluated the accuracy of various Zika case definitions in a scenario where Dengue and Chikungunya viruses co-circulate. Signs and symptoms that best discriminated PCR-confirmed Zika from other laboratory-confirmed febrile or exanthematic diseases were identified to propose and test predictive models for Zika infection based on these clinical features. Our derived score prediction model had the best performance because it demonstrated the highest sensitivity and specificity, 86·6% and 78·3%, respectively. This Zika case definition also had the highest values for auROC (0·903) and R2 (0·417), and the lowest Brier score (0·096). In areas where multiple arboviruses circulate, the presence of rash with pruritus or conjunctival hyperemia, without any other general clinical manifestations such as fever, petechia or anorexia, is the best Zika case definition.

  2. Enhancing case definitions for surveillance of human monkeypox in the Democratic Republic of Congo.

    Directory of Open Access Journals (Sweden)

    Lynda Osadebe

    2017-09-01

    Human monkeypox (MPX) occurs at appreciable rates in the Democratic Republic of Congo (DRC). Infection with varicella zoster virus (VZV) has a similar presentation to that of MPX, and in areas where MPX is endemic these two illnesses are commonly mistaken. This study evaluated the diagnostic utility of two surveillance case definitions for MPX and specific clinical characteristics associated with laboratory-confirmed MPX cases. Data from a cohort of suspect MPX cases (identified by surveillance over the course of a 42 month period during 2009-2014) from DRC were used; real-time PCR diagnostic test results were used to establish MPX and VZV diagnoses. A total of 333 laboratory-confirmed MPX cases, 383 laboratory-confirmed VZV cases, and 36 cases that were determined to not be either MPX or VZV were included in the analyses. Significant (p<0.05) differences between laboratory-confirmed MPX and VZV cases were noted for several signs/symptoms including key rash characteristics. Both surveillance case definitions had high sensitivity and low specificities for individuals that had suspected MPX virus infections. Using 12 signs/symptoms with high sensitivity and/or specificity values, a receiver operator characteristic analysis showed that models for MPX cases that had the presence of 'fever before rash' plus at least 7 or 8 of the 12 signs/symptoms demonstrated a more balanced performance between sensitivity and specificity. Laboratory-confirmed MPX and VZV cases presented with many of the same signs and symptoms, and the analysis here emphasized the utility of including 12 specific signs/symptoms when investigating MPX cases. In order to document and detect endemic human MPX cases, a surveillance case definition with more specificity is needed for accurate case detection. In the absence of a more specific case definition, continued emphasis on confirmatory laboratory-based diagnostics is warranted.
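    The best-performing models described above are simple counting rules: 'fever before rash' plus at least 7 (or 8) of 12 signs/symptoms. A toy sketch of evaluating such a rule against laboratory confirmation is shown below; the case records and sign vectors are fabricated placeholders, not study data.

```python
# Hypothetical rule in the spirit of the ROC analysis above: 'fever before
# rash' plus at least 7 of 12 signs/symptoms. Sign vectors are fabricated.
def meets_enhanced_definition(case, min_signs=7):
    return case["fever_before_rash"] and sum(case["signs"]) >= min_signs

cases = [
    {"fever_before_rash": True,  "signs": [1] * 9 + [0] * 3, "lab_mpx": True},
    {"fever_before_rash": True,  "signs": [1] * 5 + [0] * 7, "lab_mpx": False},
    {"fever_before_rash": False, "signs": [1] * 8 + [0] * 4, "lab_mpx": False},
]

flags = [meets_enhanced_definition(c) for c in cases]
sens = sum(f and c["lab_mpx"] for f, c in zip(flags, cases)) / sum(c["lab_mpx"] for c in cases)
spec = sum(not f and not c["lab_mpx"] for f, c in zip(flags, cases)) / sum(not c["lab_mpx"] for c in cases)
print(sens, spec)
```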

  3. Evaluation of the Components of the North Carolina Syndromic Surveillance System Heat Syndrome Case Definition.

    Science.gov (United States)

    Harduar Morano, Laurel; Waller, Anna E

    To improve heat-related illness surveillance, we evaluated and refined North Carolina's heat syndrome case definition. We analyzed North Carolina emergency department (ED) visits during 2012-2014. We evaluated the current heat syndrome case definition (ie, keywords in chief complaint/triage notes or International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM] codes) and additional heat-related inclusion and exclusion keywords. We calculated the positive predictive value and sensitivity of keyword-identified ED visits and manually reviewed ED visits to identify true positives and false positives. The current heat syndrome case definition identified 8928 ED visits; additional inclusion keywords identified another 598 ED visits. Of 4006 keyword-identified ED visits, 3216 (80.3%) were captured by 4 phrases: "heat ex" (n = 1674, 41.8%), "overheat" (n = 646, 16.1%), "too hot" (n = 594, 14.8%), and "heatstroke" (n = 302, 7.5%). Among the 267 ED visits identified by keyword only, a burn diagnosis or the following keywords resulted in a false-positive rate >95%: "burn," "grease," "liquid," "oil," "radiator," "antifreeze," "hot tub," "hot spring," and "sauna." After applying the revised inclusion and exclusion criteria, we identified 9132 heat-related ED visits: 2157 by keyword only, 5493 by ICD-9-CM code only, and 1482 by both (sensitivity = 27.0%, positive predictive value = 40.7%). Cases identified by keywords were strongly correlated with cases identified by ICD-9-CM codes (rho = .94). Refining the heat syndrome case definition through the use of additional inclusion and exclusion criteria substantially improved the accuracy of the surveillance system. Other jurisdictions may benefit from refining their heat syndrome case definition.
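    The keyword component of a heat syndrome definition is essentially a match over chief-complaint text with inclusion and exclusion term lists. The sketch below borrows the terms quoted in the abstract for illustration; the production North Carolina definition is more extensive.

```python
# Term lists quoted in the abstract, used here for illustration; the full
# North Carolina definition also includes ICD-9-CM codes and more keywords.
INCLUDE_TERMS = ["heat ex", "overheat", "too hot", "heatstroke"]
EXCLUDE_TERMS = ["burn", "grease", "liquid", "oil", "radiator",
                 "antifreeze", "hot tub", "hot spring", "sauna"]

def keyword_heat_syndrome(chief_complaint: str) -> bool:
    text = chief_complaint.lower()
    if any(term in text for term in EXCLUDE_TERMS):
        return False
    return any(term in text for term in INCLUDE_TERMS)

print(keyword_heat_syndrome("pt overheated while working outside"))       # True
print(keyword_heat_syndrome("spilled too hot coffee, burn to left hand"))  # False
```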

  4. Administrative Algorithms to identify Avascular necrosis of bone among patients undergoing upper or lower extremity magnetic resonance imaging: a validation study.

    Science.gov (United States)

    Barbhaiya, Medha; Dong, Yan; Sparks, Jeffrey A; Losina, Elena; Costenbader, Karen H; Katz, Jeffrey N

    2017-06-19

    Studies of the epidemiology and outcomes of avascular necrosis (AVN) require accurate case-finding methods. The aim of this study was to evaluate performance characteristics of a claims-based algorithm designed to identify AVN cases in administrative data. Using a centralized patient registry from a US academic medical center, we identified all adults aged ≥18 years who underwent magnetic resonance imaging (MRI) of an upper/lower extremity joint during the 1.5 year study period. A radiologist report confirming AVN on MRI served as the gold standard. We examined the sensitivity, specificity, positive predictive value (PPV) and positive likelihood ratio (LR + ) of four algorithms (A-D) using International Classification of Diseases, 9th edition (ICD-9) codes for AVN. The algorithms ranged from least stringent (Algorithm A, requiring ≥1 ICD-9 code for AVN [733.4X]) to most stringent (Algorithm D, requiring ≥3 ICD-9 codes, each at least 30 days apart). Among 8200 patients who underwent MRI, 83 (1.0% [95% CI 0.78-1.22]) had AVN by gold standard. Algorithm A yielded the highest sensitivity (81.9%, 95% CI 72.0-89.5), with PPV of 66.0% (95% CI 56.0-75.1). The PPV of algorithm D increased to 82.2% (95% CI 67.9-92.0), although sensitivity decreased to 44.6% (95% CI 33.7-55.9). All four algorithms had specificities >99%. An algorithm that uses a single billing code to screen for AVN among those who had MRI has the highest sensitivity and is best suited for studies in which further medical record review confirming AVN is feasible. Algorithms using multiple billing codes are recommended for use in administrative databases when further AVN validation is not feasible.
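    The most stringent rule (Algorithm D) is a temporal claims pattern: at least three AVN codes, each at least 30 days apart. One reasonable way to operationalise that is sketched below; interpreting "each at least 30 days apart" as consecutive qualifying claims is an assumption, and the study's exact implementation may differ.

```python
from datetime import date

def meets_algorithm_d(claim_dates, min_codes=3, min_gap_days=30):
    """Sketch of the most stringent rule above: >=3 AVN (ICD-9 733.4x)
    claims, each at least 30 days apart. Interpreting the spacing rule as
    consecutive qualifying claims separated by >=30 days is an assumption;
    the study's exact operationalisation may differ."""
    count, last_kept = 0, None
    for d in sorted(set(claim_dates)):
        if last_kept is None or (d - last_kept).days >= min_gap_days:
            count += 1
            last_kept = d
    return count >= min_codes

claims = [date(2015, 1, 5), date(2015, 2, 20), date(2015, 2, 25), date(2015, 4, 1)]
print(meets_algorithm_d(claims))  # True: Jan 5, Feb 20 and Apr 1 qualify
```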

  5. Puffy skin disease (PSD) in rainbow trout, Oncorhynchus mykiss (Walbaum): a case definition.

    Science.gov (United States)

    Maddocks, C E; Nolan, E T; Feist, S W; Crumlish, M; Richards, R H; Williams, C F

    2015-07-01

    Puffy skin disease (PSD) is a disease that causes skin pathology in rainbow trout, Oncorhynchus mykiss (Walbaum). Incidence of PSD in UK fish farms and fisheries has increased sharply in the last decade, with growing concern from both industry sectors. This paper provides the first comprehensive case definition of PSD, combining clinical and pathological observations of diseased rainbow trout from both fish farms and fisheries. The defining features of PSD, as summarized in the case definition, were focal lateral flank skin lesions that appeared as cutaneous swelling with pigment loss and petechiae. These were associated with lethargy, poor body condition, inappetance and low level mortality. Epidermal hyperplasia and spongiosis, oedema of the dermis stratum spongiosum and a mild diffuse inflammatory cellularity were typical in histopathology of skin. A specific pathogen or aetiology was not identified. Prevalence and severity of skin lesions was greatest during late summer and autumn, with the highest prevalence being 95%. Atypical lesions seen in winter and spring were suggestive of clinical resolution. PSD holds important implications for both trout aquaculture and still water trout fisheries. This case definition will aid future diagnosis, help avoid confusion with other skin conditions and promote prompt and consistent reporting. © 2014 John Wiley & Sons Ltd.

  6. Measuring elimination of podoconiosis, endemicity classifications, case definition and targets: an international Delphi exercise.

    Science.gov (United States)

    Deribe, Kebede; Wanji, Samuel; Shafi, Oumer; Muheki Tukahebwa, Edridah; Umulisa, Irenee; Davey, Gail

    2015-09-01

    Podoconiosis is one of the major causes of lymphoedema in the tropics. Nonetheless, currently there are no endemicity classifications or elimination targets to monitor the effects of interventions. This study aimed at establishing case definitions and indicators that can be used to assess endemicity, elimination and clinical outcomes of podoconiosis. This paper describes the result of a Delphi technique used among 28 experts. A questionnaire outlining possible case definitions, endemicity classifications, elimination targets and clinical outcomes was developed. The questionnaire was distributed to experts working on podoconiosis and other neglected tropical diseases in two rounds. The experts rated the importance of case definitions, endemic classifications, elimination targets and the clinical outcome measures. Median and mode were used to describe the central tendency of expert responses. The coefficient of variation was used to describe the dispersals of expert responses. Consensus on definitions and indicators for assessing endemicity, elimination and clinical outcomes of podoconiosis directed at policy makers and health workers was achieved following the two rounds of Delphi approach among the experts. Based on the two Delphi rounds we discuss potential indicators and endemicity classification of this disabling disease, and the ongoing challenges to its elimination in countries with the highest prevalence. Consensus will help to increase effectiveness of podoconiosis elimination efforts and ensure comparability of outcome data. © The Author 2015. Published by Oxford University Press on behalf of Royal Society of Tropical Medicine and Hygiene.

  7. Calibration and Validation Parameter of Hydrologic Model HEC-HMS using Particle Swarm Optimization Algorithms – Single Objective

    Directory of Open Access Journals (Sweden)

    R. Garmeh

    2016-02-01

    model that simulates both wet and dry weather behavior. Programming of HEC-HMS has been done in MATLAB, and techniques such as elite mutation and turbulence have been used to strengthen the algorithm and improve the results. The event-based HEC-HMS model simulates the precipitation-runoff process for each set of parameter values generated by PSO. Turbulence and elitism with mutation are also employed to deal with PSO premature convergence. The integrated PSO-HMS model is tested on the Kardeh dam basin located in the Khorasan Razavi province. Results and Discussion: Input parameters of hydrologic models are seldom known with certainty. Therefore, they are not capable of describing the exact hydrologic processes. Input data and structural uncertainties related to scale and approximations in system processes are different sources of uncertainty that make it difficult to model exact hydrologic phenomena. In automatic calibration, the parameter values depend on the objective function of the search or optimization algorithm. In characterizing a runoff hydrograph, the three characteristics of time-to-peak, peak discharge and total runoff volume are of the most importance. It is therefore important that simulated and observed hydrographs match as much as possible in terms of those characteristics. Calibration was carried out in single-objective cases. Model calibration in the single-objective approach was conducted separately for the NASH and RMSE objective functions. The results indicated that the model was calibrated to an acceptable level for the events. The calibration results were then evaluated by four different criteria. Finally, validation of the model parameters obtained from the calibration gave poor results, although, based on the calibration and verification of individual events, one event remains that suggests a possible parameter set. Conclusion: All events were evaluated by validations and the
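    Stripped of the HEC-HMS coupling, the calibration loop is a standard single-objective PSO: each particle is a parameter set, the objective is an error measure (e.g., RMSE) between simulated and observed hydrographs, and the swarm is updated from personal and global bests. The sketch below uses a trivial stand-in objective so it runs self-contained, and omits the elite-mutation and turbulence operators the study adds.

```python
import numpy as np

rng = np.random.default_rng(42)

def rmse(sim, obs):
    return float(np.sqrt(np.mean((np.asarray(sim) - np.asarray(obs)) ** 2)))

# Toy stand-in for the rainfall-runoff model: in the study each objective
# evaluation wraps an HEC-HMS run; a fixed target vector keeps this sketch
# self-contained and runnable.
observed = np.array([2.0, -1.5])
def objective(params):
    return rmse(params, observed)

def pso(fn, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([fn(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([fn(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())

print(pso(objective, bounds=[(-10, 10), (-10, 10)]))  # close to ([2.0, -1.5], 0.0)
```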

  8. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Science.gov (United States)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The development of the Space Launch System (SLS) launch vehicle requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The characteristics of these systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large complex systems engineering challenge being addressed in part by focusing on the specific subsystems handling of off-nominal mission and fault tolerance. Using traditional model based system and software engineering design principles from the Unified Modeling Language (UML), the Mission and Fault Management (M&FM) algorithms are crafted and vetted in specialized Integrated Development Teams composed of multiple development disciplines. NASA also has formed an M&FM team for addressing fault management early in the development lifecycle. This team has developed a dedicated Vehicle Management End-to-End Testbed (VMET) that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. The flexibility of VMET enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the algorithms utilizing actual subsystem models. The intent is to validate the algorithms and substantiate them with performance baselines for each of the vehicle subsystems in an independent platform exterior to flight software test processes. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test processes. Risk reduction is addressed by working with other organizations such as S

  9. Mapping the EORTC QLQ-C30 onto the EQ-5D-3L: assessing the external validity of existing mapping algorithms.

    Science.gov (United States)

    Doble, Brett; Lorgelly, Paula

    2016-04-01

    To determine the external validity of existing mapping algorithms for predicting EQ-5D-3L utility values from EORTC QLQ-C30 responses and to establish their generalizability in different types of cancer. A main analysis (pooled) sample of 3560 observations (1727 patients) and two disease severity patient samples (496 and 93 patients) with repeated observations over time from Cancer 2015 were used to validate the existing algorithms. Errors were calculated between observed and predicted EQ-5D-3L utility values using a single pooled sample and ten pooled tumour type-specific samples. Predictive accuracy was assessed using mean absolute error (MAE) and standardized root-mean-squared error (RMSE). The association between observed and predicted EQ-5D utility values and other covariates across the distribution was tested using quantile regression. Quality-adjusted life years (QALYs) were calculated using observed and predicted values to test responsiveness. Ten 'preferred' mapping algorithms were identified. Two algorithms estimated via response mapping and ordinary least-squares regression using dummy variables performed well on number of validation criteria, including accurate prediction of the best and worst QLQ-C30 health states, predicted values within the EQ-5D tariff range, relatively small MAEs and RMSEs, and minimal differences between estimated QALYs. Comparison of predictive accuracy across ten tumour type-specific samples highlighted that algorithms are relatively insensitive to grouping by tumour type and affected more by differences in disease severity. Two of the 'preferred' mapping algorithms suggest more accurate predictions, but limitations exist. We recommend extensive scenario analyses if mapped utilities are used in cost-utility analyses.

  10. Validation of Kalman Filter alignment algorithm with cosmic-ray data using a CMS silicon strip tracker endcap

    CERN Document Server

    Sprenger, D; Adolphi, R; Brauer, R; Feld, L; Klein, K; Ostaptchuk, A; Schael, S; Wittmer, B

    2010-01-01

    A Kalman Filter alignment algorithm has been applied to cosmic-ray data. We discuss the alignment algorithm and an experiment-independent implementation including outlier rejection and treatment of weakly determined parameters. Using this implementation, the algorithm has been applied to data recorded with one CMS silicon tracker endcap. Results are compared to both photogrammetry measurements and data obtained from a dedicated hardware alignment system, and good agreement is observed.
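    For readers unfamiliar with the filter itself, the core of any Kalman-filter-based procedure is the measurement update shown below; this is a generic linear-filter step for illustration, not the CMS alignment implementation, which adds outlier rejection and special treatment of weakly determined parameters.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Single Kalman filter measurement update. A generic linear-filter
    step for illustration; the alignment algorithm referenced above adds
    outlier rejection and handling of weakly determined parameters."""
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# One-parameter toy: estimate a constant misalignment from noisy residuals.
x, P = np.array([0.0]), np.array([[1.0]])
for z in (0.45, 0.52, 0.49):
    x, P = kalman_update(x, P, np.array([z]),
                         H=np.array([[1.0]]), R=np.array([[0.01]]))
print(x, P)  # state converges towards ~0.49 with shrinking covariance
```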

  11. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASAs Space Launch System

    Science.gov (United States)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The engineering development of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS) requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The nominal and off-nominal characteristics of SLS's elements and subsystems must be understood and matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through FSW certification are an important focus of SLS's development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. To test and validate these M&FM algorithms a dedicated test-bed was developed for full Vehicle Management End-to-End Testing (VMET). For addressing fault management (FM

  12. Natural history of benign prostatic hyperplasia: Appropriate case definition and estimation of its prevalence in the community

    OpenAIRE

    Bosch, Ruud; Hop, Wim; Kirkels, Wim; Schröder, Fritz

    1995-01-01

    textabstractThere is no consensus about a case definition of benign prostatic hyperplasia (BPH). In the present study, BPH prevalence rates were determined using various case definitions based on a combination of clinical parameters used to describe the properties of BPH: symptoms of prostatism, prostate volume increase, and bladder outflow obstruction. The aim of this study—in a community-based population of 502 men (55–74 years of age) without prostate cancer—was to determine the relative i...

  13. Diagnosis of measles by clinical case definition in dengue-endemic areas: implications for measles surveillance and control.

    OpenAIRE

    Dietz, V. J.; Nieburg, P.; Gubler, D. J.; Gomez, I.

    1992-01-01

    In many countries, measles surveillance relies heavily on the use of a standard clinical case definition; however, the clinical signs and symptoms of measles are similar to those of dengue. For example, during 1985, in Puerto Rico, 22 (23%) of 94 cases of illnesses with rashes that met the measles clinical case definition were serologically confirmed as measles, but 32 (34%) others were serologically confirmed as dengue. Retrospective analysis at the San Juan Laboratories of the Centers for D...

  14. Evaluation of an expanded case definition for vaccine-modified measles in a school outbreak in South Korea in 2010.

    Science.gov (United States)

    Choe, Young June; Hu, Jae Kyung; Song, Kyung Min; Cho, Heeyeon; Yoon, Hee Sook; Kim, Seung Tae; Lee, Han Jung; Kim, Kisoon; Bae, Geun-Ryang; Lee, Jong-Koo

    2012-01-01

    In this study, we have described the clinical characteristics of vaccine-modified measles to assess the performance of an expanded case definition in a school outbreak that occurred in 2010. The sensitivity, specificity, and the positive and negative predictive values were evaluated. Among 74 cases of vaccine-modified measles, 47 (64%) met the original case definition. Fever and rash were observed in 73% (54/74); fever was the most common (96%, 71/74) presenting symptom, and rash was noted in 77% (57/74) of the cases. The original case definition showed an overall sensitivity of 63.5% and a specificity of 100.0%. The expanded case definition combining fever and rash showed a higher sensitivity (72.9%) but a lower specificity (88.2%) than the original. The presence of fever and one or more of cough, coryza, or conjunctivitis scored the highest sensitivity among the combinations of signs and symptoms (77.0%), but scored the lowest specificity (52.9%). The expanded case definition was sensitive in identifying suspected cases of vaccine-modified measles. We suggest using this expanded definition for outbreak investigation in a closed community, and consider further discussions on expanding the case definition of measles for routine surveillance in South Korea.

  15. The 10/66 Dementia Research Group's fully operationalised DSM-IV dementia computerized diagnostic algorithm, compared with the 10/66 dementia algorithm and a clinician diagnosis: a population validation study

    Directory of Open Access Journals (Sweden)

    Krishnamoorthy ES

    2008-06-01

    Background: The criterion for dementia implicit in DSM-IV is widely used in research but not fully operationalised. The 10/66 Dementia Research Group sought to do this using assessments from their one-phase dementia diagnostic research interview, and to validate the resulting algorithm in a population-based study in Cuba. Methods: The criterion was operationalised as a computerised algorithm, applying clinical principles, based upon the 10/66 cognitive tests, clinical interview and informant reports; the Community Screening Instrument for Dementia, the CERAD 10 word list learning and animal naming tests, the Geriatric Mental State, and the History and Aetiology Schedule – Dementia Diagnosis and Subtype. This was validated in Cuba against a local clinician DSM-IV diagnosis and the 10/66 dementia diagnosis (originally calibrated probabilistically against clinician DSM-IV diagnoses in the 10/66 pilot study). Results: The DSM-IV sub-criteria were plausibly distributed among clinically diagnosed dementia cases and controls. The clinician diagnoses agreed better with the 10/66 dementia diagnosis than with the more conservative computerized DSM-IV algorithm. The DSM-IV algorithm was particularly likely to miss less severe dementia cases. Those with a 10/66 dementia diagnosis who did not meet the DSM-IV criterion were less cognitively and functionally impaired compared with the DSM-IV-confirmed cases, but still grossly impaired compared with those free of dementia. Conclusion: The DSM-IV criterion, strictly applied, defines a narrow category of unambiguous dementia characterized by marked impairment. It may be specific but incompletely sensitive to clinically relevant cases. The 10/66 dementia diagnosis defines a broader category that may be more sensitive, identifying genuine cases beyond those defined by our DSM-IV algorithm, with relevance to the estimation of the population burden of this disorder.

  16. SEBAL-A: A Remote Sensing ET Algorithm that Accounts for Advection with Limited Data. Part I: Development and Validation

    Directory of Open Access Journals (Sweden)

    Mcebisi Mkhwanazi

    2015-11-01

    The Surface Energy Balance Algorithm for Land (SEBAL) is one of the remote sensing (RS) models that are increasingly being used to determine evapotranspiration (ET). SEBAL is a widely used model, mainly due to the fact that it requires minimum weather data, and also no prior knowledge of surface characteristics is needed. However, it has been observed that it underestimates ET under advective conditions due to its disregard of advection as another source of energy available for evaporation. A modified SEBAL model was therefore developed in this study. An advection component, which is absent in the original SEBAL, was introduced such that the energy available for evapotranspiration was a sum of net radiation and advected heat energy. The improved SEBAL model was termed SEBAL-Advection or SEBAL-A. An important aspect of the improved model is the estimation of advected energy using minimal weather data. While other RS models would require hourly weather data to be able to account for advection (e.g., METRIC), SEBAL-A only requires daily averages of limited weather data, making it appropriate even in areas where weather data at short time steps may not be available. In this study, firstly, the original SEBAL model was evaluated under advective and non-advective conditions near Rocky Ford in southeastern Colorado, a semi-arid area where afternoon advection is a common occurrence. The SEBAL model was found to incur large errors when there was advection (which was indicated by higher wind speed and warm and dry air). SEBAL-A was then developed and validated in the same area under standard surface conditions, which were described as healthy alfalfa with height of 40–60 cm, without water stress. ET values estimated using the original and modified SEBAL were compared to large weighing lysimeter-measured ET values. When the SEBAL ET was compared to SEBAL-A ET values, the latter showed improved performance, with the ET Mean Bias Error (MBE) reduced from −17
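    The modification described above amounts to adding an advected-energy term to the energy available for evaporation before converting the residual latent heat flux to a water depth. The sketch below shows only that bookkeeping; the way SEBAL-A actually estimates advection from limited daily weather data is not reproduced, and the flux values are placeholders.

```python
LAMBDA_V = 2.45e6  # latent heat of vaporization, J/kg (approximate)

def daily_et_mm(net_radiation, soil_heat_flux, sensible_heat_flux, advected_energy=0.0):
    """Convert an energy-balance residual into daily ET (mm/day), with all
    fluxes given as daily averages in W/m^2. Adding `advected_energy` to the
    available energy mimics, in spirit, the SEBAL-A modification; how
    SEBAL-A actually estimates advection from limited daily weather data is
    not reproduced here, and the example fluxes are placeholders."""
    latent_heat_flux = (net_radiation - soil_heat_flux + advected_energy) - sensible_heat_flux
    kg_per_m2_s = max(latent_heat_flux, 0.0) / LAMBDA_V  # 1 kg/m^2 of water = 1 mm
    return kg_per_m2_s * 86_400

print(daily_et_mm(150.0, 10.0, 40.0))                        # no advection term
print(daily_et_mm(150.0, 10.0, 40.0, advected_energy=30.0))  # with advection term
```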

  17. Application and validation of case-finding algorithms for identifying individuals with human immunodeficiency virus from administrative data in British Columbia, Canada.

    Directory of Open Access Journals (Sweden)

    Bohdan Nosyk

    Full Text Available To define a population-level cohort of individuals infected with the human immunodeficiency virus (HIV in the province of British Columbia from available registries and administrative datasets using a validated case-finding algorithm.Individuals were identified for possible cohort inclusion from the BC Centre for Excellence in HIV/AIDS (CfE drug treatment program (antiretroviral therapy and laboratory testing datasets (plasma viral load (pVL and CD4 diagnostic test results, the BC Centre for Disease Control (CDC provincial HIV surveillance database (positive HIV tests, as well as databases held by the BC Ministry of Health (MoH; the Discharge Abstract Database (hospitalizations, the Medical Services Plan (physician billing and PharmaNet databases (additional HIV-related medications. A validated case-finding algorithm was applied to distinguish true HIV cases from those likely to have been misclassified. The sensitivity of the algorithms was assessed as the proportion of confirmed cases (those with records in the CfE, CDC and MoH databases positively identified by each algorithm. A priori hypotheses were generated and tested to verify excluded cases.A total of 25,673 individuals were identified as having at least one HIV-related health record. Among 9,454 unconfirmed cases, the selected case-finding algorithm identified 849 individuals believed to be HIV-positive. The sensitivity of this algorithm among confirmed cases was 88%. Those excluded from the cohort were more likely to be female (44.4% vs. 22.5%; p<0.01, had a lower mortality rate (2.18 per 100 person years (100PY vs. 3.14/100PY; p<0.01, and had lower median rates of health service utilization (days of medications dispensed: 9745/100PY vs. 10266/100PY; p<0.01; days of inpatient care: 29/100PY vs. 98/100PY; p<0.01; physician billings: 602/100PY vs. 2,056/100PY; p<0.01.The application of validated case-finding algorithms and subsequent hypothesis testing provided a strong framework for

  18. Remote Estimation of Chlorophyll-a in Inland Waters by a NIR-Red-Based Algorithm: Validation in Asian Lakes

    Directory of Open Access Journals (Sweden)

    Gongliang Yu

    2014-04-01

    Satellite remote sensing is a highly useful tool for monitoring chlorophyll-a concentration (Chl-a) in water bodies. Remote sensing algorithms based on near-infrared-red (NIR-red) wavelengths have demonstrated great potential for retrieving Chl-a in inland waters. This study tested the performance of a recently developed NIR-red based algorithm, SAMO-LUT (Semi-Analytical Model Optimizing and Look-Up Tables), using an extensive dataset collected from five Asian lakes. Results demonstrated that Chl-a retrieved by the SAMO-LUT algorithm was strongly correlated with measured Chl-a (R2 = 0.94), and the root-mean-square error (RMSE) and normalized root-mean-square error (NRMS) were 8.9 mg∙m−3 and 72.6%, respectively. However, the SAMO-LUT algorithm yielded large errors for sites where Chl-a was less than 10 mg∙m−3 (RMSE = 1.8 mg∙m−3 and NRMS = 217.9%). This was because differences in water-leaving radiances at the NIR-red wavelengths (i.e., 665 nm, 705 nm and 754 nm) used in the SAMO-LUT were too small due to low concentrations of water constituents. Using a blue-green algorithm (OC4E) instead of the SAMO-LUT for the waters with low constituent concentrations would have reduced the RMSE and NRMS to 1.0 mg∙m−3 and 16.0%, respectively. This indicates that (1) the NIR-red algorithm does not work well when water constituent concentrations are relatively low; (2) different algorithms should be used in light of water constituent concentration; and thus (3) it is necessary to develop a classification method for selecting the appropriate algorithm.
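
    The error metrics quoted above, and the proposed switch to a blue-green algorithm at low constituent concentrations, reduce to the simple calculations sketched below. The 10 mg∙m−3 threshold follows the abstract; the NRMS normalization convention (percentage of the mean measured value) and the toy data are assumptions.

      import numpy as np

      def rmse(measured, predicted):
          measured, predicted = np.asarray(measured, float), np.asarray(predicted, float)
          return float(np.sqrt(np.mean((predicted - measured) ** 2)))

      def nrms(measured, predicted):
          """Normalized RMSE as a percentage of the mean measured value (one common convention)."""
          return 100.0 * rmse(measured, predicted) / float(np.mean(measured))

      # Toy in-situ vs retrieved Chl-a values (mg/m3), purely illustrative
      measured = [3.2, 8.5, 22.0, 41.7, 65.3]
      retrieved = [5.1, 9.9, 20.4, 45.0, 60.2]
      print(rmse(measured, retrieved), nrms(measured, retrieved))

      # Algorithm selection rule implied by the abstract
      def select_algorithm(expected_chla_mg_m3, threshold=10.0):
          return "blue-green (OC4E)" if expected_chla_mg_m3 < threshold else "NIR-red (SAMO-LUT)"

      print(select_algorithm(4.0), select_algorithm(30.0))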

  19. Uniform research case definition criteria differentiate tuberculous and bacterial meningitis in children.

    Science.gov (United States)

    Solomons, Regan S; Wessels, Marie; Visser, Douwe H; Donald, Peter R; Marais, Ben J; Schoeman, Johan F; van Furth, Anne M

    2014-12-01

    Tuberculous meningitis (TBM) research is hampered by low numbers of microbiologically confirmed TBM cases and the fact that they may represent a select part of the disease spectrum. A uniform TBM research case definition was developed to address these limitations, but its ability to differentiate TBM from bacterial meningitis has not been evaluated. We assessed all children treated for TBM from 1985 to 2005 at Tygerberg Children's Hospital, Cape Town, South Africa. For comparative purposes, a group of children with culture-confirmed bacterial meningitis, diagnosed between 2003 and 2009, was identified from the National Health Laboratory Service database. The performance of the proposed case definition was evaluated in culture-confirmed TBM and bacterial meningitis cases. Of 554 children treated for TBM, 66 (11.9%) were classified as "definite TBM," 408 (73.6%) as "probable TBM," and 72 (13.0%) as "possible TBM." "Probable TBM" criteria identified culture-confirmed TBM with a sensitivity of 86% and specificity of 100%; sensitivity was increased but specificity reduced when using "possible TBM" criteria (sensitivity 100%, specificity 56%). "Probable TBM" criteria accurately differentiated TBM from bacterial meningitis and could be considered for use in clinical trials; reduced sensitivity in children with early TBM (stage 1 disease) remains a concern. © The Author 2014. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  20. Update of the Case Definitions for Population-Based Surveillance of Periodontitis

    Science.gov (United States)

    Eke, Paul I.; Page, Roy C.; Wei, Liang; Thornton-Evans, Gina; Genco, Robert J.

    2018-01-01

    Background This report adds a new definition for mild periodontitis that allows for better descriptions of the overall prevalence of periodontitis in populations. In 2007, the Centers for Disease Control and Prevention in partnership with the American Academy of Periodontology developed and reported standard case definitions for surveillance of moderate and severe periodontitis based on measurements of probing depth (PD) and clinical attachment loss (AL) at interproximal sites. However, combined cases of moderate and severe periodontitis are insufficient to determine the total prevalence of periodontitis in populations. Methods The authors proposed a definition for mild periodontitis as ≥2 interproximal sites with AL ≥3 mm and ≥2 interproximal sites with PD ≥4 mm (not on the same tooth) or one site with PD ≥5 mm. The effect of the proposed definition on the total burden of periodontitis was assessed in a convenience sample of 456 adults ≥35 years old and compared with other previously reported definitions for similar categories of periodontitis. Results Addition of mild periodontitis increases the total prevalence of periodontitis by ≈31% in this sample when compared with the prevalence of severe and moderate disease. Conclusion Total periodontitis using the case definitions in this study should be based on the sum of mild, moderate, and severe periodontitis. PMID:22420873
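
    The proposed mild-periodontitis definition is a site-counting rule over probing depth (PD) and attachment loss (AL). A sketch of how it could be applied to per-site measurements is shown below; the data structure is an assumption for illustration only.

      # Sketch of the proposed mild-periodontitis rule applied to per-site measurements.
      # Each record is (tooth_id, attachment_loss_mm, probing_depth_mm) at an interproximal
      # site; this data structure is assumed for illustration.

      def meets_mild_periodontitis(sites):
          n_al3 = sum(1 for _, al, _ in sites if al >= 3)
          pd4_teeth = {tooth for tooth, _, pd in sites if pd >= 4}
          n_pd4 = sum(1 for _, _, pd in sites if pd >= 4)
          any_pd5 = any(pd >= 5 for _, _, pd in sites)
          # >=2 sites with AL >=3 mm, and either >=2 sites with PD >=4 mm on different teeth
          # or one site with PD >=5 mm.
          return n_al3 >= 2 and ((n_pd4 >= 2 and len(pd4_teeth) >= 2) or any_pd5)

      example = [(14, 3, 4), (15, 4, 4), (30, 2, 3)]
      print(meets_mild_periodontitis(example))  # True for this toy mouth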

  1. Accuracy of clinical diagnosis versus the World Health Organization case definition in the Amoy Garden SARS cohort.

    Science.gov (United States)

    Wong, W N; Sek, Antonio C H; Lau, Rick F L; Li, K M; Leung, Joe K S; Tse, M L; Ng, Andy H W; Stenstrom, Robert

    2003-11-01

    To compare the diagnostic accuracy of emergency department (ED) physicians with the World Health Organization (WHO) case definition in a large community-based SARS (severe acute respiratory syndrome) cohort. This was a cohort study of all patients from Hong Kong's Amoy Garden complex who presented to an ED SARS screening clinic during a 2-month outbreak. Clinical findings and WHO case definition criteria were recorded, along with ED diagnoses. Final diagnoses were established independently based on relevant diagnostic tests performed after the ED visit. Emergency physician diagnostic accuracy was compared with that of the WHO SARS case definition. Sensitivity, specificity, predictive values and likelihood ratios were calculated using standard formulae. During the study period, 818 patients presented with SARS-like symptoms, including 205 confirmed SARS, 35 undetermined SARS and 578 non-SARS. Sensitivity, specificity and accuracy were 91%, 96% and 94% for ED clinical diagnosis, versus 42%, 86% and 75% for the WHO case definition. Positive likelihood ratios (LR+) were 21.1 for physician judgement and 3.1 for the WHO criteria. Negative likelihood ratios (LR-) were 0.10 for physician judgement and 0.67 for the WHO criteria, indicating that clinician judgement was a much more powerful predictor than the WHO criteria. Physician clinical judgement was more accurate than the WHO case definition. Reliance on the WHO case definition as a SARS screening tool may lead to an unacceptable rate of misdiagnosis. The SARS case definition must be revised if it is to be used as a screening tool in emergency departments and primary care settings.
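
    The "standard formulae" mentioned above reduce to a 2×2 table comparison of the screening result against the final diagnosis; a small sketch with made-up counts (not the study's data) is given here.

      # Diagnostic accuracy from a 2x2 table (counts are illustrative, not the study data).
      def diagnostic_metrics(tp, fp, fn, tn):
          sensitivity = tp / (tp + fn)
          specificity = tn / (tn + fp)
          accuracy = (tp + tn) / (tp + fp + fn + tn)
          lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
          lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
          return sensitivity, specificity, accuracy, lr_pos, lr_neg

      print(diagnostic_metrics(tp=186, fp=23, fn=19, tn=555))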

  2. SU-G-JeP1-07: Development of a Programmable Motion Testbed for the Validation of Ultrasound Tracking Algorithms

    International Nuclear Information System (INIS)

    Shepard, A; Matrosic, C; Zagzebski, J; Bednarz, B

    2016-01-01

    Purpose: To develop an advanced testbed that combines a 3D motion stage and ultrasound phantom to optimize and validate 2D and 3D tracking algorithms for real-time motion management during radiation therapy. Methods: A Siemens S2000 Ultrasound scanner utilizing a 9L4 transducer was coupled with the Washington University 4D Phantom to simulate patient motion. The transducer was securely fastened to the 3D stage and positioned to image three cylinders of varying contrast in a Gammex 404GS LE phantom. The transducer was placed within a water bath above the phantom in order to maintain sufficient coupling for the entire range of simulated motion. A programmed motion sequence was used to move the transducer during image acquisition and a cine video was acquired for one minute to allow for long sequence tracking. Images were analyzed using a normalized cross-correlation block matching tracking algorithm and compared to the known motion of the transducer relative to the phantom. Results: The setup produced stable ultrasound motion traces consistent with those programmed into the 3D motion stage. The acquired ultrasound images showed minimal artifacts and an image quality that was more than suitable for tracking algorithm verification. Comparisons of a block matching tracking algorithm with the known motion trace for the three features resulted in an average tracking error of 0.59 mm. Conclusion: The high accuracy and programmability of the 4D phantom allows for the acquisition of ultrasound motion sequences that are highly customizable; allowing for focused analysis of some common pitfalls of tracking algorithms such as partial feature occlusion or feature disappearance, among others. The design can easily be modified to adapt to any probe such that the process can be extended to 3D acquisition. Further development of an anatomy specific phantom better resembling true anatomical landmarks could lead to an even more robust validation. This work is partially funded by NIH
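
    The tracking analysis above uses normalized cross-correlation (NCC) block matching: a template around the feature in one frame is compared with candidate blocks in the next frame, and the displacement with the highest NCC wins. A minimal NumPy sketch of that idea follows; it is a generic illustration, not the authors' implementation.

      import numpy as np

      def ncc(a, b):
          """Normalized cross-correlation between two equally sized patches."""
          a = a - a.mean()
          b = b - b.mean()
          denom = np.sqrt((a * a).sum() * (b * b).sum())
          return float((a * b).sum() / denom) if denom > 0 else 0.0

      def track_block(prev_frame, next_frame, top_left, block=(32, 32), search=8):
          """Return the displacement (dy, dx) that maximizes NCC within a +/- search window."""
          y0, x0 = top_left
          by, bx = block
          template = prev_frame[y0:y0 + by, x0:x0 + bx]
          best, best_shift = -2.0, (0, 0)
          for dy in range(-search, search + 1):
              for dx in range(-search, search + 1):
                  y, x = y0 + dy, x0 + dx
                  if y < 0 or x < 0 or y + by > next_frame.shape[0] or x + bx > next_frame.shape[1]:
                      continue
                  score = ncc(template, next_frame[y:y + by, x:x + bx])
                  if score > best:
                      best, best_shift = score, (dy, dx)
          return best_shift, best

      rng = np.random.default_rng(0)
      frame0 = rng.random((128, 128))
      frame1 = np.roll(frame0, shift=(2, -3), axis=(0, 1))   # simulate a known displacement
      print(track_block(frame0, frame1, top_left=(48, 48)))  # expect a shift of about (2, -3)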

  3. SU-G-JeP1-07: Development of a Programmable Motion Testbed for the Validation of Ultrasound Tracking Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Shepard, A; Matrosic, C; Zagzebski, J; Bednarz, B [University of Wisconsin, Madison, WI (United States)

    2016-06-15

    Purpose: To develop an advanced testbed that combines a 3D motion stage and ultrasound phantom to optimize and validate 2D and 3D tracking algorithms for real-time motion management during radiation therapy. Methods: A Siemens S2000 Ultrasound scanner utilizing a 9L4 transducer was coupled with the Washington University 4D Phantom to simulate patient motion. The transducer was securely fastened to the 3D stage and positioned to image three cylinders of varying contrast in a Gammex 404GS LE phantom. The transducer was placed within a water bath above the phantom in order to maintain sufficient coupling for the entire range of simulated motion. A programmed motion sequence was used to move the transducer during image acquisition and a cine video was acquired for one minute to allow for long sequence tracking. Images were analyzed using a normalized cross-correlation block matching tracking algorithm and compared to the known motion of the transducer relative to the phantom. Results: The setup produced stable ultrasound motion traces consistent with those programmed into the 3D motion stage. The acquired ultrasound images showed minimal artifacts and an image quality that was more than suitable for tracking algorithm verification. Comparisons of a block matching tracking algorithm with the known motion trace for the three features resulted in an average tracking error of 0.59 mm. Conclusion: The high accuracy and programmability of the 4D phantom allows for the acquisition of ultrasound motion sequences that are highly customizable; allowing for focused analysis of some common pitfalls of tracking algorithms such as partial feature occlusion or feature disappearance, among others. The design can easily be modified to adapt to any probe such that the process can be extended to 3D acquisition. Further development of an anatomy specific phantom better resembling true anatomical landmarks could lead to an even more robust validation. This work is partially funded by NIH

  4. Kinect as a Tool for Gait Analysis: Validation of a Real-Time Joint Extraction Algorithm Working in Side View

    Science.gov (United States)

    Cippitelli, Enea; Gasparrini, Samuele; Spinsante, Susanna; Gambi, Ennio

    2015-01-01

    The Microsoft Kinect sensor has gained attention as a tool for gait analysis for several years. Despite the many advantages the sensor provides, however, the lack of a native capability to extract joints from the side view of a human body still limits the adoption of the device to a number of relevant applications. This paper presents an algorithm to locate and estimate the trajectories of up to six joints extracted from the side depth view of a human body captured by the Kinect device. The algorithm is then applied to extract data that can be exploited to provide an objective score for the “Get Up and Go Test”, which is typically adopted for gait analysis in rehabilitation fields. Starting from the depth-data stream provided by the Microsoft Kinect sensor, the proposed algorithm relies on anthropometric models only, to locate and identify the positions of the joints. Differently from machine learning approaches, this solution avoids complex computations, which usually require significant resources. The reliability of the information about the joint position output by the algorithm is evaluated by comparison to a marker-based system. Tests show that the trajectories extracted by the proposed algorithm adhere to the reference curves better than the ones obtained from the skeleton generated by the native applications provided within the Microsoft Kinect (Microsoft Corporation, Redmond, WA, USA, 2013) and OpenNI (OpenNI organization, Tel Aviv, Israel, 2013) Software Development Kits. PMID:25594588

  5. Kinect as a Tool for Gait Analysis: Validation of a Real-Time Joint Extraction Algorithm Working in Side View

    Directory of Open Access Journals (Sweden)

    Enea Cippitelli

    2015-01-01

    The Microsoft Kinect sensor has gained attention as a tool for gait analysis for several years. Despite the many advantages the sensor provides, however, the lack of a native capability to extract joints from the side view of a human body still limits the adoption of the device to a number of relevant applications. This paper presents an algorithm to locate and estimate the trajectories of up to six joints extracted from the side depth view of a human body captured by the Kinect device. The algorithm is then applied to extract data that can be exploited to provide an objective score for the “Get Up and Go Test”, which is typically adopted for gait analysis in rehabilitation fields. Starting from the depth-data stream provided by the Microsoft Kinect sensor, the proposed algorithm relies on anthropometric models only, to locate and identify the positions of the joints. Differently from machine learning approaches, this solution avoids complex computations, which usually require significant resources. The reliability of the information about the joint position output by the algorithm is evaluated by comparison to a marker-based system. Tests show that the trajectories extracted by the proposed algorithm adhere to the reference curves better than the ones obtained from the skeleton generated by the native applications provided within the Microsoft Kinect (Microsoft Corporation, Redmond, WA, USA, 2013) and OpenNI (OpenNI organization, Tel Aviv, Israel, 2013) Software Development Kits.

  6. External validation of the DHAKA score and comparison with the current IMCI algorithm for the assessment of dehydration in children with diarrhoea: a prospective cohort study.

    Science.gov (United States)

    Levine, Adam C; Glavis-Bloom, Justin; Modi, Payal; Nasrin, Sabiha; Atika, Bita; Rege, Soham; Robertson, Sarah; Schmid, Christopher H; Alam, Nur H

    2016-10-01

    Dehydration due to diarrhoea is a leading cause of child death worldwide, yet no clinical tools for assessing dehydration have been validated in resource-limited settings. The Dehydration: Assessing Kids Accurately (DHAKA) score was derived for assessing dehydration in children with diarrhoea in a low-income country setting. In this study, we aimed to externally validate the DHAKA score in a new population of children and compare its accuracy and reliability to the current Integrated Management of Childhood Illness (IMCI) algorithm. DHAKA was a prospective cohort study done in children younger than 60 months presenting to the International Centre for Diarrhoeal Disease Research, Bangladesh, with acute diarrhoea (defined by WHO as three or more loose stools per day for less than 14 days). Local nurses assessed children and classified their dehydration status using both the DHAKA score and the IMCI algorithm. Serial weights were obtained and dehydration status was established by percentage weight change with rehydration. We did regression analyses to validate the DHAKA score and compared the accuracy and reliability of the DHAKA score and IMCI algorithm with receiver operator characteristic (ROC) curves and the weighted κ statistic. This study was registered with ClinicalTrials.gov, number NCT02007733. Between March 22, 2015, and May 15, 2015, 496 patients were included in our primary analyses. On the basis of our criterion standard, 242 (49%) of 496 children had no dehydration, 184 (37%) of 496 had some dehydration, and 70 (14%) of 496 had severe dehydration. In multivariable regression analyses, each 1-point increase in the DHAKA score predicted an increase of 0·6% in the percentage dehydration of the child and increased the odds of both some and severe dehydration by a factor of 1·4. Both the accuracy and reliability of the DHAKA score were significantly greater than those of the IMCI algorithm. The DHAKA score is the first clinical tool for assessing

  7. Experimental validation of improved 3D SBP positioning algorithm in PET applications using UW Phase II Board

    Energy Technology Data Exchange (ETDEWEB)

    Jorge, L.S.; Bonifacio, D.A.B. [Institute of Radioprotection and Dosimetry, IRD/CNEN (Brazil); DeWitt, Don; Miyaoka, R.S. [Imaging Research Laboratory, IRL/UW (United States)

    2016-12-01

    Continuous scintillator-based detectors have been considered as a competitive and cheaper approach than highly pixelated discrete crystal positron emission tomography (PET) detectors, despite the need for algorithms to estimate 3D gamma interaction position. In this work, we report on the implementation of a positioning algorithm to estimate the 3D interaction position in a continuous crystal PET detector using a Field Programmable Gate Array (FPGA). The evaluated method is the Statistics-Based Processing (SBP) technique that requires light response function and event position characterization. An algorithm has been implemented using the Verilog language and evaluated using a data acquisition board that contains an Altera Stratix III FPGA. The 3D SBP algorithm was previously successfully implemented on a Stratix II FPGA using simulated data and a different module design. In this work, improvements were made to the FPGA coding of the 3D positioning algorithm, reducing the total memory usage to around 34%. Further the algorithm was evaluated using experimental data from a continuous miniature crystal element (cMiCE) detector module. Using our new implementation, average FWHM (Full Width at Half Maximum) for the whole block is 1.71±0.01 mm, 1.70±0.01 mm and 1.632±0.005 mm for x, y and z directions, respectively. Using a pipelined architecture, the FPGA is able to process 245,000 events per second for interactions inside of the central area of the detector that represents 64% of the total block area. The weighted average of the event rate by regional area (corner, border and central regions) is about 198,000 events per second. This event rate is greater than the maximum expected coincidence rate for any given detector module in future PET systems using the cMiCE detector design.

  8. Case definition for clinical and subclinical bacterial kidney disease (BKD) in Atlantic Salmon (Salmo salar L.) in New Brunswick, Canada.

    Science.gov (United States)

    Boerlage, A S; Stryhn, H; Sanchez, J; Hammell, K L

    2017-03-01

    Bacterial kidney disease (BKD) is considered an important cause of loss in salmon aquaculture in Atlantic Canada. The causative agent of BKD is the Gram-positive bacterium Renibacterium salmoninarum. Infected salmon are often asymptomatic (subclinical infection), and the disease is considered chronic. One of the challenges in quantifying information from farm production and health records is the application of a standardized case definition. Case definitions for farm-level and cage-level clinical and subclinical BKD were developed using retrospective longitudinal data from aquaculture practices in New Brunswick, Canada, combining (i) industry records of weekly production data including mortalities, (ii) field observations for BKD using reports of veterinarians and/or fish health technicians, (iii) diagnostic submissions and test results and (iv) treatments used to control BKD. Case definitions were evaluated using veterinarians' expert judgements as the reference standard. Eighty-nine percent of sites and 66% of fish groups were associated with BKD at least once. For BKD present (subclinical or clinical), the sensitivity and specificity of the case definition were 75-100%, varying by event, fish group, site cycle and level (site or pen). For clinical BKD, sensitivities were 29-64% and specificities 91-100%. Industry data can be used to develop sensitive case definitions. © 2016 John Wiley & Sons Ltd.

  9. Maternal mortality in rural South Africa: the impact of case definition on levels and trends

    Directory of Open Access Journals (Sweden)

    Garenne M

    2013-08-01

    Michel Garenne, Kathleen Kahn, Mark A Collinson, F Xavier Gómez-Olivé, Stephen Tollman. Background: Uncertainty in the levels of global maternal mortality reflects data deficiencies, as well as differences in methods and definitions. This study presents levels and trends in maternal mortality in Agincourt, a rural subdistrict of South Africa, under long-term health and sociodemographic surveillance. Methods: All deaths of women aged 15–49 years occurring in the study area between 1992 and 2010 were investigated, and causes of death were assessed by verbal autopsy. Two case definitions were used: “obstetrical” (direct) causes, defined as deaths caused by conditions listed under O00-O95 in the International Classification of Diseases-10; and “pregnancy-related” deaths, defined as any death occurring during the maternal risk period (pregnancy, delivery, 6 weeks postpartum), irrespective of cause. Results: The case definition had a major impact on levels and trends in maternal mortality. The obstetric mortality ratio averaged 185 per 100,000 live births over the period (60 deaths), whereas the pregnancy-related mortality ratio averaged 423 per 100,000 live births (137 deaths). Results from both calculations increased over the period, with a peak around 2006, followed by a decline coincident with the national roll-out of Prevention of Mother-to-Child Transmission of HIV and antiretroviral treatment programs. Mortality increase from direct causes was

  10. Validation of clinical testing for warfarin sensitivity: comparison of CYP2C9-VKORC1 genotyping assays and warfarin-dosing algorithms.

    Science.gov (United States)

    Langley, Michael R; Booker, Jessica K; Evans, James P; McLeod, Howard L; Weck, Karen E

    2009-05-01

    Responses to warfarin (Coumadin) anticoagulation therapy are affected by genetic variability in both the CYP2C9 and VKORC1 genes. Validation of pharmacogenetic testing for warfarin responses includes demonstration of analytical validity of testing platforms and of the clinical validity of testing. We compared four platforms for determining the relevant single nucleotide polymorphisms (SNPs) in both CYP2C9 and VKORC1 that are associated with warfarin sensitivity (Third Wave Invader Plus, ParagonDx/Cepheid Smart Cycler, Idaho Technology LightCycler, and AutoGenomics Infiniti). Each method was examined for accuracy, cost, and turnaround time. All genotyping methods demonstrated greater than 95% accuracy for identifying the relevant SNPs (CYP2C9 *2 and *3; VKORC1 -1639 or 1173). The ParagonDx and Idaho Technology assays had the shortest turnaround and hands-on times. The Third Wave assay was readily scalable to higher test volumes but had the longest hands-on time. The AutoGenomics assay interrogated the largest number of SNPs but had the longest turnaround time. Four published warfarin-dosing algorithms (Washington University, UCSF, Louisville, and Newcastle) were compared for accuracy for predicting warfarin dose in a retrospective analysis of a local patient population on long-term, stable warfarin therapy. The predicted doses from both the Washington University and UCSF algorithms demonstrated the best correlation with actual warfarin doses.

  11. Top-of-atmosphere radiative fluxes - Validation of ERBE scanner inversion algorithm using Nimbus-7 ERB data

    Science.gov (United States)

    Suttles, John T.; Wielicki, Bruce A.; Vemury, Sastri

    1992-01-01

    The ERBE algorithm is applied to the Nimbus-7 earth radiation budget (ERB) scanner data for June 1979 to analyze the performance of an inversion method in deriving top-of-atmosphere albedos and longwave radiative fluxes. The performance is assessed by comparing ERBE algorithm results with appropriate results derived using the sorting-by-angular-bins (SAB) method, the ERB MATRIX algorithm, and the 'new-cloud ERB' (NCLE) algorithm. Comparisons are made for top-of-atmosphere albedos, longwave fluxes, viewing zenith-angle dependence of derived albedos and longwave fluxes, and cloud fractional coverage. Using the SAB method as a reference, the rms accuracy of monthly average ERBE-derived results are estimated to be 0.0165 (5.6 W/sq m) for albedos (shortwave fluxes) and 3.0 W/sq m for longwave fluxes. The ERBE-derived results were found to depend systematically on the viewing zenith angle, varying from near nadir to near the limb by about 10 percent for albedos and by 6-7 percent for longwave fluxes. Analyses indicated that the ERBE angular models are the most likely source of the systematic angular dependences. Comparison of the ERBE-derived cloud fractions, based on a maximum-likelihood estimation method, with results from the NCLE showed agreement within about 10 percent.

  12. Implementation of an Evidence-Based and Content Validated Standardized Ostomy Algorithm Tool in Home Care: A Quality Improvement Project.

    Science.gov (United States)

    Bare, Kimberly; Drain, Jerri; Timko-Progar, Monica; Stallings, Bobbie; Smith, Kimberly; Ward, Naomi; Wright, Sandra

    Many nurses have limited experience with ostomy management. We sought to provide a standardized approach to ostomy education and management to support nurses in early identification of stomal and peristomal complications, pouching problems, and provide standardized solutions for managing ostomy care in general while improving utilization of formulary products. This article describes development and testing of an ostomy algorithm tool.

  13. Validation of the "smart" minimum FFR Algorithm in an unselected all comer population of patients with intermediate coronary stenoses.

    Science.gov (United States)

    Hennigan, Barry; Johnson, Nils; McClure, John; Corcoran, David; Watkins, Stuart; Berry, Colin; Oldroyd, Keith G

    2017-07-01

    Using data from a commercial pressure wire system (St. Jude Medical) we previously developed an automated "smart" algorithm to determine a reproducible value for minimum FFR (smFFR) and confirmed that it correlated very closely with measurements made off-line by experienced coronary physiology core laboratories. In this study we used the same "smart" minimum algorithm to analyze data derived from a different, commercial pressure wire system (Philips Volcano) and compared the values obtained to both operator-defined steady state FFR and the online automated minimum FFR reported by the pressure wire analyser. For this analysis, we used the data collected during the VERIFY 2 study (Hennigan et al. in Circ Cardiovasc Interv, doi: 10.1161/CIRCINTERVENTIONS.116.004016) in which we measured FFR in 257 intermediate coronary stenoses (mean DS 48%) in 197 patients. Maximal hyperaemia was induced using intravenous adenosine (140 mcg/kg/min). We recorded both the online minimum FFR generated by the analyser and the operator-reported steady state FFR. Subsequently, the raw pressure tracings were coded, anonymised and 256/257 were subjected to further off-line analysis using the smart minimum FFR (smFFR) algorithm. The operator-defined steady state FFR correlated well with smFFR: r = 0.988 (p < 0.001). Differences >0.05 among methods were rare, but in these cases the two automated algorithms almost always agreed with each other rather than with the operator-reported value. Within the VERIFY 2 dataset, experienced operators reported a similar FFR value to both an online automated minimum (Philips Volcano) and off-line "smart" minimum computer algorithm. Thus, treatment decisions and clinical studies using either method will produce nearly identical results.

  14. Clinical application and validation of an iterative forward projection matching algorithm for permanent brachytherapy seed localization from conebeam-CT x-ray projections

    Energy Technology Data Exchange (ETDEWEB)

    Pokhrel, Damodar; Murphy, Martin J.; Todor, Dorin A.; Weiss, Elisabeth; Williamson, Jeffrey F. [Department of Radiation Oncology, School of Medicine, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)

    2010-09-15

    Purpose: To experimentally validate a new algorithm for reconstructing the 3D positions of implanted brachytherapy seeds from postoperatively acquired 2D conebeam-CT (CBCT) projection images. Methods: The iterative forward projection matching (IFPM) algorithm finds the 3D seed geometry that minimizes the sum of the squared intensity differences between computed projections of an initial estimate of the seed configuration and radiographic projections of the implant. In-house machined phantoms, containing arrays of 12 and 72 seeds, respectively, are used to validate this method. Also, four 103Pd postimplant patients are scanned using an ACUITY digital simulator. Three to ten x-ray images are selected from the CBCT projection set and processed to create binary seed-only images. To quantify IFPM accuracy, the reconstructed seed positions are forward projected and overlaid on the measured seed images to find the nearest-neighbor distance between measured and computed seed positions for each image pair. Also, the estimated 3D seed coordinates are compared to known seed positions in the phantom and clinically obtained VariSeed planning coordinates for the patient data. Results: For the phantom study, seed localization error is (0.58 ± 0.33) mm. For all four patient cases, the mean registration error is better than 1 mm while compared against the measured seed projections. IFPM converges in 20-28 iterations, with a computation time of about 1.9-2.8 min/iteration on a 1 GHz processor. Conclusions: The IFPM algorithm avoids the need to match corresponding seeds in each projection as required by standard back-projection methods. The authors' results demonstrate approximately 1 mm accuracy in reconstructing the 3D positions of brachytherapy seeds from the measured 2D projections. This algorithm also successfully localizes overlapping clustered and highly migrated seeds in the implant.

  15. Clinical application and validation of an iterative forward projection matching algorithm for permanent brachytherapy seed localization from conebeam-CT x-ray projections.

    Science.gov (United States)

    Pokhrel, Damodar; Murphy, Martin J; Todor, Dorin A; Weiss, Elisabeth; Williamson, Jeffrey F

    2010-09-01

    To experimentally validate a new algorithm for reconstructing the 3D positions of implanted brachytherapy seeds from postoperatively acquired 2D conebeam-CT (CBCT) projection images. The iterative forward projection matching (IFPM) algorithm finds the 3D seed geometry that minimizes the sum of the squared intensity differences between computed projections of an initial estimate of the seed configuration and radiographic projections of the implant. In-house machined phantoms, containing arrays of 12 and 72 seeds, respectively, are used to validate this method. Also, four 103Pd postimplant patients are scanned using an ACUITY digital simulator. Three to ten x-ray images are selected from the CBCT projection set and processed to create binary seed-only images. To quantify IFPM accuracy, the reconstructed seed positions are forward projected and overlaid on the measured seed images to find the nearest-neighbor distance between measured and computed seed positions for each image pair. Also, the estimated 3D seed coordinates are compared to known seed positions in the phantom and clinically obtained VariSeed planning coordinates for the patient data. For the phantom study, seed localization error is (0.58 +/- 0.33) mm. For all four patient cases, the mean registration error is better than 1 mm while compared against the measured seed projections. IFPM converges in 20-28 iterations, with a computation time of about 1.9-2.8 min/ iteration on a 1 GHz processor. The IFPM algorithm avoids the need to match corresponding seeds in each projection as required by standard back-projection methods. The authors' results demonstrate approximately 1 mm accuracy in reconstructing the 3D positions of brachytherapy seeds from the measured 2D projections. This algorithm also successfully localizes overlapping clustered and highly migrated seeds in the implant.

  16. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Science.gov (United States)

    Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen

    2015-01-01

    integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. Additionally, the team has developed processes for implementing and validating these algorithms for concept validation and risk reduction for the SLS program. The flexibility of the Vehicle Management End-to-end Testbed (VMET) enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS. The intent of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software development infrastructure and its related testing entities. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test cases into flight software compounded with potential human errors throughout the development lifecycle. Risk reduction is addressed by the M&FM analysis group working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses that can be tested in VMET to ensure that failures can be detected, and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW

  17. Improved numerical algorithm and experimental validation of a system thermal-hydraulic/CFD coupling method for multi-scale transient simulations of pool-type reactors

    International Nuclear Information System (INIS)

    Toti, A.; Vierendeels, J.; Belloni, F.

    2017-01-01

    Highlights: • A system thermal-hydraulic/CFD coupling methodology is proposed for high-fidelity transient flow analyses. • The method is based on domain decomposition and implicit numerical scheme. • A novel interface Quasi-Newton algorithm is implemented to improve stability and convergence rate. • Preliminary validation analyses on the TALL-3D experiment. - Abstract: The paper describes the development and validation of a coupling methodology between the best-estimate system thermal-hydraulic code RELAP5-3D and the CFD code FLUENT, conceived for high fidelity plant-scale safety analyses of pool-type reactors. The computational tool is developed to assess the impact of three-dimensional phenomena occurring in accidental transients such as loss of flow (LOF) in the research reactor MYRRHA, currently in the design phase at the Belgian Nuclear Research Centre, SCK• CEN. A partitioned, implicit domain decomposition coupling algorithm is implemented, in which the coupled domains exchange thermal-hydraulics variables at coupling boundary interfaces. Numerical stability and interface convergence rates are improved by a novel interface Quasi-Newton algorithm, which is compared in this paper with previously tested numerical schemes. The developed computational method has been assessed for validation purposes against the experiment performed at the test facility TALL-3D, operated by the Royal Institute of Technology (KTH) in Sweden. This paper details the results of the simulation of a loss of forced convection test, showing the capability of the developed methodology to predict transients influenced by local three-dimensional phenomena.

  18. The Oral HIV/AIDS Research Alliance: updated case definitions of oral disease endpoints.

    Science.gov (United States)

    Shiboski, C H; Patton, L L; Webster-Cyriaque, J Y; Greenspan, D; Traboulsi, R S; Ghannoum, M; Jurevic, R; Phelan, J A; Reznik, D; Greenspan, J S

    2009-07-01

    The Oral HIV/AIDS Research Alliance (OHARA) is part of the AIDS Clinical Trials Group (ACTG), the largest HIV clinical trials organization in the world. Its main objective is to investigate oral complications associated with HIV/AIDS as the epidemic is evolving, in particular, the effects of antiretrovirals on oral mucosal lesion development and associated fungal and viral pathogens. The OHARA infrastructure comprises: the Epidemiologic Research Unit (at the University of California San Francisco), the Medical Mycology Unit (at Case Western Reserve University) and the Virology/Specimen Banking Unit (at the University of North Carolina). The team includes dentists, physicians, virologists, mycologists, immunologists, epidemiologists and statisticians. Observational studies and clinical trials are being implemented at ACTG-affiliated sites in the US and resource-poor countries. Many studies have shared end-points, which include oral diseases known to be associated with HIV/AIDS measured by trained and calibrated ACTG study nurses. In preparation for future protocols, we have updated existing diagnostic criteria of the oral manifestations of HIV published in 1992 and 1993. The proposed case definitions are designed to be used in large-scale epidemiologic studies and clinical trials, in both US and resource-poor settings, where diagnoses may be made by non-dental healthcare providers. The objective of this article is to present updated case definitions for HIV-related oral diseases that will be used to measure standardized clinical end-points in OHARA studies, and that can be used by any investigator outside of OHARA/ACTG conducting clinical research that pertains to these end-points.

  19. Design of a correlated validated CFD and genetic algorithm model for optimized sensors placement for indoor air quality monitoring

    Science.gov (United States)

    Mousavi, Monireh Sadat; Ashrafi, Khosro; Motlagh, Majid Shafie Pour; Niksokhan, Mohhamad Hosein; Vosoughifar, HamidReza

    2018-02-01

    This study presents a coupled method that combines computational fluid dynamics (CFD) simulation of the flow pattern with an optimization technique based on genetic algorithms to determine the optimal number and location of sensors in an enclosed residential complex parking facility in Tehran. The main objectives are to reduce costs and to maximize coverage of the pollutant concentration distributions arising in the different scenarios. Simulating every possible pollution-distribution scenario with CFD was challenging because of the extent of the parking facility and the number of cars present; to address this, a subset of scenarios was selected at random, and the maximum concentrations from these scenarios were used for the optimization. The CFD simulation outputs serve as input to the genetic algorithm optimization model, which yields the optimal number and locations of the sensors.
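
    As a rough illustration of the coupling described above, the sketch below runs a small genetic algorithm that places k sensors to maximize coverage of high-concentration grid cells standing in for CFD output; the grid, coverage radius and GA settings are all assumptions, not the study's configuration.

      import random

      # Hypothetical stand-in for CFD output: grid cells with high pollutant concentration.
      random.seed(1)
      GRID = 20
      hotspots = {(random.randrange(GRID), random.randrange(GRID)) for _ in range(60)}
      CANDIDATES = [(i, j) for i in range(GRID) for j in range(GRID)]
      K, RADIUS = 5, 4  # number of sensors and coverage radius (assumed)

      def coverage(sensors):
          """Fraction of hotspot cells within RADIUS (Chebyshev distance) of any sensor."""
          covered = sum(1 for h in hotspots
                        if any(max(abs(h[0] - s[0]), abs(h[1] - s[1])) <= RADIUS for s in sensors))
          return covered / len(hotspots)

      def mutate(ind):
          ind = list(ind)
          ind[random.randrange(K)] = random.choice(CANDIDATES)
          return ind

      def crossover(a, b):
          cut = random.randrange(1, K)
          return a[:cut] + b[cut:]

      # Simple elitist GA loop
      pop = [[random.choice(CANDIDATES) for _ in range(K)] for _ in range(40)]
      for _ in range(100):
          pop.sort(key=coverage, reverse=True)
          parents = pop[:10]
          pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                           for _ in range(30)]
      best = max(pop, key=coverage)
      print(best, coverage(best))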

  20. Shuffling cross-validation-bee algorithm as a new descriptor selection method for retention studies of pesticides in biopartitioning micellar chromatography.

    Science.gov (United States)

    Zarei, Kobra; Atabati, Morteza; Ahmadi, Monire

    2017-05-04

    The bee algorithm (BA) is an optimization algorithm inspired by the natural foraging behaviour of honey bees, and it can be applied to feature selection. In this paper, shuffling cross-validation-BA (CV-BA) was applied to select the best descriptors for describing the retention factor (log k) in the biopartitioning micellar chromatography (BMC) of 79 heterogeneous pesticides. Six descriptors were obtained using BA, and the selected descriptors were then used for model development with multiple linear regression (MLR). Descriptor selection was also performed using stepwise, genetic algorithm and simulated annealing methods, with MLR applied for model development, and the results were compared with those obtained from shuffling CV-BA. The results showed that shuffling CV-BA can be applied as a powerful descriptor selection method. Support vector machine (SVM) was also applied for model development using the six descriptors selected by BA. The statistical results obtained using SVM were better than those obtained using MLR: the root mean square error (RMSE) and correlation coefficient (R) for the whole data set (training and test) were 0.1863 and 0.9426, respectively, with shuffling CV-BA-MLR, versus 0.0704 and 0.9922 with shuffling CV-BA-SVM.
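
    The quantity the CV-BA search optimizes is essentially the shuffled cross-validation error of an MLR model built on a candidate descriptor subset. A sketch of that fitness function with scikit-learn follows (the descriptor matrix and subsets are placeholders); the bee-algorithm search loop itself is not reproduced here.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import ShuffleSplit, cross_val_score

      def cv_fitness(X, y, subset, n_splits=10, test_size=0.2, seed=0):
          """Shuffled cross-validated RMSE of an MLR model on the selected descriptors
          (lower is better); the kind of fitness a bee-algorithm search would minimize."""
          cv = ShuffleSplit(n_splits=n_splits, test_size=test_size, random_state=seed)
          scores = cross_val_score(LinearRegression(), X[:, subset], y,
                                   scoring="neg_root_mean_squared_error", cv=cv)
          return -scores.mean()

      # Toy data: 79 pesticides x 50 descriptors (random numbers, illustration only)
      rng = np.random.default_rng(0)
      X = rng.normal(size=(79, 50))
      y = X[:, [3, 7, 12]].sum(axis=1) + rng.normal(scale=0.1, size=79)
      print(cv_fitness(X, y, subset=[3, 7, 12]))   # informative subset -> low RMSE
      print(cv_fitness(X, y, subset=[0, 1, 2]))    # uninformative subset -> high RMSE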

  1. Validation of a clinical practice-based algorithm for the diagnosis of autosomal recessive cerebellar ataxias based on NGS identified cases.

    Science.gov (United States)

    Mallaret, Martial; Renaud, Mathilde; Redin, Claire; Drouot, Nathalie; Muller, Jean; Severac, Francois; Mandel, Jean Louis; Hamza, Wahiba; Benhassine, Traki; Ali-Pacha, Lamia; Tazir, Meriem; Durr, Alexandra; Monin, Marie-Lorraine; Mignot, Cyril; Charles, Perrine; Van Maldergem, Lionel; Chamard, Ludivine; Thauvin-Robinet, Christel; Laugel, Vincent; Burglen, Lydie; Calvas, Patrick; Fleury, Marie-Céline; Tranchant, Christine; Anheim, Mathieu; Koenig, Michel

    2016-07-01

    Establishing a molecular diagnosis of autosomal recessive cerebellar ataxias (ARCA) is challenging due to phenotype and genotype heterogeneity. We report the validation of a previously published clinical practice-based algorithm to diagnose ARCA. Two assessors performed a blind analysis to determine the most probable mutated gene based on comprehensive clinical and paraclinical data, without knowing the molecular diagnosis, for 23 patients diagnosed by targeted capture of 57 ataxia genes and high-throughput sequencing, drawn from a series of 145 patients. The correct gene was predicted in 61% and 78% of the cases by the two assessors, respectively. There was a high inter-rater agreement [kappa = 0.85 (0.55-0.98), p < 0.001], confirming the algorithm's reproducibility. Phenotyping patients with proper clinical examination, imaging, biochemical investigations and nerve conduction studies remains crucial for the guidance of molecular analysis and for interpreting next generation sequencing results. The proposed algorithm should be helpful for diagnosing ARCA in clinical practice.

  2. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set-Effect of Pasteurization.

    Science.gov (United States)

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-02-26

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.
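
    A correction algorithm of the kind validated above is, at its core, a regression of chemical reference values on the analyzer's raw readings that is then applied to new measurements. The sketch below fits a simple linear correction for fat; the data are invented and the linear form is an assumption for illustration.

      import numpy as np

      # Paired measurements (g/dL): Near-IR analyzer readings vs chemical reference (invented data).
      nir_fat = np.array([2.8, 3.4, 4.1, 4.9, 5.6])
      ref_fat = np.array([3.0, 3.5, 4.4, 5.1, 6.0])

      # Fit corrected = slope * raw + intercept by least squares.
      slope, intercept = np.polyfit(nir_fat, ref_fat, 1)

      def correct(raw_reading):
          """Apply the fitted linear correction to a new analyzer reading."""
          return slope * raw_reading + intercept

      print(correct(3.9))  # corrected estimate for a new raw reading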

  3. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set—Effect of Pasteurization

    Science.gov (United States)

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-01-01

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified. PMID:26927169

  4. Air temperature estimation with MSG-SEVIRI data: Calibration and validation of the TVX algorithm for the Iberian Peninsula

    DEFF Research Database (Denmark)

    Nieto Solana, Hector; Sandholt, Inge; Aguado, Inmaculada

    2011-01-01

    Air temperature can be estimated from remote sensing by combining information in thermal infrared and optical wavelengths. The empirical TVX algorithm is based on an estimated linear relationship between observed Land Surface Temperature (LST) and a Spectral Vegetation Index (NDVI). Air temperature...... variation, land cover, landscape heterogeneity and topography. Results showed that the new calibrated NDVImax perform well, with a Mean Absolute Error ranging between 2.8 °C and 4 °C. In addition, vegetation-specific NDVImax improve the accuracy compared with a unique NDVImax....
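
    The TVX approach regresses LST on NDVI within a spatial window and evaluates the fitted line at NDVImax, where the canopy is assumed to be near air temperature. A minimal sketch of that step is shown below; the window handling and the NDVImax value are placeholders.

      import numpy as np

      def tvx_air_temperature(ndvi_window, lst_window, ndvi_max=0.65):
          """Estimate air temperature (same units as LST) from one moving-window sample:
          fit LST = a*NDVI + b and evaluate at NDVImax (placeholder value here)."""
          a, b = np.polyfit(np.ravel(ndvi_window), np.ravel(lst_window), 1)
          return a * ndvi_max + b

      # Toy window: LST tends to fall as NDVI rises (values invented)
      ndvi = np.array([0.15, 0.25, 0.35, 0.45, 0.55])
      lst = np.array([318.0, 314.5, 311.2, 308.0, 304.9])  # Kelvin
      print(tvx_air_temperature(ndvi, lst))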

  5. Development and validation of an algorithm for the study of sleep using a biometric shirt in young healthy adults.

    Science.gov (United States)

    Pion-Massicotte, Joëlle; Godbout, Roger; Savard, Pierre; Roy, Jean-François

    2018-02-23

    Portable polysomnography is often too complex and encumbering for recording sleep at home. We recorded sleep using a biometric shirt (electrocardiogram sensors, respiratory inductance plethysmography bands and an accelerometer) in 21 healthy young adults recorded in a sleep laboratory for two consecutive nights, together with standard polysomnography. Polysomnographic recordings were scored using standard methods. An algorithm was developed to classify the biometric shirt recordings into rapid eye movement sleep, non-rapid eye movement sleep and wake. The algorithm was based on breathing rate and heart rate variability, body movement, and included a correction for sleep onset and offset. The overall mean percentage of agreement between the two sets of recordings was 77.4%; when non-rapid eye movement and rapid eye movement sleep epochs were grouped together, it increased to 90.8%. The overall kappa coefficient was 0.53. Five of the seven sleep variables were significantly correlated. The findings of this pilot study indicate that this simple portable system could be used to estimate the general sleep pattern of young healthy adults. © 2018 European Sleep Research Society.
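
    The classification described above combines breathing, heart rate variability and movement features per epoch. The sketch below is a crude rule-based stand-in for such a classifier, plus the epoch-by-epoch percentage agreement used for evaluation; the thresholds are invented and do not reproduce the authors' decision rules.

      # Crude rule-based epoch classifier (thresholds invented for illustration).
      def classify_epoch(movement, breathing_rate_cv, hrv_lf_hf):
          if movement > 0.5:
              return "WAKE"
          # REM approximated here by irregular breathing plus higher LF/HF, with little movement.
          if breathing_rate_cv > 0.15 and hrv_lf_hf > 2.0:
              return "REM"
          return "NREM"

      def percent_agreement(scored_a, scored_b):
          matches = sum(a == b for a, b in zip(scored_a, scored_b))
          return 100.0 * matches / len(scored_a)

      shirt = [classify_epoch(m, b, h) for m, b, h in
               [(0.7, 0.05, 1.0), (0.1, 0.20, 2.5), (0.0, 0.08, 1.2), (0.0, 0.09, 1.1)]]
      psg = ["WAKE", "REM", "NREM", "NREM"]  # hypothetical polysomnography scoring
      print(shirt, percent_agreement(shirt, psg))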

  6. Testing Pneumonia Vaccines in the Elderly: Determining a Case Definition for Pneumococcal Pneumonia in the Absence of a Gold Standard.

    Science.gov (United States)

    Jokinen, Jukka; Snellman, Marja; Palmu, Arto A; Saukkoriipi, Annika; Verlant, Vincent; Pascal, Thierry; Devaster, Jeanne-Marie; Hausdorff, William P; Kilpi, Terhi M

    2017-12-15

    Clinical assessments of vaccines to prevent pneumococcal (Pnc) community-acquired pneumonia (CAP) require sensitive and specific case definitions, but there is no gold standard diagnostic test. To develop a new case definition suitable for vaccine efficacy studies, we applied latent class analysis (LCA) to the results from seven diagnostic tests for Pnc etiology on clinical specimens from 323 elderly radiologically-confirmed pneumonia cases enrolled in The Finnish Community-Acquired Pneumonia Epidemiology study during 2005-2007. Compared to the conventional use of LCA, which is mainly to determine sensitivities and specificities of different tests, we instead used LCA as an appropriate instrument to predict the probability of Pnc etiology for each CAP case based on their test profiles, and utilized the predictions to minimize the sample size that would be needed for a vaccine efficacy trial. When compared to the conventional laboratory criteria of encapsulated Pnc in blood culture or in high-quality sputum culture or urine antigen positivity, our optimized case definition for PncCAP resulted in a trial sample size which was almost 20,000 subjects smaller. We believe that our novel application of LCA detailed here to determine a case definition for PncCAP could also be similarly applied to other diseases without a gold standard. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
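
    In a two-class latent class model, the probability of pneumococcal etiology for a given test profile follows from Bayes' rule using the class prevalence and each test's class-conditional positivity rates. The sketch below computes that posterior for an illustrative profile; all parameter values are invented, and conditional independence of tests given the class is assumed, as in standard LCA.

      # Posterior probability of pneumococcal etiology from a binary test profile under a
      # two-class latent class model (invented parameters; tests assumed conditionally
      # independent given the class, as in standard LCA).

      def lca_posterior(results, sens, spec, prevalence):
          """results: list of 0/1 test outcomes; sens/spec: per-test sensitivity/specificity."""
          p_pos, p_neg = prevalence, 1.0 - prevalence
          for r, se, sp in zip(results, sens, spec):
              p_pos *= se if r else (1.0 - se)
              p_neg *= (1.0 - sp) if r else sp
          return p_pos / (p_pos + p_neg)

      sens = [0.30, 0.55, 0.70, 0.60]   # invented per-test sensitivities
      spec = [0.99, 0.90, 0.85, 0.95]   # invented per-test specificities
      print(lca_posterior([1, 1, 0, 1], sens, spec, prevalence=0.35))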

  7. Development and validation of a simple algorithm for initiation of CPAP in neonates with respiratory distress in Malawi

    OpenAIRE

    Hundalani, Shilpa G; Richards-Kortum, Rebecca; Oden, Maria; Kawaza, Kondwani; Gest, Alfred; Molyneux, Elizabeth

    2015-01-01

    Background Low-cost bubble continuous positive airway pressure (bCPAP) systems have been shown to improve survival in neonates with respiratory distress, in developing countries including Malawi. District hospitals in Malawi implementing CPAP requested simple and reliable guidelines to enable healthcare workers with basic skills and minimal training to determine when treatment with CPAP is necessary. We developed and validated TRY (T: Tone is good, R: Respiratory Distress and Y=Yes) CPAP, a s...

  8. Clinical malaria case definition and malaria attributable fraction in the highlands of western Kenya.

    Science.gov (United States)

    Afrane, Yaw A; Zhou, Guofa; Githeko, Andrew K; Yan, Guiyun

    2014-10-15

    In African highland areas where endemicity of malaria varies greatly according to altitude and topography, parasitaemia accompanied by fever may not be sufficient to define an episode of clinical malaria in endemic areas. To evaluate the effectiveness of malaria interventions, age-specific case definitions of clinical malaria need to be determined. Cases of clinical malaria were quantified through active case surveillance in a highland area of Kenya, and clinical malaria was defined for different age groups. A cohort of over 1,800 participants from all age groups was selected randomly from over 350 houses in 10 villages stratified by topography and followed for two-and-a-half years. Participants were visited every two weeks and screened for clinical malaria, defined as an individual with malaria-related symptoms (fever [axillary temperature ≥37.5°C], chills, severe malaise, headache or vomiting) at the time of examination or 1-2 days prior to the examination, in the presence of a Plasmodium falciparum positive blood smear. Individuals in the same cohort were screened for asymptomatic malaria infection during the low and high malaria transmission seasons. Parasite densities and temperature were used to define clinical malaria by age in the population. The proportion of fevers attributable to malaria was calculated using logistic regression models. Incidence of clinical malaria was highest in the valley-bottom population (5.0 cases per 1,000 population per year) compared to the mid-hill (2.2 cases per 1,000 population per year) and up-hill (1.1 cases per 1,000 population per year) populations. Determination of sensitivity and specificity for optimum cut-off parasite densities showed that in children less than five years of age, 500 parasites per μl of blood could be used to define malaria-attributable fever cases for this age group. In children aged 5-14 years, a parasite density of 1,000 parasites per μl of blood could be used to define the
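
    Choosing an age-specific parasite-density cut-off as described above amounts to scanning candidate densities and selecting the one with the best sensitivity/specificity trade-off (for example, by Youden's index). A small sketch with invented data follows; it is a generic illustration, not the study's analysis code.

      # Pick a parasite-density cut-off by maximizing Youden's J = sensitivity + specificity - 1.
      # Data are invented: (parasite density per uL, 1 if clinical malaria episode else 0).
      cases = [(0, 0), (200, 0), (400, 0), (600, 1), (800, 0), (1500, 1), (3000, 1), (9000, 1)]

      def sens_spec(cutoff, data):
          tp = sum(1 for d, y in data if y == 1 and d >= cutoff)
          fn = sum(1 for d, y in data if y == 1 and d < cutoff)
          tn = sum(1 for d, y in data if y == 0 and d < cutoff)
          fp = sum(1 for d, y in data if y == 0 and d >= cutoff)
          return tp / (tp + fn), tn / (tn + fp)

      candidates = [100, 250, 500, 1000, 2000, 4000]
      best = max(candidates, key=lambda c: sum(sens_spec(c, cases)) - 1)
      print(best, sens_spec(best, cases))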

  9. Development of a case definition for clinical feline herpesvirus infection in cheetahs (Acinonyx jubatus) housed in zoos.

    Science.gov (United States)

    Witte, Carmel L; Lamberski, Nadine; Rideout, Bruce A; Fields, Victoria; Teare, Cyd Shields; Barrie, Michael; Haefele, Holly; Junge, Randall; Murray, Suzan; Hungerford, Laura L

    2013-09-01

    The identification of feline herpesvirus (FHV) infected cheetahs (Acinonyx jubatus) and characterization of shedding episodes is difficult due to nonspecific clinical signs and limitations of diagnostic tests. The goals of this study were to develop a case definition for clinical FHV and describe the distribution of signs. Medical records from six different zoologic institutions were reviewed to identify cheetahs with diagnostic test results confirming FHV. Published literature, expert opinion, and results of a multiple correspondence analysis (MCA) were used to develop a clinical case definition based on 69 episodes in FHV laboratory confirmed (LC) cheetahs. Four groups of signs were identified in the MCA: general ocular signs, serious ocular lesions, respiratory disease, and cutaneous lesions. Ocular disease occurred with respiratory signs alone, with skin lesions alone, and with both respiratory signs and skin lesions. Groups that did not occur together were respiratory signs and skin lesions. The resulting case definition included 1) LC cheetahs; and 2) clinically compatible (CC) cheetahs that exhibited a minimum of 7 day's duration of the clinical sign groupings identified in the MCA or the presence of corneal ulcers or keratitis that occurred alone or in concert with other ocular signs and skin lesions. Exclusion criteria were applied. Application of the case definition to the study population identified an additional 78 clinical episodes, which represented 58 CC cheetahs. In total, 28.8% (93/322) of the population was identified as LC or CC. The distribution of identified clinical signs was similar across LC and CC cheetahs. Corneal ulcers and/or keratitis, and skin lesions were more frequently reported in severe episodes; in mild episodes, there were significantly more cheetahs with ocular-only or respiratory-only disease. Our results provide a better understanding of the clinical presentation of FHV, while presenting a standardized case definition that can

  10. Cervicitis aetiology and case definition: a study in Australian women attending sexually transmitted infection clinics.

    Science.gov (United States)

    Lusk, M Josephine; Garden, Frances L; Rawlinson, William D; Naing, Zin W; Cumming, Robert G; Konecny, Pam

    2016-05-01

    Studies examining cervicitis aetiology and prevalence lack comparability because of varying criteria for cervicitis. We aimed to outline cervicitis associations and suggest a best case definition. A cross-sectional study of 558 women at three sexually transmitted infection clinics in Sydney, Australia, 2006-2010, examined pathogen and behavioural associations of cervicitis using three cervicitis definitions: 'microscopy' (>30 PMNL/hpf (polymorphonuclear leucocytes per high-powered field on cervical Gram stain)), 'cervical discharge' (yellow and/or mucopurulent cervical discharge) or 'micro+cervical discharge' (combined 'microscopy' and 'cervical discharge'). Chlamydia trachomatis (CT), Mycoplasma genitalium (MG), Trichomonas vaginalis (TV) and Neisseria gonorrhoeae (NG) had the strongest associations with the 'micro+cervical discharge' definition: CT adjusted prevalence ratio (APR)=2.13 (95% CI 1.38 to 3.30) p=0.0006, MG APR=2.21 (1.33 to 3.69) p=0.002, TV APR=2.37 (1.44 to 3.90) p=0.0007, and NG PR=4.42 (3.79 to 5.15). The definitions with the best clinical utility and pathogen prediction were 'cervical discharge' and 'micro+cervical discharge'. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  11. Clinical Criteria Versus a Possible Research Case Definition in Chronic Fatigue Syndrome/Myalgic Encephalomyelitis.

    Science.gov (United States)

    Jason, Leonard A; McManimen, Stephanie; Sunnquist, Madison; Newton, Julia L; Strand, Elin Bolle

    2017-01-01

    The Institute of Medicine (IOM) recently developed clinical criteria for what had been known as chronic fatigue syndrome (CFS). Given the broad nature of the clinical IOM criteria, there is a need for a research definition that would select a more homogenous and impaired group of patients than the IOM clinical criteria. At the present time, it is unclear what will serve as the research definition. The current study focused on a research definition which selected homebound individuals who met the four IOM criteria, excluding medical and psychiatric co-morbidities. Our research criteria were compared to those participants meeting the IOM criteria. Those not meeting either of these criteria sets were placed in a separate group defined by 6 or more months of fatigue. Data analyzed were from the DePaul Symptom Questionnaire and the SF-36. Due to unequal sample sizes and variances, Welch's F tests and Games-Howell post hoc tests were conducted. Using a large database of over 1,000 patients from several countries, we found that those meeting a more restrictive research definition were even more impaired and more symptomatic than those meeting criteria for the other two groups. Deciding on a particular research case definition would allow researchers to select more comparable patient samples across settings, and this would represent one of the most significant methodologic advances for this field of study.

  12. Commissioning and Validation of the First Monte Carlo Based Dose Calculation Algorithm Commercial Treatment Planning System in Mexico

    International Nuclear Information System (INIS)

    Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Hernandez-Bojorquez, M.; Galvan de la Cruz, O. O.; Ballesteros-Zebadua, P.

    2010-01-01

    This work presents the beam data commissioning and dose calculation validation of the first Monte Carlo (MC) based treatment planning system (TPS) installed in Mexico. According to the manufacturer's specifications, the beam data commissioning needed for this model includes several in-air and water profiles, depth dose curves, head-scatter factors and output factors (6x6, 12x12, 18x18, 24x24, 42x42, 60x60, 80x80 and 100x100 mm²). Radiographic and radiochromic films, diodes and ionization chambers were used for data acquisition. MC dose calculations in a water phantom were validated against measured data. A gamma index criterion of 2%/2 mm was used to evaluate the accuracy of the MC calculations. MC calculated data show excellent agreement for field sizes from 18x18 to 100x100 mm². Gamma analysis shows that, on average, 95% and 100% of the data pass the gamma index criterion for these fields, respectively. For smaller fields (12x12 and 6x6 mm²) only 92% of the data meet the criterion. Total scatter factors show good agreement except for the smallest fields, which show an error of 4.7%. MC dose calculations are accurate and precise for clinical treatment planning for field sizes down to 18x18 mm². Special care must be taken for smaller fields.
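
    The 2%/2 mm gamma-index criterion quoted above can be made concrete with a minimal one-dimensional implementation. This is a toy sketch on synthetic depth-dose curves with global dose normalization, assuming both curves share the same depth grid; it is not the software used in the study.

        # Minimal 1-D gamma index (2%/2 mm, global normalization) on toy curves.
        import numpy as np

        def gamma_1d(x, dose_ref, dose_eval, dd=0.02, dta=2.0):
            """Return the gamma value at each reference point."""
            dmax = dose_ref.max()
            gammas = np.empty_like(dose_ref)
            for i, (xi, di) in enumerate(zip(x, dose_ref)):
                dist2 = ((x - xi) / dta) ** 2
                dose2 = ((dose_eval - di) / (dd * dmax)) ** 2
                gammas[i] = np.sqrt(np.min(dist2 + dose2))
            return gammas

        x = np.linspace(0.0, 100.0, 501)        # depth in mm
        ref = np.exp(-x / 60.0)                 # toy reference depth-dose curve
        ev = 1.01 * np.exp(-x / 60.0)           # evaluated curve, 1% scaled copy
        g = gamma_1d(x, ref, ev)
        print(f"gamma pass rate: {np.mean(g <= 1.0) * 100:.1f}%")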

  13. Validation of ozone profile retrievals derived from the OMPS LP version 2.5 algorithm against correlative satellite measurements

    Science.gov (United States)

    Kramarova, Natalya A.; Bhartia, Pawan K.; Jaross, Glen; Moy, Leslie; Xu, Philippe; Chen, Zhong; DeLand, Matthew; Froidevaux, Lucien; Livesey, Nathaniel; Degenstein, Douglas; Bourassa, Adam; Walker, Kaley A.; Sheese, Patrick

    2018-05-01

    The Limb Profiler (LP) is part of the Ozone Mapping and Profiler Suite launched on board the Suomi NPP satellite in October 2011. The LP measures solar radiation scattered from the atmospheric limb in the ultraviolet and visible spectral ranges between the surface and 80 km. These measurements of scattered solar radiances allow for the retrieval of ozone profiles from cloud tops up to 55 km. The LP started operational observations in April 2012. In this study we evaluate more than 5.5 years of ozone profile measurements from the OMPS LP processed with the new NASA GSFC version 2.5 retrieval algorithm. We provide a brief description of the key changes that have been implemented in this new algorithm, including a pointing correction, new cloud height detection, an explicit aerosol correction and a reduction in the number of wavelengths used in the retrievals. The OMPS LP ozone retrievals have been compared with independent satellite profile measurements obtained from the Aura Microwave Limb Sounder (MLS), the Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS) and the Odin Optical Spectrograph and InfraRed Imaging System (OSIRIS). We document observed biases and seasonal differences and evaluate the stability of the version 2.5 ozone record over 5.5 years. Our analysis indicates that the mean differences between LP and the correlative measurements are well within the required ±10 % between 18 and 42 km. In the upper stratosphere and lower mesosphere (> 43 km) LP tends to have a negative bias. We find larger biases in the lower stratosphere and upper troposphere, but LP ozone retrievals have significantly improved in version 2.5 compared to version 2 due to the implemented aerosol correction. In the northern high latitudes we observe larger biases between 20 and 32 km due to the remaining thermal sensitivity issue. Our analysis shows that LP ozone retrievals agree well with the correlative satellite observations in characterizing vertical, spatial and temporal

  14. Feasibility of a semi-automated contrast-oriented algorithm for tumor segmentation in retrospectively gated PET images: phantom and clinical validation

    Science.gov (United States)

    Carles, Montserrat; Fechter, Tobias; Nemer, Ursula; Nanko, Norbert; Mix, Michael; Nestle, Ursula; Schaefer, Andrea

    2015-12-01

    PET/CT plays an important role in radiotherapy planning for lung tumors. Several segmentation algorithms have been proposed for PET tumor segmentation. However, most of them do not take into account respiratory motion and are not well validated. The aim of this work was to evaluate a semi-automated contrast-oriented algorithm (COA) for PET tumor segmentation adapted to retrospectively gated (4D) images. The evaluation involved a wide set of 4D-PET/CT acquisitions of dynamic experimental phantoms and lung cancer patients. In addition, segmentation accuracy of 4D-COA was compared with four other state-of-the-art algorithms. In the phantom evaluation, the physical properties of the objects defined the gold standard. In the clinical evaluation, the ground truth was estimated by the STAPLE (Simultaneous Truth and Performance Level Estimation) consensus of three manual PET contours by experts. Algorithm evaluation with phantoms resulted in: (i) no statistically significant diameter differences for different targets and movements (Δφ = 0.3 ± 1.6 mm); (ii) reproducibility for heterogeneous and irregular targets independent of user initial interaction; and (iii) good segmentation agreement for irregular targets compared to manual CT delineation in terms of Dice Similarity Coefficient (DSC = 0.66 ± 0.04), Positive Predictive Value (PPV = 0.81 ± 0.06) and Sensitivity (Sen. = 0.49 ± 0.05). In the clinical evaluation, the segmented volume was in reasonable agreement with the consensus volume (difference in volume (%Vol) = 40 ± 30, DSC = 0.71 ± 0.07 and PPV = 0.90 ± 0.13). High accuracy in target tracking position (ΔME) was obtained for experimental and clinical data (ΔME_exp = 0 ± 3 mm; ΔME_clin = 0.3 ± 1.4 mm). In the comparison with other lung segmentation methods, 4D-COA has shown the highest volume accuracy in both experimental and clinical data. In conclusion, the accuracy in volume
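
    The overlap metrics quoted above (Dice Similarity Coefficient, Positive Predictive Value and Sensitivity) can be computed directly from two binary masks. The sketch below uses small synthetic masks as placeholders for the algorithm and reference segmentations.

        # Overlap metrics between a segmentation mask and a reference mask.
        import numpy as np

        def overlap_metrics(seg, ref):
            seg, ref = seg.astype(bool), ref.astype(bool)
            tp = np.logical_and(seg, ref).sum()
            dice = 2.0 * tp / (seg.sum() + ref.sum())
            ppv = tp / seg.sum()
            sens = tp / ref.sum()
            return dice, ppv, sens

        ref = np.zeros((64, 64), dtype=bool); ref[20:40, 20:40] = True
        seg = np.zeros((64, 64), dtype=bool); seg[24:44, 22:42] = True
        print("DSC=%.2f  PPV=%.2f  Sens=%.2f" % overlap_metrics(seg, ref))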

  15. Validation of the Revised Stressful Life Event Questionnaire Using a Hybrid Model of Genetic Algorithm and Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Rasoul Sali

    2013-01-01

    Objectives. Stressors have a serious role in precipitating mental and somatic disorders and are an interesting subject for many clinical and community-based studies. Hence, their proper and accurate measurement is very important. We revised the stressful life event (SLE) questionnaire by adding weights to the events in order to measure them and determine a cut point. Methods. A total of 4569 adults aged between 18 and 85 years completed the SLE questionnaire and the general health questionnaire-12 (GHQ-12). A hybrid model of a genetic algorithm (GA) and artificial neural networks (ANNs) was applied to extract the relation between the stressful life events (evaluated on a 6-point Likert scale) and the GHQ score as a response variable. In this model, the GA is used to set some parameters of the ANN in order to achieve more accurate results. Results. For each stressful life event, a number is defined as its weight. Among all stressful life events, death of parents, spouse, or siblings is the most important and impactful stressor in the studied population. A sensitivity of 83% and a specificity of 81% were obtained for the cut point 100. Conclusion. The SLE-revised (SLE-R) questionnaire, despite its simplicity, is a high-performance screening tool for investigating the stress level of life events and its management in both community and primary care settings. The SLE-R questionnaire is user-friendly and easy to self-administer. This questionnaire allows individuals to be aware of their own health status.
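
    To make the GA/ANN hybrid idea concrete, the toy sketch below uses a tiny genetic algorithm (truncation selection plus Gaussian mutation) to search two ANN hyperparameters, the hidden-layer size and the L2 penalty, scoring each genome by cross-validated R². The data, parameter ranges and fitness function are invented stand-ins, not the authors' questionnaire model.

        # Toy GA searching ANN hyperparameters; everything here is synthetic.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        X = rng.normal(size=(300, 10))                                  # stand-in for weighted SLE items
        y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=300)   # stand-in GHQ score

        def fitness(genome):
            hidden, log_alpha = genome
            model = MLPRegressor(hidden_layer_sizes=(int(hidden),), alpha=10.0 ** log_alpha,
                                 max_iter=500, random_state=0)
            return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

        pop = [np.array([rng.integers(2, 30), rng.uniform(-4.0, 0.0)]) for _ in range(6)]
        for _ in range(3):                                              # generations
            parents = sorted(pop, key=fitness, reverse=True)[:3]        # truncation selection
            children = [np.clip(p + rng.normal(scale=[2.0, 0.3]), [2, -4], [30, 0]) for p in parents]
            pop = parents + children
        print("best genome (hidden units, log10 alpha):", max(pop, key=fitness))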

  16. Genome-Wide Association Study of a Validated Case Definition of Gulf War Illness in a Population-Representative Sample

    Science.gov (United States)

    2013-09-01

  17. Accuracy of both virtual and printed 3-dimensional models for volumetric measurement of alveolar clefts before grafting with alveolar bone compared with a validated algorithm: a preliminary investigation.

    Science.gov (United States)

    Kasaven, C P; McIntyre, G T; Mossey, P A

    2017-01-01

    Our objective was to assess the accuracy of virtual and printed 3-dimensional models derived from cone-beam computed tomographic (CT) scans to measure the volume of alveolar clefts before bone grafting. Fifteen subjects with unilateral cleft lip and palate had i-CAT cone-beam CT scans recorded at 0.2mm voxel and sectioned transversely into slices 0.2mm thick using i-CAT Vision. Volumes of alveolar clefts were calculated using first a validated algorithm; secondly, commercially-available virtual 3-dimensional model software; and finally 3-dimensional printed models, which were scanned with microCT and analysed using 3-dimensional software. For inter-observer reliability, a two-way mixed model intraclass correlation coefficient (ICC) was used to evaluate the reproducibility of identification of the cranial and caudal limits of the clefts among three observers. We used a Friedman test to assess the significance of differences among the methods, and probabilities of less than 0.05 were accepted as significant. Inter-observer reliability was almost perfect (ICC=0.987). There were no significant differences among the three methods. Virtual and printed 3-dimensional models were as precise as the validated computer algorithm in the calculation of volumes of the alveolar cleft before bone grafting, but virtual 3-dimensional models were the most accurate with the smallest 95% CI and, subject to further investigation, could be a useful adjunct in clinical practice. Copyright © 2016 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
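
    The agreement testing described above (a Friedman test across the three volume-measurement methods applied to the same subjects) can be reproduced in outline with scipy; the volumes below are synthetic placeholders, not the study data.

        # Friedman test across three paired volume-measurement methods.
        import numpy as np
        from scipy.stats import friedmanchisquare

        rng = np.random.default_rng(2)
        true_vol = rng.uniform(500.0, 1500.0, size=15)       # 15 clefts, volumes in mm^3 (synthetic)
        algorithm = true_vol + rng.normal(0.0, 20.0, 15)
        virtual_3d = true_vol + rng.normal(0.0, 25.0, 15)
        printed_3d = true_vol + rng.normal(0.0, 30.0, 15)

        stat, p = friedmanchisquare(algorithm, virtual_3d, printed_3d)
        print(f"Friedman chi2={stat:.2f}, p={p:.3f} (p<0.05 would indicate a method effect)")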

  18. Objective and expert-independent validation of retinal image registration algorithms by a projective imaging distortion model.

    Science.gov (United States)

    Lee, Sangyeol; Reinhardt, Joseph M; Cattin, Philippe C; Abràmoff, Michael D

    2010-08-01

    Fundus camera imaging of the retina is widely used to diagnose and manage ophthalmologic disorders including diabetic retinopathy, glaucoma, and age-related macular degeneration. Retinal images typically have a limited field of view, and multiple images can be joined together using an image registration technique to form a montage with a larger field of view. A variety of methods for retinal image registration have been proposed, but evaluating such methods objectively is difficult due to the lack of a reference standard for the true alignment of the individual images that make up the montage. A method of generating simulated retinal images by modeling the geometric distortions due to the eye geometry and the image acquisition process is described in this paper. We also present a validation process that can be used for any retinal image registration method by tracing through the distortion path and assessing the geometric misalignment in the coordinate system of the reference standard. The proposed method can be used to perform an accuracy evaluation over the whole image, so that distortion in the non-overlapping regions of the montage components can be easily assessed. We demonstrate the technique by generating test image sets with a variety of overlap conditions and compare the accuracy of several retinal image registration models. Copyright 2010 Elsevier B.V. All rights reserved.

  19. GOCI Yonsei aerosol retrieval version 2 products: an improved algorithm and error analysis with uncertainty estimation from 5-year validation over East Asia

    Science.gov (United States)

    Choi, Myungje; Kim, Jhoon; Lee, Jaehwa; Kim, Mijin; Park, Young-Je; Holben, Brent; Eck, Thomas F.; Li, Zhengqiang; Song, Chul H.

    2018-01-01

    The Geostationary Ocean Color Imager (GOCI) Yonsei aerosol retrieval (YAER) version 1 algorithm was developed to retrieve hourly aerosol optical depth at 550 nm (AOD) and other subsidiary aerosol optical properties over East Asia. The GOCI YAER AOD had accuracy comparable to ground-based and other satellite-based observations but still had errors because of uncertainties in surface reflectance and simple cloud masking. In addition, near-real-time (NRT) processing was not possible because a monthly database for each year encompassing the day of retrieval was required for the determination of surface reflectance. This study describes the improved GOCI YAER algorithm version 2 (V2) for NRT processing with improved accuracy based on updates to the cloud-masking and surface-reflectance calculations using a multi-year Rayleigh-corrected reflectance and wind speed database, and inversion channels for surface conditions. The improved GOCI AOD τG is closer to that of the Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS) AOD than was the case for AOD from the YAER V1 algorithm. The V2 τG has a lower median bias and a higher ratio within the MODIS expected error range (0.60 for land and 0.71 for ocean) compared with V1 (0.49 for land and 0.62 for ocean) in a validation test against Aerosol Robotic Network (AERONET) AOD τA from 2011 to 2016. A validation using the Sun-Sky Radiometer Observation Network (SONET) over China shows similar results. The bias of error (τG - τA) lies between -0.1 and 0.1, and it is a function of AERONET AOD and Ångström exponent (AE), scattering angle, normalized difference vegetation index (NDVI), cloud fraction and homogeneity of retrieved AOD, and observation time, month, and year. In addition, the diagnostic and prognostic expected errors (DEE and PEE, respectively) of τG are estimated. The estimated PEE of GOCI V2 AOD is well correlated with the actual error over East Asia, and the GOCI V2 AOD over South
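
    A standard way to summarize this kind of AOD validation is the median bias and the fraction of retrievals falling within an expected-error envelope around the AERONET value. The sketch below uses the MODIS-style envelope ±(0.05 + 0.15·AOD) and synthetic collocations; both are illustrative assumptions rather than the GOCI YAER error model.

        # AOD validation summary against a reference (synthetic collocations).
        import numpy as np

        rng = np.random.default_rng(3)
        aod_aeronet = rng.gamma(shape=2.0, scale=0.15, size=5000)           # "truth"
        aod_goci = aod_aeronet + rng.normal(0.0, 0.05 + 0.1 * aod_aeronet)  # synthetic retrieval

        envelope = 0.05 + 0.15 * aod_aeronet
        within = np.abs(aod_goci - aod_aeronet) <= envelope
        print(f"median bias = {np.median(aod_goci - aod_aeronet):+.3f}")
        print(f"fraction within expected error = {within.mean():.2f}")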

  20. GOCI Yonsei aerosol retrieval version 2 aerosol products: improved algorithm description and error analysis with uncertainty estimation from 5-year validation over East Asia

    Science.gov (United States)

    Choi, M.; Kim, J.; Lee, J.; KIM, M.; Park, Y. J.; Holben, B. N.; Eck, T. F.; Li, Z.; Song, C. H.

    2017-12-01

    The Geostationary Ocean Color Imager (GOCI) Yonsei aerosol retrieval (YAER) version 1 algorithm was developed for retrieving hourly aerosol optical depth at 550 nm (AOD) and other subsidiary aerosol optical properties over East Asia. The GOCI YAER AOD showed accuracy comparable to ground-based and other satellite-based observations, but still had errors due to uncertainties in surface reflectance and simple cloud masking. Also, it was not capable of near-real-time (NRT) processing because it required a monthly database of each year encompassing the day of retrieval for the determination of surface reflectance. This study describes the improvement of the GOCI YAER algorithm to version 2 (V2) for NRT processing with improved accuracy from the modification of cloud masking, surface reflectance determination using a multi-year Rayleigh-corrected reflectance and wind speed database, and inversion channels per surface conditions. The improved GOCI AOD (τG) is closer to those of the Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS) AOD than V1 of the YAER algorithm. The τG shows reduced median bias and an increased ratio within the absolute expected error range of MODIS AOD compared to V1 in the validation results using Aerosol Robotic Network (AERONET) AOD (τA) from 2011 to 2016. The validation using the Sun-Sky Radiometer Observation Network (SONET) over China also shows similar results. The bias of error (τG - τA) is within the -0.1 to 0.1 range as a function of AERONET AOD and AE, scattering angle, NDVI, cloud fraction and homogeneity of retrieved AOD, observation time, month, and year. Also, the diagnostic and prognostic expected errors (DEE and PEE, respectively) of τG are estimated. The estimated multiple PEE of GOCI V2 AOD is well matched with the actual error over East Asia, and the GOCI V2 AOD over Korea shows a higher ratio within PEE compared to China and Japan. Hourly AOD products based on the

  1. Should clinical case definitions of influenza in hospitalized older adults include fever?

    Science.gov (United States)

    Falsey, Ann R; Baran, Andrea; Walsh, Edward E

    2015-08-01

    Influenza is a major cause of morbidity and mortality in elderly persons. Fever is included in all standard definitions of influenza-like illness (ILI), yet older patients may have diminished febrile response to infection. Therefore, we examined the utility of various thresholds to define fever for case definitions of influenza in persons ≥ 65 years of age. Data from two prospective surveillance studies for respiratory viral infection in adults hospitalized with acute cardiopulmonary illnesses with or without fever were examined. The highest temperature reported prior to admission or measured during the first 24 h after admission was recorded. The diagnosis of influenza was made by a combination of viral culture, reverse-transcription polymerase chain reaction, antigen testing, and serology. A total of 2410 subjects (66% ≥ 65 years of age) were enrolled; 281 had influenza (261 influenza A, 19 influenza B, and one mixed influenza A and B). The commonly used definition of ILI (fever ≥ 37·8°C and cough) resulted in 57% sensitivity and 71% specificity in older adults. Receiver operating characteristic curves examining the various temperature thresholds combined with cough and/or sore throat showed the optimal balance between sensitivity and specificity to be 37·9°C (AUC 0·71) and 37·3°C (AUC 0·66), in younger and older persons, respectively. Clinical decision rules using the presence of cough and fever may be helpful when screening for influenza or empiric antiviral treatment when rapid influenza testing is not available; however, lower fever thresholds may be considered for elderly subjects. © 2015 The Authors. Influenza and Other Respiratory Viruses Published by John Wiley & Sons Ltd.
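
    The threshold analysis described above amounts to scanning fever cut-offs combined with cough and comparing sensitivity, specificity and a summary such as Youden's J. The sketch below does this on synthetic admissions data; the prevalence, temperatures and cough rates are invented for illustration.

        # Scan fever cut-offs for an ILI-style case definition (synthetic data).
        import numpy as np

        rng = np.random.default_rng(4)
        n = 2000
        flu = rng.random(n) < 0.12
        temp = np.where(flu, rng.normal(38.0, 0.7, n), rng.normal(37.2, 0.5, n))
        cough = rng.random(n) < np.where(flu, 0.85, 0.55)

        for cutoff in (37.3, 37.8, 37.9, 38.3):
            ili = (temp >= cutoff) & cough
            sens = np.mean(ili[flu])
            spec = np.mean(~ili[~flu])
            print(f"fever>={cutoff:.1f}C + cough: sens={sens:.2f} spec={spec:.2f} J={sens + spec - 1:.2f}")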

  2. Comparison of different criteria for periodontitis case definition in head and neck cancer individuals.

    Science.gov (United States)

    Bueno, Audrey Cristina; Ferreira, Raquel Conceição; Cota, Luis Otávio Miranda; Silva, Guilherme Carvalho; Magalhães, Cláudia Silami; Moreira, Allyson Nogueira

    2015-09-01

    Different periodontitis case definitions have been used in clinical research and epidemiology. The aim of this study was to determine the most accurate criterion for the definition of mild and moderate periodontitis cases to be applied to head and neck cancer individuals before radiotherapy. The frequency of periodontitis in a sample of 84 individuals was determined according to different diagnostic criteria: (1) Lopez et al. (2002); (2) Hujoel et al. (2006); (3) Beck et al. (1990); (4) Machtei et al. (1992); (5) Tonetti and Claffey (2005); and (6) Page and Eke (2007). All diagnoses were based on the clinical parameters obtained by a single calibrated examiner (Kw = 0.71). The individuals were evaluated before radiotherapy. They received oral hygiene instructions, and the cases diagnosed with periodontitis (Page and Eke 2007) were treated. The gold standard was definition 6, and the others were compared by means of agreement, sensitivity (SS), specificity (SP), and the area under the ROC curve. The kappa test evaluated the agreement between definitions. The frequency of periodontitis at baseline was 53.6% (definition 1), 81.0% (definition 2), 40.5% (definition 3), 26.2% (definition 4), 13.1% (definition 5), and 70.2% (definition 6). The kappa test showed moderate agreement between definitions 6 and 2 (59.0%) and definitions 6 and 1 (56.0%). The criterion with the highest SS (0.92) and SP (0.73) was definition 1. Definition 1 was the most accurate criterion for periodontitis case definition to be applied to head and neck cancer individuals.

  3. A multi-sensor burned area algorithm for crop residue burning in northwestern India: validation and sources of error

    Science.gov (United States)

    Liu, T.; Marlier, M. E.; Karambelas, A. N.; Jain, M.; DeFries, R. S.

    2017-12-01

    A leading source of outdoor emissions in northwestern India comes from crop residue burning after the annual monsoon (kharif) and winter (rabi) crop harvests. Agricultural burned area, from which agricultural fire emissions are often derived, can be poorly quantified due to the mismatch between moderate-resolution satellite sensors and the relatively small size and short burn period of the fires. Many previous studies use the Global Fire Emissions Database (GFED), which is based on the Moderate Resolution Imaging Spectroradiometer (MODIS) burned area product MCD64A1, as an outdoor fires emissions dataset. Correction factors with MODIS active fire detections have previously attempted to account for small fires. We present a new burned area classification algorithm that leverages more frequent MODIS observations (500 m x 500 m) with higher spatial resolution Landsat (30 m x 30 m) observations. Our approach is based on two-tailed Normalized Burn Ratio (NBR) thresholds, abbreviated as ModL2T NBR, and results in an estimated 104 ± 55% higher burned area than GFEDv4.1s (version 4, MCD64A1 + small fires correction) in northwestern India during the 2003-2014 winter (October to November) burning seasons. Regional transport of winter fire emissions affect approximately 63 million people downwind. The general increase in burned area (+37% from 2003-2007 to 2008-2014) over the study period also correlates with increased mechanization (+58% in combine harvester usage from 2001-2002 to 2011-2012). Further, we find strong correlation between ModL2T NBR-derived burned area and results of an independent survey (r = 0.68) and previous studies (r = 0.92). Sources of error arise from small median landholding sizes (1-3 ha), heterogeneous spatial distribution of two dominant burning practices (partial and whole field), coarse spatio-temporal satellite resolution, cloud and haze cover, and limited Landsat scene availability. The burned area estimates of this study can be used to build
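
    The core of an NBR-threshold burned-area classification, in the spirit of the approach above, is the band ratio NBR = (NIR - SWIR)/(NIR + SWIR) and a threshold on its pre-fire to post-fire drop (dNBR). The band arrays and the 0.25 threshold below are illustrative placeholders, not the published ModL2T parameters.

        # dNBR-threshold burned-area sketch on synthetic reflectance bands.
        import numpy as np

        def nbr(nir, swir):
            return (nir - swir) / (nir + swir + 1e-9)

        rng = np.random.default_rng(5)
        nir_pre, swir_pre = rng.uniform(0.2, 0.5, (2, 100, 100))
        nir_post, swir_post = nir_pre.copy(), swir_pre.copy()
        nir_post[40:60, 40:60] *= 0.5      # simulate a burn scar: NIR drops,
        swir_post[40:60, 40:60] *= 1.5     # SWIR rises

        dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
        burned = dnbr > 0.25               # placeholder threshold
        print(f"burned fraction: {burned.mean():.3f}")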

  4. SU-F-T-155: Validation of a Commercial Monte Carlo Dose Calculation Algorithm for Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Saini, J; Wong, T [SCCA Proton Therapy Center, Seattle, WA (United States); St James, S; Stewart, R; Bloch, C [University of Washington, Seattle, WA (United States); Traneus, E [Raysearch Laboratories AB, Stockholm. (Sweden)

    2016-06-15

    Purpose: Compare proton pencil beam scanning dose measurements to GATE/GEANT4 (GMC) and RayStation™ Monte Carlo (RMC) simulations. Methods: Proton pencil beam models of the IBA gantry at the Seattle Proton Therapy Center were developed in the GMC code system and a research build of the RMC. For RMC, a preliminary beam model that does not account for upstream halo was used. Depth dose and lateral profiles are compared for the RMC, GMC and a RayStation™ pencil beam dose (RPB) model for three spread out Bragg peaks (SOBPs) in homogenous water phantom. SOBP comparisons were also made among the three models for a phantom with a (i) 2 cm bone and a (ii) 0.5 cm titanium insert. Results: Measurements and GMC estimates of R80 range agree to within 1 mm, and the mean point-to-point dose difference is within 1.2% for all integrated depth dose (IDD) profiles. The dose differences at the peak are 1 to 2%. All of the simulated spot sigmas are within 0.15 mm of the measured values. For the three SOBPs considered, the maximum R80 deviation from measurement for GMC was −0.35 mm, RMC 0.5 mm, and RPB −0.1 mm. The minimum gamma pass using the 3%/3mm criterion for all the profiles was 94%. The dose comparison for heterogeneous inserts in low dose gradient regions showed dose differences greater than 10% at the distal edge of interface between RPB and GMC. The RMC showed improvement and agreed with GMC to within 7%. Conclusion: The RPB dosimetry show clinically significant differences (> 10%) from GMC and RMC estimates. The RMC algorithm is superior to the RPB dosimetry in heterogeneous media. We suspect modelling of the beam’s halo may be responsible for a portion of the remaining discrepancy and that RayStation will reduce this discrepancy as they finalize the release. Erik Traneus is employed as a Research Scientist at RaySearch Laboratories. The research build of the RayStation TPS used in the study was made available to the SCCA free of charge. RaySearch did not provide
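
    The R80 range compared in this abstract is commonly extracted by interpolating the depth at which the distal edge of the depth-dose curve falls to 80 % of the peak dose. The sketch below does this on a synthetic Bragg-like curve and is not the centre's analysis code.

        # Distal R80 extraction from a synthetic Bragg-like depth-dose curve.
        import numpy as np

        depth = np.linspace(0.0, 200.0, 2001)     # depth in water, mm
        dose = np.exp(-((depth - 150.0) / 8.0) ** 2) + 0.3 * (depth < 150.0) * (1.0 - depth / 400.0)

        peak = np.argmax(dose)
        distal_dose = dose[peak:][::-1]           # reversed so values increase towards the peak
        distal_depth = depth[peak:][::-1]
        r80 = np.interp(0.8 * dose.max(), distal_dose, distal_depth)
        print(f"R80 = {r80:.1f} mm")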

  5. Clinical and epidemiological features of typhoid fever in Pemba, Zanzibar: assessment of the performance of the WHO case definitions.

    Science.gov (United States)

    Thriemer, Kamala; Ley, Benedikt; Ley, Benedikt B; Ame, Shaali S; Deen, Jaqueline L; Pak, Gi Deok; Chang, Na Yoon; Hashim, Ramadhan; Schmied, Wolfgang Hellmut; Busch, Clara Jana-Lui; Nixon, Shanette; Morrissey, Anne; Puri, Mahesh K; Ochiai, R Leon; Wierzba, Thomas; Clemens, John D; Ali, Mohammad; Jiddawi, Mohammad S; von Seidlein, Lorenz; Ali, Said M

    2012-01-01

    The gold standard for diagnosis of typhoid fever is blood culture (BC). Because blood culture is often not available in impoverished settings, it would be helpful to have alternative diagnostic approaches. We therefore investigated the usefulness of clinical signs, the WHO case definition and the Widal test for the diagnosis of typhoid fever. Participants with a body temperature ≥37.5°C or a history of fever were enrolled over 17 to 22 months in three hospitals on Pemba Island, Tanzania. Clinical signs and symptoms of participants upon presentation, as well as blood and serum for BC and Widal testing, were collected. Clinical signs and symptoms of typhoid fever cases were compared to those of other cases of invasive bacterial disease and of BC-negative participants. The relationship of typhoid fever cases with rainfall, temperature, and religious festivals was explored. The performance of the WHO case definitions for suspected and probable typhoid fever and of a local cut-off titre for the Widal test was assessed. 79 of 2209 participants had invasive bacterial disease. 46 isolates were identified as typhoid fever. Apart from a longer duration of fever prior to admission, clinical signs and symptoms were not significantly different among patients with typhoid fever than among other febrile patients. We did not detect any significant seasonal patterns nor correlation with rainfall or festivals. The sensitivity and specificity of the WHO case definition were 82.6% and 41.3% for suspected typhoid fever, and 36.3% and 99.7% for probable typhoid fever, respectively. The sensitivity and specificity of the Widal test were 47.8% and 99.4%, respectively, for both O-agglutinin and H-agglutinin at a cut-off titre of 1:80. Typhoid fever prevalence rates on Pemba are high and its clinical signs and symptoms are non-specific. The sensitivity of the Widal test is low, and the WHO case definition performed better than the Widal test.

  6. From patient care to research: a validation study examining the factors contributing to data quality in a primary care electronic medical record database.

    Science.gov (United States)

    Coleman, Nathan; Halas, Gayle; Peeler, William; Casaclang, Natalie; Williamson, Tyler; Katz, Alan

    2015-02-05

    Electronic Medical Records (EMRs) are increasingly used in the provision of primary care and have been compiled into databases which can be utilized for surveillance, research and informing practice. The primary purpose of these records is for the provision of individual patient care; validation and examination of underlying limitations is crucial for use for research and data quality improvement. This study examines and describes the validity of chronic disease case definition algorithms and factors affecting data quality in a primary care EMR database. A retrospective chart audit of an age stratified random sample was used to validate and examine diagnostic algorithms applied to EMR data from the Manitoba Primary Care Research Network (MaPCReN), part of the Canadian Primary Care Sentinel Surveillance Network (CPCSSN). The presence of diabetes, hypertension, depression, osteoarthritis and chronic obstructive pulmonary disease (COPD) was determined by review of the medical record and compared to algorithm identified cases to identify discrepancies and describe the underlying contributing factors. The algorithm for diabetes had high sensitivity, specificity and positive predictive value (PPV) with all scores being over 90%. Specificities of the algorithms were greater than 90% for all conditions except for hypertension at 79.2%. The largest deficits in algorithm performance included poor PPV for COPD at 36.7% and limited sensitivity for COPD, depression and osteoarthritis at 72.0%, 73.3% and 63.2% respectively. Main sources of discrepancy included missing coding, alternative coding, inappropriate diagnosis detection based on medications used for alternate indications, inappropriate exclusion due to comorbidity and loss of data. Comparison to medical chart review shows that at MaPCReN the CPCSSN case finding algorithms are valid with a few limitations. This study provides the basis for the validated data to be utilized for research and informs users of its

  7. Identifying Psoriasis and Psoriatic Arthritis Patients in Retrospective Databases When Diagnosis Codes Are Not Available: A Validation Study Comparing Medication/Prescriber Visit-Based Algorithms with Diagnosis Codes.

    Science.gov (United States)

    Dobson-Belaire, Wendy; Goodfield, Jason; Borrelli, Richard; Liu, Fei Fei; Khan, Zeba M

    2018-01-01

    Using diagnosis code-based algorithms is the primary method of identifying patient cohorts for retrospective studies; nevertheless, many databases lack reliable diagnosis code information. To develop precise algorithms based on medication claims/prescriber visits (MCs/PVs) to identify psoriasis (PsO) patients and psoriatic patients with arthritic conditions (PsO-AC), a proxy for psoriatic arthritis, in Canadian databases lacking diagnosis codes. Algorithms were developed using medications with narrow indication profiles in combination with prescriber specialty to define PsO and PsO-AC. For a 3-year study period from July 1, 2009, algorithms were validated using the PharMetrics Plus database, which contains both adjudicated medication claims and diagnosis codes. Positive predictive value (PPV), negative predictive value (NPV), sensitivity, and specificity of the developed algorithms were assessed using diagnosis code as the reference standard. Chosen algorithms were then applied to Canadian drug databases to profile the algorithm-identified PsO and PsO-AC cohorts. In the selected database, 183,328 patients were identified for validation. The highest PPVs for PsO (85%) and PsO-AC (65%) occurred when a predictive algorithm of two or more MCs/PVs was compared with the reference standard of one or more diagnosis codes. NPV and specificity were high (99%-100%), whereas sensitivity was low (≤30%). Reducing the number of MCs/PVs or increasing diagnosis claims decreased the algorithms' PPVs. We have developed an MC/PV-based algorithm to identify PsO patients with a high degree of accuracy, but accuracy for PsO-AC requires further investigation. Such methods allow researchers to conduct retrospective studies in databases in which diagnosis codes are absent. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  8. Concordance between European and US case definitions of healthcare-associated infections

    Science.gov (United States)

    2012-01-01

    Background Surveillance of healthcare-associated infections (HAI) is a valuable measure to decrease infection rates. Across Europe, inter-country comparisons of HAI rates seem limited because some countries use US definitions from the US Centers for Disease Control and Prevention (CDC/NHSN) while other countries use European definitions from the Hospitals in Europe Link for Infection Control through Surveillance (HELICS/IPSE) project. In this study, we analyzed the concordance between US and European definitions of HAI. Methods An international working group of experts from seven European countries was set up to identify differences between US and European definitions and then conduct surveillance using both sets of definitions during a three-month period (March 1st -May 31st, 2010). Concordance between case definitions was estimated with Cohen’s kappa statistic (κ). Results Differences in HAI definitions were found for bloodstream infection (BSI), pneumonia (PN), urinary tract infection (UTI) and the two key terms “intensive care unit (ICU)-acquired infection” and “mechanical ventilation”. Concordance was analyzed for these definitions and key terms with the exception of UTI. Surveillance was performed in 47 ICUs and 6,506 patients were assessed. One hundred and eighty PN and 123 BSI cases were identified. When all PN cases were considered, concordance for PN was κ = 0.99 [CI 95%: 0.98-1.00]. When PN cases were divided into subgroups, concordance was κ = 0.90 (CI 95%: 0.86-0.94) for clinically defined PN and κ = 0.72 (CI 95%: 0.63-0.82) for microbiologically defined PN. Concordance for BSI was κ = 0.73 [CI 95%: 0.66-0.80]. However, BSI cases secondary to another infection site (42% of all BSI cases) are excluded when using US definitions and concordance for BSI was κ = 1.00 when only primary BSI cases, i.e. Europe-defined BSI with ”catheter” or “unknown” origin and US-defined laboratory-confirmed BSI (LCBI), were
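
    The concordance statistic used in this study, Cohen's kappa, can be computed directly from paired case labels. The sketch below applies scikit-learn's implementation to synthetic US- and EU-defined labels for the same patients; the prevalence and disagreement rate are invented.

        # Cohen's kappa between two case definitions applied to the same patients.
        import numpy as np
        from sklearn.metrics import cohen_kappa_score

        rng = np.random.default_rng(6)
        us_case = rng.random(6506) < 0.03          # US-defined HAI labels (synthetic)
        flip = rng.random(6506) < 0.005            # small rate of disagreement
        eu_case = np.where(flip, ~us_case, us_case)

        print(f"kappa = {cohen_kappa_score(us_case, eu_case):.2f}")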

  9. Concordance between European and US case definitions of healthcare-associated infections

    Directory of Open Access Journals (Sweden)

    Hansen Sonja

    2012-08-01

    Background Surveillance of healthcare-associated infections (HAI) is a valuable measure to decrease infection rates. Across Europe, inter-country comparisons of HAI rates seem limited because some countries use US definitions from the US Centers for Disease Control and Prevention (CDC/NHSN) while other countries use European definitions from the Hospitals in Europe Link for Infection Control through Surveillance (HELICS/IPSE) project. In this study, we analyzed the concordance between US and European definitions of HAI. Methods An international working group of experts from seven European countries was set up to identify differences between US and European definitions and then conduct surveillance using both sets of definitions during a three-month period (March 1st - May 31st, 2010). Concordance between case definitions was estimated with Cohen’s kappa statistic (κ). Results Differences in HAI definitions were found for bloodstream infection (BSI), pneumonia (PN), urinary tract infection (UTI) and the two key terms “intensive care unit (ICU)-acquired infection” and “mechanical ventilation”. Concordance was analyzed for these definitions and key terms with the exception of UTI. Surveillance was performed in 47 ICUs and 6,506 patients were assessed. One hundred and eighty PN and 123 BSI cases were identified. When all PN cases were considered, concordance for PN was κ = 0.99 [CI 95%: 0.98-1.00]. When PN cases were divided into subgroups, concordance was κ = 0.90 (CI 95%: 0.86-0.94) for clinically defined PN and κ = 0.72 (CI 95%: 0.63-0.82) for microbiologically defined PN. Concordance for BSI was κ = 0.73 [CI 95%: 0.66-0.80]. However, BSI cases secondary to another infection site (42% of all BSI cases) are excluded when using US definitions and concordance for BSI was κ = 1.00 when only primary BSI cases, i.e. Europe-defined BSI with “catheter” or “unknown” origin and US-defined laboratory-confirmed BSI

  10. Validation of the Gate simulation platform in single photon emission computed tomography and application to the development of a complete 3-dimensional reconstruction algorithm

    International Nuclear Information System (INIS)

    Lazaro, D.

    2003-10-01

    Monte Carlo simulations are currently considered in nuclear medical imaging as a powerful tool to design and optimize detection systems, and also to assess reconstruction algorithms and correction methods for degrading physical effects. Among the many simulators available, none is considered a standard in nuclear medical imaging: this fact motivated the development of a new generic Monte Carlo simulation platform (GATE), based on GEANT4 and dedicated to SPECT/PET (single photon emission computed tomography / positron emission tomography) applications. During this thesis we participated in the development of the GATE platform within an international collaboration. GATE was validated in SPECT by modeling two gamma cameras with different geometries, one dedicated to small-animal imaging and the other used in a clinical context (Philips AXIS), and by comparing the results obtained with GATE simulations against experimental data. The simulation results accurately reproduce the measured performances of both gamma cameras. The GATE platform was then used to develop a new 3-dimensional reconstruction method, F3DMC (fully 3-dimensional Monte Carlo), which consists in computing with Monte Carlo simulation the transition matrix used in an iterative reconstruction algorithm (in this case, ML-EM), including within the transition matrix the main physical effects that degrade the image formation process. The results obtained with the F3DMC method were compared to the results obtained with three other more conventional methods (FBP, MLEM, MLEMC) for different phantoms. The results of this study show that F3DMC improves reconstruction efficiency, spatial resolution and the signal-to-noise ratio with satisfactory quantification of the images. These results should be confirmed by clinical experiments and open the door to a unified reconstruction method that could be applied in SPECT and also in PET. (author)
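
    The reconstruction step mentioned above, ML-EM with a precomputed transition (system) matrix, has a compact multiplicative update. In the sketch below the matrix is a small random stand-in for the Monte Carlo-estimated one, so it only illustrates the update itself, not the F3DMC implementation.

        # Minimal ML-EM iteration with a precomputed system matrix A.
        import numpy as np

        rng = np.random.default_rng(7)
        n_proj, n_vox = 120, 64
        A = rng.random((n_proj, n_vox))           # stand-in for the MC transition matrix
        x_true = rng.random(n_vox)
        y = rng.poisson(A @ x_true * 50.0) / 50.0 # noisy projection data

        x = np.ones(n_vox)                        # flat initial estimate
        sens = A.sum(axis=0)                      # sensitivity image
        for _ in range(50):
            ratio = y / np.maximum(A @ x, 1e-12)
            x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))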

  11. Can the same edge-detection algorithm be applied to on-line and off-line analysis systems? Validation of a new cinefilm-based geometric coronary measurement software

    NARCIS (Netherlands)

    J. Haase (Jürgen); C. di Mario (Carlo); P.W.J.C. Serruys (Patrick); M.M.J.M. van der Linden (Mark); D.P. Foley (David); W.J. van der Giessen (Wim)

    1993-01-01

    textabstractIn the Cardiovascular Measurement System (CMS) the edge-detection algorithm, which was primarily designed for the Philips digital cardiac imaging system (DCI), is applied to cinefilms. Comparative validation of CMS and DCI was performed in vitro and in vivo with intracoronary insertion

  12. A new automatic algorithm for quantification of myocardial infarction imaged by late gadolinium enhancement cardiovascular magnetic resonance: experimental validation and comparison to expert delineations in multi-center, multi-vendor patient data.

    Science.gov (United States)

    Engblom, Henrik; Tufvesson, Jane; Jablonowski, Robert; Carlsson, Marcus; Aletras, Anthony H; Hoffmann, Pavel; Jacquier, Alexis; Kober, Frank; Metzler, Bernhard; Erlinge, David; Atar, Dan; Arheden, Håkan; Heiberg, Einar

    2016-05-04

    Late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) using magnitude inversion recovery (IR) or phase sensitive inversion recovery (PSIR) has become clinical standard for assessment of myocardial infarction (MI). However, there is no clinical standard for quantification of MI even though multiple methods have been proposed. Simple thresholds have yielded varying results and advanced algorithms have only been validated in single center studies. Therefore, the aim of this study was to develop an automatic algorithm for MI quantification in IR and PSIR LGE images and to validate the new algorithm experimentally and compare it to expert delineations in multi-center, multi-vendor patient data. The new automatic algorithm, EWA (Expectation Maximization, weighted intensity, a priori information), was implemented using an intensity threshold by Expectation Maximization (EM) and a weighted summation to account for partial volume effects. The EWA algorithm was validated in-vivo against triphenyltetrazolium-chloride (TTC) staining (n = 7 pigs with paired IR and PSIR images) and against ex-vivo high resolution T1-weighted images (n = 23 IR and n = 13 PSIR images). The EWA algorithm was also compared to expert delineation in 124 patients from multi-center, multi-vendor clinical trials 2-6 days following first time ST-elevation myocardial infarction (STEMI) treated with percutaneous coronary intervention (PCI) (n = 124 IR and n = 49 PSIR images). Infarct size by the EWA algorithm in vivo in pigs showed a bias to ex-vivo TTC of -1 ± 4%LVM (R = 0.84) in IR and -2 ± 3%LVM (R = 0.92) in PSIR images and a bias to ex-vivo T1-weighted images of 0 ± 4%LVM (R = 0.94) in IR and 0 ± 5%LVM (R = 0.79) in PSIR images. In multi-center patient studies, infarct size by the EWA algorithm showed a bias to expert delineation of -2 ± 6 %LVM (R = 0.81) in IR images (n = 124) and 0 ± 5%LVM (R = 0.89) in
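
    One way to realize the Expectation Maximization thresholding step named in the EWA acronym is to fit a two-component Gaussian mixture to the pixel intensities and take the decision boundary between remote and hyperenhanced myocardium as the infarct threshold. The intensities below are synthetic, and this sketch omits the weighting and a priori terms of the published algorithm.

        # EM-style intensity threshold via a two-component Gaussian mixture.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(8)
        remote = rng.normal(300.0, 40.0, 4000)     # remote myocardium intensities (a.u.)
        infarct = rng.normal(700.0, 80.0, 1000)    # hyperenhanced infarct intensities (a.u.)
        pixels = np.concatenate([remote, infarct]).reshape(-1, 1)

        gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
        lo, hi = np.sort(gmm.means_.ravel())
        grid = np.linspace(lo, hi, 1000).reshape(-1, 1)
        labels = gmm.predict(grid)
        threshold = grid[np.argmax(labels != labels[0])][0]   # where the predicted class flips
        print(f"EM-derived threshold ~ {threshold:.0f} a.u.")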

  13. Validation of asthma recording in electronic health records: protocol for a systematic review.

    Science.gov (United States)

    Nissen, Francis; Quint, Jennifer K; Wilkinson, Samantha; Mullerova, Hana; Smeeth, Liam; Douglas, Ian J

    2017-05-29

    Asthma is a common, heterogeneous disease with significant morbidity and mortality worldwide. It can be difficult to define in epidemiological studies using electronic health records, as the diagnosis is based on non-specific respiratory symptoms and spirometry, neither of which is routinely registered. Electronic health records can nonetheless be valuable for studying the epidemiology, management, healthcare use and control of asthma. For health databases to be useful sources of information, asthma diagnoses should ideally be validated. The primary objectives are to provide an overview of the methods used to validate asthma diagnoses in electronic health records and to summarise the results of the validation studies. EMBASE and MEDLINE will be systematically searched using appropriate search terms. The searches will cover all studies in these databases up to October 2016 with no start date and will yield studies that have validated algorithms or codes for the diagnosis of asthma in electronic health records. At least one test validation measure (sensitivity, specificity, positive predictive value, negative predictive value or other) is necessary for inclusion. In addition, we require the validated algorithms to be compared with an external gold standard, such as a manual review, a questionnaire or an independent second database. We will summarise key data including author, year of publication, country, time period, date, data source, population, case characteristics, clinical events, algorithms, gold standard and validation statistics in a uniform table. This study is a synthesis of previously published studies and, therefore, no ethical approval is required. The results will be submitted to a peer-reviewed journal for publication. Results from this systematic review can be used to study outcome research on asthma and can be used to identify case definitions for asthma. CRD42016041798. © Article author(s) (or their employer(s) unless otherwise stated in the text of the

  14. Design and Validation of a Control Algorithm for a SAE J2954-Compliant Wireless Charger to Guarantee the Operational Electrical Constraints

    Directory of Open Access Journals (Sweden)

    José Manuel González-González

    2018-03-01

    Wireless power transfer is foreseen as a suitable technology to provide cable-free charging of electric vehicles. This technology is mainly supported by two coupled coils, whose mutual inductance is sensitive to their relative positions. Variations in this coefficient greatly impact the electrical magnitudes of the wireless charger. The aim of this paper is the design and validation of a control algorithm for a Society of Automotive Engineers (SAE) J2954-compliant wireless charger to guarantee certain operational and electrical constraints. These constraints are designed to prevent some components from being damaged by excessive voltage or current. This paper also presents the details of the design and implementation of the bidirectional charger topology in which the proposed controller is incorporated. The controller is installed on both the primary and the secondary side, given that wireless communication with the other side is necessary. The controller's input data help it decide the phase shift to apply in the DC/AC converter. The experimental results demonstrate how the system regulates the output voltage of the DC/AC converter so that certain electrical magnitudes do not exceed predefined thresholds. The regulation, which has been tested when coil misalignments occur, is proven to be effective.
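
    A constraint-enforcing loop of the kind described above can be caricatured as a proportional controller that trims the inverter phase shift whenever the output voltage would exceed a predefined threshold. The plant model, limit and gain below are invented placeholders, not the SAE J2954 charger in the paper.

        # Toy proportional back-off of phase shift to respect a voltage limit.
        V_LIMIT = 450.0     # V, predefined threshold (placeholder)
        KP = 0.002          # proportional gain (placeholder)

        def plant_voltage(phase_shift, coupling):
            # Very rough static plant: output grows with phase shift and coupling.
            return 900.0 * coupling * phase_shift

        phase = 0.8
        for step, coupling in enumerate([0.5, 0.6, 0.7, 0.8]):   # coils drifting closer
            v = plant_voltage(phase, coupling)
            if v > V_LIMIT:
                phase = max(0.0, phase - KP * (v - V_LIMIT))     # back off the phase shift
            print(f"step {step}: coupling={coupling:.2f} phase={phase:.3f} "
                  f"V={plant_voltage(phase, coupling):.0f}")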

  15. Determination of the optimal case definition for the diagnosis of end-stage renal disease from administrative claims data in Manitoba, Canada.

    Science.gov (United States)

    Komenda, Paul; Yu, Nancy; Leung, Stella; Bernstein, Keevin; Blanchard, James; Sood, Manish; Rigatto, Claudio; Tangri, Navdeep

    2015-01-01

    End-stage renal disease (ESRD) is a major public health problem with increasing prevalence and costs. An understanding of the long-term trends in dialysis rates and outcomes can help inform health policy. We determined the optimal case definition for the diagnosis of ESRD using administrative claims data in the province of Manitoba over a 7-year period. We determined the sensitivity, specificity, predictive value and overall accuracy of 4 administrative case definitions for the diagnosis of ESRD requiring chronic dialysis over different time horizons from Jan. 1, 2004, to Mar. 31, 2011. The Manitoba Renal Program Database served as the gold standard for confirming dialysis status. During the study period, 2562 patients were registered as recipients of chronic dialysis in the Manitoba Renal Program Database. Over a 1-year period (2010), the optimal case definition was any 2 claims for outpatient dialysis, and it was 74.6% sensitive (95% confidence interval [CI] 72.3%-76.9%) and 94.4% specific (95% CI 93.6%-95.2%) for the diagnosis of ESRD. In contrast, a case definition of at least 2 claims for dialysis treatment more than 90 days apart was 64.8% sensitive (95% CI 62.2%-67.3%) and 97.1% specific (95% CI 96.5%-97.7%). Extending the period to 5 years greatly improved sensitivity for all case definitions, with minimal change to specificity; for example, for the optimal case definition of any 2 claims for dialysis treatment, sensitivity increased to 86.0% (95% CI 84.7%-87.4%) at 5 years. Accurate case definitions for the diagnosis of ESRD requiring dialysis can be derived from administrative claims data. The optimal definition required any 2 claims for outpatient dialysis. Extending the claims period to 5 years greatly improved sensitivity with minimal effects on specificity for all case definitions.
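
    Applying a claims-based case definition such as "any 2 claims for outpatient dialysis" reduces to counting qualifying claims per patient. The pandas sketch below does this on a toy claims table; the column names and service labels are hypothetical placeholders, not the Manitoba data model.

        # Apply an "any 2 outpatient dialysis claims" case definition to a toy table.
        import pandas as pd

        claims = pd.DataFrame({
            "patient_id": [1, 1, 2, 3, 3, 3],
            "service":    ["dialysis", "dialysis", "dialysis", "office", "dialysis", "dialysis"],
            "date": pd.to_datetime(["2010-01-05", "2010-02-10", "2010-03-01",
                                    "2010-01-20", "2010-04-02", "2010-08-15"]),
        })

        dialysis = claims[claims["service"] == "dialysis"]
        counts = dialysis.groupby("patient_id").size()
        esrd_cases = counts[counts >= 2].index.tolist()
        print("patients meeting the 2-claim definition:", esrd_cases)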

  16. Description of children identified as suffering from MAM in Bangladesh: Varying results based on case definitions

    International Nuclear Information System (INIS)

    Waid, Jillian

    2014-01-01

    Full text: Background: There is a wide discrepancy between the proportion of children classified as acutely malnourished when MUAC criteria are used compared to weight-for-height. This has greatly complicated setting targets for the coverage of SAM and MAM programs in Bangladesh. This difference is much larger for children identified with MAM than for those with SAM, largely because identification as MAM can overlap both with SAM and with children not identified as acutely malnourished. Objective: To review existing data sets in order to determine the relationship between MUAC and other anthropometric measures, helping to provide a better understanding of the implications of different admission criteria for therapeutic and supplementary feeding programs. Methodology: This study uses data collected through national nutritional surveillance projects over multiple seasons in Bangladesh. For the years 1990 to 2006, sub-samples of data from the Nutritional Surveillance Project were pulled from areas of the country that remained constant over a set period. Data from 2010 to 2012 were pulled from the Food Security and Nutrition Surveillance Project. Case definition: Cases of moderate acute malnutrition were identified using MUAC-for-age z-scores (-3 ≤ z-score < -2), MUAC cut-offs (115 mm ≤ MUAC < 125 mm), and weight-for-height z-scores (-3 ≤ z-score < -2). Results: In all years more than 50% of all children identified as moderately malnourished were classified as such by only one measure (1990 selected sub-districts: 52%; 2012 national sample: 69%). In 1990 a higher proportion of children were categorized as moderately malnourished based on MUAC-for-age z-scores than by weight-for-height z-scores, but since 2000 the opposite has been true. This change is closely tied to the increasing height of children sampled, due to the declining rates of stunting in the country. After controlling for age and weight-for-height z-scores, an increase in height of one cm was associated with an increase

  17. An evaluation of modified case definitions for the detection of dengue hemorrhagic fever. Puerto Rico Association of Epidemiologists.

    Science.gov (United States)

    Rigau-Pérez, J G; Bonilla, G L

    1999-12-01

    The case definition for dengue hemorrhagic fever (DHF) requires fever, thrombocytopenia, haemorrhagic manifestations, and plasma leakage evidenced by hemoconcentration ≥20%, pleural or abdominal effusions, hypoproteinemia or hypoalbuminemia. We evaluated the specificity and yield of modified DHF case definitions and the recently proposed World Health Organization criteria for a provisional diagnosis of DHF, using a database of laboratory-positive and laboratory-negative reports of hospitalizations for suspected dengue in Puerto Rico, 1994 to 1996. By design, all modifications had 100% sensitivity. More liberal criteria for plasma leakage were examined: 1) adding as evidence a single hematocrit ≥50% (specificity 97.4%); 2) accepting hemoconcentration ≥10% (specificity 90.1%); and 3) accepting either hematocrit ≥50% or hemoconcentration ≥10% (specificity 88.8%). The new DHF cases identified by these definitions (and percent laboratory positive) were 25 (100.0%), 95 (90.5%), and 107 (91.6%), respectively. In contrast, the provisional diagnosis of DHF (fever and hemorrhage, plus one or more of thrombocytopenia, hemoconcentration ≥20%, or at least a rising hematocrit [redefined quantitatively as a 5% or greater relative change]) showed a specificity of 66.8%, and identified 318 new DHF cases, of which 282 (88.7%) were laboratory-positive. Very small changes in the criteria may result in a large number of new cases. The modification that accepted either hematocrit ≥50% or hemoconcentration ≥10% had acceptable specificity, while doubling the detection of DHF-compatible, laboratory-positive severe cases, but the "provisional diagnosis" showed even lower specificity and may produce inflated DHF incidence figures. Modified case definitions should be prospectively evaluated with patients in a health-care facility before they are recommended for widespread use.

  18. PathfinderTURB: an automatic boundary layer algorithm. Development, validation and application to study the impact on in situ measurements at the Jungfraujoch

    Directory of Open Access Journals (Sweden)

    Y. Poltera

    2017-08-01

    We present the development of the PathfinderTURB algorithm for the analysis of ceilometer backscatter data and the real-time detection of the vertical structure of the planetary boundary layer. Two aerosol layer heights are retrieved by PathfinderTURB: the convective boundary layer (CBL) and the continuous aerosol layer (CAL). PathfinderTURB combines the strengths of gradient- and variance-based methods and addresses the layer attribution problem by adopting a geodesic approach. The algorithm has been applied to 1 year of data measured by two ceilometers of type CHM15k, one operated at the Aerological Observatory of Payerne (491 m a.s.l.) on the Swiss plateau and one at the Kleine Scheidegg (2061 m a.s.l.) in the Swiss Alps. The retrieval of the CBL has been validated at Payerne using two reference methods: (1) manual detections of the CBL height performed by human experts using the ceilometer backscatter data; (2) values of CBL heights calculated using the Richardson's method from co-located radio sounding data. We found average biases as small as 27 m (53 m) with respect to reference method 1 (method 2). Based on the excellent agreement between the two reference methods, PathfinderTURB has been applied to the ceilometer data at the mountainous site of the Kleine Scheidegg for the period September 2014 to November 2015. At this site, the CHM15k is operated in a tilted configuration at 71° zenith angle to probe the atmosphere next to the Sphinx Observatory (3580 m a.s.l.) on the Jungfraujoch (JFJ). The analysis of the retrieved layers led to the following results: the CAL reaches the JFJ 41 % of the time in summer and 21 % of the time in winter for a total of 97 days during the two seasons. The season-averaged daily cycles show that the CBL height reaches the JFJ only during short periods (4 % of the time), but on 20 individual days in summer and never during winter. During summer in particular, the CBL and the CAL modify the

  19. PathfinderTURB: an automatic boundary layer algorithm. Development, validation and application to study the impact on in situ measurements at the Jungfraujoch

    Science.gov (United States)

    Poltera, Yann; Martucci, Giovanni; Collaud Coen, Martine; Hervo, Maxime; Emmenegger, Lukas; Henne, Stephan; Brunner, Dominik; Haefele, Alexander

    2017-08-01

    We present the development of the PathfinderTURB algorithm for the analysis of ceilometer backscatter data and the real-time detection of the vertical structure of the planetary boundary layer. Two aerosol layer heights are retrieved by PathfinderTURB: the convective boundary layer (CBL) and the continuous aerosol layer (CAL). PathfinderTURB combines the strengths of gradient- and variance-based methods and addresses the layer attribution problem by adopting a geodesic approach. The algorithm has been applied to 1 year of data measured by two ceilometers of type CHM15k, one operated at the Aerological Observatory of Payerne (491 m a.s.l.) on the Swiss plateau and one at the Kleine Scheidegg (2061 m a.s.l.) in the Swiss Alps. The retrieval of the CBL has been validated at Payerne using two reference methods: (1) manual detections of the CBL height performed by human experts using the ceilometer backscatter data; (2) values of CBL heights calculated using the Richardson's method from co-located radio sounding data. We found average biases as small as 27 m (53 m) with respect to reference method 1 (method 2). Based on the excellent agreement between the two reference methods, PathfinderTURB has been applied to the ceilometer data at the mountainous site of the Kleine Scheidegg for the period September 2014 to November 2015. At this site, the CHM15k is operated in a tilted configuration at 71° zenith angle to probe the atmosphere next to the Sphinx Observatory (3580 m a.s.l.) on the Jungfraujoch (JFJ). The analysis of the retrieved layers led to the following results: the CAL reaches the JFJ 41 % of the time in summer and 21 % of the time in winter for a total of 97 days during the two seasons. The season-averaged daily cycles show that the CBL height reaches the JFJ only during short periods (4 % of the time), but on 20 individual days in summer and never during winter. During summer in particular, the CBL and the CAL modify the air sampled in situ at JFJ, resulting
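
For readers unfamiliar with gradient-based layer detection, the sketch below illustrates only the elementary gradient step on a synthetic backscatter profile; PathfinderTURB itself is far more elaborate (variance tests, geodesic layer attribution), so this is an assumed, simplified illustration rather than the published algorithm.

```python
# Illustrative gradient-based layer-height estimate on a synthetic ceilometer
# profile. This only shows the basic gradient idea used as one ingredient of
# boundary layer detection; all numbers are made up.
import numpy as np

def cbl_height_gradient(altitude_m, backscatter):
    """Return the altitude of the strongest negative vertical gradient
    of log backscatter, a common first guess for the CBL top."""
    grad = np.gradient(np.log(backscatter), altitude_m)
    return altitude_m[np.argmin(grad)]

if __name__ == "__main__":
    z = np.arange(100.0, 4000.0, 15.0)                               # range gates [m]
    profile = 1.0 / (1.0 + np.exp((z - 1500.0) / 80.0)) + 0.05       # aerosol-laden layer up to ~1500 m
    print(f"estimated CBL top ~ {cbl_height_gradient(z, profile):.0f} m")
```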

  20. Parameterization of L-, C- and X-band Radiometer-based Soil Moisture Retrieval Algorithm Using In-situ Validation Sites

    Science.gov (United States)

    Gao, Y.; Colliander, A.; Burgin, M. S.; Walker, J. P.; Chae, C. S.; Dinnat, E.; Cosh, M. H.; Caldwell, T. G.

    2017-12-01

    Passive microwave remote sensing has become an important technique for global soil moisture estimation over the past three decades. A number of missions carrying sensors at different frequencies that are capable of soil moisture retrieval have been launched. Among them are the Japan Aerospace Exploration Agency's (JAXA's) Advanced Microwave Scanning Radiometer-EOS (AMSR-E) launched in May 2002 on the National Aeronautics and Space Administration (NASA) Aqua satellite (ceased operation in October 2011), the European Space Agency's (ESA's) Soil Moisture and Ocean Salinity (SMOS) mission launched in November 2009, JAXA's Advanced Microwave Scanning Radiometer 2 (AMSR2) onboard the GCOM-W satellite launched in May 2012, and NASA's Soil Moisture Active Passive (SMAP) mission launched in January 2015. Therefore, there is an opportunity to develop a consistent inter-calibrated long-term soil moisture data record based on the availability of these four missions. This study focuses on the parameterization of the tau-omega model at L-, C- and X-band using the brightness temperature (TB) observations from the four missions and the in-situ soil moisture and soil temperature data from core validation sites across various landcover types. The same ancillary data sets as the SMAP baseline algorithm are applied for retrieval at different frequencies. Preliminary comparison of SMAP and AMSR2 TB observations against forward-simulated TB at the Yanco site in Australia showed generally good agreement, with higher correlation for the vertical polarization (R=0.96 for L-band and 0.93 for C- and X-band). Simultaneous calibrations of the vegetation parameter b and roughness parameter h at both horizontal and vertical polarizations are also performed. Finally, a set of model parameters for successfully retrieving soil moisture at different validation sites at L-, C- and X-band respectively is presented. The research described in this paper is supported by the Jet Propulsion
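
The tau-omega model referred to above is the standard zeroth-order radiative-transfer approximation for vegetated soil; a hedged sketch of the forward simulation is given below, with parameter values that are purely illustrative rather than the calibrated b and h values of the study.

```python
# Sketch of the zeroth-order tau-omega forward model used in passive
# soil-moisture retrieval. Parameter values are illustrative only.
import math

def tau_omega_tb(emissivity, tau, omega, t_soil_k, t_canopy_k, incidence_deg):
    """Brightness temperature [K] for one polarization."""
    gamma = math.exp(-tau / math.cos(math.radians(incidence_deg)))  # vegetation transmissivity
    soil_term = emissivity * t_soil_k * gamma                       # attenuated soil emission
    veg_up = (1.0 - omega) * (1.0 - gamma) * t_canopy_k             # direct canopy emission
    veg_reflected = veg_up * (1.0 - emissivity) * gamma             # canopy emission reflected by the soil
    return soil_term + veg_up + veg_reflected

if __name__ == "__main__":
    tb_v = tau_omega_tb(emissivity=0.82, tau=0.12, omega=0.05,
                        t_soil_k=295.0, t_canopy_k=294.0, incidence_deg=40.0)
    print(f"simulated TB_v ~ {tb_v:.1f} K")
```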

  1. Experimental validation of plant peroxisomal targeting prediction algorithms by systematic comparison of in vivo import efficiency and in vitro PTS1 binding affinity.

    Science.gov (United States)

    Skoulding, Nicola S; Chowdhary, Gopal; Deus, Mara J; Baker, Alison; Reumann, Sigrun; Warriner, Stuart L

    2015-03-13

    Most peroxisomal matrix proteins possess a C-terminal targeting signal type 1 (PTS1). Accurate prediction of functional PTS1 sequences and their relative strength by computational methods is essential for determination of peroxisomal proteomes in silico but has proved challenging due to high levels of sequence variability of non-canonical targeting signals, particularly in higher plants, and low levels of availability of experimentally validated non-canonical examples. In this study, in silico predictions were compared with in vivo targeting analyses and in vitro thermodynamic binding of mutated variants within the context of one model targeting sequence. There was broad agreement between the methods for entire PTS1 domains and position-specific single amino acid residues, including residues upstream of the PTS1 tripeptide. The hierarchy Leu>Met>Ile>Val at the C-terminal position was determined for all methods but both experimental approaches suggest that Tyr is underweighted in the prediction algorithm due to the absence of this residue in the positive training dataset. A combination of methods better defines the score range that discriminates a functional PTS1. In vitro binding to the PEX5 receptor could discriminate among strong targeting signals while in vivo targeting assays were more sensitive, allowing detection of weak functional import signals that were below the limit of detection in the binding assay. Together, the data provide a comprehensive assessment of the factors driving PTS1 efficacy and provide a framework for the more quantitative assessment of the protein import pathway in higher plants. Copyright © 2014 Elsevier Ltd. All rights reserved.
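
As a rough illustration of how PTS1 predictors score C-terminal tripeptides, the toy scorer below uses an invented position-specific weight table (loosely reflecting the reported Leu > Met > Ile > Val hierarchy at the C-terminal residue); it is not the algorithm evaluated in the study.

```python
# Toy position-specific scorer for a C-terminal PTS1 tripeptide.
# Weights are invented for illustration; real predictors are trained
# on curated peroxisomal protein sets.

TRIPEPTIDE_WEIGHTS = [
    {"S": 2.0, "A": 1.5, "C": 1.0},                       # position -3
    {"K": 2.0, "R": 1.8, "H": 1.0},                       # position -2
    {"L": 3.0, "M": 2.5, "I": 2.0, "V": 1.5, "Y": 1.0},   # position -1 (C-terminus)
]

def pts1_score(protein_sequence, threshold=5.0):
    tripeptide = protein_sequence[-3:].upper()
    score = sum(weights.get(res, -1.0)
                for res, weights in zip(tripeptide, TRIPEPTIDE_WEIGHTS))
    return score, score >= threshold

if __name__ == "__main__":
    for seq in ("MAVQTTSSKL", "MAVQTTSSKY", "MAVQTTSGGG"):
        score, functional = pts1_score(seq)
        print(f"{seq[-3:]}: score={score:.1f} predicted_PTS1={functional}")
```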

  2. Predictive models to assess risk of type 2 diabetes, hypertension and comorbidity: machine-learning algorithms and validation using national health data from Kuwait--a cohort study.

    Science.gov (United States)

    Farran, Bassam; Channanath, Arshad Mohamed; Behbehani, Kazem; Thanaraj, Thangavel Alphonse

    2013-05-14

    We build classification models and risk assessment tools for diabetes, hypertension and comorbidity using machine-learning algorithms on data from Kuwait. We model the increased proneness in diabetic patients to develop hypertension and vice versa. We ascertain the importance of ethnicity (and natives vs expatriate migrants) and of using regional data in risk assessment. Retrospective cohort study. Four machine-learning techniques were used: logistic regression, k-nearest neighbours (k-NN), multifactor dimensionality reduction and support vector machines. The study uses fivefold cross-validation to obtain generalisation accuracies and errors. Kuwait Health Network (KHN) that integrates data from primary health centres and hospitals in Kuwait. 270 172 hospital visitors (of which, 89 858 are diabetic, 58 745 hypertensive and 30 522 comorbid) comprising Kuwaiti natives, Asian and Arab expatriates. Incident type 2 diabetes, hypertension and comorbidity. Classification accuracies of >85% (for diabetes) and >90% (for hypertension) are achieved using only simple non-laboratory-based parameters. Risk assessment tools based on k-NN classification models are able to assign 'high' risk to 75% of diabetic patients and to 94% of hypertensive patients. Only 5% of diabetic patients are assigned 'low' risk. Asian-specific models and assessments perform even better. Pathological conditions of diabetes in the general population or in the hypertensive population, and those of hypertension, are modelled. Two-stage aggregate classification models and risk assessment tools, built by combining the component models on diabetes (or on hypertension), perform better than individual models. Data on diabetes, hypertension and comorbidity from the cosmopolitan State of Kuwait are available for the first time. This enabled us to apply four different case-control models to assess risks. These tools aid in the preliminary non-intrusive assessment of the population. Ethnicity is seen to be significant.
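
A minimal sketch of one of the modelling steps (k-nearest-neighbour classification with five-fold cross-validation) is shown below; it assumes scikit-learn is available and uses synthetic, non-laboratory-style features rather than the Kuwait Health Network data.

```python
# Sketch of a k-NN risk classifier with five-fold cross-validation,
# in the spirit of the study but on synthetic data (feature names are
# illustrative stand-ins, not the KHN variables).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(20, 80, n)
bmi = rng.normal(28, 5, n)
# Synthetic outcome: risk increases with age and BMI.
p = 1 / (1 + np.exp(-(0.05 * (age - 50) + 0.15 * (bmi - 28))))
diabetes = rng.random(n) < p

X = np.column_stack([age, bmi])
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=15))
scores = cross_val_score(model, X, diabetes, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```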

  3. Chronic kidney disease of nontraditional etiology in Central America: a provisional epidemiologic case definition for surveillance and epidemiologic studies.

    Science.gov (United States)

    Lozier, Matthew; Turcios-Ruiz, Reina Maria; Noonan, Gary; Ordunez, Pedro

    2016-11-01

    SYNOPSIS Over the last two decades, experts have reported a rising number of deaths caused by chronic kidney disease (CKD) along the Pacific coast of Central America, from southern Mexico to Costa Rica. However, this specific disease is not associated with traditional causes of CKD, such as aging, diabetes, or hypertension. Rather, this disease is a chronic interstitial nephritis termed chronic kidney disease of nontraditional etiology (CKDnT). According to the Pan American Health Organization (PAHO) mortality database, there are elevated rates of deaths related to kidney disease in many of these countries, with the highest rates being reported in El Salvador and Nicaragua. This condition has been identified in certain agricultural communities, predominantly among male farmworkers. Since CKD surveillance systems in Central America are under development or nonexistent, experts and governmental bodies have recommended creating standardized case definitions for surveillance purposes to monitor and characterize this epidemiological situation. A group of experts from Central American ministries of health, the U.S. Centers for Disease Control and Prevention (CDC), and PAHO held a workshop in Guatemala to discuss CKDnT epidemiologic case definitions. In this paper, we propose that CKD in general be identified by the internationally accepted standard definition and that a suspect case of CKDnT be defined as a person age < 60 years with CKD, without type 1 diabetes mellitus, hypertensive diseases, and other well-known causes of CKD. A probable case of CKDnT is defined as a suspect case with the same findings confirmed three or more months later.
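
The proposed surveillance definitions translate naturally into a small classification routine; the sketch below is a hedged encoding with hypothetical field names and simplified exclusion flags.

```python
# Minimal encoding of the proposed surveillance definitions (field names
# are hypothetical; clinical exclusions are simplified to boolean flags).

def classify_ckdnt(person):
    """Return 'suspect', 'probable', or None per the proposed definitions."""
    suspect = (
        person["age_years"] < 60
        and person["has_ckd"]
        and not person["type1_diabetes"]
        and not person["hypertensive_disease"]
        and not person["other_known_ckd_cause"]
    )
    if not suspect:
        return None
    # Probable: the same findings confirmed three or more months later.
    return "probable" if person.get("confirmed_after_3_months") else "suspect"

if __name__ == "__main__":
    farmworker = {"age_years": 34, "has_ckd": True, "type1_diabetes": False,
                  "hypertensive_disease": False, "other_known_ckd_cause": False,
                  "confirmed_after_3_months": True}
    print(classify_ckdnt(farmworker))  # -> probable
```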

  4. Validation of a Step Detection Algorithm during Straight Walking and Turning in Patients with Parkinson’s Disease and Older Adults Using an Inertial Measurement Unit at the Lower Back

    Directory of Open Access Journals (Sweden)

    Minh H. Pham

    2017-09-01

    Full Text Available Introduction: Inertial measurement units (IMUs) positioned on various body locations allow detailed gait analysis even under unconstrained conditions. From a medical perspective, the assessment of vulnerable populations is of particular relevance, especially in the daily-life environment. Gait analysis algorithms need thorough validation, as many chronic diseases show specific and even unique gait patterns. The aim of this study was therefore to validate an acceleration-based step detection algorithm for patients with Parkinson’s disease (PD) and older adults in both a lab-based and home-like environment. Methods: In this prospective observational study, data were captured from a single 6-degrees-of-freedom IMU (APDM) (3DOF accelerometer and 3DOF gyroscope) worn on the lower back. Detection of heel strike (HS) and toe off (TO) on a treadmill was validated against an optoelectronic system (Vicon) (11 PD patients and 12 older adults). A second independent validation study in the home-like environment was performed against video observation (20 PD patients and 12 older adults) and included step counting during turning and non-turning, defined with a previously published algorithm. Results: A continuous wavelet transform (cwt)-based algorithm was developed for step detection with very high agreement with the optoelectronic system. HS detection in PD patients/older adults, respectively, reached 99/99% accuracy. Similar results were obtained for TO (99/100%). In HS detection, Bland–Altman plots showed a mean difference of 0.002 s [95% confidence interval (CI) −0.09 to 0.10] between the algorithm and the optoelectronic system. The Bland–Altman plot for TO detection showed mean differences of 0.00 s (95% CI −0.12 to 0.12). In the home-like assessment, the algorithm for detection of occurrence of steps during turning reached 90% (PD patients)/90% (older adults) sensitivity, 83/88% specificity, and 88/89% accuracy. The detection of steps during non-turning phases
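
The sketch below illustrates the general idea of wavelet-based step detection on a synthetic vertical-acceleration trace (a single Ricker-wavelet scale plus peak picking); it is an assumption-laden simplification, not the validated cwt algorithm of the study.

```python
# Wavelet-flavoured heel-strike detection on a synthetic acceleration trace:
# convolve with one Ricker (Mexican-hat) wavelet scale, then pick peaks.
import numpy as np
from scipy.signal import find_peaks

def ricker(points, a):
    """Ricker (Mexican hat) wavelet of width parameter a (unnormalised)."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def detect_steps(acc_vertical, fs_hz, wavelet_width_s=0.15):
    width = wavelet_width_s * fs_hz
    kernel = ricker(int(10 * width), width)
    response = np.convolve(acc_vertical - acc_vertical.mean(), kernel, mode="same")
    peaks, _ = find_peaks(response, distance=int(0.4 * fs_hz))  # >= 0.4 s between steps
    return peaks

if __name__ == "__main__":
    fs = 100.0
    t = np.arange(0, 10, 1 / fs)
    gait = np.clip(np.sin(2 * np.pi * 1.8 * t), 0, None) ** 2   # ~1.8 impact-like bumps per second
    noisy = gait + 0.1 * np.random.default_rng(1).standard_normal(t.size)
    steps = detect_steps(noisy, fs)
    print(f"detected {len(steps)} steps in 10 s")
```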

  5. Development and validation of an intelligent algorithm for synchronizing a low-environmental-impact electricity supply with a building’s electricity consumption

    OpenAIRE

    Schafer, Thibaut; Niederhauser, Elena-Lavinia; Magnin, Gabriel; Vuarnoz, Didier

    2018-01-01

    Standard algorithms for a building’s energy strategy often use electricity and its tariff as the sole selection criterion. This paper introduces an algorithmic regulation that uses the global warming potential (GWP) of energy fluxes to select which installation will satisfy the building energy demand (BED). In the frame of the Correlation Carbon project conducted by the Smart Living Lab (SLL), a research center dedicated to the building of the future, this paper presents the algorithm behind the design, t...
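
A toy version of such a GWP-driven dispatch rule is sketched below; the installations, capacities, and GWP factors are invented for illustration and do not come from the Correlation Carbon project.

```python
# Toy GWP-driven dispatch rule: at each time step, rank the available
# supplies by global warming potential and draw from the cleanest ones
# first until the building electricity demand is met.
# Installations and GWP factors are invented for illustration.

SUPPLIES = [
    {"name": "rooftop PV", "gwp_kg_per_kwh": 0.04, "capacity_kw": 30.0},
    {"name": "grid (hydro-heavy mix)", "gwp_kg_per_kwh": 0.11, "capacity_kw": 200.0},
    {"name": "gas CHP", "gwp_kg_per_kwh": 0.45, "capacity_kw": 80.0},
]

def dispatch(demand_kw):
    plan, remaining = [], demand_kw
    for s in sorted(SUPPLIES, key=lambda s: s["gwp_kg_per_kwh"]):
        draw = min(remaining, s["capacity_kw"])
        if draw > 0:
            plan.append((s["name"], draw))
            remaining -= draw
    return plan, remaining

if __name__ == "__main__":
    plan, unmet = dispatch(demand_kw=95.0)
    print(plan, "unmet:", unmet)
```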

  6. Methods of applying the 1994 case definition of chronic fatigue syndrome - impact on classification and observed illness characteristics.

    Science.gov (United States)

    Unger, E R; Lin, J-M S; Tian, H; Gurbaxani, B M; Boneva, R S; Jones, J F

    2016-01-01

    Multiple case definitions are in use to identify chronic fatigue syndrome (CFS). Even when using the same definition, methods used to apply definitional criteria may affect results. The Centers for Disease Control and Prevention (CDC) conducted two population-based studies estimating CFS prevalence using the 1994 case definition; one relied on direct questions for criteria of fatigue, functional impairment and symptoms (1997 Wichita; Method 1), and the other used subscale score thresholds of standardized questionnaires for criteria (2004 Georgia; Method 2). Compared to previous reports, the 2004 CFS prevalence estimate was higher, raising questions about whether changes in the method of operationalizing the criteria affected the estimate and the observed illness characteristics. The follow-up of the Georgia cohort allowed direct comparison of both methods of applying the 1994 case definition. Of 1961 participants (53 % of eligible) who completed the detailed telephone interview, 919 (47 %) were eligible for and 751 (81 %) underwent clinical evaluation including medical/psychiatric evaluations. Data from the 499 individuals with complete data and without exclusionary conditions were available for this analysis. A total of 86 participants were classified as CFS by one or both methods; 44 cases were identified by both methods, 15 only by Method 1, and 27 only by Method 2 (kappa 0.63; 95% confidence interval [CI]: 0.53, 0.73; concordance 91.59%). The CFS group identified by both methods was more fatigued, had worse functioning, and reported more symptoms than those identified by only one method. Moderate to severe depression was noted in only one individual who was classified as CFS by both methods. When comparing the CFS groups identified by only one method, those only identified by Method 2 were either similar to or more severely affected in fatigue, function, and symptoms than those only identified by Method 1. The two methods demonstrated substantial concordance. While Method 2
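
The agreement statistics quoted above can be reproduced from the cell counts given in the abstract; the short sketch below computes Cohen's kappa and raw concordance from those counts (44 classified as CFS by both methods, 15 by Method 1 only, 27 by Method 2 only, and the remaining 413 of the 499 participants by neither).

```python
# Cohen's kappa and raw concordance from a 2x2 agreement table.

def cohens_kappa(both, only_a, only_b, neither):
    n = both + only_a + only_b + neither
    observed = (both + neither) / n
    p_a = (both + only_a) / n          # marginal: positive by method A (Method 1)
    p_b = (both + only_b) / n          # marginal: positive by method B (Method 2)
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected), observed

if __name__ == "__main__":
    kappa, concordance = cohens_kappa(both=44, only_a=15, only_b=27, neither=413)
    print(f"kappa = {kappa:.2f}, concordance = {concordance:.2%}")  # ~0.63, ~91.6%
```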

  7. Use of diagnosis codes for detection of clinically significant opioid poisoning in the emergency department: A retrospective analysis of a surveillance case definition.

    Science.gov (United States)

    Reardon, Joseph M; Harmon, Katherine J; Schult, Genevieve C; Staton, Catherine A; Waller, Anna E

    2016-02-08

    Although fatal opioid poisonings tripled from 1999 to 2008, data describing nonfatal poisonings are rare. Public health authorities are in need of tools to track opioid poisonings in near real time. We determined the utility of ICD-9-CM diagnosis codes for identifying clinically significant opioid poisonings in a state-wide emergency department (ED) surveillance system. We sampled visits from four hospitals from July 2009 to June 2012 with diagnosis codes of 965.00, 965.01, 965.02 and 965.09 (poisoning by opiates and related narcotics) and/or an external cause of injury code of E850.0-E850.2 (accidental poisoning by opiates and related narcotics), and developed a novel case definition to determine in which cases opioid poisoning prompted the ED visit. We calculated the percentage of visits coded for opioid poisoning that were clinically significant and compared it to the percentage of visits coded for poisoning by non-opioid agents in which there was actually poisoning by an opioid agent. We created a multivariate regression model to determine if other collected triage data can improve the positive predictive value of diagnosis codes alone for detecting clinically significant opioid poisoning. 70.1 % of visits (Standard Error 2.4 %) coded for opioid poisoning were primarily prompted by opioid poisoning. The remainder of visits represented opioid exposure in the setting of other primary diseases. Among non-opioid poisoning codes reviewed, up to 36 % were reclassified as an opioid poisoning. In multivariate analysis, only naloxone use improved the positive predictive value of ICD-9-CM codes for identifying clinically significant opioid poisoning, but was associated with a high false negative rate. This surveillance mechanism identifies many clinically significant opioid overdoses with a high positive predictive value. With further validation, it may help target control measures such as prescriber education and pharmacy monitoring.

  8. Positive predictive value of a case definition for diabetes mellitus using automated administrative health data in children and youth exposed to antipsychotic drugs or control medications: a Tennessee Medicaid study

    Directory of Open Access Journals (Sweden)

    Bobo William V

    2012-08-01

    Full Text Available Abstract Background We developed and validated an automated database case definition for diabetes in children and youth to facilitate pharmacoepidemiologic investigations of medications and the risk of diabetes. Methods The present study was part of an in-progress retrospective cohort study of antipsychotics and diabetes in Tennessee Medicaid enrollees aged 6–24 years. Diabetes was identified from diabetes-related medical care encounters: hospitalizations, outpatient visits, and filled prescriptions. The definition required either a primary inpatient diagnosis or at least two other encounters of different types, most commonly an outpatient diagnosis with a prescription. Type 1 diabetes was defined by insulin prescriptions with at most one oral hypoglycemic prescription; other cases were considered type 2 diabetes. The definition was validated for cohort members in the 15 county region geographically proximate to the investigators. Medical records were reviewed and adjudicated for cases that met the automated database definition as well as for a sample of persons with other diabetes-related medical care encounters. Results The study included 64 cases that met the automated database definition. Records were adjudicated for 46 (71.9%), of which 41 (89.1%) met clinical criteria for newly diagnosed diabetes. The positive predictive value for type 1 diabetes was 80.0%. For type 2 and unspecified diabetes combined, the positive predictive value was 83.9%. The estimated sensitivity of the definition, based on adjudication for a sample of 30 cases not meeting the automated database definition, was 64.8%. Conclusion These results suggest that the automated database case definition for diabetes may be useful for pharmacoepidemiologic studies of medications and diabetes.

  9. Positive predictive value of a case definition for diabetes mellitus using automated administrative health data in children and youth exposed to antipsychotic drugs or control medications: a Tennessee Medicaid study.

    Science.gov (United States)

    Bobo, William V; Cooper, William O; Stein, C Michael; Olfson, Mark; Mounsey, Jackie; Daugherty, James; Ray, Wayne A

    2012-08-24

    We developed and validated an automated database case definition for diabetes in children and youth to facilitate pharmacoepidemiologic investigations of medications and the risk of diabetes. The present study was part of an in-progress retrospective cohort study of antipsychotics and diabetes in Tennessee Medicaid enrollees aged 6-24 years. Diabetes was identified from diabetes-related medical care encounters: hospitalizations, outpatient visits, and filled prescriptions. The definition required either a primary inpatient diagnosis or at least two other encounters of different types, most commonly an outpatient diagnosis with a prescription. Type 1 diabetes was defined by insulin prescriptions with at most one oral hypoglycemic prescription; other cases were considered type 2 diabetes. The definition was validated for cohort members in the 15 county region geographically proximate to the investigators. Medical records were reviewed and adjudicated for cases that met the automated database definition as well as for a sample of persons with other diabetes-related medical care encounters. The study included 64 cases that met the automated database definition. Records were adjudicated for 46 (71.9%), of which 41 (89.1%) met clinical criteria for newly diagnosed diabetes. The positive predictive value for type 1 diabetes was 80.0%. For type 2 and unspecified diabetes combined, the positive predictive value was 83.9%. The estimated sensitivity of the definition, based on adjudication for a sample of 30 cases not meeting the automated database definition, was 64.8%. These results suggest that the automated database case definition for diabetes may be useful for pharmacoepidemiologic studies of medications and diabetes.

  10. Positive predictive value of a case definition for diabetes mellitus using automated administrative health data in children and youth exposed to antipsychotic drugs or control medications: a Tennessee Medicaid study

    Science.gov (United States)

    2012-01-01

    Background We developed and validated an automated database case definition for diabetes in children and youth to facilitate pharmacoepidemiologic investigations of medications and the risk of diabetes. Methods The present study was part of an in-progress retrospective cohort study of antipsychotics and diabetes in Tennessee Medicaid enrollees aged 6–24 years. Diabetes was identified from diabetes-related medical care encounters: hospitalizations, outpatient visits, and filled prescriptions. The definition required either a primary inpatient diagnosis or at least two other encounters of different types, most commonly an outpatient diagnosis with a prescription. Type 1 diabetes was defined by insulin prescriptions with at most one oral hypoglycemic prescription; other cases were considered type 2 diabetes. The definition was validated for cohort members in the 15 county region geographically proximate to the investigators. Medical records were reviewed and adjudicated for cases that met the automated database definition as well as for a sample of persons with other diabetes-related medical care encounters. Results The study included 64 cases that met the automated database definition. Records were adjudicated for 46 (71.9%), of which 41 (89.1%) met clinical criteria for newly diagnosed diabetes. The positive predictive value for type 1 diabetes was 80.0%. For type 2 and unspecified diabetes combined, the positive predictive value was 83.9%. The estimated sensitivity of the definition, based on adjudication for a sample of 30 cases not meeting the automated database definition, was 64.8%. Conclusion These results suggest that the automated database case definition for diabetes may be useful for pharmacoepidemiologic studies of medications and diabetes. PMID:22920280
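
A hedged encoding of the automated case-definition logic described above is sketched below; encounter types and prescription counts are hypothetical stand-ins for the Medicaid claims fields.

```python
# Hedged encoding of the automated case definition as described in the
# abstract; encounter/prescription field names are hypothetical.

def meets_case_definition(encounters):
    """encounters: list of dicts with 'type' in
    {'inpatient_primary_dx', 'outpatient_dx', 'rx_fill'}."""
    types = {e["type"] for e in encounters}
    if "inpatient_primary_dx" in types:
        return True
    return len(types) >= 2          # at least two encounters of *different* types

def classify_type(insulin_fills, oral_hypoglycemic_fills):
    """Type 1 if insulin is filled with at most one oral hypoglycemic fill."""
    if insulin_fills > 0 and oral_hypoglycemic_fills <= 1:
        return "type 1"
    return "type 2 / unspecified"

if __name__ == "__main__":
    history = [{"type": "outpatient_dx"}, {"type": "rx_fill"}]
    print(meets_case_definition(history))                             # True
    print(classify_type(insulin_fills=3, oral_hypoglycemic_fills=0))  # type 1
```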

  11. Validation of the Gate simulation platform in single photon emission computed tomography and application to the development of a complete 3-dimensional reconstruction algorithm; Validation de la plate-forme de simulation GATE en tomographie a emission monophotonique et application au developpement d'un algorithme de reconstruction 3D complete

    Energy Technology Data Exchange (ETDEWEB)

    Lazaro, D

    2003-10-01

    Monte Carlo simulations are currently considered in nuclear medical imaging as a powerful tool to design and optimize detection systems, and also to assess reconstruction algorithms and correction methods for degrading physical effects. Among the many simulators available, none of them is considered as a standard in nuclear medical imaging: this fact has motivated the development of a new generic Monte Carlo simulation platform (GATE), based on GEANT4 and dedicated to SPECT/PET (single photon emission computed tomography / positron emission tomography) applications. During this thesis we participated in the development of the GATE platform within an international collaboration. GATE was validated in SPECT by modeling two gamma cameras characterized by a different geometry, one dedicated to small animal imaging and the other used in a clinical context (Philips AXIS), and by comparing the results obtained with GATE simulations with experimental data. The simulation results reproduce accurately the measured performances of both gamma cameras. The GATE platform was then used to develop a new 3-dimensional reconstruction method: F3DMC (fully 3-dimensional Monte Carlo), which consists in computing with Monte Carlo simulation the transition matrix used in an iterative reconstruction algorithm (in this case, ML-EM), including within the transition matrix the main physical effects degrading the image formation process. The results obtained with the F3DMC method were compared to the results obtained with three other more conventional methods (FBP, MLEM, MLEMC) for different phantoms. The results of this study show that F3DMC improves the reconstruction efficiency, the spatial resolution and the signal-to-noise ratio with a satisfactory quantification of the images. These results should be confirmed by performing clinical experiments and open the door to a unified reconstruction method, which could be applied in SPECT but also in PET. (author)
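
F3DMC plugs a Monte Carlo-estimated transition (system) matrix into an ML-EM reconstruction; the sketch below shows the generic ML-EM update on a tiny random system matrix, purely to illustrate the iteration that the thesis work builds on, not the actual GATE/F3DMC implementation.

```python
# Generic ML-EM update for iterative reconstruction with a known
# system (transition) matrix A. The tiny dense matrix here is random
# and purely illustrative.
import numpy as np

def mlem(system_matrix, projections, n_iterations=50, eps=1e-12):
    """system_matrix: (n_bins, n_voxels); projections: (n_bins,)."""
    a = np.asarray(system_matrix, dtype=float)
    image = np.ones(a.shape[1])
    sensitivity = a.sum(axis=0) + eps            # sum_i a_ij
    for _ in range(n_iterations):
        expected = a @ image + eps               # forward projection
        ratio = projections / expected
        image *= (a.T @ ratio) / sensitivity     # multiplicative EM update
    return image

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    A = rng.random((64, 16))                     # toy transition matrix
    true_image = rng.random(16)
    measured = rng.poisson(A @ true_image * 100) / 100.0
    recon = mlem(A, measured, n_iterations=200)
    print("max abs error:", np.abs(recon - true_image).max())
```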

  12. Relationship between autonomic cardiovascular control, case definition, clinical symptoms, and functional disability in adolescent chronic fatigue syndrome: an exploratory study.

    Science.gov (United States)

    Wyller, Vegard B; Helland, Ingrid B

    2013-02-07

    Chronic Fatigue Syndrome (CFS) is characterized by severe impairment and multiple symptoms. Autonomic dysregulation has been demonstrated in several studies. We aimed to explore the relationship between indices of autonomic cardiovascular control, the case definition from the Centers for Disease Control and Prevention (CDC criteria), important clinical symptoms, and disability in adolescent chronic fatigue syndrome. Thirty-eight CFS patients aged 12-18 years were recruited according to a wide case definition (i.e., not requiring accompanying symptoms) and subjected to head-up tilt test (HUT) and a questionnaire. The relationships between variables were explored with multiple linear regression analyses. In the final models, disability was positively associated with symptoms of cognitive impairments (p<0.001), hypersensitivity (p<0.001), fatigue (p=0.003) and age (p=0.007). Symptoms of cognitive impairments were associated with age (p=0.002), heart rate (HR) at baseline (p=0.01), and HR response during HUT (p=0.02). Hypersensitivity was associated with HR response during HUT (p=0.001), high-frequency variability of heart rate (HF-RRI) at baseline (p=0.05), and adherence to the CDC criteria (p=0.005). Fatigue was associated with gender (p=0.007) and adherence to the CDC criteria (p=0.04). In conclusion, a) The disability of CFS patients is not only related to fatigue but to other symptoms as well; b) Altered cardiovascular autonomic control is associated with certain symptoms; c) The CDC criteria are poorly associated with disability, symptoms, and indices of altered autonomic nervous activity.

  13. Chronic kidney disease of nontraditional etiology in Central America: a provisional epidemiologic case definition for surveillance and epidemiologic studies

    Directory of Open Access Journals (Sweden)

    Matthew Lozier

    Full Text Available SYNOPSIS Over the last two decades, experts have reported a rising number of deaths caused by chronic kidney disease (CKD) along the Pacific coast of Central America, from southern Mexico to Costa Rica. However, this specific disease is not associated with traditional causes of CKD, such as aging, diabetes, or hypertension. Rather, this disease is a chronic interstitial nephritis termed chronic kidney disease of nontraditional etiology (CKDnT). According to the Pan American Health Organization (PAHO) mortality database, there are elevated rates of deaths related to kidney disease in many of these countries, with the highest rates being reported in El Salvador and Nicaragua. This condition has been identified in certain agricultural communities, predominantly among male farmworkers. Since CKD surveillance systems in Central America are under development or nonexistent, experts and governmental bodies have recommended creating standardized case definitions for surveillance purposes to monitor and characterize this epidemiological situation. A group of experts from Central American ministries of health, the U.S. Centers for Disease Control and Prevention (CDC), and PAHO held a workshop in Guatemala to discuss CKDnT epidemiologic case definitions. In this paper, we propose that CKD in general be identified by the standard definition internationally accepted and that a suspect case of CKDnT be defined as a person age < 60 years with CKD, without type 1 diabetes mellitus, hypertensive diseases, and other well-known causes of CKD. A probable case of CKDnT is defined as a suspect case with the same findings confirmed three or more months later.

  14. Validation of a non-uniform meshing algorithm for the 3D-FDTD method by means of a two-wire crosstalk experimental set-up

    Directory of Open Access Journals (Sweden)

    Raúl Esteban Jiménez-Mejía

    2015-06-01

    Full Text Available This paper presents an algorithm used to automatically mesh a 3D computational domain in order to solve electromagnetic interaction scenarios by means of the Finite-Difference Time-Domain (FDTD) method. The proposed algorithm has been formulated in a general mathematical form, where convenient spacing functions can be defined for the problem space discretization, allowing the inclusion of small sized objects in the FDTD method and the calculation of detailed variations of the electromagnetic field at specified regions of the computation domain. The results obtained by using the FDTD method with the proposed algorithm have been contrasted not only with a typical uniform mesh algorithm, but also with experimental measurements for a two-wire crosstalk set-up, leading to excellent agreement between theoretical and experimental waveforms. A discussion about the advantages of the non-uniform mesh over the uniform one is also presented.
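
The sketch below shows one simple way a spacing function can generate a graded 1-D mesh that is fine around a region of interest (such as a thin wire) and grows geometrically outward with a bounded neighbour-to-neighbour ratio; the parameters are arbitrary examples, not the meshing rules of the published algorithm.

```python
# Illustrative 1-D graded-mesh generator: fine cells inside a region of
# interest that grow geometrically outward with a bounded growth factor,
# as is customary for non-uniform FDTD grids. Parameters are arbitrary.

def graded_axis(x_min, x_max, roi_min, roi_max, dx_fine, growth=1.2, dx_max=None):
    dx_max = dx_max or 10 * dx_fine
    cells = []
    # Fine, uniform cells inside the region of interest.
    x = roi_min
    while x < roi_max:
        cells.append((x, dx_fine))
        x += dx_fine
    # Grow outward to the right.
    dx = dx_fine
    while x < x_max:
        dx = min(dx * growth, dx_max, x_max - x)
        cells.append((x, dx))
        x += dx
    # Grow outward to the left.
    x, dx = roi_min, dx_fine
    while x > x_min:
        dx = min(dx * growth, dx_max, x - x_min)
        x -= dx
        cells.append((x, dx))
    return sorted(cells)                      # (left edge, cell size) pairs

if __name__ == "__main__":
    mesh = graded_axis(0.0, 1.0, 0.45, 0.55, dx_fine=2e-3)
    sizes = [dx for _, dx in mesh]
    print(f"{len(mesh)} cells, dx from {min(sizes):.4f} to {max(sizes):.4f} m")
```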

  15. Parkinson's disease-related fatigue: A case definition and recommendations for clinical research.

    Science.gov (United States)

    Kluger, Benzi M; Herlofson, Karen; Chou, Kelvin L; Lou, Jau-Shin; Goetz, Christopher G; Lang, Anthony E; Weintraub, Daniel; Friedman, Joseph

    2016-05-01

    Fatigue is one of the most common and disabling symptoms in Parkinson's disease (PD). Since fatigue was first described as a common feature of PD 20 years ago, little progress has been made in understanding its causes or treatment. Importantly, PD patients attending the 2013 World Parkinson Congress voted fatigue as the leading symptom in need of further research. In response, the Parkinson Disease Foundation and ProjectSpark assembled an international team of experts to create recommendations for clinical research to advance this field. The working group identified several areas in which shared standards would improve research quality and foster progress including terminology, diagnostic criteria, and measurement. Terminology needs to (1) clearly distinguish fatigue from related phenomena (eg, sleepiness, apathy, depression); (2) differentiate subjective fatigue complaints from objective performance fatigability; and (3) specify domains affected by fatigue and causal factors. We propose diagnostic criteria for PD-related fatigue to guide participant selection for clinical trials and add rigor to mechanistic studies. Recommendations are made for measurement of subjective fatigue complaints, performance fatigability, and neurophysiologic changes. We also suggest areas in which future research is needed to address methodological issues and validate or optimize current practices. Many limitations in current PD-related fatigue research may be addressed by improving methodological standards, many of which are already being successfully applied in clinical fatigue research in other medical conditions (eg, cancer, multiple sclerosis). © 2016 International Parkinson and Movement Disorder Society.

  16. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight... changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  17. Derivation and validation of the Personal Support Algorithm: an evidence-based framework to inform allocation of personal support services in home and community care.

    Science.gov (United States)

    Sinn, Chi-Ling Joanna; Jones, Aaron; McMullan, Janet Legge; Ackerman, Nancy; Curtin-Telegdi, Nancy; Eckel, Leslie; Hirdes, John P

    2017-11-25

    Personal support services enable many individuals to stay in their homes, but there are no standard ways to classify need for functional support in home and community care settings. The goal of this project was to develop an evidence-based clinical tool to inform service planning while allowing for flexibility in care coordinator judgment in response to patient and family circumstances. The sample included 128,169 Ontario home care patients assessed in 2013 and 25,800 Ontario community support clients assessed between 2014 and 2016. Independent variables were drawn from the Resident Assessment Instrument-Home Care and interRAI Community Health Assessment that are standardised, comprehensive, and fully compatible clinical assessments. Clinical expertise and regression analyses identified candidate variables that were entered into decision tree models. The primary dependent variable was the weekly hours of personal support calculated based on the record of billed services. The Personal Support Algorithm classified need for personal support into six groups with a 32-fold difference in average billed hours of personal support services between the highest and lowest group. The algorithm explained 30.8% of the variability in billed personal support services. Care coordinators and managers reported that the guidelines based on the algorithm classification were consistent with their clinical judgment and current practice. The Personal Support Algorithm provides a structured yet flexible decision-support framework that may facilitate a more transparent and equitable approach to the allocation of personal support services.

  18. Derivation and validation of the Personal Support Algorithm: an evidence-based framework to inform allocation of personal support services in home and community care

    Directory of Open Access Journals (Sweden)

    Chi-Ling Joanna Sinn

    2017-11-01

    Full Text Available Abstract Background Personal support services enable many individuals to stay in their homes, but there are no standard ways to classify need for functional support in home and community care settings. The goal of this project was to develop an evidence-based clinical tool to inform service planning while allowing for flexibility in care coordinator judgment in response to patient and family circumstances. Methods The sample included 128,169 Ontario home care patients assessed in 2013 and 25,800 Ontario community support clients assessed between 2014 and 2016. Independent variables were drawn from the Resident Assessment Instrument-Home Care and interRAI Community Health Assessment that are standardised, comprehensive, and fully compatible clinical assessments. Clinical expertise and regression analyses identified candidate variables that were entered into decision tree models. The primary dependent variable was the weekly hours of personal support calculated based on the record of billed services. Results The Personal Support Algorithm classified need for personal support into six groups with a 32-fold difference in average billed hours of personal support services between the highest and lowest group. The algorithm explained 30.8% of the variability in billed personal support services. Care coordinators and managers reported that the guidelines based on the algorithm classification were consistent with their clinical judgment and current practice. Conclusions The Personal Support Algorithm provides a structured yet flexible decision-support framework that may facilitate a more transparent and equitable approach to the allocation of personal support services.
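
In the spirit of the decision-tree models described above, the sketch below fits a shallow regression tree to synthetic assessment items to predict weekly billed hours of personal support; it assumes scikit-learn is available and uses invented stand-ins for the RAI-HC variables.

```python
# Decision-tree sketch in the spirit of the Personal Support Algorithm:
# predict weekly personal-support hours from standardised assessment
# items. Data are synthetic; the two predictors are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(7)
n = 5000
adl_hierarchy = rng.integers(0, 7, n)        # ADL Hierarchy scale (0-6)
iadl_difficulty = rng.integers(0, 7, n)      # IADL difficulty (0-6)
weekly_hours = (1.5 + 2.0 * adl_hierarchy + 0.8 * iadl_difficulty
                + np.clip(rng.normal(0, 2, n), -1.4, None))

X = np.column_stack([adl_hierarchy, iadl_difficulty])
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=200).fit(X, weekly_hours)

pred = tree.predict([[5, 6], [0, 1]])
print(f"predicted weekly hours: high-need ~ {pred[0]:.1f}, low-need ~ {pred[1]:.1f}")
```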

  19. Feasibility and validity of using WHO adolescent job aid algorithms by health workers for reproductive morbidities among adolescent girls in rural North India.

    Science.gov (United States)

    Archana, Siddaiah; Nongkrynh, B; Anand, K; Pandav, C S

    2015-09-21

    High prevalence of reproductive morbidities is seen among adolescents in India. Health workers play an important role in providing health services in the community, including adolescent reproductive health services. A study was done to assess the feasibility of training female health workers (FHWs) in the classification and management of selected adolescent girls' reproductive health problems according to modified WHO algorithms. The study was conducted between January and September 2011 in Northern India. Thirteen FHWs were trained regarding adolescent girls' reproductive health as per the WHO Adolescent Job Aid booklet. A pre- and post-test assessment of the knowledge of the FHWs was carried out. All FHWs were given five modified WHO algorithms to classify and manage common reproductive morbidities among adolescent girls. All the FHWs applied the algorithms on at least ten adolescent girls at their respective sub-centres. Simultaneously, a medical doctor independently applied the same algorithms in all girls. Classification of the condition was followed by the relevant management and advice provided in the algorithm. A focus group discussion with the FHWs was carried out to receive their feedback. After training, the median score of the FHWs increased from 19.2 to 25.2 (p = 0.0071). Out of 144 girls examined by the FHWs, 108 were classified as true positives and 30 as true negatives; agreement as measured by kappa was 0.7 (0.5-0.9). Sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were 94.3% (88.2-97.4), 78.9% (63.6-88.9), 92.5% (86.0-96.2), and 83.3% (68.1-92.1) respectively. A consistent and significant difference between pre- and post-training knowledge scores of the FHWs was observed, and it was possible to use the modified Job Aid algorithms with ease. A limitation of this study was that the number of FHWs trained was small. Issues such as time management during routine work, timing of training, overhead cost of training etc. were not

  20. Status of the NPP and J1 NOAA Unique Combined Atmospheric Processing System (NUCAPS): recent algorithm enhancements geared toward validation and near real time users applications.

    Science.gov (United States)

    Gambacorta, A.; Nalli, N. R.; Tan, C.; Iturbide-Sanchez, F.; Wilson, M.; Zhang, K.; Xiong, X.; Barnet, C. D.; Sun, B.; Zhou, L.; Wheeler, A.; Reale, A.; Goldberg, M.

    2017-12-01

    The NOAA Unique Combined Atmospheric Processing System (NUCAPS) is the NOAA operational algorithm to retrieve thermodynamic and composition variables from hyperspectral thermal sounders such as CrIS, IASI and AIRS. The combined use of microwave sounders, such as ATMS, AMSU and MHS, enables full sounding of the atmospheric column under all-sky conditions. NUCAPS retrieval products are accessible in near real time (about a 1.5-hour delay) through the NOAA Comprehensive Large Array-data Stewardship System (CLASS). Since February 2015, NUCAPS retrievals have also been accessible via Direct Broadcast, with an unprecedentedly low latency of less than 0.5 hours. NUCAPS builds on a long-term, multi-agency investment in algorithm research and development. The uniqueness of this algorithm lies in a number of features that are key to providing highly accurate and stable atmospheric retrievals, suitable for real-time weather and air quality applications. Firstly, maximizing the use of the information content present in hyperspectral thermal measurements forms the foundation of the NUCAPS retrieval algorithm. Secondly, NUCAPS is a modular, name-list-driven design. It can process multiple hyperspectral infrared sounders (on the Aqua, NPP, MetOp and JPSS series) by means of the same retrieval software executable and underlying spectroscopy. Finally, a cloud-clearing algorithm and a synergetic use of microwave radiance measurements enable full vertical sounding of the atmosphere under all-sky regimes. As we transition toward improved hyperspectral missions, assessing retrieval skill and consistency across multiple platforms becomes a priority for real-time user applications. The focus of this presentation is a general introduction to the recent improvements in the delivery of the NUCAPS full spectral resolution upgrade and an overview of the lessons learned from the 2017 Hazardous Weather Testbed Spring Experiment. Test cases will be shown on the use of NPP and Met

  1. Histopathology case definition of naturally acquired Salmonella enterica serovar Dublin infection in young Holstein cattle in the northeastern United States.

    Science.gov (United States)

    Pecoraro, Heidi L; Thompson, Belinda; Duhamel, Gerald E

    2017-11-01

    Salmonella enterica subsp. enterica serovar Dublin (Salmonella Dublin) is a host-adapted bacterium that causes high morbidity and mortality in dairy cattle worldwide. A retrospective search of archives at the New York Animal Health Diagnostic Center revealed 57 culture-confirmed Salmonella Dublin cases from New York and Pennsylvania in which detailed histology of multiple tissues was available. Tissues routinely submitted by referring veterinarians for histologic evaluation included sections of heart, lungs, liver, spleen, and lymph nodes. Of the 57 Salmonella Dublin-positive cases, all were Holstein breed, 53 were female (93%), and 49 (86%) were young animals. More than 90% (45 of 49) of lungs, 90% (28 of 31) of livers, 50% (11 of 22) of spleens, and 62% (18 of 29) of lymph nodes examined had moderate-to-severe inflammation with or without necrosis. Inconstant lesions were seen in 48% (10 of 21) of hearts examined, and consisted of variable inflammatory infiltrates and rare areas of necrosis. We propose a histopathology case definition of Salmonella Dublin in cattle that includes a combination of pulmonary alveolar capillary neutrophilia with or without hepatocellular necrosis and paratyphoid granulomas, splenitis, and lymphadenitis. These findings will assist in the development of improved protocols for the diagnosis of infectious diseases of dairy cattle.

  2. Infection by rhinovirus: similarity of clinical signs included in the case definition of influenza IAn/H1N1.

    Science.gov (United States)

    de Oña Navarro, Maria; Melón García, Santiago; Alvarez-Argüelles, Marta; Fernández-Verdugo, Ana; Boga Riveiro, Jose Antonio

    2012-08-01

    Although new influenza virus (IAn/H1N1) infections are mild and indistinguishable from any other seasonal influenza virus infections, there are few data comparing the clinical features of infection with IAn/H1N1 and with other respiratory viruses. The incidence, clinical aspects and temporal distribution of the respiratory viruses circulating during the flu pandemic period were studied. Respiratory samples from patients with acute influenza-like symptoms were collected from May 2009 to December 2009. Respiratory viruses were detected by conventional culture methods and genome amplification techniques. Although IAn/H1N1 was the virus most frequently detected, several other respiratory viruses co-circulated with IAn/H1N1 during the pandemic period, especially rhinovirus. The similarity between the clinical signs included in the clinical case definition for influenza and those caused by other respiratory viruses, particularly rhinovirus, suggests that a high percentage of viral infections were clinically diagnosed as cases of influenza. Our study offers useful information for facing future pandemics caused by influenza virus, indicating that differential diagnosis is required in order not to overestimate the importance of the pandemic. Copyright © 2011 Elsevier España, S.L. All rights reserved.

  3. Sputum, sex and scanty smears: new case definition may reduce sex disparities in smear-positive tuberculosis.

    Science.gov (United States)

    Ramsay, A; Bonnet, M; Gagnidze, L; Githui, W; Varaine, F; Guérin, P J

    2009-05-01

    Urban clinic, Nairobi. To evaluate the impact of specimen quality and different smear-positive tuberculosis (TB) case (SPC) definitions on SPC detection by sex. Prospective study among TB suspects. A total of 695 patients were recruited: 644 produced > or =1 specimen for microscopy. The male/female sex ratio was 0.8. There were no significant differences in numbers of men and women submitting three specimens (274/314 vs. 339/380, P = 0.43). Significantly more men than women produced a set of three 'good' quality specimens (175/274 vs. 182/339, P = 0.01). Lowering thresholds for definitions to include scanty smears resulted in increases in SPC detection in both sexes; the increase was significantly higher for women. The revised World Health Organization (WHO) case definition was associated with the highest detection rates in women. When analysis was restricted only to patients submitting 'good' quality specimen sets, the difference in detection between sexes was on the threshold for significance (P = 0.05). Higher SPC notification rates in men are commonly reported by TB control programmes. The revised WHO SPC definition may reduce sex disparities in notification. This should be considered when evaluating other interventions aimed at reducing these. Further study is required on the effects of the human immuno-deficiency virus and instructed specimen collection on sex-specific impact of new SPC definition.

  4. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  5. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
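
For readers new to the concepts, a minimal genetic algorithm showing the selection-crossover-mutation loop on a toy objective (maximising the number of 1-bits) is sketched below; it is a generic illustration, not the software tool described in the record.

```python
# Minimal genetic algorithm: tournament selection, single-point crossover,
# and bit-flip mutation on a toy "count the 1-bits" objective.
import random

def genetic_algorithm(n_bits=30, pop_size=40, generations=60, p_mut=0.02):
    random.seed(0)
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    fitness = sum                                       # number of 1-bits
    for _ in range(generations):
        # Tournament selection of parents.
        parents = [max(random.sample(pop, 3), key=fitness) for _ in range(pop_size)]
        next_pop = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = random.randrange(1, n_bits)           # single-point crossover
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                child = [bit ^ (random.random() < p_mut) for bit in child]  # mutation
                next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = genetic_algorithm()
    print("best fitness:", sum(best), "of", len(best))
```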

  6. Validation of the Welch Allyn SureBP (inflation) and StepBP (deflation) algorithms by AAMI standard testing and BHS data analysis.

    Science.gov (United States)

    Alpert, Bruce S

    2011-04-01

    We evaluated two new Welch Allyn automated blood pressure (BP) algorithms. The first, SureBP, estimates BP during cuff inflation; the second, StepBP, does so during deflation. We followed the American National Standards Institute/Association for the Advancement of Medical Instrumentation SP10:2006 standard for testing and data analysis. The data were also analyzed using the British Hypertension Society analysis strategy. We tested children, adolescents, and adults. The requirements of the American National Standards Institute/Association for the Advancement of Medical Instrumentation SP10:2006 standard were fulfilled with respect to BP levels, arm sizes, and ages. Association for the Advancement of Medical Instrumentation SP10 Method 1 data analysis was used. The mean±standard deviation for the device readings compared with auscultation by paired, trained, blinded observers in the SureBP mode were -2.14±7.44 mmHg for systolic BP (SBP) and -0.55±5.98 mmHg for diastolic BP (DBP). In the StepBP mode, the differences were -3.61±6.30 mmHg for SBP and -2.03±5.30 mmHg for DBP. Both algorithms achieved an A grade for both SBP and DBP by British Hypertension Society analysis. The SureBP inflation-based algorithm will be available in many new-generation Welch Allyn monitors. Its use will reduce the time it takes to estimate BP in critical patient care circumstances. The device will not need to inflate to excessive suprasystolic BPs to obtain the SBP values. Deflation is rapid once SBP has been determined, thus reducing the total time of cuff inflation and reducing patient discomfort. If the SureBP fails to obtain a BP value, the StepBP algorithm is activated to estimate BP by traditional deflation methodology.
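
The SP10 "Method 1" analysis essentially summarises device-minus-reference differences by their mean and standard deviation against the familiar 5 mmHg / 8 mmHg limits; a sketch with synthetic paired readings is shown below (the bias and scatter are chosen to resemble the SureBP systolic result quoted above, not real measurements).

```python
# Sketch of an ANSI/AAMI SP10 "Method 1" style summary: mean and standard
# deviation of device-minus-reference differences, checked against the
# usual 5 mmHg / 8 mmHg limits. Paired readings are synthetic.
import numpy as np

def sp10_method1(device_mmhg, reference_mmhg, mean_limit=5.0, sd_limit=8.0):
    diff = np.asarray(device_mmhg, float) - np.asarray(reference_mmhg, float)
    mean, sd = diff.mean(), diff.std(ddof=1)
    return mean, sd, (abs(mean) <= mean_limit and sd <= sd_limit)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    reference = rng.normal(120, 15, 255)               # auscultatory SBP readings
    device = reference + rng.normal(-2.1, 7.4, 255)    # device bias/scatter (illustrative)
    mean, sd, passes = sp10_method1(device, reference)
    print(f"mean diff {mean:+.2f} mmHg, SD {sd:.2f} mmHg, passes: {passes}")
```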

  7. Validation of clinical acceptability of an atlas-based segmentation algorithm for the delineation of organs at risk in head and neck cancer

    Energy Technology Data Exchange (ETDEWEB)

    Hoang Duc, Albert K., E-mail: albert.hoangduc.ucl@gmail.com; McClelland, Jamie; Modat, Marc; Cardoso, M. Jorge; Mendelson, Alex F. [Center for Medical Image Computing, University College London, London WC1E 6BT (United Kingdom); Eminowicz, Gemma; Mendes, Ruheena; Wong, Swee-Ling; D’Souza, Derek [Radiotherapy Department, University College London Hospitals, 235 Euston Road, London NW1 2BU (United Kingdom); Veiga, Catarina [Department of Medical Physics and Bioengineering, University College London, London WC1E 6BT (United Kingdom); Kadir, Timor [Mirada Medical UK, Oxford Center for Innovation, New Road, Oxford OX1 1BY (United Kingdom); Ourselin, Sebastien [Centre for Medical Image Computing, University College London, London WC1E 6BT (United Kingdom)

    2015-09-15

    Purpose: The aim of this study was to assess whether clinically acceptable segmentations of organs at risk (OARs) in head and neck cancer can be obtained automatically and efficiently using the novel “similarity and truth estimation for propagated segmentations” (STEPS) compared to the traditional “simultaneous truth and performance level estimation” (STAPLE) algorithm. Methods: First, 6 OARs were contoured by 2 radiation oncologists in a dataset of 100 patients with head and neck cancer on planning computed tomography images. Each image in the dataset was then automatically segmented with STAPLE and STEPS using those manual contours. Dice similarity coefficient (DSC) was then used to compare the accuracy of these automatic methods. Second, in a blind experiment, three separate and distinct trained physicians graded manual and automatic segmentations into one of the following three grades: clinically acceptable as determined by universal delineation guidelines (grade A), reasonably acceptable for clinical practice upon manual editing (grade B), and not acceptable (grade C). Finally, STEPS segmentations graded B were selected and one of the physicians manually edited them to grade A. Editing time was recorded. Results: Significant improvements in DSC can be seen when using the STEPS algorithm on large structures such as the brainstem, spinal canal, and left/right parotid compared to the STAPLE algorithm (all p < 0.001). In addition, across all three trained physicians, manual and STEPS segmentation grades were not significantly different for the brainstem, spinal canal, parotid (right/left), and optic chiasm (all p > 0.100). In contrast, STEPS segmentation grades were lower for the eyes (p < 0.001). Across all OARs and all physicians, STEPS produced segmentations graded as well as manual contouring at a rate of 83%, giving a lower bound on this rate of 80% with 95% confidence. Reduction in manual interaction time was on average 61% and 93% when automatic
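
The Dice similarity coefficient used above is straightforward to compute from binary masks; a short sketch on synthetic arrays follows.

```python
# Dice similarity coefficient (DSC) between two binary segmentation masks,
# the overlap metric used to compare automatic and manual contours.
# Masks below are small synthetic arrays.
import numpy as np

def dice(mask_a, mask_b, eps=1e-8):
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

if __name__ == "__main__":
    manual = np.zeros((64, 64), bool)
    manual[20:40, 20:40] = True
    auto = np.zeros((64, 64), bool)
    auto[22:42, 21:41] = True
    print(f"DSC = {dice(manual, auto):.3f}")
```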

  8. SU-D-202-04: Validation of Deformable Image Registration Algorithms for Head and Neck Adaptive Radiotherapy in Routine Clinical Setting

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, L; Pi, Y; Chen, Z; Xu, X [University of Science and Technology of China, Hefei, Anhui (China); Wang, Z [University of Science and Technology of China, Hefei, Anhui (China); The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui (China); Shi, C [Saint Vincent Medical Center, Bridgeport, CT (United States); Long, T; Luo, W; Wang, F [The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui (China)

    2016-06-15

    Purpose: To evaluate the ROI contours and accumulated dose difference using different deformable image registration (DIR) algorithms for head and neck (H&N) adaptive radiotherapy. Methods: Eight H&N cancer patients were randomly selected from the affiliated hospital. During the treatment, patients were rescanned every week, with ROIs delineated by a radiation oncologist on each weekly CT. New weekly treatment plans were also re-designed with a consistent dose prescription on the rescanned CT and executed for one week on a Siemens CT-on-rails accelerator. In the end, we obtained six weekly CT scans (CT1 to CT6), including six weekly treatment plans, for each patient. The primary CT1 was set as the reference CT for DIR with the remaining five weekly CTs using the ANACONDA and MORFEUS algorithms separately in RayStation, and the external skin ROI was set as the controlling ROI in both. All calculated weekly doses were deformed and accumulated on the corresponding reference CT1 according to the deformation vector fields (DVFs) generated by the two different DIR algorithms respectively. Thus we obtained both the ANACONDA-based and MORFEUS-based accumulated total dose on CT1 for each patient. At the same time, we mapped the ROIs on CT1 to generate the corresponding ROIs on CT6 using the ANACONDA and MORFEUS DIR algorithms. DICE coefficients between the DIR-deformed and radiation-oncologist-delineated ROIs on CT6 were calculated. Results: For DIR accumulated dose, PTV D95 and Left-Eyeball Dmax show significant differences of 67.13 cGy and 109.29 cGy respectively (Table1). For DIR mapped ROIs, PTV, Spinal cord and Left-Optic nerve show differences of −0.025, −0.127 and −0.124 (Table2). Conclusion: Even two excellent DIR algorithms can give divergent results for ROI deformation and dose accumulation. As more and more treatment planning systems integrate DIR modules, there is an urgent need to recognize the potential risks of using DIR in the clinic.

  9. Validation of SMOS L1C and L2 Products and Important Parameters of the Retrieval Algorithm in the Skjern River Catchment, Western Denmark

    DEFF Research Database (Denmark)

    Bircher, Simone; Skou, Niels; Kerr, Yann H.

    2013-01-01

    ... L-band Microwave Emission of the Biosphere (L-MEB) model with initial guesses on the two parameters (derived from ECMWF products and ECOCLIMAP Leaf Area Index, respectively) and other auxiliary input. This paper presents the validation work carried out in the Skjern River Catchment, Denmark. L1C/L2 data...

  10. Knowledge-based radiation therapy (KBRT) treatment planning versus planning by experts: validation of a KBRT algorithm for prostate cancer treatment planning

    International Nuclear Information System (INIS)

    Nwankwo, Obioma; Mekdash, Hana; Sihono, Dwi Seno Kuncoro; Wenz, Frederik; Glatting, Gerhard

    2015-01-01

    A knowledge-based radiation therapy (KBRT) treatment planning algorithm was recently developed. The purpose of this work is to investigate how plans that are generated with the objective KBRT approach compare to those that rely on the judgment of the experienced planner. Thirty volumetric modulated arc therapy plans were randomly selected from a database of prostate plans that were generated by experienced planners (expert plans). The anatomical data (CT scan and delineation of organs) of these patients and the KBRT algorithm were given to a novice with no prior treatment planning experience. The inexperienced planner used the knowledge-based algorithm to predict the dose that the OARs receive based on their proximity to the treated volume. The population-based OAR constraints were changed to the predicted doses. A KBRT plan was subsequently generated. The KBRT and expert plans were compared for the achieved target coverage and OAR sparing. The target coverages were compared using the Uniformity Index (UI), while 5 dose-volume points (D10, D30, D50, D70 and D90) were used to compare the OAR (bladder and rectum) doses. The Wilcoxon matched-pairs signed-rank test was used to check for significant differences (p < 0.05) between the two datasets. The KBRT and expert plans achieved mean UI values of 1.10 ± 0.03 and 1.10 ± 0.04, respectively. The Wilcoxon test showed no statistically significant difference between both results. The D90, D70, D50, D30 and D10 values of the two planning strategies and the Wilcoxon test results suggest that the KBRT plans achieved a statistically significantly lower bladder dose (at D30), while the expert plans achieved a statistically significantly lower rectal dose (at D10 and D30). The results of this study show that the KBRT treatment planning approach is a promising method to objectively incorporate patient anatomical variations in radiotherapy treatment planning.
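
    The plan comparison above relies on a Uniformity Index for target coverage, dose-volume points (D10 to D90) for the OARs, and a Wilcoxon matched-pairs signed-rank test. The abstract does not define UI explicitly; a common convention, assumed here purely for illustration, is UI = D5/D95 of the target. A rough sketch of how these quantities can be computed from per-voxel dose samples follows (all dose values are illustrative).

        import numpy as np
        from scipy.stats import wilcoxon

        def dose_at_volume(dose_sorted_desc, volume_percent):
            """D_x: minimum dose received by the hottest x% of the structure volume."""
            n = len(dose_sorted_desc)
            idx = max(int(np.ceil(volume_percent / 100.0 * n)) - 1, 0)
            return dose_sorted_desc[idx]

        def dvh_points(voxel_doses, points=(10, 30, 50, 70, 90)):
            d = np.sort(np.asarray(voxel_doses))[::-1]
            return {f"D{p}": round(dose_at_volume(d, p), 2) for p in points}

        def uniformity_index(target_doses):
            d = np.sort(np.asarray(target_doses))[::-1]
            return dose_at_volume(d, 5) / dose_at_volume(d, 95)   # assumed UI = D5/D95

        rng = np.random.default_rng(0)
        print(dvh_points(rng.normal(40, 5, 1000)))                # e.g. a bladder DVH
        print(f"UI = {uniformity_index(rng.normal(78, 1, 1000)):.3f}")

        # Paired comparison of, e.g., bladder D30 between KBRT and expert plans
        kbrt_d30 = np.array([42.1, 38.5, 40.2, 45.0, 39.8])       # illustrative values (Gy)
        expert_d30 = np.array([44.0, 39.9, 41.5, 44.2, 41.0])
        stat, p = wilcoxon(kbrt_d30, expert_d30)
        print(f"Wilcoxon p-value: {p:.3f}")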

  11. The diagnosis of urinary tract infections in young children (DUTY: protocol for a diagnostic and prospective observational study to derive and validate a clinical algorithm for the diagnosis of UTI in children presenting to primary care with an acute illness

    Directory of Open Access Journals (Sweden)

    Downing Harriet

    2012-07-01

    Full Text Available Abstract Background Urinary tract infection (UTI) is common in children, and may cause serious illness and recurrent symptoms. However, obtaining a urine sample from young children in primary care is challenging and not feasible for large numbers. Evidence regarding the predictive value of symptoms, signs and urinalysis for UTI in young children is urgently needed to help primary care clinicians better identify children who should be investigated for UTI. This paper describes the protocol for the Diagnosis of Urinary Tract infection in Young children (DUTY) study. The overall study aim is to derive and validate a cost-effective clinical algorithm for the diagnosis of UTI in children presenting to primary care acutely unwell. Methods/design DUTY is a multicentre, diagnostic and prospective observational study aiming to recruit at least 7,000 children aged before their fifth birthday, being assessed in primary care for any acute, non-traumatic, illness of ≤ 28 days duration. Urine samples will be obtained from eligible consented children, and data collected on medical history and presenting symptoms and signs. Urine samples will be dipstick tested in general practice and sent for microbiological analysis. All children with culture positive urines and a random sample of children with urine culture results in other, non-positive categories will be followed up to record symptom duration and healthcare resource use. A diagnostic algorithm will be constructed and validated and an economic evaluation conducted. The primary outcome will be a validated diagnostic algorithm using a reference standard of a pure/predominant growth of at least 10³, but usually >10⁵, CFU/mL of one, but no more than two uropathogens. We will use logistic regression to identify the clinical predictors (i.e. demographic, medical history, presenting signs and symptoms and urine dipstick analysis results) most strongly associated with a positive urine culture result. We will

  12. The diagnosis of urinary tract infections in young children (DUTY): protocol for a diagnostic and prospective observational study to derive and validate a clinical algorithm for the diagnosis of UTI in children presenting to primary care with an acute illness.

    Science.gov (United States)

    Downing, Harriet; Thomas-Jones, Emma; Gal, Micaela; Waldron, Cherry-Ann; Sterne, Jonathan; Hollingworth, William; Hood, Kerenza; Delaney, Brendan; Little, Paul; Howe, Robin; Wootton, Mandy; Macgowan, Alastair; Butler, Christopher C; Hay, Alastair D

    2012-07-19

    Urinary tract infection (UTI) is common in children, and may cause serious illness and recurrent symptoms. However, obtaining a urine sample from young children in primary care is challenging and not feasible for large numbers. Evidence regarding the predictive value of symptoms, signs and urinalysis for UTI in young children is urgently needed to help primary care clinicians better identify children who should be investigated for UTI. This paper describes the protocol for the Diagnosis of Urinary Tract infection in Young children (DUTY) study. The overall study aim is to derive and validate a cost-effective clinical algorithm for the diagnosis of UTI in children presenting to primary care acutely unwell. DUTY is a multicentre, diagnostic and prospective observational study aiming to recruit at least 7,000 children aged before their fifth birthday, being assessed in primary care for any acute, non-traumatic, illness of ≤ 28 days duration. Urine samples will be obtained from eligible consented children, and data collected on medical history and presenting symptoms and signs. Urine samples will be dipstick tested in general practice and sent for microbiological analysis. All children with culture positive urines and a random sample of children with urine culture results in other, non-positive categories will be followed up to record symptom duration and healthcare resource use. A diagnostic algorithm will be constructed and validated and an economic evaluation conducted. The primary outcome will be a validated diagnostic algorithm using a reference standard of a pure/predominant growth of at least 10³, but usually >10⁵, CFU/mL of one, but no more than two uropathogens. We will use logistic regression to identify the clinical predictors (i.e. demographic, medical history, presenting signs and symptoms and urine dipstick analysis results) most strongly associated with a positive urine culture result. We will then use economic evaluation to compare the cost
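
    The modelling step described in this protocol fits a multivariable logistic regression to identify which demographic, history, symptom, sign and dipstick variables predict a positive urine culture. A minimal sketch of that kind of analysis, with entirely illustrative variable names and simulated data rather than the DUTY dataset, might look like this.

        import numpy as np
        import statsmodels.api as sm
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 2000
        # Illustrative binary predictors: fever, a dysuria-type symptom, positive nitrite dipstick
        X = rng.binomial(1, [0.4, 0.15, 0.05], size=(n, 3)).astype(float)
        logit = -4.0 + 0.5 * X[:, 0] + 1.2 * X[:, 1] + 2.5 * X[:, 2]
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))          # simulated culture result

        model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
        print(model.summary2())                                 # odds ratios are np.exp(model.params)
        auc = roc_auc_score(y, model.predict(sm.add_constant(X)))
        print(f"Apparent AUROC: {auc:.2f}")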

  13. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  14. Can the Cancer-related Fatigue Case-definition Criteria Be Applied to Chronic Medical Illness? A Comparison between Breast Cancer and Systemic Sclerosis.

    Science.gov (United States)

    Kwakkenbos, Linda; Minton, Ollie; Stone, Patrick C; Alexander, Susanna; Baron, Murray; Hudson, Marie; Thombs, Brett D

    2015-07-01

    Fatigue is a crucial determinant of quality of life across rheumatic diseases, but the lack of agreed-upon standards for identifying clinically significant fatigue hinders research and clinical management. Case definition criteria for cancer-related fatigue were proposed for inclusion in the International Classification of Diseases. The objective was to evaluate whether the cancer-related fatigue case definition performed equivalently in women with breast cancer and systemic sclerosis (SSc) and could be used to identify patients with chronic illness-related fatigue. The cancer-related fatigue interview (case definition criteria met if ≥ 5 of 9 fatigue-related symptoms present with functional impairment) was completed by 291 women with SSc and 278 women successfully treated for breast cancer. Differential item functioning was assessed with the multiple indicator multiple cause model. Items 3 (concentration) and 10 (short-term memory) were endorsed significantly less often by women with SSc compared with cancer, controlling for responses on other items. Omitting these 2 items from the case definition and requiring 4 out of the 7 remaining symptoms resulted in a similar overall prevalence of cancer-related fatigue in the cancer sample compared with the original criteria (37.4% vs 37.8%, respectively), with 97.5% of patients diagnosed identically with both definitions. Prevalence of chronic illness-related fatigue was 36.1% in SSc using 4 of 7 symptoms. The cancer-related fatigue criteria can be used equivalently to identify patients with chronic illness-related fatigue when 2 cognitive fatigue symptoms are omitted. Harmonized definitions and measurement of clinically significant fatigue will advance research and clinical management of fatigue in rheumatic diseases and other conditions.

  15. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  16. Sensitivity, Specificity, and Public-Health Utility of Clinical Case Definitions Based on the Signs and Symptoms of Cholera in Africa.

    Science.gov (United States)

    Nadri, Johara; Sauvageot, Delphine; Njanpop-Lafourcade, Berthe-Marie; Baltazar, Cynthia S; Banla Kere, Abiba; Bwire, Godfrey; Coulibaly, Daouda; Kacou N'Douba, Adele; Kagirita, Atek; Keita, Sakoba; Koivogui, Lamine; Landoh, Dadja E; Langa, Jose P; Miwanda, Berthe N; Mutombo Ndongala, Guy; Mwakapeje, Elibariki R; Mwambeta, Jacob L; Mengel, Martin A; Gessner, Bradford D

    2018-04-01

    During 2014, Africa reported more than half of the global suspected cholera cases. Based on the data collected from seven countries in the African Cholera Surveillance Network (Africhol), we assessed the sensitivity, specificity, and positive and negative predictive values of clinical cholera case definitions, including that recommended by the World Health Organization (WHO), using culture confirmation as the gold standard. The study was designed to assess results in real-world field situations in settings with recent cholera outbreaks or endemicity. From June 2011 to July 2015, a total of 5,084 persons with suspected cholera were tested for Vibrio cholerae in seven different countries, of whom 35.7% had culture confirmation. For all countries combined, the WHO case definition had a sensitivity of 92.7%, specificity of 8.1%, positive predictive value of 36.1%, and negative predictive value of 66.6%. Adding dehydration, vomiting, or rice water stools to the case definition could increase the specificity without a substantial decrease in sensitivity. Future studies could further refine our findings primarily by using more sensitive methods for cholera confirmation.
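
    The diagnostic-accuracy figures quoted here follow directly from the 2×2 cross-tabulation of the clinical case definition against culture confirmation. A small helper of the kind that could produce such figures is sketched below; the counts are illustrative only, not the Africhol data.

        def diagnostic_accuracy(tp, fp, fn, tn):
            """Sensitivity, specificity, PPV and NPV from a 2x2 table."""
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn),
            }

        # Illustrative counts: case definition (rows) vs. culture result (columns)
        print(diagnostic_accuracy(tp=90, fp=55, fn=10, tn=45))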

  17. The impact of alternative diagnoses on the utility of influenza-like illness case definition to detect the 2009 H1N1 pandemic.

    Science.gov (United States)

    Rumoro, Dino P; Bayram, Jamil D; Silva, Julio C; Shah, Shital C; Hallock, Marilyn M; Gibbs, Gillian S; Waddell, Michael J

    2012-01-01

    To investigate the impact of excluding cases with alternative diagnoses on the sensitivity and specificity of the Centers for Disease Control and Prevention's (CDC) influenza-like illness (ILI) case definition in detecting the 2009 H1N1 influenza, using Geographic Utilization of Artificial Intelligence in Real-Time for Disease Identification and Alert Notification, a disease surveillance system. Retrospective cross-sectional study design. Emergency department of an urban tertiary care academic medical center. 1,233 ED cases, which were tested for respiratory viruses from September 5, 2009 to May 5, 2010. The main outcome measures were positive predictive value, negative predictive value, sensitivity, specificity, and accuracy of the ILI case definition (both including and excluding alternative diagnoses) to detect H1N1. There was a significant decrease in sensitivity (χ² = 9.09, p < 0.001) and significant improvement in specificity (χ² = 179, p < 0.001), after excluding cases with alternative diagnoses. When early detection of an influenza epidemic is of prime importance, pursuing alternative diagnoses as part of CDC's ILI case definition may not be warranted for public health reporting due to the significant decrease in sensitivity, in addition to the resources required for detecting these alternative diagnoses.

  18. Characteristics of fatal abusive head trauma among children in the USA: 2003-2007: an application of the CDC operational case definition to national vital statistics data.

    Science.gov (United States)

    Parks, Sharyn E; Kegler, Scott R; Annest, Joseph L; Mercy, James A

    2012-06-01

    In March of 2008, an expert panel was convened at the Centers for Disease Control and Prevention to develop code-based case definitions for abusive head trauma (AHT) in children under 5 years of age based on the International Classification of Diseases, 10th Revision (ICD-10) nature and cause of injury codes. This study presents the operational case definition and applies it to US death data. National Center for Health Statistics National Vital Statistics System data on multiple cause-of-death from 2003 to 2007 were examined. Inspection of records with at least one ICD-10 injury/disease code and at least one ICD-10 cause code from the AHT case definition resulted in the identification of 780 fatal AHT cases, with 699 classified as definite/presumptive AHT and 81 classified as probable AHT. The fatal AHT rate was highest among the youngest children. The case definition can help to identify population subgroups at higher risk for AHT, defined by year and month of death, age, sex and race/ethnicity. This type of definition may be useful for various epidemiological applications including research and surveillance. These activities can in turn inform further development of prevention activities, including educating parents about the dangers of shaking and strategies for managing infant crying.

  19. [Validation of the modified algorithm for predicting host susceptibility to viruses taking into account susceptibility parameters of primary target cell cultures and natural immunity factors].

    Science.gov (United States)

    Zhukov, V A; Shishkina, L N; Safatov, A S; Sergeev, A A; P'iankov, O V; Petrishchenko, V A; Zaĭtsev, B N; Toporkov, V S; Sergeev, A N; Nesvizhskiĭ, Iu V; Vorob'ev, A A

    2010-01-01

    The paper presents results of testing a modified algorithm for predicting virus ID50 values in a host of interest by extrapolation from a model host, taking into account immune neutralizing factors and thermal inactivation of the virus. The method was tested for A/Aichi/2/68 influenza virus in SPF Wistar rats, SPF CD-1 mice and conventional ICR mice. Each species was used as a host of interest while the other two served as model hosts. Primary lung and trachea cells and secretory factors of the rats' airway epithelium were used to measure the parameters needed for prediction. Predicted ID50 values were not significantly different (at the 0.05 significance level) from those experimentally measured in vivo. The study was supported by ISTC/DARPA Agreement 450p.

  20. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  1. SU-E-T-219: Comprehensive Validation of the Electron Monte Carlo Dose Calculation Algorithm in RayStation Treatment Planning System for An Elekta Linear Accelerator with AgilityTM Treatment Head

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yi; Park, Yang-Kyun; Doppke, Karen P. [Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, MA (United States)

    2015-06-15

    Purpose: This study evaluated the performance of the electron Monte Carlo dose calculation algorithm in RayStation v4.0 for an Elekta machine with Agility™ treatment head. Methods: The machine has five electron energies (6–18 MeV) and five applicators (6×6 to 25×25 cm²). The dose (cGy/MU at dmax), depth dose and profiles were measured in water using an electron diode at 100 cm SSD for nine square fields ≥2×2 cm² and four complex fields at normal incidence, and a 14×14 cm² field at 15° and 30° incidence. The dose was also measured for three square fields ≥4×4 cm² at 98, 105 and 110 cm SSD. Using selected energies, the EBT3 radiochromic film was used for dose measurements in slab-shaped inhomogeneous phantoms and a breast phantom with surface curvature. The measured and calculated doses were analyzed using a gamma criterion of 3%/3 mm. Results: The calculated and measured doses varied by <3% for 116 of the 120 points, and <5% for the 4×4 cm² field at 110 cm SSD at 9–18 MeV. The gamma analysis comparing the 105 pairs of in-water isodoses passed by >98.1%. The planar doses measured from films placed at 0.5 cm below a lung/tissue layer (12 MeV) and 1.0 cm below a bone/air layer (15 MeV) showed excellent agreement with calculations, with gamma passing by 99.9% and 98.5%, respectively. At the breast-tissue interface, the gamma passing rate is >98.8% at 12–18 MeV. The film results directly validated the accuracy of MU calculation and spatial dose distribution in the presence of tissue inhomogeneity and surface curvature - situations challenging for simpler pencil-beam algorithms. Conclusion: The electron Monte Carlo algorithm in RayStation v4.0 is fully validated for clinical use for the Elekta Agility™ machine. The comprehensive validation included small fields, complex fields, oblique beams, extended distance, tissue inhomogeneity and surface curvature.
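
    Measured and calculated doses above are compared with a gamma criterion of 3%/3 mm. The software used in the study is not described; below is a minimal, brute-force sketch of a 1D global gamma index (dose difference normalised to the maximum of the reference curve) just to make the acceptance criterion concrete.

        import numpy as np

        def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.03, dta=3.0):
            """Global 1D gamma index; dd is the fractional dose criterion, dta in mm."""
            norm = dd * ref_dose.max()
            gamma = np.empty_like(ref_dose)
            for i, (x, d) in enumerate(zip(ref_pos, ref_dose)):
                dose_term = (eval_dose - d) / norm
                dist_term = (eval_pos - x) / dta
                gamma[i] = np.sqrt(dose_term**2 + dist_term**2).min()
            return gamma

        # Toy example: evaluated profile shifted by 1 mm and scaled by 1%
        x = np.linspace(0, 100, 201)                 # positions in mm
        ref = np.exp(-((x - 50) / 20) ** 2)          # reference "profile"
        ev = 1.01 * np.exp(-((x - 51) / 20) ** 2)    # evaluated "profile"
        g = gamma_1d(x, ref, x, ev)
        print(f"Gamma pass rate (3%/3 mm): {100 * (g <= 1).mean():.1f}%")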

  2. Validating MODIS Above-Cloud Aerosol Optical Depth Retrieved from Color Ratio Algorithm Using Direct Measurements Made by NASA's Airborne AATS and 4STAR Sensors

    Science.gov (United States)

    Jethva, Hiren; Torres, Omar; Remer, Lorraine; Redemann, Jens; Livingston, John; Dunagan, Stephen; Shinozuka, Yohei; Kacenelenbogen, Meloe; Segal Rozenhaimer, Michal; Spurr, Rob

    2016-01-01

    We present the validation analysis of above-cloud aerosol optical depth (ACAOD) retrieved from the color ratio method applied to MODIS cloudy-sky reflectance measurements using the limited direct measurements made by NASA's airborne Ames Airborne Tracking Sunphotometer (AATS) and Spectrometer for Sky-Scanning, Sun-Tracking Atmospheric Research (4STAR) sensors. A thorough search of the airborne database collection revealed a total of five significant events in which an airborne sun photometer, coincident with the MODIS overpass, observed partially absorbing aerosols emitted from agricultural biomass burning, dust, and wildfires over a low-level cloud deck during the SAFARI-2000, ACE-ASIA 2001, and SEAC4RS 2013 campaigns, respectively. The co-located satellite-airborne match-ups revealed a good agreement (root-mean-square difference less than 0.1), with most match-ups falling within the estimated uncertainties associated with the MODIS retrievals (about −10% to +50%). The co-retrieved cloud optical depth was comparable to that of the MODIS operational cloud product for ACE-ASIA and SEAC4RS, but higher by 30-50% for the SAFARI-2000 case study. The reason for this discrepancy could be attributed to the distinct aerosol optical properties encountered during the respective campaigns. A brief discussion on the sources of uncertainty in the satellite-based ACAOD retrieval and co-location procedure is presented. Field experiments dedicated to making direct measurements of aerosols above cloud are needed for the extensive validation of satellite-based retrievals.

  3. Physical Validation of GPM Retrieval Algorithms Over Land: An Overview of the Mid-Latitude Continental Convective Clouds Experiment (MC3E)

    Science.gov (United States)

    Petersen, Walter A.; Jensen, Michael P.

    2011-01-01

    The joint NASA Global Precipitation Measurement (GPM) -- DOE Atmospheric Radiation Measurement (ARM) Midlatitude Continental Convective Clouds Experiment (MC3E) was conducted from April 22-June 6, 2011, centered on the DOE-ARM Southern Great Plains Central Facility site in northern Oklahoma. GPM field campaign objectives focused on the collection of airborne and ground-based measurements of warm-season continental precipitation processes to support refinement of GPM retrieval algorithm physics over land, and to improve the fidelity of coupled cloud resolving and land-surface satellite simulator models. DOE ARM objectives were synergistically focused on relating observations of cloud microphysics and the surrounding environment to feedbacks on convective system dynamics, an effort driven by the need to better represent those interactions in numerical modeling frameworks. More specific topics addressed by MC3E include ice processes and ice characteristics as coupled to precipitation at the surface and radiometer signals measured in space, the correlation properties of rainfall and drop size distributions and impacts on dual-frequency radar retrieval algorithms, the transition of cloud water to rain water (e.g., autoconversion processes) and the vertical distribution of cloud water in precipitating clouds, and vertical draft structure statistics in cumulus convection. The MC3E observational strategy relied on NASA ER-2 high-altitude airborne multi-frequency radar (HIWRAP Ka-Ku band) and radiometer (AMPR, CoSMIR; 10-183 GHz) sampling (a GPM "proxy") over an atmospheric column being simultaneously profiled in situ by the University of North Dakota Citation microphysics aircraft, an array of ground-based multi-frequency scanning polarimetric radars (DOE Ka-W, X and C-band; NASA D3R Ka-Ku and NPOL S-bands) and wind-profilers (S/UHF bands), supported by a dense network of over 20 disdrometers and rain gauges, all nested in the coverage of a six-station mesoscale rawinsonde

  4. The Diagnosis of Urinary Tract infection in Young children (DUTY): a diagnostic prospective observational study to derive and validate a clinical algorithm for the diagnosis of urinary tract infection in children presenting to primary care with an acute illness.

    Science.gov (United States)

    Hay, Alastair D; Birnie, Kate; Busby, John; Delaney, Brendan; Downing, Harriet; Dudley, Jan; Durbaba, Stevo; Fletcher, Margaret; Harman, Kim; Hollingworth, William; Hood, Kerenza; Howe, Robin; Lawton, Michael; Lisles, Catherine; Little, Paul; MacGowan, Alasdair; O'Brien, Kathryn; Pickles, Timothy; Rumsby, Kate; Sterne, Jonathan Ac; Thomas-Jones, Emma; van der Voort, Judith; Waldron, Cherry-Ann; Whiting, Penny; Wootton, Mandy; Butler, Christopher C

    2016-07-01

    It is not clear which young children presenting acutely unwell to primary care should be investigated for urinary tract infection (UTI) and whether or not dipstick testing should be used to inform antibiotic treatment. To develop algorithms to accurately identify pre-school children in whom urine should be obtained; assess whether or not dipstick urinalysis provides additional diagnostic information; and model algorithm cost-effectiveness. Multicentre, prospective diagnostic cohort study. Children [...] UTI likelihood ('clinical diagnosis') and urine sampling and treatment intentions ('clinical judgement') were recorded. All index tests were measured blind to the reference standard, defined as a pure or predominant uropathogen cultured at ≥ 10⁵ colony-forming units (CFU)/ml in a single research laboratory. Urine was collected by clean catch (preferred) or nappy pad. Index tests were sequentially evaluated in two groups, stratified by urine collection method: parent-reported symptoms with clinician-reported signs, and urine dipstick results. Diagnostic accuracy was quantified using area under the receiver operating characteristic curve (AUROC) with 95% confidence interval (CI) and bootstrap-validated AUROC, and compared with the 'clinician diagnosis' AUROC. Decision-analytic models were used to identify the optimal urine sampling strategy compared with 'clinical judgement'. A total of 7163 children were recruited, of whom 50% were female and 49% were [...] children provided clean-catch samples, 94% of whom were ≥ 2 years old, with 2.2% meeting the UTI definition. Among these, 'clinical diagnosis' correctly identified 46.6% of positive cultures, with 94.7% specificity and an AUROC of 0.77 (95% CI 0.71 to 0.83). Four symptoms, three signs and three dipstick results were independently associated with UTI with an AUROC (95% CI; bootstrap-validated AUROC) of 0.89 (0.85 to 0.95; validated 0.88) for symptoms and signs, increasing to 0.93 (0.90 to 0.97; validated 0.90) with dipstick
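
    The "bootstrap-validated AUROC" reported above corrects the apparent AUROC for optimism: the model is refit on bootstrap resamples and the average drop in performance when each resampled model is applied back to the original data is subtracted. A minimal sketch of that procedure, using simulated data and an sklearn logistic regression standing in for the study's model, is given below.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        def bootstrap_validated_auc(X, y, n_boot=200, seed=0):
            rng = np.random.default_rng(seed)
            model = LogisticRegression(max_iter=1000).fit(X, y)
            apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])
            optimism = []
            for _ in range(n_boot):
                idx = rng.integers(0, len(y), len(y))
                if len(np.unique(y[idx])) < 2:
                    continue                              # skip degenerate resamples
                m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
                auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
                auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
                optimism.append(auc_boot - auc_orig)
            return apparent, apparent - np.mean(optimism)

        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 5))
        y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1]))))
        print("apparent / bootstrap-validated AUROC:", bootstrap_validated_auc(X, y))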

  5. Validation of ozone profile retrievals derived from the OMPS LP version 2.5 algorithm against correlative satellite measurements

    Directory of Open Access Journals (Sweden)

    N. A. Kramarova

    2018-05-01

    Full Text Available The Limb Profiler (LP) is a part of the Ozone Mapping and Profiler Suite launched on board the Suomi NPP satellite in October 2011. The LP measures solar radiation scattered from the atmospheric limb in ultraviolet and visible spectral ranges between the surface and 80 km. These measurements of scattered solar radiances allow for the retrieval of ozone profiles from cloud tops up to 55 km. The LP started operational observations in April 2012. In this study we evaluate more than 5.5 years of ozone profile measurements from the OMPS LP processed with the new NASA GSFC version 2.5 retrieval algorithm. We provide a brief description of the key changes that had been implemented in this new algorithm, including a pointing correction, new cloud height detection, explicit aerosol correction and a reduction of the number of wavelengths used in the retrievals. The OMPS LP ozone retrievals have been compared with independent satellite profile measurements obtained from the Aura Microwave Limb Sounder (MLS), Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS) and Odin Optical Spectrograph and InfraRed Imaging System (OSIRIS). We document observed biases and seasonal differences and evaluate the stability of the version 2.5 ozone record over 5.5 years. Our analysis indicates that the mean differences between LP and correlative measurements are well within the required ±10% between 18 and 42 km. In the upper stratosphere and lower mesosphere (> 43 km) LP tends to have a negative bias. We find larger biases in the lower stratosphere and upper troposphere, but LP ozone retrievals have significantly improved in version 2.5 compared to version 2 due to the implemented aerosol correction. In the northern high latitudes we observe larger biases between 20 and 32 km due to the remaining thermal sensitivity issue. Our analysis shows that LP ozone retrievals agree well with the correlative satellite observations in characterizing

  6. Experimental validation of deterministic Acuros XB algorithm for IMRT and VMAT dose calculations with the Radiological Physics Center's head and neck phantom

    International Nuclear Information System (INIS)

    Han Tao; Mourtada, Firas; Kisling, Kelly; Mikell, Justin; Followill, David; Howell, Rebecca

    2012-01-01

    Purpose: The purpose of this study was to verify the dosimetric performance of Acuros XB (AXB), a grid-based Boltzmann solver, in intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). Methods: The Radiological Physics Center (RPC) head and neck (H&N) phantom was used for all calculations and measurements in this study. Clinically equivalent IMRT and VMAT plans were created on the RPC H&N phantom in the Eclipse treatment planning system (version 10.0) by using RPC dose prescription specifications. The dose distributions were calculated with two different algorithms, AXB 11.0.03 and anisotropic analytical algorithm (AAA) 10.0.24. Two dose report modes of AXB were recorded: dose-to-medium in medium (Dm,m) and dose-to-water in medium (Dw,m). Each treatment plan was delivered to the RPC phantom three times for reproducibility by using a Varian Clinac iX linear accelerator. Absolute point dose and planar dose were measured with thermoluminescent dosimeters (TLDs) and GafChromic® EBT2 film, respectively. Profile comparison and 2D gamma analysis were used to quantify the agreement between the film measurements and the calculated dose distributions from both AXB and AAA. The computation times for AAA and AXB were also evaluated. Results: Good agreement was observed between measured doses and those calculated with AAA or AXB. Both AAA and AXB calculated doses within 5% of TLD measurements in both the IMRT and VMAT plans. Results of AXB Dm,m (0.1% to 3.6%) were slightly better than AAA (0.2% to 4.6%) or AXB Dw,m (0.3% to 5.1%). The gamma analysis for both AAA and AXB met the RPC 7%/4 mm criteria (over 90% passed), whereas AXB Dm,m met 5%/3 mm criteria in most cases. AAA was 2 to 3 times faster than AXB for IMRT, whereas AXB was 4-6 times faster than AAA for VMAT. Conclusions: AXB was found to be satisfactorily accurate when compared to measurements in the RPC H&N phantom. Compared with AAA, AXB results were equal

  7. Underwater tracking of a moving dipole source using an artificial lateral line: algorithm and experimental validation with ionic polymer–metal composite flow sensors

    International Nuclear Information System (INIS)

    Abdulsadda, Ahmad T; Tan, Xiaobo

    2013-01-01

    Motivated by the lateral line system of fish, arrays of flow sensors have been proposed as a new sensing modality for underwater robots. Existing studies on such artificial lateral lines (ALLs) have been mostly focused on the localization of a fixed underwater vibrating sphere (dipole source). In this paper we examine the problem of tracking a moving dipole source using an ALL system. Based on an analytical model for the moving dipole-generated flow field, we formulate a nonlinear estimation problem that aims to minimize the error between the measured and model-predicted magnitudes of flow velocities at the sensor sites, which is subsequently solved with the Gauss–Newton scheme. A sliding discrete Fourier transform (SDFT) algorithm is proposed to efficiently compute the evolving signal magnitudes based on the flow velocity measurements. Simulation indicates that it is adequate and more computationally efficient to use only the signal magnitudes corresponding to the dipole vibration frequency. Finally, experiments conducted with an artificial lateral line consisting of six ionic polymer–metal composite (IPMC) flow sensors demonstrate that the proposed scheme is able to simultaneously locate the moving dipole and estimate its vibration amplitude and traveling speed with small errors. (paper)
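
    The sliding discrete Fourier transform used above updates a single DFT bin each time a new flow-velocity sample arrives, which is what makes tracking the magnitude at the dipole vibration frequency cheap. The paper's implementation is not reproduced here; the following is a generic SDFT sketch for one bin k of an N-sample window.

        import numpy as np

        class SlidingDFT:
            """Track one DFT bin of the last N samples, updated per incoming sample."""
            def __init__(self, n_window: int, k_bin: int):
                self.n = n_window
                self.twiddle = np.exp(2j * np.pi * k_bin / n_window)
                self.buffer = np.zeros(n_window)
                self.x_k = 0.0 + 0.0j

            def update(self, sample: float) -> complex:
                oldest = self.buffer[0]
                self.buffer = np.roll(self.buffer, -1)
                self.buffer[-1] = sample
                # SDFT recurrence: X_k <- (X_k - x_old + x_new) * e^{j 2 pi k / N}
                self.x_k = (self.x_k - oldest + sample) * self.twiddle
                return self.x_k

        # Toy check: magnitude of a 5 Hz tone sampled at 100 Hz, 100-sample window
        fs, f, n = 100, 5, 100
        sdft = SlidingDFT(n_window=n, k_bin=int(f * n / fs))
        for t in range(400):
            mag = abs(sdft.update(np.sin(2 * np.pi * f * t / fs)))
        print(f"steady-state bin magnitude ≈ {mag:.1f} (expected ≈ N/2 = {n/2})")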

  8. A Multi-Center Prospective Study to Validate an Algorithm Using Urine and Plasma Biomarkers for Predicting Gleason ≥3+4 Prostate Cancer on Biopsy

    DEFF Research Database (Denmark)

    Albitar, Maher; Ma, Wanlong; Lund, Lars

    2017-01-01

    a prospective multicenter study recruiting patients from community-based practices. Patients and Methods: Urine and plasma samples from 2528 men were tested prospectively. Results were correlated with biopsy findings, if a biopsy was performed as deemed necessary by the practicing urologist. Of the 2528......Background: Unnecessary biopsies and overdiagnosis of prostate cancer (PCa) remain a serious healthcare problem. We have previously shown that urine- and plasma-based prostate-specific biomarkers when combined can predict high grade prostate cancer (PCa). To further validate this test, we performed...... of high grade prostate cancer with negative predictive value (NPV) of 90% to 97% for Gleason ≥3+4 and between 98% to 99% for Gleason ≥4+3....

  9. Optimisation and validation of a 3D reconstruction algorithm for single photon emission computed tomography by means of GATE simulation platform

    International Nuclear Information System (INIS)

    El Bitar, Ziad

    2006-12-01

    Although time consuming, Monte-Carlo simulations remain an efficient tool for assessing correction methods for degrading physical effects in medical imaging. We have optimized and validated a reconstruction method named F3DMC (Fully 3D Monte Carlo) in which the physical effects degrading the image formation process are modelled using Monte-Carlo methods and integrated within the system matrix. We used the Monte-Carlo simulation toolbox GATE. We validated GATE in SPECT by modelling the gamma-camera (Philips AXIS) used in clinical routine. Thresholding, filtering by principal component analysis and targeted reconstruction (functional regions, hybrid regions) were used to improve the precision of the system matrix and to reduce the number of simulated photons as well as the required computation time. The EGEE Grid infrastructure was used to deploy the GATE simulations in order to reduce their computation time. Results obtained with F3DMC were compared with the FBP, ML-EM and MLEMC reconstruction methods for a simulated phantom, and with the OSEM-C method for a real phantom. The results show that the F3DMC method and its variants improve the restoration of activity ratios and the signal-to-noise ratio. By using the EGEE grid, a significant speed-up factor of about 300 was obtained. These results should be confirmed by studies on complex phantoms and patients, and open the door to a unified reconstruction method that could be used in SPECT and also in PET. (author)
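
    F3DMC folds the Monte-Carlo-modelled physics into the system matrix and then reconstructs iteratively. The abstract does not spell out the update rule; the sketch below shows the standard ML-EM iteration (one of the comparison methods named above) on a randomly generated system matrix, i.e. the kind of update such a matrix would be plugged into.

        import numpy as np

        def mlem(system_matrix, projections, n_iter=50):
            """Standard ML-EM update: x <- x / (A^T 1) * A^T (y / (A x))."""
            a = system_matrix                       # shape (n_bins, n_voxels)
            sensitivity = a.sum(axis=0)             # A^T 1
            x = np.ones(a.shape[1])
            for _ in range(n_iter):
                expected = a @ x
                ratio = projections / np.maximum(expected, 1e-12)
                x *= (a.T @ ratio) / np.maximum(sensitivity, 1e-12)
            return x

        # Toy problem: recover a small activity vector from noiseless projections
        rng = np.random.default_rng(0)
        A = rng.random((40, 10))
        true_x = rng.random(10)
        y = A @ true_x
        print(np.round(mlem(A, y, n_iter=500), 3), np.round(true_x, 3))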

  10. Development and validation of automatic tools for interactive recurrence analysis in radiation therapy: optimization of treatment algorithms for locally advanced pancreatic cancer.

    Science.gov (United States)

    Kessel, Kerstin A; Habermehl, Daniel; Jäger, Andreas; Floca, Ralf O; Zhang, Lanlan; Bendl, Rolf; Debus, Jürgen; Combs, Stephanie E

    2013-06-07

    In radiation oncology, recurrence analysis is an important part of the evaluation process and of clinical quality assurance of treatment concepts. Using the example of 9 patients with locally advanced pancreatic cancer, we developed and validated interactive analysis tools to support the evaluation workflow. After an automatic registration of the radiation planning CTs with the follow-up images, the recurrence volumes are segmented manually. Based on these volumes, the DVH (dose volume histogram) statistics are calculated, followed by determination of the dose applied to the region of recurrence and the distance between the boost and recurrence volumes. We calculated the percentage of the recurrence volume within the 80%-isodose volume and compared it to the location of the recurrence within the boost volume, boost + 1 cm, boost + 1.5 cm and boost + 2 cm volumes. Recurrence analysis of 9 patients demonstrated that all recurrences except one occurred within the defined GTV/boost volume; one recurrence developed outside the field border (out-of-field). With the defined distance volumes in relation to the recurrences, we could show that 7 recurrent lesions were within the 2 cm radius of the primary tumor. Two large recurrences extended beyond the 2 cm radius; however, this might be due to very rapid growth and/or late detection of the tumor progression. The main goal of using automatic analysis tools is to reduce the time and effort of conducting clinical analyses. We showed a first approach and use of a semi-automated workflow for recurrence analysis, which will be continuously optimized. In conclusion, despite the limitations of the automatic calculations, we contributed to in-house optimization of subsequent study concepts based on an improved and validated target volume definition.
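
    One of the quantities reported above is the percentage of the recurrence volume lying inside the 80% isodose volume (and inside the expanded boost volumes). Assuming the recurrence segmentation and the dose grid have already been resampled to a common voxel grid, that overlap reduces to simple mask arithmetic, as in the sketch below (the dose cube and prescription are synthetic).

        import numpy as np

        def fraction_inside(recurrence_mask, dose, prescription, iso_percent=80.0):
            """Fraction of the recurrence volume receiving >= iso_percent of prescription."""
            iso_mask = dose >= (iso_percent / 100.0) * prescription
            rec = recurrence_mask.astype(bool)
            if rec.sum() == 0:
                raise ValueError("empty recurrence mask")
            return np.logical_and(rec, iso_mask).sum() / rec.sum()

        # Toy example on a synthetic dose cube (prescription 45 Gy, illustrative)
        dose = np.fromfunction(
            lambda z, y, x: 45 * np.exp(-((x - 16)**2 + (y - 16)**2 + (z - 16)**2) / 200),
            (32, 32, 32))
        recurrence = np.zeros((32, 32, 32), dtype=bool)
        recurrence[14:20, 14:20, 14:20] = True
        print(f"{100 * fraction_inside(recurrence, dose, 45):.1f}% of recurrence inside 80% isodose")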

  11. SU-F-P-39: End-To-End Validation of a 6 MV High Dose Rate Photon Beam, Configured for Eclipse AAA Algorithm Using Golden Beam Data, for SBRT Treatments Using RapidArc

    Energy Technology Data Exchange (ETDEWEB)

    Ferreyra, M; Salinas Aranda, F; Dodat, D; Sansogne, R; Arbiser, S [Vidt Centro Medico, Ciudad Autonoma De Buenos Aires, Ciudad Autonoma de Buenos Aire (Argentina)

    2016-06-15

    Purpose: To use end-to-end testing to validate a 6 MV high dose rate photon beam, configured for the Eclipse AAA algorithm using Golden Beam Data (GBD), for SBRT treatments using RapidArc. Methods: Beam data were configured for the Varian Eclipse AAA algorithm using the GBD provided by the vendor. Transverse and diagonal dose profiles, PDDs and output factors down to a field size of 2×2 cm² were measured on a Varian Trilogy Linac and compared with the GBD library using 2%/2 mm 1D gamma analysis. The MLC transmission factor and dosimetric leaf gap were determined to characterize the MLC in Eclipse. Mechanical and dosimetric tests were performed combining different gantry rotation speeds, dose rates and leaf speeds to evaluate the delivery system performance according to VMAT accuracy requirements. An end-to-end test was implemented by planning several SBRT RapidArc treatments on a CIRS 002LFC IMRT Thorax Phantom. The CT scanner calibration curve was acquired and loaded in Eclipse. A PTW 31013 ionization chamber was used with a Keithley 35617EBS electrometer for absolute point dose measurements in water and lung-equivalent inserts. TPS-calculated planar dose distributions were compared to those measured using EPID and MapCheck, as an independent verification method. Results were evaluated with gamma criteria of 2% dose difference and 2 mm DTA for 95% of points. Results: The GBD set vs. measured data passed 2%/2 mm 1D gamma analysis even for small fields. Machine performance tests show results are independent of machine delivery configuration, as expected. Absolute point dosimetry agreed within 4% for the worst-case scenario in lung. Over 97% of the points evaluated in dose distributions passed gamma index analysis. Conclusion: The Eclipse AAA algorithm configuration of the 6 MV high dose rate photon beam using GBD proved efficient. End-to-end test dose calculation results indicate it can be used clinically for SBRT using RapidArc.

  12. A contrast-oriented algorithm for FDG-PET-based delineation of tumour volumes for the radiotherapy of lung cancer: derivation from phantom measurements and validation in patient data

    Energy Technology Data Exchange (ETDEWEB)

    Schaefer, Andrea; Hellwig, Dirk; Kirsch, Carl-Martin; Nestle, Ursula [Saarland University Medical Center, Department of Nuclear Medicine, Homburg (Germany); Kremp, Stephanie; Ruebe, Christian [Saarland University Medical Center, Department of Radiotherapy, Homburg (Germany)

    2008-11-15

    An easily applicable algorithm for the FDG-PET-based delineation of tumour volumes for the radiotherapy of lung cancer was developed by phantom measurements and validated in patient data. PET scans were performed (ECAT-ART tomograph) on two cylindrical phantoms (phan1, phan2) containing glass spheres of different volumes (7.4-258 ml) which were filled with identical FDG concentrations. Gradually increasing the activity of the fillable background, signal-to-background ratios from 33:1 to 2.5:1 were realised. The mean standardised uptake value (SUV) of the region-of-interest (ROI) surrounded by a 70% isocontour (mSUV70) was used to represent the FDG accumulation of each sphere (or tumour). Image contrast was defined as: C = (mSUV70 − BG)/BG, where BG is the mean background SUV. For the spheres of phan1, the threshold SUVs (TS) best matching the known sphere volumes were determined. A regression function representing the relationship between TS/(mSUV70 − BG) and C was calculated and used for delineation of the spheres in phan2 and the gross tumour volumes (GTVs) of eight primary lung tumours. These GTVs were compared to those defined using CT. The relationship between TS/(mSUV70 − BG) and C is best described by an inverse regression function which can be converted to the linear relationship TS = a × mSUV70 + b × BG. Using this algorithm, the volumes delineated in phan2 differed by only -0.4 to +0.7 mm in radius from the true ones, whilst the PET-GTVs differed by only -0.7 to +1.2 mm compared with the values determined by CT. By the contrast-oriented algorithm presented in this study, a PET-based delineation of GTVs for primary tumours of lung cancer patients is feasible. (orig.)
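
    The delineation rule derived above is explicit enough to state in code: compute mSUV70 (mean SUV inside the 70% isocontour), the background SUV, the contrast C = (mSUV70 − BG)/BG, and threshold the image at TS = a·mSUV70 + b·BG. The regression coefficients a and b in the sketch below are placeholders, not the values fitted in the paper.

        import numpy as np

        def contrast_oriented_gtv(suv, background_roi, a=0.5, b=0.5):
            """Threshold-based GTV mask following TS = a*mSUV70 + b*BG (a, b illustrative)."""
            bg = suv[background_roi].mean()                      # mean background SUV
            iso70 = suv >= 0.7 * suv.max()                       # 70% isocontour ROI
            msuv70 = suv[iso70].mean()
            contrast = (msuv70 - bg) / bg
            ts = a * msuv70 + b * bg                             # threshold SUV
            return suv >= ts, contrast, ts

        # Toy example: a hot "tumour" on a uniform background
        suv = np.full((64, 64), 1.0)
        yy, xx = np.mgrid[:64, :64]
        suv += 7.0 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 30)
        bg_roi = np.zeros_like(suv, dtype=bool); bg_roi[:10, :10] = True
        gtv, c, ts = contrast_oriented_gtv(suv, bg_roi)
        print(f"contrast C = {c:.2f}, threshold SUV = {ts:.2f}, GTV voxels = {gtv.sum()}")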

  13. Multimaterial Decomposition Algorithm for the Quantification of Liver Fat Content by Using Fast-Kilovolt-Peak Switching Dual-Energy CT: Experimental Validation.

    Science.gov (United States)

    Hyodo, Tomoko; Hori, Masatoshi; Lamb, Peter; Sasaki, Kosuke; Wakayama, Tetsuya; Chiba, Yasutaka; Mochizuki, Teruhito; Murakami, Takamichi

    2017-02-01

    Purpose To assess the ability of fast-kilovolt-peak switching dual-energy computed tomography (CT) by using the multimaterial decomposition (MMD) algorithm to quantify liver fat. Materials and Methods Fifteen syringes that contained various proportions of swine liver obtained from an abattoir, lard in food products, and iron (saccharated ferric oxide) were prepared. Approval of this study by the animal care and use committee was not required. Solid cylindrical phantoms that consisted of a polyurethane epoxy resin 20 and 30 cm in diameter that held the syringes were scanned with dual- and single-energy 64-section multidetector CT. CT attenuation on single-energy CT images (in Hounsfield units) and MMD-derived fat volume fraction (FVF; dual-energy CT FVF) were obtained for each syringe, as were magnetic resonance (MR) spectroscopy measurements by using a 1.5-T imager (fat fraction [FF] of MR spectroscopy). Reference values of FVF (FVF_ref) were determined by using the Soxhlet method. Iron concentrations were determined by inductively coupled plasma optical emission spectroscopy and divided into three ranges (0 mg per 100 g, 48.1-55.9 mg per 100 g, and 92.6-103.0 mg per 100 g). Statistical analysis included Spearman rank correlation and analysis of covariance. Results Both dual-energy CT FVF (ρ = 0.97; P [...] iron. Phantom size had a significant effect on dual-energy CT FVF after controlling for FVF_ref (P [...] iron concentrations, the linear coefficients of dual-energy CT FVF decreased and those of MR spectroscopy FF increased (P [...] iron, dual-energy CT FVF led to underestimation of FVF_ref to a lesser degree than FF of MR spectroscopy led to overestimation of FVF_ref.

  14. Mitochondrial DNA assessment in adipocytes and peripheral blood mononuclear cells of HIV-infected patients with lipodystrophy according to a validated case definition

    NARCIS (Netherlands)

    Casula, M.; van der Valk, M.; Wit, F. W.; Nievaard, M. A.; Reiss, P.

    2007-01-01

    BACKGROUND: Several studies have compared mitochondrial DNA (mtDNA) content in tissue from HIV-1-infected patients on highly active antiretroviral therapy with and without evidence of lipodystrophy, the diagnosis of which was based on subjective clinical assessment. OBJECTIVES: The aim of this study

  15. Characterization of trabecular bone plate-rod microarchitecture using multirow detector CT and the tensor scale: Algorithms, validation, and applications to pilot human studies

    Science.gov (United States)

    Saha, Punam K.; Liu, Yinxiao; Chen, Cheng; Jin, Dakai; Letuchy, Elena M.; Xu, Ziyue; Amelon, Ryan E.; Burns, Trudy L.; Torner, James C.; Levy, Steven M.; Calarge, Chadi A.

    2015-01-01

    Purpose: Osteoporosis is a common bone disease associated with increased risk of low-trauma fractures leading to substantial morbidity, mortality, and financial costs. Clinically, osteoporosis is defined by low bone mineral density (BMD); however, increasing evidence suggests that trabecular bone (TB) microarchitectural quality is an important determinant of bone strength and fracture risk. A tensor scale based algorithm for in vivo characterization of TB plate-rod microarchitecture at the distal tibia using multirow detector CT (MD-CT) imaging is presented and its performance and applications are examined. Methods: The tensor scale characterizes individual TB on the continuum between a perfect plate and a perfect rod and computes their orientation using optimal ellipsoidal representation of local structures. The accuracy of the method was evaluated using computer-generated phantom images at a resolution and signal-to-noise ratio achievable in vivo. The robustness of the method was examined in terms of stability across a wide range of voxel sizes, repeat scan reproducibility, and correlation between TB measures derived by imaging human ankle specimens under ex vivo and in vivo conditions. Finally, the application of the method was evaluated in pilot human studies involving healthy young-adult volunteers (age: 19 to 21 yr; 51 females and 46 males) and patients treated with selective serotonin reuptake inhibitors (SSRIs) (age: 19 to 21 yr; six males and six females). Results: An error of (3.2% ± 2.0%) (mean ± SD), computed as deviation from known measures of TB plate-width, was observed for computer-generated phantoms. An intraclass correlation coefficient of 0.95 was observed for tensor scale TB measures in repeat MD-CT scans where the measures were averaged over a small volume of interest of 1.05 mm diameter with limited smoothing effects. The method was found to be highly stable at different voxel sizes with an error of (2.29% ± 1.56%) at an in vivo voxel size

  16. Fast algorithm for Morphological Filters

    International Nuclear Information System (INIS)

    Lou Shan; Jiang Xiangqian; Scott, Paul J

    2011-01-01

    In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time consuming, especially for areal data, and are not generally adopted in real practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is equivalent in theory to the morphological opening/closing. The algorithm depends on Delaunay triangulation with time complexity O(n log n). In comparison to the naive algorithms it generates the opening and closing envelope without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well both for morphological profile and areal filters. Examples are presented to demonstrate the validity of this algorithm and its superior efficiency over the naive algorithm.
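
    As a point of reference for the naive approach that the fast alpha-shape method replaces, the closing envelope of a measured profile with a ball (disk) structuring element can be obtained directly by grey-scale dilation followed by erosion. A naive 1D sketch, assuming evenly spaced samples and a ball radius expressed in sample units, is shown below; the alpha-shape algorithm computes an equivalent hull via Delaunay triangulation instead.

        import numpy as np
        from scipy.ndimage import grey_closing

        def ball_structuring_function(radius_samples: int):
            """Upper half of a disk of given radius, sampled at unit spacing."""
            x = np.arange(-radius_samples, radius_samples + 1)
            return np.sqrt(radius_samples**2 - x.astype(float)**2)

        # Naive closing envelope of a noisy surface profile with a radius-10 ball
        rng = np.random.default_rng(0)
        profile = np.sin(np.linspace(0, 6 * np.pi, 500)) + 0.05 * rng.normal(size=500)
        ball = ball_structuring_function(10)
        closing_envelope = grey_closing(profile, structure=ball, mode="nearest")
        print(closing_envelope[:5])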

  17. A clinical decision support system algorithm for intravenous to oral antibiotic switch therapy: validity, clinical relevance and usefulness in a three-step evaluation study.

    Science.gov (United States)

    Akhloufi, H; Hulscher, M; van der Hoeven, C P; Prins, J M; van der Sijs, H; Melles, D C; Verbon, A

    2018-04-26

    To evaluate a clinical decision support system (CDSS) based on consensus-based intravenous to oral switch criteria, which identifies intravenous to oral switch candidates. A three-step evaluation study of a stand-alone CDSS with electronic health record interoperability was performed at the Erasmus University Medical Centre in the Netherlands. During the first step, we performed a technical validation. During the second step, we determined the sensitivity, specificity, negative predictive value and positive predictive value in a retrospective cohort of all hospitalized adult patients starting at least one therapeutic antibacterial drug between 1 and 16 May 2013. ICU, paediatric and psychiatric wards were excluded. During the last step, the clinical relevance and usefulness were prospectively assessed by reports to infectious disease specialists. An alert was considered clinically relevant if antibiotics could be discontinued or switched to oral therapy at the time of the alert. During the first step, one technical error was found. The second step yielded a positive predictive value of 76.6% and a negative predictive value of 99.1%. The third step showed that alerts were clinically relevant in 53.5% of patients. For 43.4%, the treating physician had already decided to discontinue or switch the intravenous antibiotics. In 10.1%, the alert resulted in advice to change antibiotic policy and was considered useful. This prospective cohort study shows that the alerts were clinically relevant in >50% of cases (n = 449) and useful in 10% (n = 85). The CDSS needs to be evaluated in hospitals with varying activity of infectious disease consultancy services, as this probably influences usefulness.

  18. An Ontology to Improve Transparency in Case Definition and Increase Case Finding of Infectious Intestinal Disease: Database Study in English General Practice.

    Science.gov (United States)

    de Lusignan, Simon; Shinneman, Stacy; Yonova, Ivelina; van Vlymen, Jeremy; Elliot, Alex J; Bolton, Frederick; Smith, Gillian E; O'Brien, Sarah

    2017-09-28

    Infectious intestinal disease (IID) has considerable health impact; there are 2 billion cases worldwide resulting in 1 million deaths and 78.7 million disability-adjusted life years lost. Reported IID incidence rates vary, and this is partly because terms such as "diarrheal disease" and "acute infectious gastroenteritis" are used interchangeably. Ontologies provide a method of transparently comparing case definitions and disease incidence rates. This study sought to show how differences in case definition in part account for variation in incidence estimates for IID and how an ontological approach provides greater transparency to IID case finding. We compared three IID case definitions: (1) the Royal College of General Practitioners Research and Surveillance Centre (RCGP RSC) definition based on mapping to the International Classification of Diseases, Ninth Revision (ICD-9); (2) a newer ICD-10 definition; and (3) an ontological case definition. We calculated incidence rates and examined the contribution of four supporting concepts related to IID: symptoms, investigations, process of care (eg, notification to public health authorities), and therapies. We created a formal ontology using the Web Ontology Language (OWL). The ontological approach identified 5712 more cases of IID than the ICD-10 definition and 4482 more than the RCGP RSC definition from an initial cohort of 1,120,490. Weekly incidence using the ontological definition was 17.93/100,000 (95% CI 15.63-20.41), whereas for the ICD-10 definition the rate was 8.13/100,000 (95% CI 6.70-9.87), and for the RSC definition the rate was 10.24/100,000 (95% CI 8.55-12.12). Codes from the four supporting concepts were generally consistent across our three IID case definitions: 37.38% (3905/10,448) (95% CI 36.16-38.5) for the ontological definition, 38.33% (2287/5966) (95% CI 36.79-39.93) for the RSC definition, and 40.82% (1933/4736) (95% CI 39.03-42.66) for the ICD-10 definition. The proportion of laboratory results associated with a positive test
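
    The weekly incidence rates above are case counts scaled to person-time per 100,000. The study does not state which confidence interval method it used; one common choice, assumed here, is an exact Poisson (Garwood) interval on the count, sketched below with illustrative denominators.

        from scipy.stats import chi2

        def poisson_rate_ci(cases, person_weeks, per=100_000, alpha=0.05):
            """Exact Poisson CI for an incidence rate expressed per `per` person-weeks."""
            lower = chi2.ppf(alpha / 2, 2 * cases) / 2 if cases > 0 else 0.0
            upper = chi2.ppf(1 - alpha / 2, 2 * (cases + 1)) / 2
            rate = cases / person_weeks * per
            return rate, lower / person_weeks * per, upper / person_weeks * per

        # Illustrative numbers only (not the study's denominators)
        print(poisson_rate_ci(cases=200, person_weeks=1_115_000))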

  19. Guillain-Barré syndrome following receipt of influenza A (H1N1) 2009 monovalent vaccine in Korea with an emphasis on Brighton Collaboration case definition.

    Science.gov (United States)

    Choe, Young June; Cho, Heeyeon; Bae, Geun-Ryang; Lee, Jong-Koo

    2011-03-03

    During the 2009-2010 season, with influenza A (H1N1) circulating, mass vaccination generated concerns about adverse events following immunization (AEFI). This study investigates the clinical and laboratory data of cases of Guillain-Barré syndrome (GBS) and Fisher syndrome (FS) following receipt of influenza A (H1N1) 2009 monovalent vaccine reported to the National Vaccine Injury Compensation Program (NVICP) in Korea, with all cases reviewed under the case definition developed by the Brighton Collaboration GBS Working Group. A retrospective review of medical records for all suspected cases of GBS and FS following receipt of influenza A (H1N1) monovalent vaccine reported to the NVICP from December 1, 2009, through April 28, 2010 was conducted. Additional analyses were performed to identify levels of diagnostic certainty according to the Brighton Collaboration case definition. Of 29 reported cases, 22 were confirmed to meet Brighton criteria level 1, 2, or 3 for GBS (21) or FS (1). Of those, 2 (9.1%) met level 1, 9 (40.9%) met level 2, and 11 (50.0%) met level 3. The male to female ratio was 2:0 in cases with level 1, 8:1 in cases with level 2, and 3:8 in cases with level 3. The mean age was older in cases with level 1 (54.0 ± 26.9) than in cases with level 2 (25.6 ± 22.8) and level 3 (13.6 ± 2.4, P=0.005). The median onset interval was longer in cases with level 1 (16 days) than in cases that met level 2 (12.44 days) and 3 (1.09 days, P=0.019). The Brighton case definition was used to improve the quality of AEFI data in Korea, and was applicable in the retrospective review of medical records in cases with GBS and FS after influenza A (H1N1) vaccination. These findings suggest that the standardized case definition was feasible for clarifying the AEFI data and for further increasing the understanding of the possible relationship between influenza vaccine and GBS.

  20. Detection of algorithmic trading

    Science.gov (United States)

    Bogoev, Dimitar; Karam, Arzé

    2017-10-01

    We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
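
    The quote volatility ratio is described only qualitatively above (the rate of oscillation of the best bid and ask over very short intervals). One plausible, purely illustrative reading, counting direction reversals of the best ask across quote updates, is sketched below; the authors' exact formula may well differ.

        import numpy as np

        def quote_volatility_ratio(best_ask: np.ndarray) -> float:
            """Illustrative proxy: share of quote moves that reverse direction."""
            moves = np.sign(np.diff(best_ask))
            moves = moves[moves != 0]                      # ignore unchanged quotes
            if len(moves) < 2:
                return 0.0
            reversals = np.sum(moves[1:] != moves[:-1])
            return reversals / (len(moves) - 1)

        # Oscillating quotes (suggestive of algorithmic quoting) vs. a drifting quote
        oscillating = np.array([10.00, 10.01, 10.00, 10.01, 10.00, 10.01])
        drifting = np.array([10.00, 10.01, 10.02, 10.02, 10.03, 10.04])
        print(quote_volatility_ratio(oscillating), quote_volatility_ratio(drifting))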

  1. Evaluation of an influenza-like illness case definition in the diagnosis of influenza among patients with acute febrile illness in Cambodia.

    Science.gov (United States)

    Kasper, Matthew R; Wierzba, Thomas F; Sovann, Ly; Blair, Patrick J; Putnam, Shannon D

    2010-11-07

    Influenza-like illness (ILI) is often defined as fever (>38.0°C) with cough or sore throat. In this study, we tested the sensitivity, specificity, and positive and negative predictive values of this case definition in a Cambodian patient population. Passive clinic-based surveillance was established at nine healthcare centers to identify the causes of acute undifferentiated fever in patients aged two years and older seeking treatment. Fever was defined as tympanic membrane temperature >38°C lasting more than 24 hours and less than 10 days. Influenza virus infections were identified by polymerase chain reaction. From July 2008 to December 2008, 2,639 patients were enrolled. Of the 884 (33%) patients positive for influenza, 652 presented with ILI and 232 presented with acute fever without ILI. Analysis by age group identified no significant differences between influenza-positive patients from the two groups. Positive predictive values (PPVs) varied during the course of the influenza season and among age groups. The ILI case definition can be used to identify a significant percentage of patients with influenza infection during the influenza season in Cambodia, assisting healthcare providers in its diagnosis and treatment. However, testing samples based on the criterion of fever alone increased our case detection by 34%.
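
    The performance measures evaluated here follow from a 2x2 table of case-definition result against PCR-confirmed influenza, as in the standard calculation sketched below. The influenza-positive counts in the example come from the abstract; the split of influenza-negative patients into false positives and true negatives is made up for illustration.

        def diagnostic_accuracy(tp, fp, fn, tn):
            """Sensitivity, specificity, PPV and NPV from a 2x2 table of
            case-definition result (ILI yes/no) vs. laboratory-confirmed influenza."""
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn),
            }

        # 652 influenza-positive patients met ILI and 232 did not (from the abstract);
        # the 1,755 influenza-negative patients are split into fp/tn arbitrarily here.
        print(diagnostic_accuracy(tp=652, fp=900, fn=232, tn=855))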

  2. Evaluation of tuberculosis diagnostics in children: 1. Proposed clinical case definitions for classification of intrathoracic tuberculosis disease. Consensus from an expert panel.

    Science.gov (United States)

    Graham, Stephen M; Ahmed, Tahmeed; Amanullah, Farhana; Browning, Renee; Cardenas, Vicky; Casenghi, Martina; Cuevas, Luis E; Gale, Marianne; Gie, Robert P; Grzemska, Malgosia; Handelsman, Ed; Hatherill, Mark; Hesseling, Anneke C; Jean-Philippe, Patrick; Kampmann, Beate; Kabra, Sushil Kumar; Lienhardt, Christian; Lighter-Fisher, Jennifer; Madhi, Shabir; Makhene, Mamodikoe; Marais, Ben J; McNeeley, David F; Menzies, Heather; Mitchell, Charles; Modi, Surbhi; Mofenson, Lynne; Musoke, Philippa; Nachman, Sharon; Powell, Clydette; Rigaud, Mona; Rouzier, Vanessa; Starke, Jeffrey R; Swaminathan, Soumya; Wingfield, Claire

    2012-05-15

    There is a critical need for improved diagnosis of tuberculosis in children, particularly in young children with intrathoracic disease as this represents the most common type of tuberculosis in children and the greatest diagnostic challenge. There is also a need for standardized clinical case definitions for the evaluation of diagnostics in prospective clinical research studies that include children in whom tuberculosis is suspected but not confirmed by culture of Mycobacterium tuberculosis. A panel representing a wide range of expertise and child tuberculosis research experience aimed to develop standardized clinical research case definitions for intrathoracic tuberculosis in children to enable harmonized evaluation of new tuberculosis diagnostic technologies in pediatric populations. Draft definitions and statements were proposed and circulated widely for feedback. An expert panel then considered each of the proposed definitions and statements relating to clinical definitions. Formal group consensus rules were established and consensus was reached for each statement. The definitions presented in this article are intended for use in clinical research to evaluate diagnostic assays and not for individual patient diagnosis or treatment decisions. A complementary article addresses methodological issues to consider for research of diagnostics in children with suspected tuberculosis.

  4. A validation study of the 2003 American College of Cardiology/European Society of Cardiology and 2011 American College of Cardiology Foundation/American Heart Association risk stratification and treatment algorithms for sudden cardiac death in patients with hypertrophic cardiomyopathy.

    Science.gov (United States)

    O'Mahony, Constantinos; Tome-Esteban, Maite; Lambiase, Pier D; Pantazis, Antonios; Dickie, Shaughan; McKenna, William J; Elliott, Perry M

    2013-04-01

    Sudden cardiac death (SCD) is a common mode of death in hypertrophic cardiomyopathy (HCM), but identification of patients who are at a high risk of SCD is challenging as current risk stratification guidelines have never been formally validated. The objective of this study was to assess the power of the 2003 American College of Cardiology (ACC)/European Society of Cardiology (ESC) and 2011 ACC Foundation (ACCF)/American Heart Association (AHA) SCD risk stratification algorithms to distinguish high risk patients who might be eligible for an implantable cardioverter defibrillator (ICD) from low risk individuals. We studied 1606 consecutively evaluated HCM patients in an observational, retrospective cohort study. Five risk factors (RF) for SCD were assessed: non-sustained ventricular tachycardia, severe left ventricular hypertrophy, family history of SCD, unexplained syncope and abnormal blood pressure response to exercise. During a follow-up period of 11 712 patient years (median 6.6 years), SCD/appropriate ICD shock occurred in 20 (3%) of 660 patients without RF (annual rate 0.45%), 31 (4.8%) of 636 patients with 1 RF (annual rate 0.65%), 27 (10.8%) of 249 patients with 2 RF (annual rate 1.3%), 7 (13.7%) of 51 patients with 3 RF (annual rate 1.9%) and 4 (40%) of 10 patients with ≥4 RF (annual rate 5.0%). The risk of SCD increased with multiple RF (2 RF: HR 2.87, p≤0.001; 3 RF: HR 4.32, p=0.001; ≥4 RF: HR 11.37, p<0.0001), but not with a single RF (HR 1.43 p=0.21). The area under time-dependent receiver operating characteristic curves (representing the probability of correctly identifying a patient at risk of SCD on the basis of RF profile) was 0.63 at 1 year and 0.64 at 5 years for the 2003 ACC/ESC algorithm and 0.61 at 1 year and 0.63 at 5 years for the 2011 ACCF/AHA algorithm. The risk of SCD increases with the aggregation of RF. The 2003 ACC/ESC and 2011 ACCF/AHA guidelines distinguish high from low risk individuals with limited power.

  5. The Diagnosis of Urinary Tract infection in Young children (DUTY): a diagnostic prospective observational study to derive and validate a clinical algorithm for the diagnosis of urinary tract infection in children presenting to primary care with an acute illness.

    Science.gov (United States)

    Hay, Alastair D; Birnie, Kate; Busby, John; Delaney, Brendan; Downing, Harriet; Dudley, Jan; Durbaba, Stevo; Fletcher, Margaret; Harman, Kim; Hollingworth, William; Hood, Kerenza; Howe, Robin; Lawton, Michael; Lisles, Catherine; Little, Paul; MacGowan, Alasdair; O'Brien, Kathryn; Pickles, Timothy; Rumsby, Kate; Sterne, Jonathan Ac; Thomas-Jones, Emma; van der Voort, Judith; Waldron, Cherry-Ann; Whiting, Penny; Wootton, Mandy; Butler, Christopher C

    2016-01-01

    BACKGROUND It is not clear which young children presenting acutely unwell to primary care should be investigated for urinary tract infection (UTI) and whether or not dipstick testing should be used to inform antibiotic treatment. OBJECTIVES To develop algorithms to accurately identify pre-school children in whom urine should be obtained; assess whether or not dipstick urinalysis provides additional diagnostic information; and model algorithm cost-effectiveness. DESIGN Multicentre, prospective diagnostic cohort study. SETTING AND PARTICIPANTS Children < 5 years old presenting to primary care with an acute illness and/or new urinary symptoms. METHODS One hundred and seven clinical characteristics (index tests) were recorded from the child's past medical history, symptoms, physical examination signs and urine dipstick test. Prior to dipstick results, clinician opinion of UTI likelihood ('clinical diagnosis') and urine sampling and treatment intentions ('clinical judgement') were recorded. All index tests were measured blind to the reference standard, defined as a pure or predominant uropathogen cultured at ≥ 10⁵ colony-forming units (CFU)/ml in a single research laboratory. Urine was collected by clean catch (preferred) or nappy pad. Index tests were sequentially evaluated in two groups, stratified by urine collection method: parent-reported symptoms with clinician-reported signs, and urine dipstick results. Diagnostic accuracy was quantified using the area under the receiver operating characteristic curve (AUROC) with 95% confidence interval (CI) and bootstrap-validated AUROC, and compared with the 'clinical diagnosis' AUROC. Decision-analytic models were used to identify the optimal urine sampling strategy compared with 'clinical judgement'. RESULTS A total of 7163 children were recruited, of whom 50% were female and 49% were < 2 years old. Culture results were available for 5017 (70%); 2740 children provided clean-catch samples, 94% of whom were ≥ 2 years old
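
    Bootstrap-validated AUROC of the kind used for the index tests here can be approximated with a Harrell-style optimism correction, sketched below on synthetic data. This is a generic illustration rather than the DUTY analysis code; the logistic model, the synthetic predictors and the number of resamples are arbitrary choices.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        # Synthetic stand-ins for clinical index tests (X) and culture-confirmed UTI (y).
        X = rng.normal(size=(500, 5))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

        def fit_auc(X_tr, y_tr, X_ev, y_ev):
            model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
            return roc_auc_score(y_ev, model.predict_proba(X_ev)[:, 1])

        apparent = fit_auc(X, y, X, y)
        # Optimism correction: refit on bootstrap samples, compare in-sample vs. original data.
        optimism = []
        for _ in range(200):
            idx = rng.integers(0, len(y), len(y))
            optimism.append(fit_auc(X[idx], y[idx], X[idx], y[idx]) - fit_auc(X[idx], y[idx], X, y))
        print(round(apparent, 3), round(apparent - float(np.mean(optimism)), 3))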

  6. New Methodology for Optimal Flight Control using Differential Evolution Algorithms applied on the Cessna Citation X Business Aircraft – Part 2. Validation on Aircraft Research Flight Level D Simulator

    Directory of Open Access Journals (Sweden)

    Yamina BOUGHARI

    2017-06-01

    In this paper the Cessna Citation X clearance criteria were evaluated for a new flight controller. The flight control laws were optimized and designed for the Cessna Citation X flight envelope by combining the Differential Evolution algorithm, the Linear Quadratic Regulator method, and the Proportional Integral controller in previous research presented in Part 1. The optimal controllers were used to reach satisfactory aircraft dynamics and safe flight operations with respect to the augmentation systems' handling qualities and design requirements. Furthermore, the number of controllers used to control the aircraft in its flight envelope was optimized using Linear Fractional Representation features. To validate the controller over the whole aircraft flight envelope, the linear stability, eigenvalue, and handling qualities criteria, in addition to the nonlinear analysis criteria, were investigated during this research to assess the business aircraft for flight control clearance and certification. The optimized gains provide very good stability margins: the eigenvalue analysis shows that the aircraft is highly stable, very good flying qualities of the linear aircraft models are ensured across the entire flight envelope, and robustness is demonstrated with respect to uncertainties due to mass and center of gravity variations.

  7. Validation of Agent Based Distillation Movement Algorithms

    National Research Council Canada - National Science Library

    Gill, Andrew

    2003-01-01

    Agent based distillations (ABD) are low-resolution abstract models, which can be used to explore questions associated with land combat operations in a short period of time. Movement of agents within the EINSTein and MANA ABDs...

  8. Validation of Core Temperature Estimation Algorithm

    Science.gov (United States)

    2016-01-29

    going to heat production [6]. Second, heart rate increases to support the body’s heat dissipation. To dissipate heat, blood vessels near the skin ...vasodilate to increase blood perfusion. Thus, heart rate increases both to support the cardiac output needed both to perform work and to increase skin ...95%) were represented. The data sets also included various hydration states, clothing ensembles, and acclimatization states. Core temperature was

  9. Validation of Core Temperature Estimation Algorithm

    Science.gov (United States)

    2016-01-20

    based on an extended Kalman filter , which was developed using field data from 17 young male U.S. Army soldiers with core temperatures ranging from...CTstart, v) %KFMODEL estimate core temperature from heart rate with Kalman filter % This version supports both batch mode (operate on entire HR time...CTstart = 37.1; % degrees Celsius end if nargin < 3 v = 0; end %Extended Kalman Filter Parameters a = 1; gamma = 0.022^2; b_0 = -7887.1; b_1
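
    The fragments above come from a MATLAB implementation of an extended Kalman filter that estimates core temperature from heart rate. The Python sketch below reconstructs the general shape of such a filter using the values visible in the fragment (a = 1, gamma = 0.022^2, b_0 = -7887.1, starting core temperature 37.1 °C); the remaining observation-model coefficients and the observation noise variance are placeholder assumptions, so this is an illustration of the filter structure rather than the validated algorithm.

        def estimate_core_temp(heart_rates, ct_start=37.1):
            """Extended-Kalman-filter-style core temperature estimate from heart rate.
            a, gamma and b_0 follow the fragment above; b_1, b_2 and obs_var are
            ASSUMED placeholder values, not taken from the report."""
            a, gamma = 1.0, 0.022 ** 2                 # state model (from fragment)
            b_0 = -7887.1                              # observation intercept (from fragment)
            b_1, b_2, obs_var = 384.4286, -4.5714, 18.88 ** 2   # assumed placeholders
            ct, v = ct_start, 0.0
            estimates = []
            for hr in heart_rates:
                ct_pred, v_pred = a * ct, a * v * a + gamma          # time update
                c = 2 * b_2 * ct_pred + b_1                          # linearised observation model
                k = v_pred * c / (c * v_pred * c + obs_var)          # Kalman gain
                ct = ct_pred + k * (hr - (b_2 * ct_pred ** 2 + b_1 * ct_pred + b_0))
                v = (1 - k * c) * v_pred
                estimates.append(ct)
            return estimates

        print(round(estimate_core_temp([80, 95, 110, 120, 125])[-1], 3))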

  10. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^(4/3)

  11. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  12. Benchmarking monthly homogenization algorithms

    Science.gov (United States)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
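
    One of the benchmark metrics listed, the centered root mean square error, removes each series' own mean before comparing a homogenized series with the true homogeneous series, so that constant offsets do not count against an algorithm. A minimal version is sketched below; the input series are arbitrary stand-ins, not benchmark data.

        import numpy as np

        def centered_rmse(homogenized, truth):
            """Centered RMSE: RMSE after subtracting each series' own mean."""
            h = np.asarray(homogenized, dtype=float)
            t = np.asarray(truth, dtype=float)
            return float(np.sqrt(np.mean(((h - h.mean()) - (t - t.mean())) ** 2)))

        # Toy example: a spurious 0.5-degree break left in the second half of the series.
        truth = np.sin(np.linspace(0.0, 12.0, 120))
        homog = truth + np.r_[np.zeros(60), 0.5 * np.ones(60)]
        print(round(centered_rmse(homog, truth), 3))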

  13. STAR Algorithm Integration Team - Facilitating operational algorithm development

    Science.gov (United States)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  14. Differential characterization of emerging skin diseases of rainbow trout--a standardized approach to capturing disease characteristics and development of case definitions.

    Science.gov (United States)

    Oidtmann, B; Lapatra, S E; Verner-Jeffreys, D; Pond, M; Peeler, E J; Noguera, P A; Bruno, D W; St-Hilaire, S; Schubiger, C B; Snekvik, K; Crumlish, M; Green, D M; Metselaar, M; Rodger, H; Schmidt-Posthaus, H; Galeotti, M; Feist, S W

    2013-11-01

    Farmed and wild salmonids are affected by a variety of skin conditions, some of which have significant economic and welfare implications. In many cases, the causes are not well understood, and one example is cold water strawberry disease of rainbow trout, also called red mark syndrome, which has been recorded in the UK since 2003. To date, there are no internationally agreed methods for describing these conditions, which has caused confusion for farmers and health professionals, who are often unclear as to whether they are dealing with a new or a previously described condition. This has resulted, inevitably, in delays to both accurate diagnosis and effective treatment regimes. Here, we provide a standardized methodology for the description of skin conditions of rainbow trout of uncertain aetiology. We demonstrate how the approach can be used to develop case definitions, using coldwater strawberry disease as an example. © 2013 Crown copyright.

  15. Optimisation and validation of a 3D reconstruction algorithm for single photon emission computed tomography by means of the GATE simulation platform

    Energy Technology Data Exchange (ETDEWEB)

    El Bitar, Ziad [Ecole Doctorale des Sciences Fondamentales, Universite Blaise Pascal, U.F.R de Recherches Scientifiques et Techniques, 34, avenue Carnot - BP 185, 63006 Clermont-Ferrand Cedex (France); Laboratoire de Physique Corpusculaire, CNRS/IN2P3, 63177 Aubiere (France)

    2006-12-15

    Although time consuming, Monte-Carlo simulations remain an efficient tool for assessing correction methods for degrading physical effects in medical imaging. We have optimized and validated a reconstruction method named F3DMC (Fully 3D Monte Carlo) in which the physical effects degrading the image formation process are modelled using Monte-Carlo methods and integrated within the system matrix. We used the Monte-Carlo simulation toolbox GATE. We validated GATE in SPECT by modelling the gamma-camera (Philips AXIS) used in clinical routine. Thresholding, filtering by principal component analysis and targeted reconstruction (functional regions, hybrid regions) were used in order to improve the precision of the system matrix and to reduce the number of simulated photons as well as the computation time required. The EGEE grid infrastructure was used to deploy the GATE simulations in order to reduce their computation time. Results obtained with F3DMC were compared with the reconstruction methods (FBP, ML-EM, MLEMC) for a simulated phantom and with the OSEM-C method for a real phantom. Results show that the F3DMC method and its variants improve the restoration of activity ratios and the signal to noise ratio. Through the use of the EGEE grid, a significant speed-up factor of about 300 was obtained. These results should be confirmed by studies on complex phantoms and patients, and open the door to a unified reconstruction method which could be used in SPECT and also in PET. (author)
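
    For readers unfamiliar with the ML-EM reconstruction against which F3DMC is compared, the snippet below shows the standard ML-EM update for a generic system matrix. It is a textbook illustration on a tiny random system, not the F3DMC code; the Monte-Carlo-derived system matrix of the study is represented here by an arbitrary non-negative matrix.

        import numpy as np

        def mlem(system_matrix, projections, n_iter=200):
            """Standard ML-EM update: x <- x * A^T(y / (A x)) / (A^T 1)."""
            A = np.asarray(system_matrix, dtype=float)
            y = np.asarray(projections, dtype=float)
            x = np.ones(A.shape[1])                     # flat initial image
            sensitivity = A.sum(axis=0)                 # A^T 1
            for _ in range(n_iter):
                expected = A @ x
                expected[expected == 0] = 1e-12         # guard against division by zero
                x *= (A.T @ (y / expected)) / sensitivity
            return x

        # Tiny synthetic example standing in for a Monte-Carlo-derived system matrix.
        rng = np.random.default_rng(1)
        A = rng.random((40, 16))
        true_image = rng.random(16)
        recon = mlem(A, A @ true_image)
        print(round(float(np.max(np.abs(recon - true_image))), 3))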

  16. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With an increase of forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
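
    The central geometric step described above, placing the vehicle's kinematic centre of rotation at the road's centre of curvature, can be written down directly for a simple four-wheel-steering kinematic model. The sketch below assumes a single-track (bicycle) approximation with the axles at distances a and b from the reference point, which is a simplification of the dynamic treatment in the paper; the numbers in the example are arbitrary.

        import math

        def steering_for_turn_centre(a, b, x_c, y_c):
            """Front and rear steer angles (rad) that place the kinematic centre of
            rotation of a single-track 4WS model at (x_c, y_c) in the vehicle frame
            (x forward, y to the left, front axle at x = +a, rear axle at x = -b)."""
            delta_front = math.atan2(a - x_c, y_c)   # each wheel's velocity must be
            delta_rear = math.atan2(-b - x_c, y_c)   # perpendicular to its radius vector
            return delta_front, delta_rear

        # Example: road curvature centre 20 m to the left of the vehicle reference point.
        df, dr = steering_for_turn_centre(a=1.2, b=1.4, x_c=0.0, y_c=20.0)
        print(round(math.degrees(df), 2), round(math.degrees(dr), 2))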

  17. A dynamic case definition is warranted for adequate notification in an extended epidemic setting: the Dutch Q fever outbreak 2007-2009 as exemplar.

    Science.gov (United States)

    Jaramillo-Gutierrez, G; Wegdam-Blans, M C; ter Schegget, R; Korbeeck, J M; van Aken, R; Bijlmer, H A; Tjhie, J H; Koopmans, M P

    2013-10-10

    Q fever is a notifiable disease in the Netherlands: laboratories are obliged to notify possible cases to the Municipal Health Services. These services then try to reconfirm cases with additional clinical and epidemiological data and provide anonymised reports to the national case register of notifiable diseases. Since the start of the 2007–2009 Dutch Q fever outbreak, notification rules remained unchanged, despite new laboratory insights and altered epidemiology. In this study, we retrospectively analysed how these changes influenced the proportion of laboratory-defined acute Q fever cases (confirmed, probable and possible) that were included in the national case register, during (2009) and after the outbreak (2010 and 2011). The number of laboratory-defined cases notified to the Municipal Health Services was 377 in 2009, 96 in 2010 and 50 in 2011. Of these, 186 (49.3%) in 2009, 12 (12.5%) in 2010 and 9 (18.0%) in 2011 were confirmed as acute infection by laboratory interpretation. The proportion of laboratory-defined acute Q fever cases that was reconfirmed by the Municipal Health Services and included in the national case register decreased from 90% in 2009 to 22% and 24% in 2010 and 2011, respectively. The decrease was observed in all categories of cases, including those considered to be confirmed by laboratory criteria. Continued use of a pre-outbreak case definition led to over-reporting of cases to the Municipal Health Services in the post-epidemic years. We therefore recommend dynamic laboratory notification rules, with case definitions reviewed periodically in an ongoing epidemic, as in the Dutch Q fever outbreak.

  18. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.

  19. Partitional clustering algorithms

    CERN Document Server

    2015-01-01

    This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...

  20. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  1. Algorithm 865

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy

    2007-01-01

    We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n² variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel

  2. Parallelization of TMVA Machine Learning Algorithms

    CERN Document Server

    Hajili, Mammad

    2017-01-01

    This report reflects my work on the parallelization of TMVA machine learning algorithms integrated into the ROOT Data Analysis Framework during a summer internship at CERN. The report consists of four important parts: the data set used in training and validation, the algorithms to which multiprocessing was applied, the parallelization techniques, and the resulting changes in execution time as a function of the number of workers.

  3. Muscular Dystrophy Surveillance Tracking and Research Network (MD STARnet): case definition in surveillance for childhood-onset Duchenne/Becker muscular dystrophy.

    Science.gov (United States)

    Mathews, Katherine D; Cunniff, Chris; Kantamneni, Jiji R; Ciafaloni, Emma; Miller, Timothy; Matthews, Dennis; Cwik, Valerie; Druschel, Charlotte; Miller, Lisa; Meaney, F John; Sladky, John; Romitti, Paul A

    2010-09-01

    The Muscular Dystrophy Surveillance Tracking and Research Network (MD STARnet) is a multisite collaboration to determine the prevalence of childhood-onset Duchenne/Becker muscular dystrophy and to characterize health care and health outcomes in this population. MD STARnet uses medical record abstraction to identify patients with Duchenne/Becker muscular dystrophy born January 1, 1982 or later who resided in 1 of the participating sites. Critical diagnostic elements of each abstracted record are reviewed independently by >4 clinicians and assigned to 1 of 6 case definition categories (definite, probable, possible, asymptomatic, female, not Duchenne/Becker muscular dystrophy) by consensus. As of November 2009, 815 potential cases were reviewed. Of the cases included in analysis, 674 (82%) were either "definite" or "probable" Duchenne/Becker muscular dystrophy. These data reflect a change in diagnostic testing, as case assignment based on genetic testing increased from 67% in the oldest cohort (born 1982-1987) to 94% in the cohort born 2004 to 2009.

  4. Fast compact algorithms and software for spline smoothing

    CERN Document Server

    Weinert, Howard L

    2012-01-01

    Fast Compact Algorithms and Software for Spline Smoothing investigates algorithmic alternatives for computing cubic smoothing splines when the amount of smoothing is determined automatically by minimizing the generalized cross-validation score. These algorithms are based on Cholesky factorization, QR factorization, or the fast Fourier transform. All algorithms are implemented in MATLAB and are compared based on speed, memory use, and accuracy. An overall best algorithm is identified, which allows very large data sets to be processed quickly on a personal computer.

  5. FIREWORKS ALGORITHM FOR UNCONSTRAINED FUNCTION OPTIMIZATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    Evans BAIDOO

    2017-03-01

    Modern real-world science and engineering problems can be classified as multi-objective optimisation problems, which demand expedient and efficient stochastic algorithms to respond to the optimization needs. This paper presents an object-oriented software application that implements a fireworks optimization algorithm for function optimization problems. The algorithm, a kind of parallel diffuse optimization algorithm, is based on the explosive phenomenon of fireworks. The algorithm produced promising results when compared to other population-based and iterative meta-heuristic algorithms in experiments on five standard benchmark problems. The software application was implemented in Java with an interactive interface which allows for easy modification and extended experimentation. Additionally, this paper validates the effect of runtime on the algorithm's performance.
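
    To make the explosion metaphor concrete, here is a deliberately simplified fireworks-style optimiser: fitter fireworks emit more sparks over a smaller amplitude, and the best candidates survive into the next generation. It is a compact sketch of the general idea rather than the Java application described above, and the spark-count and amplitude rules are simplified assumptions.

        import numpy as np

        def fireworks_minimise(f, dim=5, bounds=(-5.0, 5.0), n_fireworks=5, n_gens=100, seed=0):
            """Simplified fireworks algorithm: spark count grows and explosion amplitude
            shrinks as a firework's fitness improves relative to the population."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            pop = rng.uniform(lo, hi, size=(n_fireworks, dim))
            for _ in range(n_gens):
                fit = np.array([f(x) for x in pop])
                worst, best = fit.max(), fit.min()
                spread = worst - best + 1e-12
                candidates = [pop]
                for x, fx in zip(pop, fit):
                    n_sparks = 2 + int(8 * (worst - fx) / spread)      # better -> more sparks
                    amplitude = 0.05 + 2.0 * (fx - best) / spread      # better -> tighter explosion
                    sparks = x + rng.uniform(-amplitude, amplitude, size=(n_sparks, dim))
                    candidates.append(np.clip(sparks, lo, hi))
                pool = np.vstack(candidates)
                pool_fit = np.array([f(x) for x in pool])
                pop = pool[np.argsort(pool_fit)[:n_fireworks]]         # keep the best
            return pop[0], float(f(pop[0]))

        # Benchmark example: the sphere function, a standard test problem.
        best_x, best_val = fireworks_minimise(lambda x: float(np.sum(x ** 2)))
        print(round(best_val, 6))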

  6. Algorithmic chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  7. Algorithm improvement program nuclide identification algorithm scoring criteria and scoring application.

    Energy Technology Data Exchange (ETDEWEB)

    Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  8. Conscious worst case definition for risk assessment, part I: a knowledge mapping approach for defining most critical risk factors in integrative risk management of chemicals and nanomaterials.

    Science.gov (United States)

    Sørensen, Peter B; Thomsen, Marianne; Assmuth, Timo; Grieger, Khara D; Baun, Anders

    2010-08-15

    This paper helps bridge the gap between scientists and other stakeholders in the areas of human and environmental risk management of chemicals and engineered nanomaterials. This connection is needed due to the evolution of stakeholder awareness and scientific progress related to human and environmental health which involves complex methodological demands on risk management. At the same time, the available scientific knowledge is also becoming more scattered across multiple scientific disciplines. Hence, the understanding of potentially risky situations is increasingly multifaceted, which again challenges risk assessors in terms of giving the 'right' relative priority to the multitude of contributing risk factors. A critical issue is therefore to develop procedures that can identify and evaluate worst case risk conditions which may be input to risk level predictions. Therefore, this paper suggests a conceptual modelling procedure that is able to define appropriate worst case conditions in complex risk management. The result of the analysis is an assembly of system models, denoted the Worst Case Definition (WCD) model, to set up and evaluate the conditions of multi-dimensional risk identification and risk quantification. The model can help optimize risk assessment planning by initial screening level analyses and guiding quantitative assessment in relation to knowledge needs for better decision support concerning environmental and human health protection or risk reduction. The WCD model facilitates the evaluation of fundamental uncertainty using knowledge mapping principles and techniques in a way that can improve a complete uncertainty analysis. Ultimately, the WCD is applicable for describing risk contributing factors in relation to many different types of risk management problems since it transparently and effectively handles assumptions and definitions and allows the integration of different forms of knowledge, thereby supporting the inclusion of multifaceted risk

  9. An explicit multi-time-stepping algorithm for aerodynamic flows

    NARCIS (Netherlands)

    Niemann-Tuitman, B.E.; Veldman, A.E.P.

    1997-01-01

    An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, in different parts of the computational domain different time steps are taken, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for

  10. Fatigue evaluation algorithms: Review

    Energy Technology Data Exchange (ETDEWEB)

    Passipoularidis, V.A.; Broendsted, P.

    2009-11-15

    A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply-by-ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics, based on the failure criterion of Puck, to model the degradation caused by failure events at ply level. Residual strength is incorporated as the fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio and from spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of a wind turbine rotor blade construction. Two versions of the algorithm, one using single-step and the other using incremental application of each load cycle (in case of ply failure), are implemented and compared. Simulation results confirm the ability of the algorithm to take load sequence effects into account. In general, FADAS performs well in predicting life under both spectral and block loading fatigue. (author)

  11. Explicating Validity

    Science.gov (United States)

    Kane, Michael T.

    2016-01-01

    How we choose to use a term depends on what we want to do with it. If "validity" is to be used to support a score interpretation, validation would require an analysis of the plausibility of that interpretation. If validity is to be used to support score uses, validation would require an analysis of the appropriateness of the proposed…

  12. Relative Pose Estimation Algorithm with Gyroscope Sensor

    Directory of Open Access Journals (Sweden)

    Shanshan Wei

    2016-01-01

    This paper proposes a novel vision and inertial fusion algorithm, S2fM (Simplified Structure from Motion), for camera relative pose estimation. Different from existing algorithms, our algorithm estimates the rotation parameter and the translation parameter separately. S2fM employs gyroscopes to estimate the camera rotation parameter, which is later fused with the image data to estimate the camera translation parameter. Our contributions are in two aspects. (1) Under the circumstance that no inertial sensor can estimate the translation parameter accurately enough, we propose a translation estimation algorithm that fuses gyroscope data and image data. (2) Our S2fM algorithm is efficient and suitable for smart devices. Experimental results validate the efficiency of the proposed S2fM algorithm.
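
    The key idea in S2fM, taking rotation from the gyroscope and recovering only the translation from the images, can be illustrated with the epipolar constraint: once R is known, each pair of normalised correspondences (x1, x2) gives one linear equation in the translation direction, since x2 · (t × R x1) = 0. The sketch below solves that homogeneous system with an SVD on synthetic data; it is a generic illustration, not the authors' implementation, and the translation is recovered only up to scale and sign.

        import numpy as np

        def translation_from_rotation(R, pts1, pts2):
            """Recover the translation direction (up to scale/sign) given rotation R and
            normalised correspondences, via the epipolar constraint x2^T [t]_x R x1 = 0."""
            rows = [np.cross(x2, R @ x1) for x1, x2 in zip(pts1, pts2)]  # each row . t = 0
            _, _, vt = np.linalg.svd(np.asarray(rows))
            t = vt[-1]
            return t / np.linalg.norm(t)

        # Synthetic check: random 3D points seen from two poses with known R and t.
        rng = np.random.default_rng(3)
        P = rng.uniform(-1, 1, size=(20, 3)) + np.array([0.0, 0.0, 5.0])
        ang = 0.1
        R = np.array([[np.cos(ang), 0, np.sin(ang)], [0, 1, 0], [-np.sin(ang), 0, np.cos(ang)]])
        t_true = np.array([0.3, 0.1, 0.0])
        x1 = P / P[:, 2:3]                              # normalised coordinates, view 1
        P2 = (R @ P.T).T + t_true
        x2 = P2 / P2[:, 2:3]                            # normalised coordinates, view 2
        t_est = translation_from_rotation(R, x1, x2)
        t_est *= np.sign(t_est @ t_true)                # resolve the sign ambiguity
        print(np.round(t_est, 3), np.round(t_true / np.linalg.norm(t_true), 3))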

  13. Pseudo-deterministic Algorithms

    OpenAIRE

    Goldwasser , Shafi

    2012-01-01

    International audience; In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they can not be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...

  14. New Methodology for Optimal Flight Control using Differential Evolution Algorithms applied on the Cessna Citation X Business Aircraft – Part 2. Validation on Aircraft Research Flight Level D Simulator

    OpenAIRE

    Yamina BOUGHARI; Georges GHAZI; Ruxandra Mihaela BOTEZ; Florian THEEL

    2017-01-01

    In this paper the Cessna Citation X clearance criteria were evaluated for a new flight controller. The flight control laws were optimized and designed for the Cessna Citation X flight envelope by combining the Differential Evolution algorithm, the Linear Quadratic Regulator method, and the Proportional Integral controller during a previous research presented in part 1. The optimal controllers were used to reach satisfactory aircraft dynamics and safe flight operations with respect to the augme...

  15. The impact of case definition on attention-deficit/hyperactivity disorder prevalence estimates in community-based samples of school-aged children.

    Science.gov (United States)

    McKeown, Robert E; Holbrook, Joseph R; Danielson, Melissa L; Cuffe, Steven P; Wolraich, Mark L; Visser, Susanna N

    2015-01-01

    To determine the impact of varying attention-deficit/hyperactivity disorder (ADHD) diagnostic criteria, including new DSM-5 criteria, on prevalence estimates. Parent and teacher reports identified high- and low-screen children with ADHD from elementary schools in 2 states that produced a diverse overall sample. The parent interview stage included the Diagnostic Interview Schedule for Children-IV (DISC-IV), and up to 4 additional follow-up interviews. Weighted prevalence estimates, accounting for complex sampling, quantified the impact of varying ADHD criteria using baseline and the final follow-up interview data. At baseline 1,060 caregivers were interviewed; 656 had at least 1 follow-up interview. Teachers and parents reported 6 or more ADHD symptoms for 20.5% (95% CI = 18.1%-23.2%) and 29.8% (CI = 24.5%-35.6%) of children respectively, with criteria for impairment and onset by age 7 years (DSM-IV) reducing these proportions to 16.3% (CI = 14.7%-18.0%) and 17.5% (CI = 13.3%-22.8%); requiring at least 4 teacher-reported symptoms reduced the parent-reported prevalence to 8.9% (CI = 7.4%-10.6%). Revising age of onset to 12 years per DSM-5 increased the 8.9% estimate to 11.3% (CI = 9.5%-13.3%), with a similar increase seen at follow-up: 8.2% with age 7 onset (CI = 5.9%-11.2%) versus 13.0% (CI = 7.6%-21.4%) with onset by age 12. Reducing the number of symptoms required for those aged 17 and older increased the overall estimate to 13.1% (CI = 7.7%-21.5%). These findings quantify the impact on prevalence estimates of varying case definition criteria for ADHD. Further research of impairment ratings and data from multiple informants is required to better inform clinicians conducting diagnostic assessments. DSM-5 changes in age of onset and number of symptoms required for older adolescents appear to increase prevalence estimates, although the full impact is uncertain due to the age of our sample. Published by Elsevier Inc.

  16. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  17. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  19. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    How does the experience of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself.

  20. The BR eigenvalue algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  1. Evolving temporal association rules with genetic algorithms

    OpenAIRE

    Matthews, Stephen G.; Gongora, Mario A.; Hopgood, Adrian A.

    2010-01-01

    A novel framework for mining temporal association rules by discovering itemsets with a genetic algorithm is introduced. Metaheuristics have been applied to association rule mining; we show the efficacy of extending this to another variant - temporal association rule mining. Our framework is an enhancement to existing temporal association rule mining methods as it employs a genetic algorithm to simultaneously search the rule space and temporal space. A methodology for validating the ability of...

  2. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  3. A Learning Algorithm for Multimodal Grammar Inference.

    Science.gov (United States)

    D'Ulizia, A; Ferri, F; Grifoni, P

    2011-12-01

    The high costs of development and maintenance of multimodal grammars in integrating and understanding input in multimodal interfaces lead to the investigation of novel algorithmic solutions in automating grammar generation and in updating processes. Many algorithms for context-free grammar inference have been developed in the natural language processing literature. An extension of these algorithms toward the inference of multimodal grammars is necessary for multimodal input processing. In this paper, we propose a novel grammar inference mechanism that allows us to learn a multimodal grammar from its positive samples of multimodal sentences. The algorithm first generates the multimodal grammar that is able to parse the positive samples of sentences and, afterward, makes use of two learning operators and the minimum description length metrics in improving the grammar description and in avoiding the over-generalization problem. The experimental results highlight the acceptable performances of the algorithm proposed in this paper since it has a very high probability of parsing valid sentences.

  4. Deconvolution algorithms applied in ultrasonics

    International Nuclear Information System (INIS)

    Perrot, P.

    1993-12-01

    In a complete system of acquisition and processing of ultrasonic signals, it is often necessary at one stage to use some processing tools to get rid of the influence of the different elements of that system. By that means, the final quality of the signals in terms of resolution is improved. There are two main characteristics of ultrasonic signals which make this task difficult. Firstly, the signals generated by transducers are very often non-minimum phase. The classical deconvolution algorithms are unable to deal with such characteristics. Secondly, depending on the medium, the shape of the propagating pulse is evolving. The spatial invariance assumption often used in classical deconvolution algorithms is rarely valid. Many classical algorithms, parametric and non-parametric, have been investigated: the Wiener-type, the adaptive predictive techniques, the Oldenburg technique in the frequency domain, the minimum variance deconvolution. All the algorithms have been firstly tested on simulated data. One specific experimental set-up has also been analysed. Simulated and real data has been produced. This set-up demonstrated the interest in applying deconvolution, in terms of the achieved resolution. (author). 32 figs., 29 refs
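
    Of the approaches listed, the Wiener-type deconvolution is the easiest to state compactly: in the frequency domain the measured signal is divided by the transducer response, with a regularising noise-to-signal term preventing blow-up where the response is weak. The snippet below is a generic illustration on a synthetic echo train, not the processing chain of the report, and the noise-to-signal constant is an assumed tuning parameter.

        import numpy as np

        def wiener_deconvolve(measured, kernel, nsr=1e-2):
            """Frequency-domain Wiener deconvolution: apply H* / (|H|^2 + NSR)."""
            n = len(measured)
            H = np.fft.fft(kernel, n)
            Y = np.fft.fft(measured, n)
            G = np.conj(H) / (np.abs(H) ** 2 + nsr)     # Wiener filter
            return np.real(np.fft.ifft(G * Y))

        # Synthetic example: a sparse reflectivity sequence blurred by a short pulse.
        t = np.arange(16)
        pulse = np.exp(-0.5 * ((t - 8) / 2.0) ** 2) * np.sin(2 * np.pi * 0.25 * t)
        reflectivity = np.zeros(64)
        reflectivity[[15, 32, 50]] = [1.0, -0.8, 0.6]
        rng = np.random.default_rng(0)
        measured = np.convolve(reflectivity, pulse) + 0.01 * rng.normal(size=64 + len(pulse) - 1)
        restored = wiener_deconvolve(measured, pulse)[:64]
        print(np.round(restored[[15, 32, 50]], 2))      # recovered amplitudes at the reflectors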

  5. Quantum Computation and Algorithms

    International Nuclear Information System (INIS)

    Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.

    1999-01-01

    It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution
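
    The amplitude recursion mentioned at the end can be written out directly: with N items of which M are marked, each Grover iteration (oracle sign flip followed by reflection about the mean) maps the pair of marked/unmarked amplitudes to a new pair, starting from the uniform superposition. The sketch below iterates that exact two-amplitude recursion; it is a standard textbook calculation rather than anything specific to this talk.

        import math

        def grover_success_probability(n_items, n_marked, n_iterations):
            """Iterate the exact two-amplitude recursion for Grover's algorithm and
            return the probability of measuring a marked item."""
            a = 1 / math.sqrt(n_items)     # amplitude of each marked item
            b = 1 / math.sqrt(n_items)     # amplitude of each unmarked item
            M, U = n_marked, n_items - n_marked
            for _ in range(n_iterations):
                a = -a                                  # oracle: flip marked amplitudes
                mean = (M * a + U * b) / n_items
                a, b = 2 * mean - a, 2 * mean - b       # diffusion: reflect about the mean
            return M * a ** 2

        # Example: one marked item among 1024; the optimum is about (pi/4) * sqrt(N) iterations.
        k = round(math.pi / 4 * math.sqrt(1024))
        print(k, round(grover_success_probability(1024, 1, k), 4))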

  6. FACTAR validation

    International Nuclear Information System (INIS)

    Middleton, P.B.; Wadsworth, S.L.; Rock, R.C.; Sills, H.E.; Langman, V.J.

    1995-01-01

    A detailed strategy to validate fuel channel thermal mechanical behaviour codes for use in current power reactor safety analysis is presented. The strategy is derived from a validation process that has been recently adopted industry wide. The focus of the discussion is on the validation plan for the code FACTAR for application in assessing fuel channel integrity safety concerns during a large break loss of coolant accident (LOCA). (author)

  7. EOS Terra Validation Program

    Science.gov (United States)

    Starr, David

    2000-01-01

    The EOS Terra mission will be launched in July 1999. This mission has great relevance to the atmospheric radiation community and global change issues. Terra instruments include Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Clouds and Earth's Radiant Energy System (CERES), Multi-Angle Imaging Spectroradiometer (MISR), Moderate Resolution Imaging Spectroradiometer (MODIS) and Measurements of Pollution in the Troposphere (MOPITT). In addition to the fundamental radiance data sets, numerous global science data products will be generated, including various Earth radiation budget, cloud and aerosol parameters, as well as land surface, terrestrial ecology, ocean color, and atmospheric chemistry parameters. Significant investments have been made in on-board calibration to ensure the quality of the radiance observations. A key component of the Terra mission is the validation of the science data products. This is essential for a mission focused on global change issues and the underlying processes. The Terra algorithms have been subject to extensive pre-launch testing with field data whenever possible. Intensive efforts will be made to validate the Terra data products after launch. These include validation of instrument calibration (vicarious calibration) experiments, instrument and cross-platform comparisons, routine collection of high quality correlative data from ground-based networks, such as AERONET, and intensive sites, such as the SGP ARM site, as well as a variety field experiments, cruises, etc. Airborne simulator instruments have been developed for the field experiment and underflight activities including the MODIS Airborne Simulator (MAS) AirMISR, MASTER (MODIS-ASTER), and MOPITT-A. All are integrated on the NASA ER-2 though low altitude platforms are more typically used for MASTER. MATR is an additional sensor used for MOPITT algorithm development and validation. The intensive validation activities planned for the first year of the Terra

  8. Enhancement of the Daytime MODIS Based Aircraft Icing Potential Algorithm Using Mesoscale Model Data

    National Research Council Canada - National Science Library

    Sherman, Zoe B

    2006-01-01

    .... The algorithm by Alexander (2005) was used to process MODIS imagery on four separate storms in January 2006, and his algorithm was validated using 133 positive and negative pilot reports (PIREPs...

  9. Fermion cluster algorithms

    International Nuclear Information System (INIS)

    Chandrasekharan, Shailesh

    2000-01-01

    Cluster algorithms have been recently used to eliminate sign problems that plague Monte-Carlo methods in a variety of systems. In particular such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions we discuss the ideas underlying the algorithm

  10. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performance.

  11. Evidence-based algorithm for heparin dosing before cardiopulmonary bypass. Part 1: Development of the algorithm.

    Science.gov (United States)

    McKinney, Mark C; Riley, Jeffrey B

    2007-12-01

    The incidence of heparin resistance during adult cardiac surgery with cardiopulmonary bypass has been reported at 15%-20%. The consistent use of a clinical decision-making algorithm may increase the consistency of patient care and likely reduce the total required heparin dose and other problems associated with heparin dosing. After a directed survey of practicing perfusionists regarding treatment of heparin resistance and a literature search for high-level evidence regarding the diagnosis and treatment of heparin resistance, an evidence-based decision-making algorithm was constructed. The face validity of the algorithm's decisive steps and logic was confirmed by a second survey of practicing perfusionists. The algorithm begins with a review of the patient history to identify predictors of heparin resistance. The definition of heparin resistance contained in the algorithm is an inadequate activated clotting time despite a 450 IU/kg heparin loading dose. Based on the literature, the treatment for heparin resistance used in the algorithm is antithrombin III supplementation. The algorithm appears to be valid and is supported by high-level evidence and clinician opinion. The next step is a human randomized clinical trial to test the clinical procedure guideline algorithm vs. current standard clinical practice.
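
    A minimal sketch of the decision flow as summarized above; the predictor handling and the ACT target are placeholders (hypothetical), not the published algorithm's values, and nothing here is clinical guidance.

    ```python
    # Hypothetical sketch of the decision flow described above; the ACT target and the
    # notion of "history predictors" are placeholders, not the published values.
    def heparin_decision(history_predictors, act_after_loading, act_target):
        """Suggest a next step before cardiopulmonary bypass (illustrative only)."""
        at_risk = bool(history_predictors)  # e.g. prior heparin exposure (assumed predictor)
        if act_after_loading >= act_target:
            return "adequate anticoagulation: proceed to bypass"
        if at_risk:
            return "heparin resistance suspected: antithrombin III supplement, re-check ACT"
        return "additional heparin per protocol, re-check ACT"
    ```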

  12. Robust Object Tracking Using Valid Fragments Selection.

    Science.gov (United States)

    Zheng, Jin; Li, Bo; Tian, Peng; Luo, Gang

    Local features are widely used in visual tracking to improve robustness in cases of partial occlusion, deformation and rotation. This paper proposes a local fragment-based object tracking algorithm. Unlike many existing fragment-based algorithms that allocate weights to each fragment, this method first defines discrimination and uniqueness for local fragments and builds an automatic pre-selection of useful fragments for tracking. Then, a Harris-SIFT filter is used to choose the currently valid fragments, excluding occluded or highly deformed fragments. Based on those valid fragments, a fragment-based color histogram provides a structured and effective description of the object. Finally, the object is tracked using a valid fragment template that combines the displacement constraint and the similarity of each valid fragment. The object template is updated by fusing feature similarity and valid fragments, which makes it scale-adaptive and robust to partial occlusion. The experimental results show that the proposed algorithm is accurate and robust in challenging scenarios.
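
    A rough sketch of the valid-fragment idea under simplifying assumptions: the object region is split into a fixed grid of fragments and only those whose histogram similarity to the template exceeds a threshold are kept. The grid size, bin count, similarity measure and threshold are illustrative, not the paper's settings, and the Harris-SIFT filtering step is omitted.

    ```python
    # Sketch of valid-fragment selection by histogram similarity (grayscale patches assumed).
    import numpy as np

    def fragment_histograms(patch, grid=(4, 4), bins=16):
        """Split a patch into grid fragments and return one normalized histogram per fragment."""
        h, w = patch.shape
        fh, fw = h // grid[0], w // grid[1]
        hists = []
        for i in range(grid[0]):
            for j in range(grid[1]):
                frag = patch[i * fh:(i + 1) * fh, j * fw:(j + 1) * fw]
                hist, _ = np.histogram(frag, bins=bins, range=(0, 256))
                hists.append(hist / max(hist.sum(), 1))
        return np.array(hists)

    def valid_fragments(template, candidate, threshold=0.8):
        """Keep fragments whose Bhattacharyya similarity to the template exceeds the threshold."""
        t, c = fragment_histograms(template), fragment_histograms(candidate)
        similarity = np.sum(np.sqrt(t * c), axis=1)  # Bhattacharyya coefficient per fragment
        return np.where(similarity > threshold)[0], similarity
    ```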

  13. A verified LLL algorithm

    NARCIS (Netherlands)

    Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

    2018-01-01

    The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem,

  14. Star point centroid algorithm based on background forecast

    Science.gov (United States)

    Wang, Jin; Zhao, Rujin; Zhu, Nan

    2014-09-01

    The calculation of the star point centroid is a key step in reducing star tracker measurement error. A star map captured by an APS detector contains several kinds of noise that strongly affect the accuracy of the centroid calculation. Based on an analysis of the characteristics of star map noise, an algorithm for calculating the star point centroid based on background forecasting is presented in this paper. Experiments prove the validity of the algorithm. Compared with the classic algorithm, this algorithm not only improves the accuracy of the centroid calculation, but also does not require calibration data memory. The algorithm has been applied successfully in a star tracker.
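
    A minimal sketch of an intensity-weighted centroid with background removal; here the "forecast" background is simply the median of the window border, an assumption standing in for the paper's background-prediction step.

    ```python
    # Sub-pixel star centroid from a small window around a detected star point.
    import numpy as np

    def star_centroid(window):
        """Return (row, col) intensity-weighted centroid after background subtraction."""
        border = np.concatenate([window[0, :], window[-1, :], window[:, 0], window[:, -1]])
        background = np.median(border)                  # crude stand-in for the forecast background
        signal = np.clip(window - background, 0, None)  # remove background, keep positive residual
        total = signal.sum()
        if total == 0:
            return None
        rows, cols = np.indices(window.shape)
        return (rows * signal).sum() / total, (cols * signal).sum() / total
    ```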

  15. Genetic algorithms for protein threading.

    Science.gov (United States)

    Yadgari, J; Amir, A; Unger, R

    1998-01-01

    Despite many years of efforts, a direct prediction of protein structure from sequence is still not possible. As a result, in the last few years researchers have started to address the "inverse folding problem": identifying and aligning a sequence to the fold with which it is most compatible, a process known as "threading". In two meetings in which protein folding predictions were objectively evaluated, it became clear that threading as a concept promises a real breakthrough, but that much improvement is still needed in the technique itself. Threading is an NP-hard problem, and thus no general polynomial solution can be expected. Still, a practical approach with demonstrated ability to find optimal solutions in many cases, and acceptable solutions in other cases, is needed. We applied the technique of Genetic Algorithms in order to significantly improve the ability of threading algorithms to find the optimal alignment of a sequence to a structure, i.e. the alignment with the minimum free energy. A major advance reported here is the design of a representation of the threading alignment as a string of fixed length. With this representation, validation of alignments and genetic operators are effectively implemented. Appropriate data structures and parameters have been selected. It is shown that Genetic Algorithm threading is effective and is able to find the optimal alignment in a few test cases. Furthermore, the described algorithm is shown to perform well even without pre-definition of core elements. Existing threading methods are dependent on such constraints to make their calculations feasible. But the concept of core elements is inherently arbitrary and should be avoided if possible. While a rigorous proof cannot yet be given, we present indications that Genetic Algorithm threading is indeed capable of finding consistently good solutions of full alignments in search spaces of size up to 10^70.
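
    The fixed-length representation can be illustrated as follows: a candidate threading is a sorted list of sequence indices, one per structural position, so validation reduces to a monotonicity check and genetic operators can repair offspring back into this form. The details below are an illustration, not the paper's exact encoding or operators, and they assume the structure is no longer than the sequence.

    ```python
    # Fixed-length encoding of a threading: one aligned sequence index per structure position.
    import random

    def random_threading(seq_len, struct_len):
        """A candidate threading (requires struct_len <= seq_len)."""
        return sorted(random.sample(range(seq_len), struct_len))  # fixed length = struct_len

    def is_valid(threading):
        """Validation: aligned positions must be strictly increasing along the sequence."""
        return all(a < b for a, b in zip(threading, threading[1:]))

    def mutate(threading, seq_len):
        """Shift one aligned position, then repair so the string stays valid and fixed-length."""
        t = sorted(set(threading[:]))
        t[random.randrange(len(t))] = random.randrange(seq_len)
        t = sorted(set(t))
        while len(t) < len(threading):  # refill if the mutation created a duplicate
            t.append(random.choice([x for x in range(seq_len) if x not in t]))
            t.sort()
        return t
    ```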

  16. Sea surface temperature estimation from NOAA-AVHRR satellite data: validation of algorithms applied to the northern coast of Chile

    Directory of Open Access Journals (Sweden)

    Juan C Parra

    2011-01-01

    Full Text Available The present article applies and compares three split-window (SW) algorithms for estimating sea surface temperature from data acquired by the Advanced Very High Resolution Radiometer (AVHRR) on board the National Oceanic and Atmospheric Administration (NOAA) series of satellites. The algorithms were validated by comparison with in situ measurements of sea temperature obtained from a hydrographical buoy located off the coast of northern Chile (21°21'S, 70°6'W; Tarapacá Region), approximately 3 km from the coast. The best results were obtained by applying the algorithm proposed by Sobrino & Raissouni (2000). The mean and standard deviation of the differences between the temperatures measured in situ and those estimated by SW were 0.3° and 0.8°K, respectively.
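
    Split-window algorithms of this family share the generic form below, in which SST is obtained from the two AVHRR thermal channels and their difference; the coefficients are left as inputs because the published Sobrino & Raissouni (2000) values are not reproduced here.

    ```python
    # Generic split-window form: SST from AVHRR channel-4 and channel-5 brightness temperatures.
    # a0, a1, a2 are algorithm coefficients to be taken from the cited paper (not supplied here).
    def split_window_sst(t4, t5, a0, a1, a2=0.0):
        """Brightness temperatures t4, t5 in kelvin; returns estimated SST in kelvin."""
        dt = t4 - t5
        return t4 + a1 * dt + a2 * dt ** 2 + a0
    ```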

  17. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  18. VISUALIZATION OF PAGERANK ALGORITHM

    OpenAIRE

    Perhaj, Ervin

    2013-01-01

    The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate the PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them. The user enters the list through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process until the difference of PageRank va...
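
    A minimal power-iteration sketch of the computation the thesis describes, taking a page-to-links mapping as input; the damping factor and tolerance are conventional choices rather than the thesis settings, and dangling pages are ignored for simplicity.

    ```python
    # PageRank by power iteration over a {page: [outgoing links]} mapping.
    def pagerank(links, damping=0.85, tol=1e-9, max_iter=100):
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(max_iter):
            new = {}
            for p in pages:
                incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
                new[p] = (1 - damping) / n + damping * incoming
            if sum(abs(new[p] - rank[p]) for p in pages) < tol:  # stop when ranks settle
                return new
            rank = new
        return rank

    # Example: three pages linking to each other.
    print(pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]}))
    ```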

  19. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where the algorithms can be applied is the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  20. Modified Clipped LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2005-01-01

    Full Text Available A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
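
    A sketch of the clipped update idea: the weight update uses a three-level quantized version of the input (zero inside a dead zone, ±1 outside) instead of the raw input samples. Filter order, step size and threshold below are illustrative, and the sketch follows the generic clipped-LMS scheme rather than the exact MCLMS formulation.

    ```python
    # Generic clipped-LMS adaptive FIR filter with a three-level input quantizer.
    import numpy as np

    def three_level(x, threshold):
        """Quantize to -1, 0, +1 using a clipping threshold (dead zone around zero)."""
        return np.where(np.abs(x) <= threshold, 0.0, np.sign(x))

    def clipped_lms(x, d, order=8, mu=0.01, threshold=0.1):
        """Adapt an FIR filter so its output tracks the desired signal d."""
        x, d = np.asarray(x, float), np.asarray(d, float)
        w = np.zeros(order)
        y = np.zeros(len(x))
        for n in range(order, len(x)):
            u = x[n - order:n][::-1]                 # most recent samples first
            y[n] = w @ u
            e = d[n] - y[n]
            w += mu * e * three_level(u, threshold)  # clipped update instead of mu * e * u
        return w, y
    ```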

  1. An explicit multi-time-stepping algorithm for aerodynamic flows

    OpenAIRE

    Niemann-Tuitman, B.E.; Veldman, A.E.P.

    1997-01-01

    An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, different time steps are taken in different parts of the computational domain, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for aerodynamic turbulent flows. For two-dimensional flows, speedups on the order of five with respect to single time stepping are obtained.

  2. Improved Harmony Search Algorithm with Chaos for Absolute Value Equation

    Directory of Open Access Journals (Sweden)

    Shouheng Tuo

    2013-11-01

    Full Text Available In this paper, an improved harmony search with chaos (HSCH) is presented for solving the NP-hard absolute value equation (AVE) Ax - |x| = b, where A is an arbitrary square matrix whose singular values exceed one. The simulation results in solving some given AVE problems demonstrate that the HSCH algorithm is valid and outperforms the classical HS algorithm (CHS) and the HS algorithm with differential mutation operator (HSDE).
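
    Two of the ingredients can be sketched directly: the AVE residual used as an objective, and a logistic chaotic map of the kind often used to replace uniform random draws in chaos-enhanced harmony search. The surrounding harmony search loop and its parameter settings are not shown and would follow the paper.

    ```python
    # Building blocks for a chaos-enhanced harmony search applied to the AVE Ax - |x| = b.
    import numpy as np

    def ave_residual(A, b, x):
        """Objective value: norm of the residual of Ax - |x| = b at candidate x."""
        return float(np.linalg.norm(A @ x - np.abs(x) - b))

    def logistic_map(z, r=4.0):
        """Chaotic sequence generator on (0, 1), usable in place of uniform random draws."""
        return r * z * (1.0 - z)
    ```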

  3. New MPPT algorithm based on hybrid dynamical theory

    KAUST Repository

    Elmetennani, Shahrazed

    2014-11-01

    This paper presents a new maximum power point tracking algorithm based on hybrid dynamical theory. A multicell converter has been considered as an adaptation stage for the photovoltaic chain. The proposed algorithm is a hybrid automaton switching between eight different operating modes, and it has been validated by simulation tests under different working conditions. © 2014 IEEE.

  4. New MPPT algorithm based on hybrid dynamical theory

    KAUST Repository

    Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem; Benmansour, K.; Boucherit, M. S.; Tadjine, M.

    2014-01-01

    This paper presents a new maximum power point tracking algorithm based on hybrid dynamical theory. A multicell converter has been considered as an adaptation stage for the photovoltaic chain. The proposed algorithm is a hybrid automaton switching between eight different operating modes, and it has been validated by simulation tests under different working conditions. © 2014 IEEE.

  5. A generalized global alignment algorithm.

    Science.gov (United States)

    Huang, Xiaoqiu; Chao, Kun-Mao

    2003-01-22

    Homologous sequences are sometimes similar over some regions but different over other regions. Homologous sequences have a much lower global similarity if the different regions are much longer than the similar regions. We present a generalized global alignment algorithm for comparing sequences with intermittent similarities, an ordered list of similar regions separated by different regions. A generalized global alignment model is defined to handle sequences with intermittent similarities. A dynamic programming algorithm is designed to compute an optimal general alignment in time proportional to the product of sequence lengths and in space proportional to the sum of sequence lengths. The algorithm is implemented as a computer program named GAP3 (Global Alignment Program Version 3). The generalized global alignment model is validated by experimental results produced with GAP3 on both DNA and protein sequences. The GAP3 program extends the ability of standard global alignment programs to recognize homologous sequences of lower similarity. The GAP3 program is freely available for academic use at http://bioinformatics.iastate.edu/aat/align/align.html.
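
    The generalized model (an ordered list of similar regions separated by difference blocks) is richer than standard global alignment, but the dynamic-programming recurrence it builds on can be illustrated with the classic global alignment score; the scoring values below are illustrative and this is not GAP3's model.

    ```python
    # Standard global-alignment DP (Needleman-Wunsch) illustrating the recurrence;
    # GAP3's generalized model adds difference-block states not shown here.
    def global_alignment_score(s, t, match=1, mismatch=-1, gap=-2):
        n, m = len(s), len(t)
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            dp[i][0] = i * gap
        for j in range(1, m + 1):
            dp[0][j] = j * gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                sub = match if s[i - 1] == t[j - 1] else mismatch
                dp[i][j] = max(dp[i - 1][j - 1] + sub,  # align s[i-1] with t[j-1]
                               dp[i - 1][j] + gap,      # gap in t
                               dp[i][j - 1] + gap)      # gap in s
        return dp[n][m]

    print(global_alignment_score("GATTACA", "GCATGCU"))
    ```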

  6. Semioptimal practicable algorithmic cooling

    International Nuclear Information System (INIS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-01-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  7. Performance of the "CCS Algorithm" in real world patients.

    Science.gov (United States)

    LaHaye, Stephen A; Olesen, Jonas B; Lacombe, Shawn P

    2015-06-01

    With the publication of the 2014 Focused Update of the Canadian Cardiovascular Society Guidelines for the Management of Atrial Fibrillation, the Canadian Cardiovascular Society Atrial Fibrillation Guidelines Committee has introduced a new triage and management algorithm; the so-called "CCS Algorithm". The CCS Algorithm is based upon expert opinion of the best available evidence; however, the CCS Algorithm has not yet been validated. Accordingly, the purpose of this study is to evaluate the performance of the CCS Algorithm in a cohort of real world patients. We compared the CCS Algorithm with the European Society of Cardiology (ESC) Algorithm in 172 hospital inpatients who are at risk of stroke due to non-valvular atrial fibrillation in whom anticoagulant therapy was being considered. The CCS Algorithm and the ESC Algorithm were concordant in 170/172 patients (99% of the time). There were two patients (1%) with vascular disease, but no other thromboembolic risk factors, which were classified as requiring oral anticoagulant therapy using the ESC Algorithm, but for whom ASA was recommended by the CCS Algorithm. The CCS Algorithm appears to be unnecessarily complicated in so far as it does not appear to provide any additional discriminatory value above and beyond the use of the ESC Algorithm, and its use could result in under treatment of patients, specifically female patients with vascular disease, whose real risk of stroke has been understated by the Guidelines.

  8. System engineering approach to GPM retrieval algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Rose, C. R. (Chris R.); Chandrasekar, V.

    2004-01-01

    System engineering principles and methods are very useful in large-scale complex systems for developing the engineering requirements from end-user needs. Integrating research into system engineering is a challenging task. The proposed Global Precipitation Mission (GPM) satellite will use a dual-wavelength precipitation radar to measure and map global precipitation with unprecedented accuracy, resolution and areal coverage. The satellite vehicle, precipitation radars, retrieval algorithms, and ground validation (GV) functions are all critical subsystems of the overall GPM system and each contributes to the success of the mission. Errors in the radar measurements and models can adversely affect the retrieved output values. Ground validation (GV) systems are intended to provide timely feedback to the satellite and retrieval algorithms based on measured data. These GV sites will consist of radars and DSD measurement systems and also have intrinsic constraints. One of the retrieval algorithms being studied for use with GPM is the dual-wavelength DSD algorithm that does not use the surface reference technique (SRT). The underlying microphysics of precipitation structures and drop-size distributions (DSDs) dictate the types of models and retrieval algorithms that can be used to estimate precipitation. Many types of dual-wavelength algorithms have been studied. Meneghini (2002) analyzed the performance of single-pass dual-wavelength surface-reference-technique (SRT) based algorithms. Mardiana (2003) demonstrated that a dual-wavelength retrieval algorithm could be successfully used without the use of the SRT. It uses an iterative approach based on measured reflectivities at both wavelengths and complex microphysical models to estimate both N0 and D0 at each range bin. More recently, Liao (2004) proposed a solution to the D0 ambiguity problem in rain within the dual-wavelength algorithm and showed a possible melting layer model based on stratified spheres. With the N0 and D0

  9. Introduction to Evolutionary Algorithms

    CERN Document Server

    Yu, Xinjie

    2010-01-01

    Evolutionary algorithms (EAs) are becoming increasingly attractive for researchers from various disciplines, such as operations research, computer science, industrial engineering, electrical engineering, social science, economics, etc. This book presents an insightful, comprehensive, and up-to-date treatment of EAs, such as genetic algorithms, differential evolution, evolution strategy, constraint optimization, multimodal optimization, multiobjective optimization, combinatorial optimization, evolvable hardware, estimation of distribution algorithms, ant colony optimization, particle swarm opti

  10. Recursive forgetting algorithms

    DEFF Research Database (Denmark)

    Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan

    1992-01-01

    In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm.
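
    Uniform exponential forgetting is one of the special cases such a general scheme contains; a minimal recursive-least-squares update with forgetting factor lam is sketched below (values illustrative, and the selective, non-uniform forgetting of the paper is not shown).

    ```python
    # One recursive-least-squares update with exponential forgetting.
    import numpy as np

    def rls_step(theta, P, phi, y, lam=0.98):
        """Update parameter estimate theta given regressor phi and new observation y."""
        phi = np.asarray(phi, float).reshape(-1, 1)
        k = P @ phi / (lam + phi.T @ P @ phi)          # gain vector
        theta = theta + (k * (y - phi.T @ theta)).ravel()
        P = (P - k @ phi.T @ P) / lam                  # covariance update with forgetting
        return theta, P
    ```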

  11. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  12. Algorithms in Algebraic Geometry

    CERN Document Server

    Dickenstein, Alicia; Sommese, Andrew J

    2008-01-01

    In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its

  13. Shadow algorithms data miner

    CERN Document Server

    Woo, Andrew

    2012-01-01

    Digital shadow generation continues to be an important aspect of visualization and visual effects in film, games, simulations, and scientific applications. This resource offers a thorough picture of the motivations, complexities, and categorized algorithms available to generate digital shadows. From general fundamentals to specific applications, it addresses shadow algorithms and how to manage huge data sets from a shadow perspective. The book also examines the use of shadow algorithms in industrial applications, in terms of what algorithms are used and what software is applicable.

  14. Spectral Decomposition Algorithm (SDA)

    Data.gov (United States)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  15. Quick fuzzy backpropagation algorithm.

    Science.gov (United States)

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm called the QuickFBP algorithm is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and the FBP algorithm are defined and proved for: (1) single output neural networks in the case of training patterns with different targets; and (2) multiple output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP algorithm compared with the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, it is broadly applicable in areas such as adaptive and adaptable interactive systems and data mining.

  16. Portfolios of quantum algorithms.

    Science.gov (United States)

    Maurer, S M; Hogg, T; Huberman, B A

    2001-12-17

    Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature, they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms analogous to those of finance can outperform single algorithms when applied to NP-complete problems such as 3-satisfiability.
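
    A toy illustration of the portfolio effect under an assumed heavy-tailed run-time model (a lognormal distribution, chosen for illustration only, not the paper's model): the first-to-finish time of two independent copies has a smaller mean and spread than a single run, at the price of duplicated hardware.

    ```python
    # Monte-Carlo comparison of a single stochastic solver vs. a two-copy portfolio.
    import numpy as np

    rng = np.random.default_rng(0)
    single = rng.lognormal(mean=1.0, sigma=1.5, size=100_000)            # run times of one solver
    first_of_two = np.minimum(single, rng.lognormal(1.0, 1.5, 100_000))  # portfolio: first to finish
    for name, t in [("single", single), ("portfolio of 2", first_of_two)]:
        print(f"{name:>15}: mean={t.mean():.2f}  std={t.std():.2f}")
    ```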

  17. Multimicrophone Speech Dereverberation: Experimental Validation

    Directory of Open Access Journals (Sweden)

    Marc Moonen

    2007-05-01

    Full Text Available Dereverberation is required in various speech processing applications such as hands-free telephony and voice-controlled systems, especially when the signals have been recorded in a moderately or highly reverberant environment. In this paper, we compare a number of classical and more recently developed multimicrophone dereverberation algorithms, and validate the different algorithmic settings by means of two performance indices and a speech recognition system. It is found that some of the classical solutions obtain a moderate signal enhancement. More advanced subspace-based dereverberation techniques, on the other hand, fail to enhance the signals despite their high computational load.

  18. Benchmarking homogenization algorithms for monthly data

    Science.gov (United States)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratiannil, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.; Willett, K.

    2013-09-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies. The algorithms were validated against a realistic benchmark dataset. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including i) the centered root mean square error relative to the true homogeneous values at various averaging scales, ii) the error in linear trend estimates and iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that currently automatic algorithms can perform as well as manual ones.
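
    The first of the listed performance metrics can be sketched directly: the centered root mean square error removes each series' own mean before comparing a homogenized series with the true benchmark series.

    ```python
    # Centered RMSE of a homogenized series against the known benchmark ("true") series.
    import numpy as np

    def centered_rmse(homogenized, truth):
        h = np.asarray(homogenized, float) - np.mean(homogenized)
        t = np.asarray(truth, float) - np.mean(truth)
        return float(np.sqrt(np.mean((h - t) ** 2)))
    ```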

  19. Impact of revising the National Nosocomial Infection Surveillance System definition for catheter-related bloodstream infection in ICU: reproducibility of the National Healthcare Safety Network case definition in an Australian cohort of infection control professionals.

    Science.gov (United States)

    Worth, Leon J; Brett, Judy; Bull, Ann L; McBryde, Emma S; Russo, Philip L; Richards, Michael J

    2009-10-01

    Effective and comparable surveillance for central venous catheter-related bloodstream infections (CLABSIs) in the intensive care unit requires a reproducible case definition that can be readily applied by infection control professionals. Using a questionnaire containing clinical cases, reproducibility of the National Nosocomial Infection Surveillance System (NNIS) surveillance definition for CLABSI was assessed in an Australian cohort of infection control professionals participating in the Victorian Hospital Acquired Infection Surveillance System (VICNISS). The same questionnaire was then used to evaluate the reproducibility of the National Healthcare Safety Network (NHSN) surveillance definition for CLABSI. Target hospitals were defined as large metropolitan (1A) or other large hospitals (non-1A), according to the Victorian Department of Human Services. Questionnaire responses of Centers for Disease Control and Prevention NHSN surveillance experts were used as gold standard comparator. Eighteen of 21 eligible VICNISS centers participated in the survey. Overall concordance with the gold standard was 57.1%, and agreement was highest for 1A hospitals (60.6%). The proportion of congruently classified cases varied according to NNIS criteria: criterion 1 (recognized pathogen), 52.8%; criterion 2a (skin contaminant in 2 or more blood cultures), 83.3%; criterion 2b (skin contaminant in 1 blood culture and appropriate antimicrobial therapy instituted), 58.3%; non-CLABSI cases, 51.4%. When survey questions regarding identification of cases of CLABSI criterion 2b were removed (consistent with the current NHSN definition), overall percentage concordance increased to 62.5% (72.2% for 1A centers). Further educational interventions are required to improve the discrimination of primary and secondary causes of bloodstream infection in Victorian intensive care units. Although reproducibility of the CLABSI case definition is relatively poor, adoption of the revised NHSN definition

  20. Algorithm 426 : Merge sort algorithm [M1

    NARCIS (Netherlands)

    Bron, C.

    1972-01-01

    Sorting by means of a two-way merge has a reputation of requiring a clerically complicated and cumbersome program. This ALGOL 60 procedure demonstrates that, using recursion, an elegant and efficient algorithm can be designed, the correctness of which is easily proved [2]. Sorting n objects gives
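
    The recursive two-way merge the abstract refers to, sketched here in Python rather than the original ALGOL 60 procedure.

    ```python
    # Recursive two-way merge sort.
    def merge_sort(items):
        if len(items) <= 1:
            return items
        mid = len(items) // 2
        left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):   # merge the two sorted halves
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    print(merge_sort([5, 3, 8, 1, 9, 2]))
    ```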