WorldWideScience

Sample records for validated case-definition algorithm

  1. Validation of an algorithm-based definition of treatment resistance in patients with schizophrenia.

    Science.gov (United States)

    Ajnakina, Olesya; Horsdal, Henriette Thisted; Lally, John; MacCabe, James H; Murray, Robin M; Gasse, Christiane; Wimberley, Theresa

    2018-02-19

    Large-scale pharmacoepidemiological research on treatment resistance relies on accurate identification of people with treatment-resistant schizophrenia (TRS) based on data that are retrievable from administrative registers. This is usually approached by operationalising clinical treatment guidelines by using prescription and hospital admission information. We examined the accuracy of an algorithm-based definition of TRS based on clozapine prescription and/or meeting algorithm-based eligibility criteria for clozapine against a gold standard definition using case notes. We additionally validated a definition entirely based on clozapine prescription. 139 schizophrenia patients aged 18-65 years were followed for a mean of 5 years after first presentation to psychiatric services in south London, UK. The diagnostic accuracy of the algorithm-based measure against the gold standard was measured with sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV). A total of 45 (32.4%) schizophrenia patients met the criteria for the gold standard definition of TRS; applying the algorithm-based definition to the same cohort led to 44 (31.7%) patients fulfilling criteria for TRS with sensitivity, specificity, PPV and NPV of 62.2%, 83.0%, 63.6% and 82.1%, respectively. The definition based on lifetime clozapine prescription had sensitivity, specificity, PPV and NPV of 40.0%, 94.7%, 78.3% and 76.7%, respectively. Although a perfect definition of TRS cannot be derived from available prescription and hospital registers, these results indicate that researchers can confidently use registries to identify individuals with TRS for research and clinical practices. Copyright © 2018 Elsevier B.V. All rights reserved.
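
    The four accuracy measures reported above follow directly from a 2×2 table of algorithm result against gold standard. As a rough illustration (not taken from the paper's tables), the cell counts implied by the reported percentages work out to approximately TP=28, FP=16, FN=17, TN=78, and a few lines of Python reproduce the figures:

    ```python
    # Sketch: recover sensitivity/specificity/PPV/NPV from a 2x2 validation table.
    # The counts below are back-calculated from the abstract's percentages, so they
    # are approximate, not the study's published table.

    def diagnostic_accuracy(tp, fp, fn, tn):
        """Standard accuracy measures for a binary case definition."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # 139 patients, 45 gold-standard TRS cases, 44 algorithm-positive
    print(diagnostic_accuracy(tp=28, fp=16, fn=17, tn=78))
    # -> sensitivity ~0.62, specificity ~0.83, ppv ~0.64, npv ~0.82
    ```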

  2. Establishment of Valid Laboratory Case Definition for Human Leptospirosis

    NARCIS (Netherlands)

    M.G.A. Goris (Marga); M.M.G. Leeflang (Mariska); K.R. Boer (Kimberly); M. Goeijenbier (Marco); E.C.M. van Gorp (Eric); J.F.P. Wagenaar (Jiri); R.A. Hartskeerl (Rudy)

    2011-01-01

    Laboratory case definition of leptospirosis is scarcely defined by a solid evaluation that determines cut-off values in the tests that are applied. This study describes the process of determining optimal cut-off titers of laboratory tests for leptospirosis for a valid case definition of

  3. Validation of a case definition to define hypertension using administrative data.

    Science.gov (United States)

    Quan, Hude; Khan, Nadia; Hemmelgarn, Brenda R; Tu, Karen; Chen, Guanmin; Campbell, Norm; Hill, Michael D; Ghali, William A; McAlister, Finlay A

    2009-12-01

    We validated the accuracy of case definitions for hypertension derived from administrative data across time periods (year 2001 versus 2004) and geographic regions using physician charts. Physician charts were randomly selected in rural and urban areas from Alberta and British Columbia, Canada, during the years 2001 and 2004. Physician charts were linked with administrative data through a unique personal health number. We reviewed charts of approximately 50 randomly selected patients >35 years of age from each clinic within 48 urban and 16 rural family physician clinics to identify physician diagnoses of hypertension during the years 2001 and 2004. The validity indices were estimated for diagnosed hypertension using 3 years of administrative data for the 8 case-definition combinations. Of the 3,362 patient charts reviewed, the prevalence of hypertension ranged from 18.8% to 33.3%, depending on the year and region studied. The administrative data hypertension definition of "2 claims within 2 years or 1 hospitalization" had the highest validity relative to the other definitions evaluated (sensitivity 75%, specificity 94%, positive predictive value 81%, negative predictive value 92%, and kappa 0.71). After adjustment for age, sex, and comorbid conditions, the sensitivities between regions, years, and provinces were not significantly different, but the positive predictive value varied slightly across geographic regions. These results provide evidence that administrative data can be used as a relatively valid source of data to define cases of hypertension for surveillance and research purposes.
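
    The "2 claims within 2 years or 1 hospitalization" rule is simple to apply to per-patient claim histories. A minimal sketch, assuming a simplified record layout (lists of claim and hospitalization dates per patient) rather than the provinces' actual schemas:

    ```python
    from datetime import date, timedelta

    def is_hypertensive(claim_dates, hospitalization_dates):
        """Apply '2 claims within 2 years or 1 hospitalization' to one patient."""
        if hospitalization_dates:                      # one hypertension hospitalization suffices
            return True
        claims = sorted(claim_dates)
        for first, second in zip(claims, claims[1:]):  # any two claims <= ~2 years apart
            if second - first <= timedelta(days=730):
                return True
        return False

    print(is_hypertensive([date(2001, 3, 1), date(2002, 11, 15)], []))  # True
    print(is_hypertensive([date(2001, 3, 1)], []))                      # False
    ```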

  4. Validation and optimisation of an ICD-10-coded case definition for sepsis using administrative health data

    Science.gov (United States)

    Jolley, Rachel J; Jetté, Nathalie; Sawka, Keri Jo; Diep, Lucy; Goliath, Jade; Roberts, Derek J; Yipp, Bryan G; Doig, Christopher J

    2015-01-01

    Objective Administrative health data are important for health services and outcomes research. We optimised and validated an International Classification of Diseases (ICD)-coded case definition for sepsis in intensive care unit (ICU) patients, and compared this with an existing definition. We also assessed the definition's performance in non-ICU (ward) patients. Setting and participants All adults (aged ≥18 years) admitted to a multisystem ICU with general medicosurgical ICU care from one of three tertiary care centres in the Calgary region in Alberta, Canada, between 1 January 2009 and 31 December 2012 were included. Research design Patient medical records were randomly selected and linked to the discharge abstract database. In ICU patients, we validated the Canadian Institute for Health Information (CIHI) ICD-10-CA (Canadian Revision)-coded definition for sepsis and severe sepsis against a reference standard medical chart review, and optimised this algorithm through examination of other conditions apparent in sepsis. Measures Sensitivity (Sn), specificity (Sp), positive predictive value (PPV) and negative predictive value (NPV) were calculated. Results Sepsis was present in 604 of 1001 ICU patients (60.4%). The CIHI ICD-10-CA-coded definition for sepsis had Sn (46.4%), Sp (98.7%), PPV (98.2%) and NPV (54.7%); and for severe sepsis had Sn (47.2%), Sp (97.5%), PPV (95.3%) and NPV (63.2%). The optimised ICD-coded algorithm for sepsis increased Sn by 25.5% and NPV by 11.9% with slightly lowered Sp (85.4%) and PPV (88.2%). For severe sepsis both Sn (65.1%) and NPV (70.1%) increased, while Sp (88.2%) and PPV (85.6%) decreased slightly. Conclusions This study demonstrates that sepsis is highly undercoded in administrative data, thus under-ascertaining the true incidence of sepsis. The optimised ICD-coded definition has a higher validity with higher Sn and should be preferentially considered if used for surveillance purposes. PMID:26700284
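
    In practice, a coded sepsis definition reduces to checking each discharge abstract's diagnosis codes against a code list. The sketch below is illustrative only: the code prefixes are placeholders, not the CIHI ICD-10-CA definition or the optimised algorithm evaluated in the study.

    ```python
    # Placeholder code prefixes for illustration; not the validated CIHI definition.
    SEPSIS_CODE_PREFIXES = ("A40", "A41", "R65.1")

    def flag_sepsis(diagnosis_codes, prefixes=SEPSIS_CODE_PREFIXES):
        """True if any recorded diagnosis code starts with a listed prefix."""
        return any(code.startswith(p) for code in diagnosis_codes for p in prefixes)

    print(flag_sepsis(["J18.9", "A41.9"]))  # True
    print(flag_sepsis(["I21.0"]))           # False
    ```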

  5. Validation of case-finding algorithms derived from administrative data for identifying adults living with human immunodeficiency virus infection.

    Directory of Open Access Journals (Sweden)

    Tony Antoniou

    OBJECTIVE: We sought to validate a case-finding algorithm for human immunodeficiency virus (HIV) infection using administrative health databases in Ontario, Canada. METHODS: We constructed 48 case-finding algorithms using combinations of physician billing claims, hospital and emergency room separations and prescription drug claims. We determined the test characteristics of each algorithm over various time frames for identifying HIV infection, using data abstracted from the charts of 2,040 randomly selected patients receiving care at two medical practices in Toronto, Ontario as the reference standard. RESULTS: With the exception of algorithms using only a single physician claim, the specificity of all algorithms exceeded 99%. An algorithm consisting of three physician claims over a three year period had a sensitivity and specificity of 96.2% (95% CI 95.2%-97.9%) and 99.6% (95% CI 99.1%-99.8%), respectively. Application of the algorithm to the province of Ontario identified 12,179 HIV-infected patients in care for the period spanning April 1, 2007 to March 31, 2009. CONCLUSIONS: Case-finding algorithms generated from administrative data can accurately identify adults living with HIV. A relatively simple "3 claims in 3 years" definition can be used for assembling a population-based cohort and facilitating future research examining trends in health service use and outcomes among HIV-infected adults in Ontario.
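
    The "3 claims in 3 years" rule can be expressed as a sliding window over a patient's HIV-related billing claim dates. A minimal sketch under an assumed input layout (one list of claim dates per person):

    ```python
    from datetime import date

    def meets_three_in_three(claim_dates, n_claims=3, window_days=3 * 365):
        """True if any n_claims claims fall within the window (default ~3 years)."""
        dates = sorted(claim_dates)
        return any(
            (dates[i + n_claims - 1] - dates[i]).days <= window_days
            for i in range(len(dates) - n_claims + 1)
        )

    print(meets_three_in_three([date(2007, 5, 1), date(2008, 1, 10), date(2009, 2, 3)]))  # True
    print(meets_three_in_three([date(2001, 1, 1), date(2005, 1, 1), date(2009, 1, 1)]))   # False
    ```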

  6. Using wound care algorithms: a content validation study.

    Science.gov (United States)

    Beitz, J M; van Rijswijk, L

    1999-09-01

    Valid and reliable heuristic devices facilitating optimal wound care are lacking. The objectives of this study were to establish content validation data for a set of wound care algorithms, to identify their associated strengths and weaknesses, and to gain insight into the wound care decision-making process. Forty-four registered nurse wound care experts were surveyed and interviewed at national and regional educational meetings. Using a cross-sectional study design and an 83-item, 4-point Likert-type scale, this purposive sample was asked to quantify the degree of validity of the algorithms' decisions and components. Participants' comments were tape-recorded, transcribed, and themes were derived. On a scale of 1 to 4, the mean score of the entire instrument was 3.47 (SD +/- 0.87), the instrument's Content Validity Index was 0.86, and the individual Content Validity Index of 34 of 44 participants was > 0.8. Item scores were lower for items related to packing deep wounds (P < 0.05) and for items lacking valid and reliable definitions. The wound care algorithms studied proved valid. However, the lack of valid and reliable wound assessment and care definitions hinders optimal use of these instruments. Further research documenting their clinical use is warranted. Research-based practice recommendations should direct the development of future valid and reliable algorithms designed to help nurses provide optimal wound care.
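
    A Content Validity Index of the kind reported here is conventionally computed as the proportion of experts rating an item 3 or 4 on the 4-point relevance scale, with the scale-level CVI taken as the average across items. A small sketch with invented ratings (not the study's data):

    ```python
    def item_cvi(ratings):
        """Proportion of experts rating the item 3 or 4 on a 1-4 scale."""
        return sum(r >= 3 for r in ratings) / len(ratings)

    items = {
        "cleanse wound": [4, 4, 3, 4, 2],
        "pack deep wound": [3, 2, 4, 3, 2],
    }
    for name, ratings in items.items():
        print(name, round(item_cvi(ratings), 2))

    scale_cvi = sum(item_cvi(r) for r in items.values()) / len(items)
    print("scale CVI:", round(scale_cvi, 2))
    ```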

  7. Validation of a Syndromic Case Definition for Detecting Emergency Department Visits Potentially Related to Marijuana.

    Science.gov (United States)

    DeYoung, Kathryn; Chen, Yushiuan; Beum, Robert; Askenazi, Michele; Zimmerman, Cali; Davidson, Arthur J

    Reliable methods are needed to monitor the public health impact of changing laws and perceptions about marijuana. Structured and free-text emergency department (ED) visit data offer an opportunity to monitor the impact of these changes in near-real time. Our objectives were to (1) generate and validate a syndromic case definition for ED visits potentially related to marijuana and (2) describe a method for doing so that was less resource intensive than traditional methods. We developed a syndromic case definition for ED visits potentially related to marijuana, applied it to BioSense 2.0 data from 15 hospitals in the Denver, Colorado, metropolitan area for the period September through October 2015, and manually reviewed each case to determine true positives and false positives. We used the number of visits identified by and the positive predictive value (PPV) for each search term and field to refine the definition for the second round of validation on data from February through March 2016. Of 126 646 ED visits during the first period, terms in 524 ED visit records matched ≥1 search term in the initial case definition (PPV, 92.7%). Of 140 932 ED visits during the second period, terms in 698 ED visit records matched ≥1 search term in the revised case definition (PPV, 95.7%). After another revision, the final case definition contained 6 keywords for marijuana or derivatives and 5 diagnosis codes for cannabis use, abuse, dependence, poisoning, and lung disease. Our syndromic case definition and validation method for ED visits potentially related to marijuana could be used by other public health jurisdictions to monitor local trends and for other emerging concerns.
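
    A syndromic definition of this kind boils down to matching keywords in free-text fields and diagnosis codes in structured fields, then checking flagged visits against manual review to estimate PPV. The sketch below is a toy version; the keywords, codes, and visits are invented, not the Denver definition.

    ```python
    KEYWORDS = {"marijuana", "cannabis", "thc"}    # illustrative terms only
    DX_CODES = {"F12.10", "F12.20", "T40.7X1A"}    # illustrative ICD-10-CM codes only

    def matches(visit):
        """Flag a visit if a keyword appears in the chief complaint or a code matches."""
        text = visit.get("chief_complaint", "").lower()
        return any(k in text for k in KEYWORDS) or bool(DX_CODES & set(visit.get("dx", [])))

    visits = [
        {"chief_complaint": "anxiety after smoking marijuana", "dx": [], "true_case": True},
        {"chief_complaint": "ankle injury", "dx": [], "true_case": False},
        {"chief_complaint": "vomiting", "dx": ["F12.20"], "true_case": True},
    ]
    flagged = [v for v in visits if matches(v)]
    ppv = sum(v["true_case"] for v in flagged) / len(flagged)
    print(f"flagged={len(flagged)}, PPV={ppv:.0%}")
    ```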

  8. Validation of two case definitions to identify pressure ulcers using hospital administrative data.

    Science.gov (United States)

    Ho, Chester; Jiang, Jason; Eastwood, Cathy A; Wong, Holly; Weaver, Brittany; Quan, Hude

    2017-08-28

    Pressure ulcer development is a quality of care indicator, as pressure ulcers are potentially preventable. Yet pressure ulcers are a leading cause of morbidity, discomfort and additional healthcare costs for inpatients. Methods are lacking for accurate surveillance of pressure ulcers in hospitals to track occurrences and evaluate care improvement strategies. The main study aim was to validate the hospital discharge abstract database (DAD) in recording pressure ulcers against nursing consult reports, and to calculate the prevalence of pressure ulcers in Alberta, Canada in the DAD. We hypothesised that a more inclusive case definition for pressure ulcers would enhance validity of cases identified in administrative data for research and quality improvement purposes. A cohort of patients with pressure ulcers was identified from enterostomal (ET) nursing consult documents at a large university hospital in 2011. There were 1217 patients with pressure ulcers in ET nursing documentation that were linked to a corresponding record in the DAD to validate the DAD for correct and accurate identification of pressure ulcer occurrence, using two case definitions for pressure ulcer. Using pressure ulcer definition 1 (7 codes), prevalence was 1.4%, and using definition 2 (29 codes), prevalence was 4.2% after adjusting for misclassifications. The results were lower than expected. Definition 1 sensitivity was 27.7% and specificity was 98.8%, while definition 2 sensitivity was 32.8% and specificity was 95.9%. Pressure ulcer occurrence in both the DAD and ET consultation increased with age, number of comorbidities and length of stay. The DAD underestimates pressure ulcer prevalence. Since various codes are used to record pressure ulcers in the DAD, the case definition with more codes captures more pressure ulcer cases, and may be useful for monitoring facility trends. However, low sensitivity suggests that this data source may not be accurate for determining overall prevalence, and should be cautiously compared with other

  9. The validation index: a new metric for validation of segmentation algorithms using two or more expert outlines with application to radiotherapy planning.

    Science.gov (United States)

    Juneja, Prabhjot; Evans, Philip M; Harris, Emma J

    2013-08-01

    Validation is required to ensure automated segmentation algorithms are suitable for radiotherapy target definition. In the absence of true segmentation, algorithmic segmentation is validated against expert outlining of the region of interest. Multiple experts are used to overcome inter-expert variability. Several approaches have been studied in the literature, but the most appropriate approach to combine the information from multiple expert outlines, to give a single metric for validation, is unclear. None considers a metric that can be tailored to case-specific requirements in radiotherapy planning. The validation index (VI), a new validation metric that uses experts' level of agreement, was developed. A control parameter was introduced for the validation of segmentations required for different radiotherapy scenarios: for targets close to organs-at-risk and for difficult to discern targets, where large variation between experts is expected. VI was evaluated using two simulated idealized cases and data from two clinical studies. VI was compared with the commonly used Dice similarity coefficient (DSCpair-wise) and found to be more sensitive than the DSCpair-wise to changes in agreement between experts. VI was shown to be adaptable to specific radiotherapy planning scenarios.
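
    For comparison, the pairwise Dice similarity coefficient mentioned above is straightforward to compute between an automated segmentation and each expert outline. A toy sketch with made-up binary masks (the VI metric itself is not reproduced here):

    ```python
    import numpy as np

    def dice(a, b):
        """Dice similarity coefficient between two binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2 * np.logical_and(a, b).sum() / denom if denom else 1.0

    auto = np.zeros((10, 10), dtype=int)
    auto[2:7, 2:7] = 1
    expert1 = np.zeros_like(auto)
    expert1[2:7, 3:8] = 1
    expert2 = np.zeros_like(auto)
    expert2[3:8, 2:7] = 1

    scores = [dice(auto, e) for e in (expert1, expert2)]
    print([round(s, 3) for s in scores], "mean DSC:", round(sum(scores) / len(scores), 3))
    ```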

  10. Analysis of risk factors for schizophrenia with two different case definitions: a nationwide register-based external validation study.

    Science.gov (United States)

    Sørensen, Holger J; Larsen, Janne T; Mors, Ole; Nordentoft, Merete; Mortensen, Preben B; Petersen, Liselotte

    2015-03-01

    Different case definitions of schizophrenia have been used in register-based research. However, no previous study has externally validated two different case definitions of schizophrenia against a wide range of risk factors for schizophrenia. We investigated hazard ratios (HRs) for a wide range of risk factors for ICD-10 DCR schizophrenia using a nationwide Danish sample of 2,772,144 residents born in 1955-1997. We compared one contact only (OCO) (the case definition of schizophrenia used in Danish register-based studies) with two or more contacts (TMC) (a case definition of at least 2 inpatient contacts with schizophrenia). During the follow-up, the OCO definition included 15,074 and the TMC 7562 cases; i.e. half as many. The TMC case definition appeared to select for a worse illness course. A wide range of risk factors were uniformly associated with both case definitions and only slightly higher risk estimates were found for the TMC definition. Choosing at least 2 inpatient contacts with schizophrenia (TMC) instead of the currently used case definition would result in almost similar risk estimates for many well-established risk factors. However, this would also introduce selection and include considerably fewer cases and reduce power of e.g. genetic studies based on register-diagnosed cases only. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. [Development and validation of an algorithm to identify cancer recurrences from hospital data bases].

    Science.gov (United States)

    Manzanares-Laya, S; Burón, A; Murta-Nascimento, C; Servitja, S; Castells, X; Macià, F

    2014-01-01

    Hospital cancer registries and hospital databases are valuable and efficient sources of information for research into cancer recurrences. The aim of this study was to develop and validate algorithms for the detection of breast cancer recurrence. A retrospective observational study was conducted on breast cancer cases from the cancer registry of a tertiary university hospital diagnosed between 2003 and 2009. Different probable cancer recurrence algorithms were obtained by linking the hospital databases and the construction of several operational definitions, with their corresponding sensitivity, specificity, positive predictive value and negative predictive value. A total of 1,523 patients were diagnosed with breast cancer between 2003 and 2009. A request for bone scintigraphy more than 6 months after the first oncological treatment showed the highest sensitivity (53.8%) and negative predictive value (93.8%), and a pathology test more than 6 months after the diagnosis showed the highest specificity (93.8%) and negative predictive value (92.6%). The combination of different definitions increased the specificity and the positive predictive value, but decreased the sensitivity. Several diagnostic algorithms were obtained, and the different definitions could be useful depending on the interest and resources of the researcher. A higher positive predictive value could be interesting for a quick estimation of the number of cases, and a higher negative predictive value for a more exact estimation if more resources are available. It is a versatile and adaptable tool for other types of tumors, as well as for the needs of the researcher. Copyright © 2014 SECA. Published by Elsevier España. All rights reserved.

  12. Validation of clinical case definition of acute intussusception in infants in Viet Nam and Australia.

    Science.gov (United States)

    Bines, Julie E; Liem, Nguyen Thanh; Justice, Frances; Son, Tran Ngoc; Carlin, John B; de Campo, Margaret; Jamsen, Kris; Mulholland, Kim; Barnett, Peter; Barnes, Graeme L

    2006-07-01

    To test the sensitivity and specificity of a clinical case definition of acute intussusception in infants to assist health-care workers in settings where diagnostic facilities are not available. Prospective studies were conducted at a major paediatric hospital in Viet Nam (the National Hospital of Pediatrics, Hanoi) from November 2002 to December 2003 and in Australia (the Royal Children's Hospital, Melbourne) from March 2002 to March 2004 using a clinical case definition of intussusception. Diagnosis of intussusception was confirmed by air enema or surgery and validated in a subset of participants by an independent clinician who was blinded to the participant's status. Sensitivity of the definition was evaluated in 584 infants with confirmed intussusception, and specificity in infants with clinical features consistent with intussusception but for whom another diagnosis was established (234 infants in Hanoi; 404 in Melbourne). In both locations the definition used was sensitive (96% sensitivity in Hanoi; 98% in Melbourne) and specific (95% specificity in Hanoi; 87% in Melbourne) for intussusception among infants with sufficient data to allow classification (449/533 in Hanoi; 50/51 in Melbourne). Reanalysis of patients with missing data suggests that modifying minor criteria would increase the applicability of the definition while maintaining good sensitivity (96-97%) and specificity (83-89%). The clinical case definition was sensitive and specific for the diagnosis of acute intussusception in infants in both a developing country and a developed country but minor modifications would enable it to be used more widely.

  13. A computer case definition for sudden cardiac death.

    Science.gov (United States)

    Chung, Cecilia P; Murray, Katherine T; Stein, C Michael; Hall, Kathi; Ray, Wayne A

    2010-06-01

    To facilitate studies of medications and sudden cardiac death, we developed and validated a computer case definition for these deaths. The study of community-dwelling Tennessee Medicaid enrollees 30-74 years of age utilized a linked database with Medicaid inpatient/outpatient files, state death certificate files, and a state 'all-payers' hospital discharge file. The computerized case definition was developed from a retrospective cohort study of sudden cardiac deaths occurring between 1990 and 1993. Medical records for 926 potential cases had been adjudicated for this study to determine if they met the clinical definition for sudden cardiac death occurring in the community and were likely to be due to ventricular tachyarrhythmias. The computerized case definition included deaths with (1) no evidence of a terminal hospital admission/nursing home stay in any of the data sources; (2) an underlying cause of death code consistent with sudden cardiac death; and (3) no terminal procedures inconsistent with unresuscitated cardiac arrest. This definition was validated in an independent sample of 174 adjudicated deaths occurring between 1994 and 2005. The positive predictive value of the computer case definition was 86.0% in the development sample and 86.8% in the validation sample. The positive predictive value did not vary materially for deaths coded according to the ICD-9 (1994-1998, positive predictive value = 85.1%) or ICD-10 (1999-2005, 87.4%) systems. A computerized Medicaid database, linked with death certificate files and a state hospital discharge database, can be used for a computer case definition of sudden cardiac death. Copyright (c) 2009 John Wiley & Sons, Ltd.

  14. Validation of a case definition to define chronic dialysis using outpatient administrative data.

    Science.gov (United States)

    Clement, Fiona M; James, Matthew T; Chin, Rick; Klarenbach, Scott W; Manns, Braden J; Quinn, Robert R; Ravani, Pietro; Tonelli, Marcello; Hemmelgarn, Brenda R

    2011-03-01

    Administrative health care databases offer an efficient and accessible, though as-yet unvalidated, approach to studying outcomes of patients with chronic kidney disease and end-stage renal disease (ESRD). The objective of this study is to determine the validity of outpatient physician billing derived algorithms for defining chronic dialysis compared to a reference standard ESRD registry. A cohort of incident dialysis patients (Jan. 1-Dec. 31, 2008) and prevalent chronic dialysis patients (Jan 1, 2008) was selected from a geographically inclusive ESRD registry and administrative database. Four administrative data definitions were considered: at least 1 outpatient claim, at least 2 outpatient claims, at least 2 outpatient claims at least 90 days apart, and continuous outpatient claims at least 90 days apart with no gap in claims greater than 21 days. Measures of agreement of the four administrative data definitions were compared to a reference standard (ESRD registry). Basic patient characteristics are compared between all 5 patient groups. 1,118,097 individuals formed the overall population and 2,227 chronic dialysis patients were included in the ESRD registry. The three definitions requiring at least 2 outpatient claims resulted in kappa statistics between 0.60-0.80 indicating "substantial" agreement. "At least 1 outpatient claim" resulted in "excellent" agreement with a kappa statistic of 0.81. Of the four definitions, the simplest (at least 1 outpatient claim) performed comparably to the other definitions. A limitation of this work is that the billing codes used were developed in Canada; however, other countries use similar billing practices and thus the codes could easily be mapped to other systems. Our reference standard ESRD registry may not capture all dialysis patients, resulting in some misclassification. The registry is linked to on-going care so this is likely to be minimal. The definition utilized will vary with the research objective.
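
    The kappa statistics quoted above correct the raw agreement between a claims-based definition and the registry for agreement expected by chance. A short sketch of Cohen's kappa for a 2×2 agreement table; the counts are invented for illustration, not the study's results:

    ```python
    def cohens_kappa(both_yes, def_only, reg_only, both_no):
        """Cohen's kappa for agreement between two binary classifications."""
        n = both_yes + def_only + reg_only + both_no
        observed = (both_yes + both_no) / n
        expected = (
            ((both_yes + def_only) / n) * ((both_yes + reg_only) / n)
            + ((reg_only + both_no) / n) * ((def_only + both_no) / n)
        )
        return (observed - expected) / (1 - expected)

    # Hypothetical counts for a rare condition in a large population
    print(round(cohens_kappa(both_yes=1900, def_only=500, reg_only=327, both_no=997273), 2))  # ~0.82
    ```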

  15. Validation of a case definition to define chronic dialysis using outpatient administrative data

    Directory of Open Access Journals (Sweden)

    Klarenbach Scott W

    2011-03-01

    Background: Administrative health care databases offer an efficient and accessible, though as-yet unvalidated, approach to studying outcomes of patients with chronic kidney disease and end-stage renal disease (ESRD). The objective of this study is to determine the validity of outpatient physician billing derived algorithms for defining chronic dialysis compared to a reference standard ESRD registry. Methods: A cohort of incident dialysis patients (Jan. 1 - Dec. 31, 2008) and prevalent chronic dialysis patients (Jan 1, 2008) was selected from a geographically inclusive ESRD registry and administrative database. Four administrative data definitions were considered: at least 1 outpatient claim, at least 2 outpatient claims, at least 2 outpatient claims at least 90 days apart, and continuous outpatient claims at least 90 days apart with no gap in claims greater than 21 days. Measures of agreement of the four administrative data definitions were compared to a reference standard (ESRD registry). Basic patient characteristics are compared between all 5 patient groups. Results: 1,118,097 individuals formed the overall population and 2,227 chronic dialysis patients were included in the ESRD registry. The three definitions requiring at least 2 outpatient claims resulted in kappa statistics between 0.60-0.80 indicating "substantial" agreement. "At least 1 outpatient claim" resulted in "excellent" agreement with a kappa statistic of 0.81. Conclusions: Of the four definitions, the simplest (at least 1 outpatient claim) performed comparably to the other definitions. A limitation of this work is that the billing codes used were developed in Canada; however, other countries use similar billing practices and thus the codes could easily be mapped to other systems. Our reference standard ESRD registry may not capture all dialysis patients, resulting in some misclassification. The registry is linked to on-going care so this is likely to be minimal. The definition

  16. Osteoporosis-related fracture case definitions for population-based administrative data

    Directory of Open Access Journals (Sweden)

    Lix Lisa M

    2012-05-01

    Background: Population-based administrative data have been used to study osteoporosis-related fracture risk factors and outcomes, but there has been limited research about the validity of these data for ascertaining fracture cases. The objectives of this study were to: (a) compare fracture incidence estimates from administrative data with estimates from population-based clinically-validated data, and (b) test for differences in incidence estimates from multiple administrative data case definitions. Methods: Thirty-five case definitions for incident fractures of the hip, wrist, humerus, and clinical vertebrae were constructed using diagnosis codes in hospital data and diagnosis and service codes in physician billing data from Manitoba, Canada. Clinically-validated fractures were identified from the Canadian Multicentre Osteoporosis Study (CaMos). Generalized linear models were used to test for differences in incidence estimates. Results: For hip fracture, sex-specific differences were observed in the magnitude of under- and over-ascertainment of administrative data case definitions when compared with CaMos data. The length of the fracture-free period to ascertain incident cases had a variable effect on over-ascertainment across fracture sites, as did the use of imaging, fixation, or repair service codes. Case definitions based on hospital data resulted in under-ascertainment of incident clinical vertebral fractures. There were no significant differences in trend estimates for wrist, humerus, and clinical vertebral case definitions. Conclusions: The validity of administrative data for estimating fracture incidence depends on the site and features of the case definition.

  17. Development and Validation of Case-Finding Algorithms for the Identification of Patients with ANCA-Associated Vasculitis in Large Healthcare Administrative Databases

    Science.gov (United States)

    Sreih, Antoine G.; Annapureddy, Narender; Springer, Jason; Casey, George; Byram, Kevin; Cruz, Andy; Estephan, Maya; Frangiosa, Vince; George, Michael D.; Liu, Mei; Parker, Adam; Sangani, Sapna; Sharim, Rebecca; Merkel, Peter A.

    2016-01-01

    Purpose To develop and validate case-finding algorithms for granulomatosis with polyangiitis (Wegener’s, GPA), microscopic polyangiitis (MPA), and eosinophilic granulomatosis with polyangiitis (Churg-Strauss, EGPA). Methods 250 patients per disease were randomly selected from 2 large healthcare systems using the International Classification of Diseases version 9 (ICD9) codes for GPA/EGPA (446.4) and MPA (446.0). 16 case-finding algorithms were constructed using a combination of ICD9 code, encounter type (inpatient or outpatient), physician specialty, use of immunosuppressive medications, and the anti-neutrophil cytoplasmic antibody (ANCA) type. Algorithms with the highest average positive predictive value (PPV) were validated in a third healthcare system. Results An algorithm excluding patients with eosinophilia or asthma and including the encounter type and physician specialty had the highest PPV for GPA (92.4%). An algorithm including patients with eosinophilia and asthma and the physician specialty had the highest PPV for EGPA (100%). An algorithm including patients with one of the following diagnoses: alveolar hemorrhage, interstitial lung disease, glomerulonephritis, acute or chronic kidney disease, the encounter type, physician specialty, and immunosuppressive medications had the highest PPV for MPA (76.2%). When validated in a third healthcare system, these algorithms had high PPV (85.9% for GPA, 85.7% for EGPA, and 61.5% for MPA). Adding the ANCA type increased the PPV to 94.4%, 100%, and 81.2% for GPA, EGPA, and MPA respectively. Conclusion Case-finding algorithms accurately identify patients with GPA, EGPA, and MPA in administrative databases. These algorithms can be used to assemble population-based cohorts and facilitate future research in epidemiology, drug safety, and comparative effectiveness. PMID:27804171
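
    Algorithms of this kind are essentially conjunctions of coded criteria. The sketch below shows the general shape of such a rule (ICD-9 code plus physician specialty, immunosuppressive use, and ANCA type); the specific combination and field names are illustrative, not one of the 16 published algorithms.

    ```python
    def probable_gpa(patient):
        """Toy multi-criteria case-finding rule in the spirit of those described."""
        return (
            "446.4" in patient["icd9_codes"]
            and patient["specialty"] in {"rheumatology", "nephrology", "pulmonology"}
            and patient["on_immunosuppressive"]
            and patient.get("anca_type") == "PR3"
            and not patient.get("eosinophilia_or_asthma", False)
        )

    print(probable_gpa({
        "icd9_codes": {"446.4"},
        "specialty": "rheumatology",
        "on_immunosuppressive": True,
        "anca_type": "PR3",
    }))  # True
    ```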

  18. Fatigue after stroke: the development and evaluation of a case definition.

    Science.gov (United States)

    Lynch, Joanna; Mead, Gillian; Greig, Carolyn; Young, Archie; Lewis, Susan; Sharpe, Michael

    2007-11-01

    While fatigue after stroke is a common problem, it has no generally accepted definition. Our aim was to develop a case definition for post-stroke fatigue and to test its psychometric properties. A case definition with face validity and an associated structured interview was constructed. After initial piloting, the feasibility, reliability (test-retest and inter-rater) and concurrent validity (in relation to four fatigue severity scales) were determined in 55 patients with stroke. All participating patients provided satisfactory answers to all the case definition probe questions, demonstrating its feasibility. For test-retest reliability, kappa was 0.78 (95% CI 0.57-0.94). Patients meeting the case definition also had substantially higher fatigue scores on four fatigue severity scales, supporting concurrent validity. The proposed case definition is feasible to administer and reliable in practice, and there is evidence of concurrent validity. It requires further evaluation in different settings.

  19. Application and validation of case-finding algorithms for identifying individuals with human immunodeficiency virus from administrative data in British Columbia, Canada.

    Directory of Open Access Journals (Sweden)

    Bohdan Nosyk

    To define a population-level cohort of individuals infected with the human immunodeficiency virus (HIV) in the province of British Columbia from available registries and administrative datasets using a validated case-finding algorithm. Individuals were identified for possible cohort inclusion from the BC Centre for Excellence in HIV/AIDS (CfE) drug treatment program (antiretroviral therapy) and laboratory testing datasets (plasma viral load (pVL) and CD4 diagnostic test results), the BC Centre for Disease Control (CDC) provincial HIV surveillance database (positive HIV tests), as well as databases held by the BC Ministry of Health (MoH): the Discharge Abstract Database (hospitalizations), the Medical Services Plan (physician billing) and PharmaNet databases (additional HIV-related medications). A validated case-finding algorithm was applied to distinguish true HIV cases from those likely to have been misclassified. The sensitivity of the algorithms was assessed as the proportion of confirmed cases (those with records in the CfE, CDC and MoH databases) positively identified by each algorithm. A priori hypotheses were generated and tested to verify excluded cases. A total of 25,673 individuals were identified as having at least one HIV-related health record. Among 9,454 unconfirmed cases, the selected case-finding algorithm identified 849 individuals believed to be HIV-positive. The sensitivity of this algorithm among confirmed cases was 88%. Those excluded from the cohort were more likely to be female (44.4% vs. 22.5%; p<0.01), had a lower mortality rate (2.18 per 100 person years (100PY) vs. 3.14/100PY; p<0.01), and had lower median rates of health service utilization (days of medications dispensed: 9745/100PY vs. 10266/100PY; p<0.01; days of inpatient care: 29/100PY vs. 98/100PY; p<0.01; physician billings: 602/100PY vs. 2,056/100PY; p<0.01). The application of validated case-finding algorithms and subsequent hypothesis testing provided a strong framework for

  20. Does expert opinion match the operational definition of the Lupus Low Disease Activity State (LLDAS)? A case-based construct validity study.

    Science.gov (United States)

    Golder, Vera; Huq, Molla; Franklyn, Kate; Calderone, Alicia; Lateef, Aisha; Lau, Chak Sing; Lee, Alfred Lok Hang; Navarra, Sandra Teresa V; Godfrey, Timothy; Oon, Shereen; Hoi, Alberta Yik Bun; Morand, Eric Francis; Nikpour, Mandana

    2017-06-01

    To evaluate the construct validity of the Lupus Low Disease Activity State (LLDAS), a treatment target in systemic lupus erythematosus (SLE). Fifty SLE case summaries based on real patients were prepared and assessed independently for meeting the operational definition of LLDAS. Fifty international rheumatologists with expertise in SLE, but with no prior involvement in the LLDAS project, responded to a survey in which they were asked to categorize the disease activity state of each case as remission, low, moderate, or high. Agreement between expert opinion and LLDAS was assessed using Cohen's kappa. Overall agreement between expert opinion and the operational definition of LLDAS was 77.96% (95% CI: 76.34-79.58%), with a Cohen's kappa of 0.57 (95% CI: 0.55-0.61). Of the cases (22 of 50) that fulfilled the operational definition of LLDAS, only 5.34% (59 of 22 × 50) of responses classified the cases as moderate/high activity. Of the cases that did not fulfill the operational definition of LLDAS (28 of 50), 35.14% (492 of 28 × 50) of responses classified the cases as remission/low activity. Common reasons for discordance were assignment to remission/low activity of cases with higher corticosteroid doses than defined in LLDAS (prednisolone ≤7.5 mg) or with SLEDAI-2K >4 due to serological activity (high anti-dsDNA antibody and/or low complement). LLDAS has good construct validity with high overall agreement between the operational definition of LLDAS and expert opinion. Discordance of results suggests that the operational definition of LLDAS is more stringent than expert opinion at defining a low disease activity state. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Validation of Case Finding Algorithms for Hepatocellular Cancer From Administrative Data and Electronic Health Records Using Natural Language Processing.

    Science.gov (United States)

    Sada, Yvonne; Hou, Jason; Richardson, Peter; El-Serag, Hashem; Davila, Jessica

    2016-02-01

    Accurate identification of hepatocellular cancer (HCC) cases from automated data is needed for efficient and valid quality improvement initiatives and research. We validated HCC International Classification of Diseases, 9th Revision (ICD-9) codes, and evaluated whether natural language processing by the Automated Retrieval Console (ARC) for document classification improves HCC identification. We identified a cohort of patients with ICD-9 codes for HCC during 2005-2010 from Veterans Affairs administrative data. Pathology and radiology reports were reviewed to confirm HCC. The positive predictive value (PPV), sensitivity, and specificity of ICD-9 codes were calculated. A split validation study of pathology and radiology reports was performed to develop and validate ARC algorithms. Reports were manually classified as diagnostic of HCC or not. ARC generated document classification algorithms using the Clinical Text Analysis and Knowledge Extraction System. ARC performance was compared with manual classification. PPV, sensitivity, and specificity of ARC were calculated. A total of 1138 patients with HCC were identified by ICD-9 codes. On the basis of manual review, 773 had HCC. The HCC ICD-9 code algorithm had a PPV of 0.67, sensitivity of 0.95, and specificity of 0.93. For a random subset of 619 patients, we identified 471 pathology reports for 323 patients and 943 radiology reports for 557 patients. The pathology ARC algorithm had PPV of 0.96, sensitivity of 0.96, and specificity of 0.97. The radiology ARC algorithm had PPV of 0.75, sensitivity of 0.94, and specificity of 0.68. A combined approach of ICD-9 codes and natural language processing of pathology and radiology reports improves HCC case identification in automated data.

  2. Systematic review of validated case definitions for diabetes in ICD-9-coded and ICD-10-coded data in adult populations.

    Science.gov (United States)

    Khokhar, Bushra; Jette, Nathalie; Metcalfe, Amy; Cunningham, Ceara Tess; Quan, Hude; Kaplan, Gilaad G; Butalia, Sonia; Rabi, Doreen

    2016-08-05

    With steady increases in 'big data' and data analytics over the past two decades, administrative health databases have become more accessible and are now used regularly for diabetes surveillance. The objective of this study is to systematically review validated International Classification of Diseases (ICD)-based case definitions for diabetes in the adult population. Electronic databases, MEDLINE and Embase, were searched for validation studies where an administrative case definition (using ICD codes) for diabetes in adults was validated against a reference and statistical measures of the performance reported. The search yielded 2895 abstracts, and of the 193 potentially relevant studies, 16 met criteria. Diabetes definition for adults varied by data source, including physician claims (sensitivity ranged from 26.9% to 97%, specificity ranged from 94.3% to 99.4%, positive predictive value (PPV) ranged from 71.4% to 96.2%, negative predictive value (NPV) ranged from 95% to 99.6% and κ ranged from 0.8 to 0.9), hospital discharge data (sensitivity ranged from 59.1% to 92.6%, specificity ranged from 95.5% to 99%, PPV ranged from 62.5% to 96%, NPV ranged from 90.8% to 99% and κ ranged from 0.6 to 0.9) and a combination of both (sensitivity ranged from 57% to 95.6%, specificity ranged from 88% to 98.5%, PPV ranged from 54% to 80%, NPV ranged from 98% to 99.6% and κ ranged from 0.7 to 0.8). Overall, administrative health databases are useful for undertaking diabetes surveillance, but an awareness of the variation in performance being affected by case definition is essential. The performance characteristics of these case definitions depend on the variations in the definition of primary diagnosis in ICD-coded discharge data and/or the methodology adopted by the healthcare facility to extract information from patient records. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  3. Evaluation of surveillance case definition in the diagnosis of leptospirosis, using the Microscopic Agglutination Test: a validation study.

    Science.gov (United States)

    Dassanayake, Dinesh L B; Wimalaratna, Harith; Agampodi, Suneth B; Liyanapathirana, Veranja C; Piyarathna, Thibbotumunuwe A C L; Goonapienuwala, Bimba L

    2009-04-22

    Leptospirosis is endemic in both urban and rural areas of Sri Lanka and there have been many outbreaks in the recent past. This study was aimed at validating the leptospirosis surveillance case definition, using the Microscopic Agglutination Test (MAT). The study population consisted of patients with undiagnosed acute febrile illness who were admitted to the medical wards of the Teaching Hospital Kandy, from 1st July 2007 to 31st July 2008. The subjects were screened to diagnose leptospirosis according to the leptospirosis case definition. MAT was performed on blood samples taken from each patient on the 7th day of fever. The leptospirosis case definition was evaluated in regard to sensitivity, specificity and predictive values, using a MAT titre ≥1:800 for confirming leptospirosis. A total of 123 patients were initially recruited, of which 73 had clinical features compatible with the surveillance case definition. Out of the 73, only 57 had a positive MAT result (true positives), leaving 16 as false positives. Out of the 50 who did not have clinical features compatible with the case definition, 45 had a negative MAT as well (true negatives); therefore 5 were false negatives. The total number of MAT positives was 62 out of 123. According to these results the test sensitivity was 91.94%, specificity 73.77%, and the positive and negative predictive values were 78.08% and 90% respectively. Diagnostic accuracy of the test was 82.93%. This study confirms that the surveillance case definition has a very high sensitivity and negative predictive value with an average specificity in diagnosing leptospirosis, based on a MAT titre of ≥1:800.

  4. Convergent validity test, construct validity test and external validity test of the David Liberman algorithm

    Directory of Open Access Journals (Sweden)

    David Maldavsky

    2013-08-01

    The author first presents a complement to a previous test of convergent validity, then a construct validity test and finally an external validity test of the David Liberman algorithm (DLA). The first part of the paper focuses on a complementary aspect, the differential sensitivity of the DLA (1) in an external comparison (to other methods), and (2) in an internal comparison (between two ways of using the same method, the DLA). The construct validity test presents the concepts underlying the DLA, their operationalization and some corrections emerging from several empirical studies we carried out. The external validity test examines the possibility of using the investigation of a single case and its relation to the investigation of a more extended sample.

  5. Clinical case definition for the diagnosis of acute intussusception.

    Science.gov (United States)

    Bines, Julie E; Ivanoff, Bernard; Justice, Frances; Mulholland, Kim

    2004-11-01

    Because of the reported association between intussusception and a rotavirus vaccine, future clinical trials of rotavirus vaccines will need to include intussusception surveillance in the evaluation of vaccine safety. The aim of this study is to develop and validate a clinical case definition for the diagnosis of acute intussusception. A clinical case definition for the diagnosis of acute intussusception was developed by analysis of an extensive literature review that defined the clinical presentation of intussusception in 70 developed and developing countries. The clinical case definition was then assessed for sensitivity and specificity using a retrospective chart review of hospital admissions. Sensitivity of the clinical case definition was assessed in children diagnosed with intussusception over a 6.5-year period. Specificity was assessed in patients who did not have intussusception. The clinical case definition accurately identified 185 of 191 assessable cases as "probable" intussusception and six cases as "possible" intussusception (sensitivity, 97%). No case of radiologic or surgically proven intussusception failed to be identified by the clinical case definition. The specificity of the definition in correctly identifying patients who did not have intussusception ranged from 87% to 91%. The clinical case definition for intussusception may assist in the prompt identification of patients with intussusception and may provide an important tool for the future trials of enteric vaccines.

  6. Validity of calendar day-based definitions for community-onset bloodstream infections.

    Science.gov (United States)

    Laupland, Kevin B; Gregson, Daniel B; Church, Deirdre L

    2015-04-02

    Community-onset (CO) bloodstream infections (BSI) are those BSI where the blood culture is drawn within 48 hours of hospital admission, but exact times of culture draw or hospital admission are not always available. We evaluated the validity of using 2- or 3-calendar-day based definitions for CO-BSI by comparing them to a "gold standard" 48-hour definition. Among the population-based cohort of 14,106 episodes of BSI studied, 10,543 were classified as CO based on the "gold standard" 48-hour criteria. When 2-day and 3-day definitions were applied, 10,396 and 10,707 CO-BSI episodes were ascertained, respectively. All but 147 (1.4%) true CO-BSI cases were included by using the 2-day definition. When the 3-day definition was applied, all cases of CO-BSI were identified but an additional 164 (1.5%) cases of hospital-onset BSI (HO-BSI) were also included. Thus the sensitivity and specificity of the 2-day definition were 98.6% and 100%, and for the 3-day definition were 100% and 98.5%, respectively. Overall, only 311 (2.2%) cases were potentially misclassifiable using either the 2- or 3-calendar-day based definitions. Use of either a 2- or 3-day definition is highly accurate for classifying CO-BSI.
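
    The difference between the 48-hour rule and calendar-day rules is easiest to see on a concrete admission. A small sketch with invented timestamps, counting the admission date as calendar day 1:

    ```python
    from datetime import datetime

    def community_onset_48h(admit, culture):
        """Gold standard: culture drawn within 48 hours of admission."""
        return (culture - admit).total_seconds() <= 48 * 3600

    def community_onset_calendar(admit_date, culture_date, max_days=2):
        """Calendar-day rule: culture drawn on or before calendar day max_days."""
        return (culture_date - admit_date).days + 1 <= max_days

    admit = datetime(2015, 4, 1, 23, 30)
    culture = datetime(2015, 4, 3, 10, 0)   # about 34.5 hours after admission
    print(community_onset_48h(admit, culture))                                  # True
    print(community_onset_calendar(admit.date(), culture.date(), max_days=2))   # False
    print(community_onset_calendar(admit.date(), culture.date(), max_days=3))   # True
    ```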

  7. Analysis of risk factors for schizophrenia with two different case definitions

    DEFF Research Database (Denmark)

    Sørensen, Holger J; Tidselbak Larsen, Janne; Mors, Ole

    2015-01-01

    Different case definitions of schizophrenia have been used in register-based research. However, no previous study has externally validated two different case definitions of schizophrenia against a wide range of risk factors for schizophrenia. We investigated hazard ratios (HRs) for a wide range of risk factors for ICD-10 DCR schizophrenia using a nationwide Danish sample of 2,772,144 residents born in 1955-1997. We compared one contact only (OCO) (the case definition of schizophrenia used in Danish register-based studies) with two or more contacts (TMC) (a case definition of at least 2 inpatient contacts with schizophrenia). During the follow-up, the OCO definition included 15,074 and the TMC 7562 cases; i.e. half as many. The TMC case definition appeared to select for a worse illness course. A wide range of risk factors were uniformly associated with both case definitions and only slightly higher...

  8. HIV lipodystrophy case definition using artificial neural network modelling

    DEFF Research Database (Denmark)

    Ioannidis, John P A; Trikalinos, Thomas A; Law, Matthew

    2003-01-01

    OBJECTIVE: A case definition of HIV lipodystrophy has recently been developed from a combination of clinical, metabolic and imaging/body composition variables using logistic regression methods. We aimed to evaluate whether artificial neural networks could improve the diagnostic accuracy. METHODS: The database of the case-control Lipodystrophy Case Definition Study was split into 504 subjects (265 with and 239 without lipodystrophy) used for training and 284 independent subjects (152 with and 132 without lipodystrophy) used for validation. Back-propagation neural networks with one or two middle layers were trained and validated. Results were compared against logistic regression models using the same information. RESULTS: Neural networks using clinical variables only (41 items) achieved consistently superior performance than logistic regression in terms of specificity, overall accuracy and area under...

  9. Validation: an overview of definitions

    International Nuclear Information System (INIS)

    Pescatore, C.

    1995-01-01

    The term validation is featured prominently in the literature on radioactive high-level waste disposal and is generally understood to be related to model testing using experiments. In a first class of definitions, validation is linked to the goal of predicting the physical world as faithfully as possible; this goal is unattainable and unsuitable for setting goals for the safety analyses. In a second class, validation is associated with split-sampling or blind-test predictions. In a third class of definitions, validation focuses on the quality of the decision-making process. Most prominent in the present review is the observed lack of use of the term validation in the field of low-level radioactive waste disposal. The continued informal use of the term validation in the field of high-level waste disposal can become a cause of misperceptions and endless speculation. The paper proposes either abandoning the use of this term or agreeing to a definition which would be common to all. (J.S.). 29 refs

  10. Validation of asthma recording in electronic health records: a systematic review

    Directory of Open Access Journals (Sweden)

    Nissen F

    2017-12-01

    Objective: To describe the methods used to validate asthma diagnoses in electronic health records and summarize the results of the validation studies. Background: Electronic health records are increasingly being used for research on asthma to inform health services and health policy. Validation of the recording of asthma diagnoses in electronic health records is essential to use these databases for credible epidemiological asthma research. Methods: We searched EMBASE and MEDLINE databases for studies that validated asthma diagnoses detected in electronic health records up to October 2016. Two reviewers independently assessed the full text against the predetermined inclusion criteria. Key data including author, year, data source, case definitions, reference standard, and validation statistics (including sensitivity, specificity, positive predictive value [PPV], and negative predictive value [NPV]) were summarized in two tables. Results: Thirteen studies met the inclusion criteria. Most studies demonstrated a high validity using at least one case definition (PPV >80%). Ten studies used a manual validation as the reference standard; each had at least one case definition with a PPV of at least 63%, up to 100%. We also found two studies using a second independent database to validate asthma diagnoses. The PPVs of the best performing case definitions ranged from 46% to 58%. We found one study which used a questionnaire as the reference standard to validate a database case definition; the PPV of the case definition algorithm in this study was 89%. Conclusion: Attaining high PPVs (>80%) is possible using each of the discussed validation

  11. Validation of a published case definition for tuberculosis-associated immune reconstitution inflammatory syndrome.

    Science.gov (United States)

    Haddow, Lewis J; Moosa, Mahomed-Yunus S; Easterbrook, Philippa J

    2010-01-02

    To evaluate the International Network for the Study of HIV-associated IRIS (INSHI) case definitions for tuberculosis (TB)-associated immune reconstitution inflammatory syndrome (IRIS) in a South African cohort. Prospective cohort of 498 adult HIV-infected patients initiating antiretroviral therapy. Patients were followed up for 24 weeks and all clinical events were recorded. Events with TB-IRIS as possible cause were assessed by consensus expert opinion and INSHI case definition. Positive, negative, and chance-corrected agreement (kappa) were calculated, and reasons for disagreement were assessed. One hundred and two (20%) patients were receiving TB therapy at antiretroviral therapy initiation. Three hundred and thirty-three events were evaluated (74 potential paradoxical IRIS, 259 potential unmasking IRIS). Based on expert opinion, there were 18 cases of paradoxical IRIS associated with TB and/or other opportunistic disease. The INSHI criteria for TB-IRIS agreed in 13 paradoxical cases, giving positive agreement of 72.2%, negative agreement in 52/56 non-TB-IRIS events (92.9%), and kappa of 0.66. There were 19 unmasking TB-IRIS cases based on expert opinion, of which 12 were considered IRIS using the INSHI definition (positive agreement 63.2%). There was agreement in all 240 non-TB-IRIS events (negative agreement 100%) and kappa was 0.76. There was good agreement between the INSHI case definition for both paradoxical and unmasking TB-IRIS and consensus expert opinion. These results support the use of this definition in clinical and research practice, with minor caveats in its application.

  12. Evaluation of a surveillance case definition for anogenital warts, Kaiser Permanente northwest.

    Science.gov (United States)

    Naleway, Allison L; Weinmann, Sheila; Crane, Brad; Gee, Julianne; Markowitz, Lauri E; Dunne, Eileen F

    2014-08-01

    Most studies of anogenital wart (AGW) epidemiology have used large clinical or administrative databases and unconfirmed case definitions based on combinations of diagnosis and procedure codes. We developed and validated an AGW case definition using a combination of diagnosis codes and other information available in the electronic medical record (provider type, laboratory testing). We calculated the positive predictive value (PPV) of this case definition compared with manual medical record review in a random sample of 250 cases. Using this case definition, we calculated the annual age- and sex-stratified prevalence of AGW among individuals 11 through 30 years of age from 2000 through 2005. We identified 2730 individuals who met the case definition. The PPV of the case definition was 82%, and the average annual prevalence was 4.16 per 1000. Prevalence of AGW was higher in females compared with males in every age group, with the exception of the 27- to 30-year-olds. Among females, prevalence peaked in the 19- to 22-year-olds, and among males, the peak was observed in 23- to 26-year-olds. The case definition developed in this study is the first to be validated with medical record review and has a good PPV for the detection of AGW. The prevalence rates observed in this study were higher than other published rates, but the age- and sex-specific patterns observed were consistent with previous reports.

  13. Validation of ICD-9-CM coding algorithm for improved identification of hypoglycemia visits

    Directory of Open Access Journals (Sweden)

    Lieberman Rebecca M

    2008-04-01

    Full Text Available Abstract Background Accurate identification of hypoglycemia cases by International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes will help to describe epidemiology, monitor trends, and propose interventions for this important complication in patients with diabetes. Prior hypoglycemia studies utilized incomplete search strategies and may be methodologically flawed. We sought to validate a new ICD-9-CM coding algorithm for accurate identification of hypoglycemia visits. Methods This was a multicenter, retrospective cohort study using a structured medical record review at three academic emergency departments from July 1, 2005 to June 30, 2006. We prospectively derived a coding algorithm to identify hypoglycemia visits using ICD-9-CM codes (250.3, 250.8, 251.0, 251.1, 251.2, 270.3, 775.0, 775.6, and 962.3). We confirmed hypoglycemia cases by chart review of visits identified by the candidate ICD-9-CM codes during the study period. The case definition for hypoglycemia was a documented blood glucose <3.9 mmol/l or an emergency physician charted diagnosis of hypoglycemia. We evaluated individual components and calculated the positive predictive value. Results We reviewed 636 charts identified by the candidate ICD-9-CM codes and confirmed 436 (64%) cases of hypoglycemia by chart review. Diabetes with other specified manifestations (250.8), often excluded in prior hypoglycemia analyses, identified 83% of hypoglycemia visits, and unspecified hypoglycemia (251.2) identified 13% of hypoglycemia visits. The absence of any predetermined co-diagnosis codes improved the positive predictive value of code 250.8 from 62% to 92%, while excluding only 10 (2%) true hypoglycemia visits. Although prior analyses included only the first-listed ICD-9 code, more than one-quarter of identified hypoglycemia visits were outside this primary diagnosis field. Overall, the proposed algorithm had 89% positive predictive value (95% confidence interval, 86–92%) for identifying hypoglycemia visits.
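    A sketch of the visit-screening logic the abstract describes is given below. The candidate codes are those listed above, while the co-diagnosis exclusion list applied to code 250.8 is not given in the abstract, so the codes used for it here are purely hypothetical placeholders.

```python
# Sketch of the screening logic described above. CANDIDATE_CODES are the
# ICD-9-CM codes listed in the abstract; CODIAGNOSIS_EXCLUSIONS is a placeholder
# for the study's predetermined co-diagnosis codes (not listed in the abstract).
CANDIDATE_CODES = {"250.3", "250.8", "251.0", "251.1", "251.2",
                   "270.3", "775.0", "775.6", "962.3"}
CODIAGNOSIS_EXCLUSIONS = {"276.1", "584.9"}  # hypothetical examples only

def flag_hypoglycemia_visit(all_diagnosis_codes):
    """Return True if a visit should be flagged as a potential hypoglycemia case.

    `all_diagnosis_codes` is the full list of ICD-9-CM codes on the visit, not
    just the first-listed diagnosis (the abstract notes that more than a quarter
    of true cases appeared outside the primary field)."""
    codes = set(all_diagnosis_codes)
    if codes & (CANDIDATE_CODES - {"250.8"}):
        return True
    # 250.8 counts only when none of the predetermined co-diagnosis codes is
    # present, the refinement that raised its PPV from 62% to 92% in the study.
    if "250.8" in codes and not (codes & CODIAGNOSIS_EXCLUSIONS):
        return True
    return False

print(flag_hypoglycemia_visit(["250.8", "401.9"]))   # True under this sketch
print(flag_hypoglycemia_visit(["250.8", "276.1"]))   # False: excluded co-diagnosis
```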

  14. Validating Machine Learning Algorithms for Twitter Data Against Established Measures of Suicidality.

    Science.gov (United States)

    Braithwaite, Scott R; Giraud-Carrier, Christophe; West, Josh; Barnes, Michael D; Hanson, Carl Lee

    2016-05-16

    One of the leading causes of death in the United States (US) is suicide, and new methods of assessment are needed to track its risk in real time. Our objective is to validate the use of machine learning algorithms for Twitter data against empirically validated measures of suicidality in the US population. Using a machine learning algorithm, the Twitter feeds of 135 Mechanical Turk (MTurk) participants were compared with validated, self-report measures of suicide risk. Our findings show that people who are at high suicidal risk can be easily differentiated from those who are not by machine learning algorithms, which correctly classified clinically significant suicide risk in 92% of cases (sensitivity: 53%, specificity: 97%, positive predictive value: 75%, negative predictive value: 93%). Machine learning algorithms are efficient in differentiating people who are at suicidal risk from those who are not. Evidence for suicidality can be measured in nonclinical populations using social media data.

  15. Brief Report: Validation of a Definition of Flare in Patients With Established Gout.

    Science.gov (United States)

    Gaffo, Angelo L; Dalbeth, Nicola; Saag, Kenneth G; Singh, Jasvinder A; Rahn, Elizabeth J; Mudano, Amy S; Chen, Yi-Hsing; Lin, Ching-Tsai; Bourke, Sandra; Louthrenoo, Worawit; Vazquez-Mellado, Janitzia; Hernández-Llinas, Hansel; Neogi, Tuhina; Vargas-Santos, Ana Beatriz; da Rocha Castelar-Pinheiro, Geraldo; Amorim, Rodrigo B C; Uhlig, Till; Hammer, Hilde B; Eliseev, Maxim; Perez-Ruiz, Fernando; Cavagna, Lorenzo; McCarthy, Geraldine M; Stamp, Lisa K; Gerritsen, Martijn; Fana, Viktoria; Sivera, Francisca; Taylor, William

    2018-03-01

    To perform external validation of a provisional definition of disease flare in patients with gout. Five hundred nine patients with gout were enrolled in a cross-sectional study during a routine clinical care visit at 17 international sites. Data were collected to classify patients as experiencing or not experiencing a gout flare, according to a provisional definition. A local expert rheumatologist performed the final independent adjudication of gout flare status. Sensitivity, specificity, predictive values, and receiver operating characteristic (ROC) curves were used to determine the diagnostic performance of gout flare definitions. The mean ± SD age of the patients was 57.5 ± 13.9 years, and 89% were male. The definition requiring fulfillment of at least 3 of 4 criteria (patient-defined gout flare, pain at rest score of >3 on a 0-10-point numerical rating scale, presence of at least 1 swollen joint, and presence of at least 1 warm joint) was 85% sensitive and 95% specific in confirming the presence of a gout flare, with an accuracy of 92%. The ROC area under the curve was 0.97. The definition based on a classification and regression tree algorithm (entry point, pain at rest score >3, followed by patient-defined flare "yes") was 73% sensitive and 96% specific. The definition of gout flare that requires fulfillment of at least 3 of 4 patient-reported criteria is now validated to be sensitive, specific, and accurate for gout flares, as demonstrated using an independent large international patient sample. The availability of a validated gout flare definition will improve the ascertainment of an important clinical outcome in studies of gout. © 2017, American College of Rheumatology.
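    Both validated rules are simple enough to express directly; a minimal sketch using only the criteria named in the abstract:

```python
# Sketch of the two validated gout flare rules described above: (1) at least
# 3 of 4 patient-reported criteria, and (2) the classification-and-regression-
# tree rule (pain at rest >3, then patient-defined flare = yes).
def flare_3_of_4(patient_defined_flare, pain_at_rest, swollen_joint, warm_joint):
    criteria = [bool(patient_defined_flare),
                pain_at_rest > 3,          # 0-10 numerical rating scale
                bool(swollen_joint),
                bool(warm_joint)]
    return sum(criteria) >= 3

def flare_cart(patient_defined_flare, pain_at_rest):
    # Entry point is pain at rest >3; a flare is then called when the patient
    # also reports a flare.
    return pain_at_rest > 3 and bool(patient_defined_flare)

print(flare_3_of_4(True, 6, True, False))   # True  (3 of 4 criteria met)
print(flare_cart(False, 6))                 # False (patient does not report a flare)
```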

  16. Performance of an electronic health record-based phenotype algorithm to identify community associated methicillin-resistant Staphylococcus aureus cases and controls for genetic association studies

    Directory of Open Access Journals (Sweden)

    Kathryn L. Jackson

    2016-11-01

    Full Text Available Abstract Background Community associated methicillin-resistant Staphylococcus aureus (CA-MRSA is one of the most common causes of skin and soft tissue infections in the United States, and a variety of genetic host factors are suspected to be risk factors for recurrent infection. Based on the CDC definition, we have developed and validated an electronic health record (EHR based CA-MRSA phenotype algorithm utilizing both structured and unstructured data. Methods The algorithm was validated at three eMERGE consortium sites, and positive predictive value, negative predictive value and sensitivity, were calculated. The algorithm was then run and data collected across seven total sites. The resulting data was used in GWAS analysis. Results Across seven sites, the CA-MRSA phenotype algorithm identified a total of 349 cases and 7761 controls among the genotyped European and African American biobank populations. PPV ranged from 68 to 100% for cases and 96 to 100% for controls; sensitivity ranged from 94 to 100% for cases and 75 to 100% for controls. Frequency of cases in the populations varied widely by site. There were no plausible GWAS-significant (p < 5 E −8 findings. Conclusions Differences in EHR data representation and screening patterns across sites may have affected identification of cases and controls and accounted for varying frequencies across sites. Future work identifying these patterns is necessary.

  17. Case definition for progressive multifocal leukoencephalopathy following treatment with monoclonal antibodies.

    Science.gov (United States)

    Mentzer, Dirk; Prestel, Jürgen; Adams, Ortwin; Gold, Ralf; Hartung, Hans-Peter; Hengel, Hartmut; Kieseier, Bernd C; Ludwig, Wolf-Dieter; Keller-Stanislawski, Brigitte

    2012-09-01

    Novel immunosuppressive/modulating therapies with monoclonal antibodies (MABs) have been associated with progressive multifocal leukoencephalopathy (PML), a potentially fatal disease of the brain caused by the JC virus. Taking the complex diagnostic testing and heterogeneous clinical presentation of PML into account, an agreed case definition for PML is a prerequisite for a thorough assessment of PML. A working group was established to develop a standardised case definition for PML which permits data comparability across clinical trials, postauthorisation safety studies and passive postmarketing surveillance. The case definition is designed to define levels of diagnostic certainty of reported PML cases following treatment with MABs. It was subsequently used to categorise retrospectively suspected PML cases from Germany reported to the Paul-Ehrlich-Institute as the responsible national competent authority. The algorithm of the case definition is based on clinical symptoms, PCR for JC virus DNA in cerebrospinal fluid, brain MRI, and brain biopsy/autopsy. The case definition was applied to 119 suspected cases of PML following treatment with MABs and is considered to be helpful for case ascertainment of suspected PML cases for various MABs covering a broad spectrum of indications. Even if the available information is not yet complete, the case definition provides a level of diagnostic certainty. The proposed case definition permits data comparability among different medicinal products and among active as well as passive surveillance settings. It may form a basis for meaningful risk analysis and communication for regulators and healthcare professionals.

  18. An algorithm of computing inhomogeneous differential equations for definite integrals

    OpenAIRE

    Nakayama, Hiromasa; Nishiyama, Kenta

    2010-01-01

    We give an algorithm to compute inhomogeneous differential equations for definite integrals with parameters. The algorithm is based on the integration algorithm for $D$-modules by Oaku. Main tool in the algorithm is the Gr\\"obner basis method in the ring of differential operators.

  19. Medical chart validation of an algorithm for identifying multiple sclerosis relapse in healthcare claims.

    Science.gov (United States)

    Chastek, Benjamin J; Oleen-Burkey, Merrikay; Lopez-Bresnahan, Maria V

    2010-01-01

    Relapse is a common measure of disease activity in relapsing-remitting multiple sclerosis (MS). The objective of this study was to test the content validity of an operational algorithm for detecting relapse in claims data. A claims-based relapse detection algorithm was tested by comparing its detection rate over a 1-year period with relapses identified based on medical chart review. According to the algorithm, MS patients in a US healthcare claims database who had either (1) a primary claim for MS during hospitalization or (2) a corticosteroid claim following an MS-related outpatient visit were designated as having a relapse. Patient charts were examined for explicit indication of relapse or care suggestive of relapse. Positive and negative predictive values were calculated. Medical charts were reviewed for 300 MS patients, half of whom had a relapse according to the algorithm. The claims-based criteria correctly classified 67.3% of patients with relapses (positive predictive value) and 70.0% of patients without relapses (negative predictive value; kappa 0.373, p < 0.001), supporting the validity of the operational algorithm. Limitations of the algorithm include lack of differentiation between relapsing-remitting MS and other types, and that it does not incorporate measures of function and disability. The claims-based algorithm appeared to successfully detect moderate-to-severe MS relapse. This validated definition can be applied to future claims-based MS studies.
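    A minimal sketch of the claims rule as described; the record field names and the corticosteroid look-forward window are illustrative assumptions, not details from the study:

```python
# Sketch of the claims-based relapse rule: flag a relapse if (1) a hospitalization
# carries a primary MS claim, or (2) a corticosteroid claim follows an MS-related
# outpatient visit. Field names and the 30-day window are assumptions.
from datetime import date

MS_DX = "340"  # ICD-9-CM code for multiple sclerosis (illustrative)

def has_relapse(claims, window_days=30):
    """`claims`: list of dicts with keys date, setting ('inpatient'/'outpatient'),
    primary_dx, and is_corticosteroid for pharmacy claims (all assumed fields)."""
    ms_outpatient_dates = [c["date"] for c in claims
                           if c.get("setting") == "outpatient"
                           and c.get("primary_dx") == MS_DX]
    for c in claims:
        # Criterion 1: hospitalization with a primary MS claim
        if c.get("setting") == "inpatient" and c.get("primary_dx") == MS_DX:
            return True
        # Criterion 2: corticosteroid claim following an MS-related outpatient visit
        if c.get("is_corticosteroid") and any(
                0 <= (c["date"] - d).days <= window_days for d in ms_outpatient_dates):
            return True
    return False

example = [{"date": date(2020, 3, 1), "setting": "outpatient", "primary_dx": MS_DX},
           {"date": date(2020, 3, 5), "is_corticosteroid": True}]
print(has_relapse(example))  # True under this sketch
```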

  20. Revised surveillance case definition for HIV infection--United States, 2014.

    Science.gov (United States)

    2014-04-11

    Following extensive consultation and peer review, CDC and the Council of State and Territorial Epidemiologists have revised and combined the surveillance case definitions for human immunodeficiency virus (HIV) infection into a single case definition for persons of all ages (i.e., adults and adolescents aged ≥13 years and children aged <13 years). The laboratory criteria for defining a confirmed case now accommodate new multitest algorithms, including criteria for differentiating between HIV-1 and HIV-2 infection and for recognizing early HIV infection. A confirmed case can be classified in one of five HIV infection stages (0, 1, 2, 3, or unknown); early infection, recognized by a negative HIV test within 6 months of HIV diagnosis, is classified as stage 0, and acquired immunodeficiency syndrome (AIDS) is classified as stage 3. Criteria for stage 3 have been simplified by eliminating the need to differentiate between definitive and presumptive diagnoses of opportunistic illnesses. Clinical (nonlaboratory) criteria for defining a case for surveillance purposes have been made more practical by eliminating the requirement for information about laboratory tests. The surveillance case definition is intended primarily for monitoring the HIV infection burden and planning for prevention and care on a population level, not as a basis for clinical decisions for individual patients. CDC and the Council of State and Territorial Epidemiologists recommend that all states and territories conduct case surveillance of HIV infection using this revised surveillance case definition.
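    A sketch of the staging logic follows. Stage 0 and stage 3 follow the description above; the CD4-based cutoffs separating stages 1-3 are taken from the published 2014 definition as commonly cited and should be verified against the source document before any real use.

```python
# Sketch of the staging logic described above. Stage 0 (early infection) and
# stage 3 (AIDS) follow the abstract; the CD4 cutoffs for stages 1-3 are an
# assumption drawn from the commonly cited 2014 CDC definition.
def hiv_stage(neg_test_within_6_months, aids_defining_illness, cd4_count=None):
    if neg_test_within_6_months:
        return 0                       # early infection
    if aids_defining_illness:
        return 3                       # AIDS
    if cd4_count is None:
        return "unknown"
    if cd4_count >= 500:               # assumed stage 1 cutoff
        return 1
    if cd4_count >= 200:               # assumed stage 2 range
        return 2
    return 3

print(hiv_stage(False, False, cd4_count=350))  # 2 under this sketch
```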

  1. Development of a validated clinical case definition of generalized tonic-clonic seizures for use by community-based health care providers.

    Science.gov (United States)

    Anand, Krishnan; Jain, Satish; Paul, Eldho; Srivastava, Achal; Sahariah, Sirazul A; Kapoor, Suresh K

    2005-05-01

    To develop and test a clinical case definition for identification of generalized tonic-clonic seizures (GTCSs) by community-based health care providers. To identify symptoms that can help identify GTCSs, patients with a history of jerky movements or rigidity in any part of the body ever in life were recruited from three sites: the community, secondary care hospital, and tertiary care hospital. These patients were administered a 14-item structured interview schedule focusing on the circumstances surrounding the seizure. Subsequently, a neurologist examined each patient and, based on available investigations, classified them as GTCS or non-GTCS cases. A logistic regression analysis was performed to select symptoms that were to be used for case definition of GTCSs. Validity parameters for the case definition at different cutoff points were calculated in another set of subjects. In total, 339 patients were enrolled in the first phase of the study. The tertiary care hospital contributed the maximal number of GTCS cases, whereas cases of non-GTCS were mainly from the community. At the end of phase I, the questionnaire was shortened from 14 to eight questions based on statistical association and clinical judgment. After phase II, which was conducted among 170 subjects, three variables were found to be significantly related to the presence of GTCSs by logistic regression: absence of stress (13.1; 4.1-41.3), presence of frothing (13.7; 4.0-47.3), and occurrence in sleep (8.3; 2.0-34.9). As a case definition using only three variables did not provide sufficient specificity, three more variables were added based on univariate analysis of the data (incontinence during the episode and unconsciousness) and review of literature (injury during episode). A case definition consisting of giving one point to an affirmative answer for each of the six questions was tested. At a cutoff point of four, sensitivity was 56.9 (47.4-66.0) and specificity, 96.3 (86.2-99.4). Among the 197 GTCS
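    The resulting six-item score is straightforward to operationalize; a minimal sketch:

```python
# Sketch of the six-question GTCS case definition described above: one point per
# affirmative answer, with a case called at a cutoff of four points.
QUESTIONS = ["absence of stress before the episode",
             "frothing during the episode",
             "occurrence during sleep",
             "incontinence during the episode",
             "unconsciousness during the episode",
             "injury during the episode"]

def gtcs_score(answers):
    """`answers` maps each question to True/False; returns (score, is_case)."""
    score = sum(bool(answers.get(q, False)) for q in QUESTIONS)
    return score, score >= 4   # cutoff of 4: sensitivity 56.9%, specificity 96.3%

example = {q: True for q in QUESTIONS[:4]}
print(gtcs_score(example))     # (4, True)
```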

  2. Validation of a knowledge-based boundary detection algorithm: a multicenter study

    International Nuclear Information System (INIS)

    Groch, M.W.; Erwin, W.D.; Murphy, P.H.; Ali, A.; Moore, W.; Ford, P.; Qian Jianzhong; Barnett, C.A.; Lette, J.

    1996-01-01

    A completely operator-independent boundary detection algorithm for multigated blood pool (MGBP) studies has been evaluated at four medical centers. The knowledge-based boundary detector (KBBD) algorithm is nondeterministic, utilizing a priori domain knowledge in the form of rule sets for the localization of cardiac chambers and image features, providing a case-by-case method for the identification and boundary definition of the left ventricle (LV). The nondeterministic algorithm employs multiple processing pathways, where KBBD rules have been designed for conventional (CONV) imaging geometries (nominal 45° LAO, nonzoom) as well as for highly zoomed and/or caudally tilted (ZOOM) studies. The resultant ejection fractions (LVEF) from the KBBD program have been compared with the standard LVEF calculations in 253 total cases in four institutions, 157 utilizing CONV geometry and 96 utilizing ZOOM geometries. The criteria for success were a KBBD boundary adequately defined over the LV as judged by an experienced observer, and the correlation of KBBD LVEFs to the standard calculation of LVEFs for the institution. The overall success rate for all institutions combined was 99.2%, with an overall correlation coefficient of r=0.95 (P<0.001). The individual success rates and EF correlations (r) for CONV and ZOOM geometries were: 98%, r=0.93 (CONV) and 100%, r=0.95 (ZOOM). The KBBD algorithm can be adapted to varying clinical situations, employing automatic processing using artificial intelligence, with performance close to that of a human operator. (orig.)

  3. Revised definition of neuropathic pain and its grading system: an open case series illustrating its use in clinical practice.

    Science.gov (United States)

    Geber, Christian; Baumgärtner, Ulf; Schwab, Rainer; Müller, Harald; Stoeter, Peter; Dieterich, Marianne; Sommer, Clemens; Birklein, Frank; Treede, Rolf-Detlef

    2009-10-01

    The definition of neuropathic pain has recently been revised by an expert committee of the Neuropathic Pain Special Interest Group of the International Association for the Study of Pain (NeuPSIG) as "pain arising as direct consequence of a lesion or disease affecting the somatosensory system," and a grading system of "definite," "probable," and "possible" neuropathic pain has been introduced. This open case series of 5 outpatients (3 men, 2 women; mean age 48 +/- 12 years) demonstrates how the grading system can be applied, in combination with appropriate confirmatory testing, to diagnose neuropathic conditions in clinical practice. The proposed grading system includes a dynamic algorithm that enhances the physician's ability to determine with a greater level of certainty whether a pain condition is neuropathic. Its clinical use should be further validated in prospective studies.

  4. Validation of a case definition for leptospirosis diagnosis in patients with acute severe febrile disease admitted in reference hospitals at the State of Pernambuco, Brazil.

    Science.gov (United States)

    Albuquerque Filho, Alfredo Pereira Leite de; Araújo, Jéssica Guido de; Souza, Inacelli Queiroz de; Martins, Luciana Cardoso; Oliveira, Marta Iglis de; Silva, Maria Jesuíta Bezerra da; Montarroyos, Ulisses Ramos; Miranda Filho, Demócrito de Barros

    2011-01-01

    Leptospirosis is often mistaken for other acute febrile illnesses because of its nonspecific presentation. Bacteriologic, serologic, and molecular methods have several limitations for early diagnosis: technical complexity, low availability, low sensitivity in early disease, or high cost. This study aimed to validate a case definition, based on simple clinical and laboratory tests, that is intended for bedside diagnosis of leptospirosis among hospitalized patients. Adult patients, admitted to two reference hospitals in Recife, Brazil, with a febrile illness of less than 21 days and with a clinical suspicion of leptospirosis, were included to test a case definition comprising ten clinical and laboratory criteria. Leptospirosis was confirmed or excluded by a composite reference standard (microscopic agglutination test, ELISA, and blood culture). Test properties were determined for each cutoff number of the criteria from the case definition. Ninety-seven patients were included; 75 had confirmed leptospirosis and 22 did not. Mean number of criteria from the case definition that were fulfilled was 7.8±1.2 for confirmed leptospirosis and 5.9±1.5 for non-leptospirosis patients (p<0.0001). The case definition, for a cutoff of at least 7 criteria, reached average sensitivity and specificity, but with a high positive predictive value. Its simplicity and low cost make it useful for rapid bedside leptospirosis diagnosis in Brazilian hospitalized patients with acute severe febrile disease.

  5. Empirical Derivation and Validation of a Clinical Case Definition for Neuropsychological Impairment in Children and Adolescents.

    Science.gov (United States)

    Beauchamp, Miriam H; Brooks, Brian L; Barrowman, Nick; Aglipay, Mary; Keightley, Michelle; Anderson, Peter; Yeates, Keith O; Osmond, Martin H; Zemek, Roger

    2015-09-01

    Neuropsychological assessment aims to identify individual performance profiles in multiple domains of cognitive functioning; however, substantial variation exists in how deficits are defined and what cutoffs are used, and there is no universally accepted definition of neuropsychological impairment. The aim of this study was to derive and validate a clinical case definition rule to identify neuropsychological impairment in children and adolescents. An existing normative pediatric sample was used to calculate base rates of abnormal functioning on eight measures covering six domains of neuropsychological functioning. The dataset was analyzed by varying the range of cutoff levels [1, 1.5, and 2 standard deviations (SDs) below the mean] and number of indicators of impairment. The derived rule was evaluated by bootstrap, internal and external clinical validation (orthopedic and traumatic brain injury). Our neuropsychological impairment (NPI) rule was defined as "two or more test scores that fall 1.5 SDs below the mean." The rule identifies 5.1% of the total sample as impaired in the assessment battery and consistently targets between 3 and 7% of the population as impaired even when age, domains, and number of tests are varied. The NPI rate increases in groups known to exhibit cognitive deficits. The NPI rule provides a psychometrically derived method for interpreting performance across multiple tests and may be used in children 6-18 years. The rule may be useful to clinicians and scientists who wish to establish whether specific individuals or clinical populations present within expected norms versus impaired function across a battery of neuropsychological tests.
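    A minimal sketch of the derived rule; the normative means and SDs used here are placeholders, not values from the study's normative sample:

```python
# Sketch of the neuropsychological impairment (NPI) rule: impairment is flagged
# when two or more test scores fall at least 1.5 SD below the normative mean.
import numpy as np

def npi_rule(scores, norm_means, norm_sds, sd_cutoff=1.5, min_indicators=2):
    z = (np.asarray(scores) - np.asarray(norm_means)) / np.asarray(norm_sds)
    n_impaired = int(np.sum(z <= -sd_cutoff))
    return n_impaired >= min_indicators

# Eight hypothetical test scores against hypothetical norms (mean 100, SD 15)
scores = [85, 102, 96, 70, 110, 88, 99, 76]
means = [100] * 8
sds = [15] * 8
print(npi_rule(scores, means, sds))  # True: two scores fall 1.5 SD below the mean
```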

  6. Validation of clinical case definition of acute intussusception in infants in Viet Nam and Australia.

    OpenAIRE

    Bines, JE; Liem, NT; Justice, F; Son, TN; Carlin, JB; de Campo, M; Jamsen, K; Mulholland, K; Barnett, P; Barnes, GL

    2006-01-01

    OBJECTIVE: To test the sensitivity and specificity of a clinical case definition of acute intussusception in infants to assist health-care workers in settings where diagnostic facilities are not available. METHODS: Prospective studies were conducted at a major paediatric hospital in Viet Nam (the National Hospital of Pediatrics, Hanoi) from November 2002 to December 2003 and in Australia (the Royal Children's Hospital, Melbourne) from March 2002 to March 2004 using a clinical case definition ...

  7. Validation of a clinical practice-based algorithm for the diagnosis of autosomal recessive cerebellar ataxias based on NGS identified cases.

    Science.gov (United States)

    Mallaret, Martial; Renaud, Mathilde; Redin, Claire; Drouot, Nathalie; Muller, Jean; Severac, Francois; Mandel, Jean Louis; Hamza, Wahiba; Benhassine, Traki; Ali-Pacha, Lamia; Tazir, Meriem; Durr, Alexandra; Monin, Marie-Lorraine; Mignot, Cyril; Charles, Perrine; Van Maldergem, Lionel; Chamard, Ludivine; Thauvin-Robinet, Christel; Laugel, Vincent; Burglen, Lydie; Calvas, Patrick; Fleury, Marie-Céline; Tranchant, Christine; Anheim, Mathieu; Koenig, Michel

    2016-07-01

    Establishing a molecular diagnosis of autosomal recessive cerebellar ataxias (ARCA) is challenging due to phenotype and genotype heterogeneity. We report the validation of a previously published clinical practice-based algorithm to diagnose ARCA. Two assessors performed a blind analysis to determine the most probable mutated gene based on comprehensive clinical and paraclinical data, without knowing the molecular diagnosis of 23 patients diagnosed by targeted capture of 57 ataxia genes and high-throughput sequencing, drawn from a series of 145 patients. The correct gene was predicted in 61% and 78% of the cases by the two assessors, respectively. There was a high inter-rater agreement [K = 0.85 (0.55-0.98), p < 0.001] confirming the algorithm's reproducibility. Phenotyping patients with proper clinical examination, imaging, biochemical investigations and nerve conduction studies remains crucial for the guidance of molecular analysis and to interpret next generation sequencing results. The proposed algorithm should be helpful for diagnosing ARCA in clinical practice.

  8. The Watershed Transform : Definitions, Algorithms and Parallelization Strategies

    NARCIS (Netherlands)

    Roerdink, Jos B.T.M.; Meijster, Arnold

    2000-01-01

    The watershed transform is the method of choice for image segmentation in the field of mathematical morphology. We present a critical review of several definitions of the watershed transform and the associated sequential algorithms, and discuss various issues which often cause confusion in the literature.
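    For readers unfamiliar with the transform itself, a minimal marker-based watershed segmentation (using scikit-image, not any of the implementations reviewed in the paper) looks like this:

```python
# Minimal marker-based watershed sketch: separate two overlapping discs by
# flooding the negated distance transform from automatically placed markers.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

# Two overlapping discs as a toy binary image
x, y = np.indices((80, 80))
image = ((x - 28) ** 2 + (y - 28) ** 2 < 16 ** 2) | ((x - 44) ** 2 + (y - 52) ** 2 < 20 ** 2)

distance = ndi.distance_transform_edt(image)
coords = peak_local_max(distance, footprint=np.ones((3, 3)), labels=image)
mask = np.zeros(distance.shape, dtype=bool)
mask[tuple(coords.T)] = True
markers, _ = ndi.label(mask)
labels = watershed(-distance, markers, mask=image)
print(labels.max())  # number of segmented regions (2 for the two discs)
```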

  9. Validation of Point Clouds Segmentation Algorithms Through Their Application to Several Case Studies for Indoor Building Modelling

    Science.gov (United States)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    Laser scanners are widely used for the modelling of existing buildings and particularly in the creation process of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps and it is consequently time consuming and error-prone. Along the path to automation, a three-step segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces, namely floors and rooms, and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply algorithms to several datasets and not to develop algorithms on a single dataset whose particularities could bias the development. Indoor point clouds of different types of buildings will be used as input for the developed algorithms, going from an individual house of almost one hundred square meters to larger buildings of several thousand square meters. Datasets provide various space configurations and present numerous different occluding objects such as desks, computer equipment, home furnishings and even wine barrels. For each dataset, the results will be illustrated. The analysis of the results will provide an insight into the transferability of the developed approach for the indoor modelling of several types of buildings.

  10. Validation of a case definition for leptospirosis diagnosis in patients with acute severe febrile disease admitted in reference hospitals at the State of Pernambuco, Brazil

    Directory of Open Access Journals (Sweden)

    Alfredo Pereira Leite de Albuquerque Filho

    2011-12-01

    Full Text Available INTRODUCTION: Leptospirosis is often mistaken for other acute febrile illnesses because of its nonspecific presentation. Bacteriologic, serologic, and molecular methods have several limitations for early diagnosis: technical complexity, low availability, low sensitivity in early disease, or high cost. This study aimed to validate a case definition, based on simple clinical and laboratory tests, that is intended for bedside diagnosis of leptospirosis among hospitalized patients. METHODS: Adult patients, admitted to two reference hospitals in Recife, Brazil, with a febrile illness of less than 21 days and with a clinical suspicion of leptospirosis, were included to test a case definition comprising ten clinical and laboratory criteria. Leptospirosis was confirmed or excluded by a composite reference standard (microscopic agglutination test, ELISA, and blood culture). Test properties were determined for each cutoff number of the criteria from the case definition. RESULTS: Ninety-seven patients were included; 75 had confirmed leptospirosis and 22 did not. Mean number of criteria from the case definition that were fulfilled was 7.8±1.2 for confirmed leptospirosis and 5.9±1.5 for non-leptospirosis patients (p<0.0001). Best sensitivity (85.3%) and specificity (68.2%) combination was found with a cutoff of 7 or more criteria, reaching positive and negative predictive values of 90.1% and 57.7%, respectively; accuracy was 81.4%. CONCLUSIONS: The case definition, for a cutoff of at least 7 criteria, reached average sensitivity and specificity, but with a high positive predictive value. Its simplicity and low cost make it useful for rapid bedside leptospirosis diagnosis in Brazilian hospitalized patients with acute severe febrile disease.
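    A sketch of the cutoff sweep behind these test properties, run on synthetic data whose criterion counts only mimic the reported group means (7.8 vs 5.9 of 10 criteria), not the actual study dataset:

```python
# Sketch of the cutoff sweep: each patient's number of fulfilled criteria (0-10)
# is compared against every possible cutoff, and sensitivity, specificity, PPV
# and NPV are computed at each cutoff. Patient data here are synthetic.
import numpy as np

def sweep_cutoffs(n_criteria_met, confirmed, n_items=10):
    n_criteria_met = np.asarray(n_criteria_met)
    confirmed = np.asarray(confirmed, dtype=bool)
    rows = []
    for cutoff in range(1, n_items + 1):
        positive = n_criteria_met >= cutoff
        tp = np.sum(positive & confirmed)
        fp = np.sum(positive & ~confirmed)
        fn = np.sum(~positive & confirmed)
        tn = np.sum(~positive & ~confirmed)
        rows.append((cutoff,
                     tp / (tp + fn),                            # sensitivity
                     tn / (tn + fp),                            # specificity
                     tp / (tp + fp) if tp + fp else np.nan,     # PPV
                     tn / (tn + fn) if tn + fn else np.nan))    # NPV
    return rows

# Synthetic example: confirmed cases tend to fulfil more criteria
rng = np.random.default_rng(0)
met = np.concatenate([rng.binomial(10, 0.78, 75), rng.binomial(10, 0.59, 22)])
truth = np.array([1] * 75 + [0] * 22)
for row in sweep_cutoffs(met, truth):
    print("cutoff %2d  sens %.2f  spec %.2f  ppv %.2f  npv %.2f" % row)
```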

  11. Case definition terminology for paratuberculosis (Johne's disease).

    Science.gov (United States)

    Whittington, R J; Begg, D J; de Silva, K; Purdie, A C; Dhand, N K; Plain, K M

    2017-11-09

    Paratuberculosis (Johne's disease) is an economically significant condition caused by Mycobacterium avium subsp. paratuberculosis. However, difficulties in diagnosis and classification of individual animals with the condition have hampered research and impeded efforts to halt its progressive spread in the global livestock industry. Descriptive terms applied to individual animals and herds such as exposed, infected, diseased, clinical, sub-clinical, infectious and resistant need to be defined so that they can be incorporated consistently into well-understood and reproducible case definitions. These allow for consistent classification of individuals in a population for the purposes of analysis based on accurate counts. The outputs might include the incidence of cases, frequency distributions of the number of cases by age class or more sophisticated analyses involving statistical comparisons of immune responses in vaccine development studies, or gene frequencies or expression data from cases and controls in genomic investigations. It is necessary to have agreed definitions in order to be able to make valid comparisons and meta-analyses of experiments conducted over time by a given researcher, in different laboratories, by different researchers, and in different countries. In this paper, terms are applied systematically in an hierarchical flow chart to enable classification of individual animals. We propose descriptive terms for different stages in the pathogenesis of paratuberculosis to enable their use in different types of studies and to enable an independent assessment of the extent to which accepted definitions for stages of disease have been applied consistently in any given study. This will assist in the general interpretation of data between studies, and will facilitate future meta-analyses.

  12. An automated database case definition for serious bleeding related to oral anticoagulant use.

    Science.gov (United States)

    Cunningham, Andrew; Stein, C Michael; Chung, Cecilia P; Daugherty, James R; Smalley, Walter E; Ray, Wayne A

    2011-06-01

    Bleeding complications are a serious adverse effect of medications that prevent abnormal blood clotting. To facilitate epidemiologic investigations of bleeding complications, we developed and validated an automated database case definition for bleeding-related hospitalizations. The case definition utilized information from an in-progress retrospective cohort study of warfarin-related bleeding in Tennessee Medicaid enrollees 30 years of age or older. It identified inpatient stays during the study period of January 1990 to December 2005 with diagnoses and/or procedures that indicated a current episode of bleeding. The definition was validated by medical record review for a sample of 236 hospitalizations. We reviewed 186 hospitalizations that had medical records with sufficient information for adjudication. Of these, 165 (89%, 95%CI: 83-92%) were clinically confirmed bleeding-related hospitalizations. An additional 19 hospitalizations (10%, 7-15%) were adjudicated as possibly bleeding-related. Of the 165 clinically confirmed bleeding-related hospitalizations, the automated database and clinical definitions had concordant anatomical sites (gastrointestinal, cerebral, genitourinary, other) for 163 (99%, 96-100%). For those hospitalizations with sufficient information to distinguish between upper/lower gastrointestinal bleeding, the concordance was 89% (76-96%) for upper gastrointestinal sites and 91% (77-97%) for lower gastrointestinal sites. A case definition for bleeding-related hospitalizations suitable for automated databases had a positive predictive value of between 89% and 99% and could distinguish specific bleeding sites. Copyright © 2011 John Wiley & Sons, Ltd.

  13. Investigation of the existence and uniqueness of extremal and positive definite solutions of nonlinear matrix equations

    Directory of Open Access Journals (Sweden)

    Abdel-Shakoor M Sarhan

    2016-05-01

    Full Text Available Abstract We consider two nonlinear matrix equations $X^{r} \pm \sum_{i=1}^{m} A_{i}^{*} X^{\delta_{i}} A_{i} = I$, where $-1 < \delta_{i} < 0$, and r, m are positive integers. For the first equation (plus case), we prove the existence of positive definite solutions and extremal solutions. Two algorithms and proofs of their convergence to the extremal positive definite solutions are constructed. For the second equation (negative case), we prove the existence and the uniqueness of a positive definite solution. Moreover, the algorithm given in (Duan et al. in Linear Algebra Appl. 429:110-121, 2008) (actually, in (Shi et al. in Linear Multilinear Algebra 52:1-15, 2004)) for r = 1 is proved to be valid for any r. Numerical examples are given to illustrate the performance and effectiveness of all the constructed algorithms. In the Appendix, we analyze the ordering on the positive cone $\overline{P(n)}$.
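    A generic fixed-point sketch for the plus case illustrates the style of iteration such algorithms use; it is not the specific schemes constructed in the paper, and it restricts itself to real symmetric matrices for simplicity:

```python
# Generic fixed-point sketch for the "plus" equation
#   X^r + sum_i A_i^T X^{delta_i} A_i = I,  -1 < delta_i < 0,
# iterating X_{k+1} = (I - sum_i A_i^T X_k^{delta_i} A_i)^{1/r}.
# This is only an illustration, not the paper's own algorithms.
import numpy as np

def sym_power(M, p):
    # p-th power of a symmetric positive definite matrix via eigendecomposition
    w, V = np.linalg.eigh(M)
    return (V * (w ** p)) @ V.T

def solve_plus_case(A_list, deltas, r, tol=1e-12, max_iter=1000):
    n = A_list[0].shape[0]
    X = np.eye(n)
    for _ in range(max_iter):
        S = sum(A.T @ sym_power(X, d) @ A for A, d in zip(A_list, deltas))
        X_new = sym_power(np.eye(n) - S, 1.0 / r)
        if np.linalg.norm(X_new - X) < tol:
            break
        X = X_new
    return X_new

rng = np.random.default_rng(1)
A1 = 0.1 * rng.standard_normal((4, 4))
X = solve_plus_case([A1], deltas=[-0.5], r=2)
residual = sym_power(X, 2) + A1.T @ sym_power(X, -0.5) @ A1 - np.eye(4)
print(np.linalg.norm(residual))   # close to zero when the iteration has converged
```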

  14. SU-E-T-516: Dosimetric Validation of AcurosXB Algorithm in Comparison with AAA & CCC Algorithms for VMAT Technique.

    Science.gov (United States)

    Kathirvel, M; Subramanian, V Sai; Arun, G; Thirumalaiswamy, S; Ramalingam, K; Kumar, S Ashok; Jagadeesh, K

    2012-06-01

    To dosimetrically validate the AcurosXB algorithm for Volumetric Modulated Arc Therapy (VMAT) in comparison with the standard clinical Anisotropic Analytic Algorithm (AAA) and Collapsed Cone Convolution (CCC) dose calculation algorithms. The AcurosXB dose calculation algorithm is available with the Varian Eclipse treatment planning system (V10). It uses a grid-based Boltzmann equation solver to predict dose precisely in less time. This study was undertaken to assess the algorithm's ability to predict dose accurately relative to its delivery, for which five clinical cases each of Brain, Head&Neck, Thoracic, Pelvic and SBRT were taken. Verification plans were created on a multicube phantom with an iMatrixx-2D detector array, dose prediction was done with the AcurosXB, AAA & CCC (COMPASS System) algorithms, and the same plans were delivered on a CLINAC-iX treatment machine. Delivered dose was captured in the iMatrixx plane for all 25 plans. Measured dose was taken as the reference to quantify the agreement of the AcurosXB calculation algorithm against the previously validated AAA and CCC algorithms. Gamma evaluation was performed with clinical criteria distance-to-agreement 3&2mm and dose difference 3&2% in omnipro-I'MRT software. Plans were evaluated in terms of correlation coefficient, quantitative area gamma and average gamma. The study shows good agreement, with mean correlations of 0.9979±0.0012, 0.9984±0.0009 & 0.9979±0.0011 for AAA, CCC & Acuros, respectively. Mean area gamma for criteria 3mm/3% was found to be 98.80±1.04, 98.14±2.31, 98.08±2.01 and for 2mm/2% was found to be 93.94±3.83, 87.17±10.54 & 92.36±5.46 for AAA, CCC & Acuros, respectively. Mean average gamma for 3mm/3% was 0.26±0.07, 0.42±0.08, 0.28±0.09 and for 2mm/2% was found to be 0.39±0.10, 0.64±0.11, 0.42±0.13 for AAA, CCC & Acuros, respectively. This study demonstrated that the AcurosXB algorithm had a good agreement with the AAA & CCC in terms of dose prediction. In conclusion, the AcurosXB algorithm provides a valid, accurate and speedy alternative to AAA
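    The gamma evaluation reported here combines dose difference and distance to agreement. A brute-force sketch of a global 2D gamma pass rate is shown below; the commercial omnipro-I'MRT analysis used in the study adds interpolation and optimized search, so this is only to make the metric concrete.

```python
# Brute-force sketch of a global 2D gamma evaluation (dose difference /
# distance-to-agreement), computed on a synthetic dose plane.
import numpy as np

def gamma_pass_rate(reference, evaluated, spacing_mm, dose_pct=3.0, dta_mm=3.0,
                    cutoff_pct=10.0):
    ref = np.asarray(reference, dtype=float)
    ev = np.asarray(evaluated, dtype=float)
    dose_tol = dose_pct / 100.0 * ref.max()         # global normalization
    threshold = cutoff_pct / 100.0 * ref.max()      # ignore very low-dose points
    search = int(np.ceil(dta_mm / spacing_mm)) + 1  # search radius in pixels
    ny, nx = ref.shape
    gammas = []
    for i in range(ny):
        for j in range(nx):
            if ref[i, j] < threshold:
                continue
            best = np.inf
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if not (0 <= ii < ny and 0 <= jj < nx):
                        continue
                    dist2 = (di ** 2 + dj ** 2) * spacing_mm ** 2
                    dd2 = (ev[ii, jj] - ref[i, j]) ** 2
                    best = min(best, dd2 / dose_tol ** 2 + dist2 / dta_mm ** 2)
            gammas.append(np.sqrt(best))
    gammas = np.array(gammas)
    return 100.0 * np.mean(gammas <= 1.0), gammas.mean()   # pass rate, mean gamma

ref = np.outer(np.hanning(40), np.hanning(40)) * 200.0   # synthetic dose plane (cGy)
ev = ref * 1.01                                          # 1% global difference
print(gamma_pass_rate(ref, ev, spacing_mm=2.0))          # high pass rate, low mean gamma
```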

  15. Validation of MERIS Ocean Color Algorithms in the Mediterranean Sea

    Science.gov (United States)

    Marullo, S.; D'Ortenzio, F.; Ribera D'Alcalà, M.; Ragni, M.; Santoleri, R.; Vellucci, V.; Luttazzi, C.

    2004-05-01

    Satellite ocean color measurements can contribute, better than any other source of data, to quantifying the spatial and temporal variability of ocean productivity and, thanks to the success of several satellite missions from CZCS up to SeaWiFS, MODIS and MERIS, it is now possible to begin investigating interannual variations and to compare levels of production across different decades ([1],[2]). The interannual variability of the ocean productivity at global and regional scales can be correctly measured provided that chlorophyll estimates are based on well-calibrated algorithms, in order to avoid regional biases and instrumental time shifts. The calibration and validation of Ocean Color data is therefore one of the most important tasks of several research projects worldwide ([3], [4]). Algorithms developed to retrieve chlorophyll concentration need a specific effort to define the error ranges associated with the estimates. In particular, the empirical algorithms, calculated by regression against in situ data, require independent records to verify the associated degree of uncertainty. In addition, several studies have demonstrated that regional algorithms can improve the accuracy of the satellite chlorophyll estimates [5]. In 2002, Santoleri et al. (SIMBIOS) first showed a significant overestimation of the SeaWiFS-derived chlorophyll concentration in the Mediterranean Sea when the standard global NASA algorithms (OC4v2 and OC4v4) are used. The same authors [6] proposed two preliminary new algorithms for the Mediterranean Sea (L-DORMA and NL-DORMA) on the basis of a bio-optical data set collected in the basin from 1998 to 2000. In 2002, Bricaud et al. [7], analyzing other bio-optical data collected in the Mediterranean, confirmed the overestimation of the chlorophyll concentration in oligotrophic conditions and proposed a new regional algorithm to be used in case of low concentrations. Recently, the number of in situ observations in the basin was increased, permitting a first
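    The global empirical algorithms discussed here are maximum-band-ratio polynomials. A sketch of that form is shown below; the coefficients are the widely cited SeaWiFS OC4v4 values and should be checked against the mission documentation, and it is precisely this kind of global fit that the regional Mediterranean algorithms re-tune.

```python
# Sketch of an OC4-style maximum-band-ratio chlorophyll algorithm. Coefficients
# are the commonly cited SeaWiFS OC4v4 values (verify before use); reflectance
# inputs in the example are hypothetical.
import numpy as np

OC4V4 = (0.366, -3.067, 1.930, 0.649, -1.532)

def oc4_chlorophyll(rrs443, rrs490, rrs510, rrs555, coeffs=OC4V4):
    """Chlorophyll-a (mg m^-3) from remote-sensing reflectances (sr^-1)."""
    ratio = np.log10(np.maximum.reduce([rrs443, rrs490, rrs510]) / rrs555)
    a0, a1, a2, a3, a4 = coeffs
    return 10.0 ** (a0 + a1 * ratio + a2 * ratio ** 2 + a3 * ratio ** 3 + a4 * ratio ** 4)

# Oligotrophic-like example reflectances (hypothetical values)
print(oc4_chlorophyll(0.0070, 0.0055, 0.0035, 0.0015))
```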

  16. Validity of Health Administrative Database Definitions for Hypertension: A Systematic Review.

    Science.gov (United States)

    Pace, Romina; Peters, Tricia; Rahme, Elham; Dasgupta, Kaberi

    2017-08-01

    Health administrative data are frequently used for hypertension surveillance. The aim of this systematic review was to determine the sensitivity and specificity of the commonly used hypertension case definition of 2 physician outpatient claims within a 2-year period or 1 hospital discharge abstract record. Following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we searched MEDLINE (from 1946) and EMBASE (from 1947) for relevant studies through September 2016 (keywords: "hypertension," "administrative databases," "validation studies"). Two reviewers abstracted data with standardized forms and assessed study quality using Quality Assessment of Diagnostic Accuracy Studies criteria. Pooled sensitivity and specificity were estimated using a generalized linear-model approach to random-effects bivariate regression meta-analysis. The search strategy identified 1732 abstracts, among which 3 articles were deemed relevant. One of the articles incorporated 2 studies with differing reference standards and study populations; thus, we considered each separately. The quality scores of the retained studies ranged from 10 to 12 of a maximum of 14. The sensitivity of the definition investigated to identify hypertension using administrative health databases was 71.2% (95% confidence interval [CI], 68.3-73.7) and the specificity was 94.5% (95% CI, 93.2-95.6) when compared with surveys or medical records. The 2 physician outpatient claims within a 2-year period or 1 hospital discharge abstract record hypertension case definition accurately classifies individuals as hypertensive in approximately 70% of cases and correctly identifies persons as nonhypertensive in approximately 95% of cases. This is likely sufficiently sensitive and specific for most research and surveillance purposes. Copyright © 2017 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.
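    A sketch of the reviewed case definition applied to per-patient records; the record structure and diagnosis codes are illustrative assumptions, not the specifications used by any of the reviewed studies.

```python
# Sketch of the case definition: 2 physician outpatient claims within a two-year
# window, or 1 hospital discharge abstract record, with a hypertension diagnosis.
from datetime import date, timedelta

HYPERTENSION_CODES = {"401", "402", "403", "404", "405"}  # ICD-9 hypertension range

def is_hypertensive(records, window=timedelta(days=730)):
    """`records`: list of dicts with keys date, source ('claim' or 'hospital'),
    dx (3-digit ICD-9 category). All field names are assumptions."""
    htn = [r for r in records if r["dx"] in HYPERTENSION_CODES]
    if any(r["source"] == "hospital" for r in htn):
        return True
    claim_dates = sorted(r["date"] for r in htn if r["source"] == "claim")
    return any(b - a <= window for a, b in zip(claim_dates, claim_dates[1:]))

example = [{"date": date(2015, 1, 10), "source": "claim", "dx": "401"},
           {"date": date(2016, 6, 1), "source": "claim", "dx": "401"}]
print(is_hypertensive(example))  # True: two claims about 17 months apart
```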

  17. Cloud detection algorithm comparison and validation for operational Landsat data products

    Science.gov (United States)

    Foga, Steven Curtis; Scaramuzza, Pat; Guo, Song; Zhu, Zhe; Dilley, Ronald; Beckmann, Tim; Schmidt, Gail L.; Dwyer, John L.; Hughes, MJ; Laue, Brady

    2017-01-01

    Clouds are a pervasive and unavoidable issue in satellite-borne optical imagery. Accurate, well-documented, and automated cloud detection algorithms are necessary to effectively leverage large collections of remotely sensed data. The Landsat project is uniquely suited for comparative validation of cloud assessment algorithms because the modular architecture of the Landsat ground system allows for quick evaluation of new code, and because Landsat has the most comprehensive manual truth masks of any current satellite data archive. Currently, the Landsat Level-1 Product Generation System (LPGS) uses separate algorithms for determining clouds, cirrus clouds, and snow and/or ice probability on a per-pixel basis. With more bands onboard the Landsat 8 Operational Land Imager (OLI)/Thermal Infrared Sensor (TIRS) satellite, and a greater number of cloud masking algorithms, the U.S. Geological Survey (USGS) is replacing the current cloud masking workflow with a more robust algorithm that is capable of working across multiple Landsat sensors with minimal modification. Because of the inherent error from stray light and intermittent data availability of TIRS, these algorithms need to operate both with and without thermal data. In this study, we created a workflow to evaluate cloud and cloud shadow masking algorithms using cloud validation masks manually derived from both Landsat 7 Enhanced Thematic Mapper Plus (ETM +) and Landsat 8 OLI/TIRS data. We created a new validation dataset consisting of 96 Landsat 8 scenes, representing different biomes and proportions of cloud cover. We evaluated algorithm performance by overall accuracy, omission error, and commission error for both cloud and cloud shadow. We found that CFMask, C code based on the Function of Mask (Fmask) algorithm, and its confidence bands have the best overall accuracy among the many algorithms tested using our validation data. The Artificial Thermal-Automated Cloud Cover Algorithm (AT-ACCA) is the most accurate

  18. Soil moisture and temperature algorithms and validation

    Science.gov (United States)

    Passive microwave remote sensing of soil moisture has matured over the past decade as a result of the Advanced Microwave Scanning Radiometer (AMSR) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...

  19. The 10/66 Dementia Research Group's fully operationalised DSM-IV dementia computerized diagnostic algorithm, compared with the 10/66 dementia algorithm and a clinician diagnosis: a population validation study

    Directory of Open Access Journals (Sweden)

    Krishnamoorthy ES

    2008-06-01

    Full Text Available Abstract Background The criterion for dementia implicit in DSM-IV is widely used in research but not fully operationalised. The 10/66 Dementia Research Group sought to do this using assessments from their one-phase dementia diagnostic research interview, and to validate the resulting algorithm in a population-based study in Cuba. Methods The criterion was operationalised as a computerised algorithm, applying clinical principles, based upon the 10/66 cognitive tests, clinical interview and informant reports: the Community Screening Instrument for Dementia, the CERAD 10 word list learning and animal naming tests, the Geriatric Mental State, and the History and Aetiology Schedule – Dementia Diagnosis and Subtype. This was validated in Cuba against a local clinician DSM-IV diagnosis and the 10/66 dementia diagnosis (originally calibrated probabilistically against clinician DSM-IV diagnoses in the 10/66 pilot study). Results The DSM-IV sub-criteria were plausibly distributed among clinically diagnosed dementia cases and controls. The clinician diagnoses agreed better with the 10/66 dementia diagnosis than with the more conservative computerized DSM-IV algorithm. The DSM-IV algorithm was particularly likely to miss less severe dementia cases. Those with a 10/66 dementia diagnosis who did not meet the DSM-IV criterion were less cognitively and functionally impaired compared with the DSM-IV-confirmed cases, but still grossly impaired compared with those free of dementia. Conclusion The DSM-IV criterion, strictly applied, defines a narrow category of unambiguous dementia characterized by marked impairment. It may be specific but incompletely sensitive to clinically relevant cases. The 10/66 dementia diagnosis defines a broader category that may be more sensitive, identifying genuine cases beyond those defined by our DSM-IV algorithm, with relevance to the estimation of the population burden of this disorder.

  20. Validation of minor stroke definitions for thrombolysis decision making.

    Science.gov (United States)

    Park, Tai Hwan; Hong, Keun-Sik; Choi, Jay Chol; Song, Pamela; Lee, Ji Sung; Lee, Juneyoung; Park, Jong-Moo; Kang, Kyusik; Lee, Kyung Bok; Cho, Yong-Jin; Saposnik, Gustavo; Han, Moon-Ku; Bae, Hee-Joon

    2013-05-01

    Patients with low National Institutes of Health Stroke Scale (NIHSS) scores are frequently excluded from thrombolysis, but more than 25% of them remain disabled. We sought to define a validated minor stroke definition to reduce the inappropriate treatment exclusion. From an outcome database, untreated patients with an NIHSS score of 5 or less presenting within a 4.5-hour window were identified and 3-month modified Rankin Scale (mRS) outcomes were analyzed according to individual isolated symptoms and total NIHSS scores. The validity of the following minor stroke definitions were assessed: (1) the National Institute of Neurological Disorders and Stroke Tissue Plasminogen Activator (NINDS-TPA) trials' definition, (2) the total NIHSS score, varying a cutoff point from 0 to 4, and (3) our proposed definition that included an NIHSS score = 0 or an NIHSS score = 1 on the items of level of consciousness (LOC), gaze, facial palsy, sensory, or dysarthria. Of 647 patients, 172 patients (26.6%) had a 3-month unfavorable outcome (mRS score 2-6). Favorable outcome was achieved in more than 80% of patients with an NIHSS score of 1 or less or with an isolated symptom on the LOC, gaze, facial palsy, sensory, or dysarthria item. In contrast, unfavorable outcome proportion was more than 25% in patients with an NIHSS score of 2 or more. When the NINDS-TPA trials' definition, our definition, or the definition of an NIHSS score of 1 or less were applied, more than 75% of patients with an unfavorable outcome were defined as a non-minor stroke and less than 15% of patients with an unfavorable outcome were defined as a minor stroke. Implementation of an optimal definition of minor stroke into thrombolysis decision-making process would decrease the unfavorable outcomes in patients with low NIHSS scores. Copyright © 2013 National Stroke Association. Published by Elsevier Inc. All rights reserved.
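    A sketch of the authors' proposed definition, item (3) above; the item names are illustrative labels for the NIHSS components mentioned:

```python
# Sketch of the proposed minor stroke definition: total NIHSS of 0, or a total
# of 1 with the single point scored on level of consciousness, gaze, facial
# palsy, sensory, or dysarthria.
ELIGIBLE_SINGLE_ITEMS = {"loc", "gaze", "facial_palsy", "sensory", "dysarthria"}

def is_minor_stroke(item_scores):
    """`item_scores` maps NIHSS item names (illustrative labels) to scores."""
    total = sum(item_scores.values())
    if total == 0:
        return True
    if total == 1:
        scored_item = next(item for item, s in item_scores.items() if s == 1)
        return scored_item in ELIGIBLE_SINGLE_ITEMS
    return False

print(is_minor_stroke({"loc": 0, "gaze": 0, "facial_palsy": 1, "motor_arm": 0}))  # True
print(is_minor_stroke({"loc": 0, "motor_arm": 1}))                                # False
```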

  1. Construct validation of an interactive digital algorithm for ostomy care.

    Science.gov (United States)

    Beitz, Janice M; Gerlach, Mary A; Schafer, Vickie

    2014-01-01

    The purpose of this study was to evaluate construct validity for a previously face and content validated Ostomy Algorithm using digital real-life clinical scenarios. A cross-sectional, mixed-methods Web-based survey design study was conducted. Two hundred ninety-seven English-speaking RNs completed the study; participants practiced in both acute care and postacute settings, with a ratio of approximately 1 expert ostomy nurse (WOC nurse) to 2 nonexpert nurses. Following written consent, respondents answered demographic questions and completed a brief algorithm tutorial. Participants were then presented with 7 ostomy-related digital scenarios consisting of real-life photos and pertinent clinical information. Respondents used the 11 assessment components of the digital algorithm to choose management options. Participant written comments about the scenarios and the research process were collected. The mean overall percentage of correct responses was 84.23%. The mean percentage of correct responses was 87.7% for respondents with a self-reported basic ostomy knowledge and 85.88% for those with a self-reported intermediate ostomy knowledge, while self-reported experts in ostomy care achieved an 82.77% correct response rate. Five respondents reported having no prior ostomy care knowledge at screening and achieved an overall 45.71% correct response rate. No negative comments regarding the algorithm were recorded by participants. The new standardized Ostomy Algorithm remains the only face, content, and construct validated digital clinical decision instrument currently available. Further research on application at the bedside while tracking patient outcomes is warranted.

  2. An O(NlogN) Algorithm for Region Definition Using Channels/Switchboxes and Ordering Assignment

    Directory of Open Access Journals (Sweden)

    Jin-Tai Yan

    1996-01-01

    Full Text Available For a building block placement, the routing space can be further partitioned into channels and switchboxes. In general, the definition of switchboxes releases the cyclic channel precedence constraints and further yields a safe routing ordering process. However, switchbox routing is more difficult than channel routing. In this paper, an O(NlogN) region definition and ordering assignment (RDAOA) algorithm is proposed to minimize the number of switchboxes for the routing phase, where N is the number of vertices in a channel precedence graph. Several examples have been tested on the proposed algorithm, and the experimental results are listed and compared.

  3. Benchmarking protein classification algorithms via supervised cross-validation

    NARCIS (Netherlands)

    Kertész-Farkas, A.; Dhir, S.; Sonego, P.; Pacurar, M.; Netoteia, S.; Nijveen, H.; Kuzniar, A.; Leunissen, J.A.M.; Kocsor, A.; Pongor, S.

    2008-01-01

    Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold,

  4. Validation of differential gene expression algorithms: Application comparing fold-change estimation to hypothesis testing

    Directory of Open Access Journals (Sweden)

    Bickel David R

    2010-01-01

    Full Text Available Abstract Background Sustained research on the problem of determining which genes are differentially expressed on the basis of microarray data has yielded a plethora of statistical algorithms, each justified by theory, simulation, or ad hoc validation and yet differing in practical results from equally justified algorithms. Recently, a concordance method that measures agreement among gene lists has been introduced to assess various aspects of differential gene expression detection. This method has the advantage of basing its assessment solely on the results of real data analyses, but as it requires examining gene lists of given sizes, it may be unstable. Results Two methodologies for assessing predictive error are described: a cross-validation method and a posterior predictive method. As a nonparametric method of estimating prediction error from observed expression levels, cross validation provides an empirical approach to assessing algorithms for detecting differential gene expression that is fully justified for large numbers of biological replicates. Because it leverages the knowledge that only a small portion of genes are differentially expressed, the posterior predictive method is expected to provide more reliable estimates of algorithm performance, allaying concerns about limited biological replication. In practice, the posterior predictive method can assess when its approximations are valid and when they are inaccurate. Under conditions in which its approximations are valid, it corroborates the results of cross validation. Both comparison methodologies are applicable to both single-channel and dual-channel microarrays. For the data sets considered, estimating prediction error by cross validation demonstrates that empirical Bayes methods based on hierarchical models tend to outperform algorithms based on selecting genes by their fold changes or by non-hierarchical model-selection criteria. (The latter two approaches have comparable

  5. Validation of neural spike sorting algorithms without ground-truth information.

    Science.gov (United States)

    Barnett, Alex H; Magland, Jeremy F; Greengard, Leslie F

    2016-05-01

    The throughput of electrophysiological recording is growing rapidly, allowing thousands of simultaneous channels, and there is a growing variety of spike sorting algorithms designed to extract neural firing events from such data. This creates an urgent need for standardized, automatic evaluation of the quality of neural units output by such algorithms. We introduce a suite of validation metrics that assess the credibility of a given automatic spike sorting algorithm applied to a given dataset. By rerunning the spike sorter two or more times, the metrics measure stability under various perturbations consistent with variations in the data itself, making no assumptions about the internal workings of the algorithm, and minimal assumptions about the noise. We illustrate the new metrics on standard sorting algorithms applied to both in vivo and ex vivo recordings, including a time series with overlapping spikes. We compare the metrics to existing quality measures, and to ground-truth accuracy in simulated time series. We provide a software implementation. Metrics have until now relied on ground-truth, simulated data, internal algorithm variables (e.g. cluster separation), or refractory violations. By contrast, by standardizing the interface, our metrics assess the reliability of any automatic algorithm without reference to internal variables (e.g. feature space) or physiological criteria. Stability is a prerequisite for reproducibility of results. Such metrics could reduce the significant human labor currently spent on validation, and should form an essential part of large-scale automated spike sorting and systematic benchmarking of algorithms. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Evidence-based algorithm for heparin dosing before cardiopulmonary bypass. Part 1: Development of the algorithm.

    Science.gov (United States)

    McKinney, Mark C; Riley, Jeffrey B

    2007-12-01

    The incidence of heparin resistance during adult cardiac surgery with cardiopulmonary bypass has been reported at 15%-20%. The consistent use of a clinical decision-making algorithm may increase the consistency of patient care and likely reduce the total required heparin dose and other problems associated with heparin dosing. After a directed survey of practicing perfusionists regarding treatment of heparin resistance and a literature search for high-level evidence regarding the diagnosis and treatment of heparin resistance, an evidence-based decision-making algorithm was constructed. The face validity of the algorithm's decisive steps and logic was confirmed by a second survey of practicing perfusionists. The algorithm begins with review of the patient history to identify predictors for heparin resistance. The definition for heparin resistance contained in the algorithm is an activated clotting time below the target value despite a 450 IU/kg heparin loading dose. Based on the literature, the treatment for heparin resistance used in the algorithm is anti-thrombin III supplementation. The algorithm seems to be valid and is supported by high-level evidence and clinician opinion. The next step is a human randomized clinical trial to test the clinical procedure guideline algorithm vs. current standard clinical practice.

  7. Crowdsourcing seizure detection: algorithm development and validation on human implanted device recordings.

    Science.gov (United States)

    Baldassano, Steven N; Brinkmann, Benjamin H; Ung, Hoameng; Blevins, Tyler; Conrad, Erin C; Leyde, Kent; Cook, Mark J; Khambhati, Ankit N; Wagenaar, Joost B; Worrell, Gregory A; Litt, Brian

    2017-06-01

    There exist significant clinical and basic research needs for accurate, automated seizure detection algorithms. These algorithms have translational potential in responsive neurostimulation devices and in automatic parsing of continuous intracranial electroencephalography data. An important barrier to developing accurate, validated algorithms for seizure detection is limited access to high-quality, expertly annotated seizure data from prolonged recordings. To overcome this, we hosted a kaggle.com competition to crowdsource the development of seizure detection algorithms using intracranial electroencephalography from canines and humans with epilepsy. The top three performing algorithms from the contest were then validated on out-of-sample patient data including standard clinical data and continuous ambulatory human data obtained over several years using the implantable NeuroVista seizure advisory system. Two hundred teams of data scientists from all over the world participated in the kaggle.com competition. The top performing teams submitted highly accurate algorithms with consistent performance in the out-of-sample validation study. The performance of these seizure detection algorithms, achieved using freely available code and data, sets a new reproducible benchmark for personalized seizure detection. We have also shared a 'plug and play' pipeline to allow other researchers to easily use these algorithms on their own datasets. The success of this competition demonstrates how sharing code and high quality data results in the creation of powerful translational tools with significant potential to impact patient care. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. The development of an algebraic multigrid algorithm for symmetric positive definite linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Vanek, P.; Mandel, J.; Brezina, M. [Univ. of Colorado, Denver, CO (United States)

    1996-12-31

    An algebraic multigrid algorithm for symmetric, positive definite linear systems is developed based on the concept of prolongation by smoothed aggregation. Coarse levels are generated automatically. We present a set of requirements motivated heuristically by a convergence theory. The algorithm then attempts to satisfy the requirements. Input to the method are the coefficient matrix and zero energy modes, which are determined from nodal coordinates and knowledge of the differential equation. Efficiency of the resulting algorithm is demonstrated by computational results on real world problems from solid elasticity, plate bending, and shells.

  9. Genetic algorithms for protein threading.

    Science.gov (United States)

    Yadgari, J; Amir, A; Unger, R

    1998-01-01

    Despite many years of efforts, a direct prediction of protein structure from sequence is still not possible. As a result, in the last few years researchers have started to address the "inverse folding problem": identifying and aligning a sequence to the fold with which it is most compatible, a process known as "threading". In two meetings in which protein folding predictions were objectively evaluated, it became clear that threading as a concept promises a real breakthrough, but that much improvement is still needed in the technique itself. Threading is an NP-hard problem, and thus no general polynomial solution can be expected. Still, a practical approach with demonstrated ability to find optimal solutions in many cases, and acceptable solutions in other cases, is needed. We applied the technique of Genetic Algorithms in order to significantly improve the ability of threading algorithms to find the optimal alignment of a sequence to a structure, i.e. the alignment with the minimum free energy. A major progress reported here is the design of a representation of the threading alignment as a string of fixed length. With this representation, validation of alignments and genetic operators are effectively implemented. Appropriate data structures and parameters have been selected. It is shown that Genetic Algorithm threading is effective and is able to find the optimal alignment in a few test cases. Furthermore, the described algorithm is shown to perform well even without pre-definition of core elements. Existing threading methods are dependent on such constraints to make their calculations feasible. But the concept of core elements is inherently arbitrary and should be avoided if possible. While a rigorous proof is still hard to submit, we present indications that Genetic Algorithm threading is indeed capable of finding consistently good solutions of full alignments in search spaces of size up to 10^70.

  10. Time scale algorithm: Definition of ensemble time and possible uses of the Kalman filter

    Science.gov (United States)

    Tavella, Patrizia; Thomas, Claudine

    1990-01-01

    The comparative study of two time scale algorithms, devised to satisfy different but related requirements, is presented. They are ALGOS(BIPM), producing the international reference TAI at the Bureau International des Poids et Mesures, and AT1(NIST), generating the real-time time scale AT1 at the National Institute of Standards and Technology. In each case, the time scale is a weighted average of clock readings, but the weight determination and the frequency prediction are different because they are adapted to different purposes. The possibility of using a mathematical tool, such as the Kalman filter, together with the definition of the time scale as a weighted average, is also analyzed. Results obtained by simulation are presented.
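
    A minimal sketch of the structure shared by such time scale algorithms: the ensemble time at one epoch is a weighted average built from measured clock differences and each clock's predicted offset, with weights inversely proportional to prediction-error variance. The variable names and the simple weighting rule below are illustrative; ALGOS and AT1 use their own prediction and weighting schemes.

```python
import numpy as np

def weighted_ensemble_update(x_ij, x_hat, sigma2):
    """One epoch of a weighted-average time scale.

    x_ij:   measured time differences clock_i - clock_j against a reference clock j.
    x_hat:  each clock's predicted offset from the ensemble (prediction not shown).
    sigma2: prediction-error variance of each clock, used for weighting.
    Returns x_j, the offset of reference clock j from the ensemble; the offset of
    any clock i then follows as x_i = x_j + x_ij.
    """
    w = 1.0 / np.asarray(sigma2, dtype=float)
    w = w / w.sum()                                        # normalized weights
    return float(np.sum(w * (np.asarray(x_hat) - np.asarray(x_ij))))

# Toy example with three clocks; units are nanoseconds.
x_ij = np.array([0.0, 12.0, -7.0])      # reference clock compared with itself and two others
x_hat = np.array([3.0, 14.0, -5.0])     # predicted offsets from the ensemble
sigma2 = np.array([1.0, 4.0, 2.0])
print(weighted_ensemble_update(x_ij, x_hat, sigma2))
```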

  11. Validation and Algorithms Comparative Study for Microwave Remote Sensing of Snow Depth over China

    International Nuclear Information System (INIS)

    Bin, C J; Qiu, Y B; Shi, L J

    2014-01-01

    In this study, five different snow algorithms (the Chang algorithm, GSFC 96 algorithm, AMSR-E SWE algorithm, Improved Tibetan Plateau algorithm and Savoie algorithm) were selected to validate the accuracy of snow depth retrieval over China. These algorithms were compared for snow depth accuracy using AMSR-E brightness temperature data and ground measurements on February 10-12, 2010. Results showed that the GSFC 96 algorithm was more suitable in Xinjiang, with an RMSE ranging from 6.85 cm to 7.48 cm; in Inner Mongolia and Northeast China, the Improved Tibetan Plateau algorithm was superior to the other four algorithms, with RMSEs of 5.46-6.11 cm and 6.21-7.83 cm, respectively. Owing to the lack of ground measurements, we could not obtain valid statistical results over the Tibetan Plateau. However, the mean relative error (MRE) of the selected algorithms ranged from 37.95% to 189.13% in the four study areas, which shows that the accuracy of the five snow depth algorithms is limited over China.
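
    A minimal sketch of the two statistics used in this comparison, RMSE and mean relative error, computed between algorithm-retrieved and station-measured snow depth (the numbers below are made up).

```python
import numpy as np

def rmse(retrieved, measured):
    retrieved, measured = np.asarray(retrieved, float), np.asarray(measured, float)
    return float(np.sqrt(np.mean((retrieved - measured) ** 2)))

def mean_relative_error(retrieved, measured):
    retrieved, measured = np.asarray(retrieved, float), np.asarray(measured, float)
    return float(np.mean(np.abs(retrieved - measured) / measured) * 100.0)  # percent

measured  = [12.0, 20.0, 8.0, 15.0]   # station snow depth in cm (illustrative)
retrieved = [10.5, 26.0, 11.0, 13.0]  # algorithm retrieval in cm (illustrative)
print(rmse(retrieved, measured), mean_relative_error(retrieved, measured))
```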

  12. Using linked electronic data to validate algorithms for health outcomes in administrative databases.

    Science.gov (United States)

    Lee, Wan-Ju; Lee, Todd A; Pickard, Alan Simon; Shoaibi, Azadeh; Schumock, Glen T

    2015-08-01

    The validity of algorithms used to identify health outcomes in claims-based and administrative data is critical to the reliability of findings from observational studies. The traditional approach to algorithm validation, using medical charts, is expensive and time-consuming. An alternative method is to link the claims data to an external, electronic data source that contains information allowing confirmation of the event of interest. In this paper, we describe this external linkage validation method and delineate important considerations to assess the feasibility and appropriateness of validating health outcomes using this approach. This framework can help investigators decide whether to pursue an external linkage validation method for identifying health outcomes in administrative/claims data.

  13. Performance Evaluation of Spectral Clustering Algorithm using Various Clustering Validity Indices

    OpenAIRE

    M. T. Somashekara; D. Manjunatha

    2014-01-01

    In spite of the popularity of the spectral clustering algorithm, the evaluation procedures are still in a developmental stage. In this article, we have taken the benchmark IRIS dataset for performing a comparative study of twelve indices for evaluating the spectral clustering algorithm. The results of the spectral clustering technique were also compared with the k-means algorithm. The validity of the indices was also verified with accuracy and the Normalized Mutual Information (NMI) score. Spectral clustering algo...
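
    A minimal scikit-learn sketch of the kind of comparison described here: cluster the Iris data with spectral clustering and k-means, then score both against the known species labels with NMI. The affinity and parameter choices are illustrative, not those of the article.

```python
from sklearn.cluster import KMeans, SpectralClustering
from sklearn.datasets import load_iris
from sklearn.metrics import normalized_mutual_info_score

X, y = load_iris(return_X_y=True)

spectral = SpectralClustering(n_clusters=3, affinity="nearest_neighbors",
                              n_neighbors=10, random_state=0).fit_predict(X)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

print("spectral NMI:", normalized_mutual_info_score(y, spectral))
print("k-means  NMI:", normalized_mutual_info_score(y, kmeans))
```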

  14. Safety, reliability, and validity of a physiologic definition of bronchopulmonary dysplasia.

    Science.gov (United States)

    Walsh, Michele C; Wilson-Costello, Deanna; Zadell, Arlene; Newman, Nancy; Fanaroff, Avroy

    2003-09-01

    Bronchopulmonary dysplasia (BPD) is the focus of many intervention trials, yet the outcome measure when based solely on oxygen administration may be confounded by differing criteria for oxygen administration between physicians. Thus, we wished to define BPD by standardized oxygen saturation monitoring at 36 weeks corrected age, and compare this physiologic definition with the standard clinical definition of BPD based solely on oxygen administration. A total of 199 consecutive very low birthweight infants (VLBW, 501 to 1500 g birthweight) were assessed prospectively at 36+/-1 weeks corrected age. Neonates on positive pressure support or receiving >30% supplemental oxygen were assigned the outcome BPD. Those receiving <30% oxygen underwent a timed room-air challenge with continuous saturation monitoring and were assigned the outcome "no BPD" (saturation ≥88% for 60 minutes) or "BPD" (saturation <88%). Safety, inter-rater reliability, test-retest reliability, and validity of the physiologic definition vs the clinical definition were assessed. A total of 199 VLBW were assessed, of whom 45 (36%) were diagnosed with BPD by the clinical definition of oxygen use at 36 weeks corrected age. The physiologic definition identified 15 infants treated with oxygen who successfully passed the saturation monitoring test in room air. The physiologic definition diagnosed BPD in 30 (24%) of the cohort. All infants were safely studied. The test was highly reliable (inter-rater reliability, kappa=1.0; test-retest reliability, kappa=0.83) and highly correlated with discharge home in oxygen, length of hospital stay, and hospital readmissions in the first year of life. The physiologic definition of BPD is safe, feasible, reliable, and valid and improves the precision of the diagnosis of BPD. This may be of benefit in future multicenter clinical trials.

  15. Development and validation of an algorithm for laser application in wound treatment

    Directory of Open Access Journals (Sweden)

    Diequison Rite da Cunha

    2017-12-01

    Full Text Available ABSTRACT Objective: To develop and validate an algorithm for laser wound therapy. Method: Methodological study and literature review. For the development of the algorithm, a review of the past ten years was performed in the Health Sciences databases. The algorithm evaluation was performed by 24 participants: nurses, physiotherapists, and physicians. For data analysis, the Cronbach's alpha coefficient and the chi-square test for independence were used. The level of significance of the statistical test was established at 5% (p<0.05). Results: The professionals' responses regarding the ease of reading the algorithm indicated: 41.70%, great; 41.70%, good; 16.70%, regular. With regard to the algorithm being sufficient for supporting decisions related to wound evaluation and wound cleaning, 87.5% said yes to both questions. Regarding the participants' opinion that the algorithm contained enough information to support their decision regarding the choice of laser parameters, 91.7% said yes. The questionnaire showed reliability by the Cronbach's alpha coefficient test (α = 0.962). Conclusion: The developed and validated algorithm showed reliability for evaluation, wound cleaning, and use of laser therapy in wounds.
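
    A minimal sketch of the Cronbach's alpha statistic reported above, computed from a respondents-by-items matrix of questionnaire scores (the response data are made up; only the formula matches the reported statistic).

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = questionnaire items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# 5 respondents x 4 items, rated 1-5 (illustrative data).
responses = [[4, 5, 4, 5],
             [3, 4, 3, 4],
             [5, 5, 4, 5],
             [2, 3, 2, 3],
             [4, 4, 4, 4]]
print(round(cronbach_alpha(responses), 3))
```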

  16. Design requirements and development of an airborne descent path definition algorithm for time navigation

    Science.gov (United States)

    Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.

    1986-01-01

    The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, functional and data architectures, and system input requirements, and recommend the addition of a broad path revision (reinitialization) function capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and the guidance and steering function are described.

  17. Positive predictive value of a register-based algorithm using the Danish National Registries to identify suicidal events.

    Science.gov (United States)

    Gasse, Christiane; Danielsen, Andreas Aalkjaer; Pedersen, Marianne Giørtz; Pedersen, Carsten Bøcker; Mors, Ole; Christensen, Jakob

    2018-04-17

    It is not possible to fully assess intention of self-harm and suicidal events using information from administrative databases. We conducted a validation study of intention of suicide attempts/self-harm contacts identified by a commonly applied Danish register-based algorithm (DK-algorithm) based on hospital discharge diagnosis and emergency room contacts. Of all 101 530 people identified with an incident suicide attempt/self-harm contact at Danish hospitals between 1995 and 2012 using the DK-algorithm, we selected a random sample of 475 people. We validated the DK-algorithm against medical records applying the definitions and terminology of the Columbia Classification Algorithm of Suicide Assessment of suicidal events, nonsuicidal events, and indeterminate or potentially suicidal events. We calculated positive predictive values (PPVs) of the DK-algorithm to identify suicidal events overall, by gender, age groups, and calendar time. We retrieved medical records for 357 (75%) people. The PPV of the DK-algorithm to identify suicidal events was 51.5% (95% CI: 46.4-56.7) overall, 42.7% (95% CI: 35.2-50.5) in males, and 58.5% (95% CI: 51.6-65.1) in females. The PPV varied further across age groups and calendar time. After excluding cases identified via the DK-algorithm by unspecific codes of intoxications and injury, the PPV improved slightly (56.8% [95% CI: 50.0-63.4]). The DK-algorithm can reliably identify self-harm with suicidal intention in 52% of the identified cases of suicide attempts/self-harm. The PPVs could be used for quantitative bias analysis and implemented as weights in future studies to estimate the proportion of suicidal events among cases identified via the DK-algorithm. Copyright © 2018 John Wiley & Sons, Ltd.
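
    A minimal sketch of the PPV calculation with a Wilson 95% confidence interval, as reported above. The counts are inferred from the percentages quoted in this record (about 184 confirmed suicidal events among the 357 reviewed cases) and are illustrative.

```python
from math import sqrt

def ppv_with_wilson_ci(true_positives, flagged, z=1.96):
    """PPV = TP / flagged, with a Wilson score 95% confidence interval."""
    p = true_positives / flagged
    denom = 1 + z**2 / flagged
    centre = (p + z**2 / (2 * flagged)) / denom
    half = z * sqrt(p * (1 - p) / flagged + z**2 / (4 * flagged**2)) / denom
    return p, (centre - half, centre + half)

print(ppv_with_wilson_ci(184, 357))  # roughly 0.515 with CI near 46-57%
```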

  18. Construct Validity and Case Validity in Assessment

    Science.gov (United States)

    Teglasi, Hedwig; Nebbergall, Allison Joan; Newman, Daniel

    2012-01-01

    Clinical assessment relies on both "construct validity", which focuses on the accuracy of conclusions about a psychological phenomenon drawn from responses to a measure, and "case validity", which focuses on the synthesis of the full range of psychological phenomena pertaining to the concern or question at hand. Whereas construct validity is…

  19. GCOM-W soil moisture and temperature algorithms and validation

    Science.gov (United States)

    Passive microwave remote sensing of soil moisture has matured over the past decade as a result of the Advanced Microwave Scanning Radiometer (AMSR) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...

  20. Validation of coding algorithms for the identification of patients hospitalized for alcoholic hepatitis using administrative data.

    Science.gov (United States)

    Pang, Jack X Q; Ross, Erin; Borman, Meredith A; Zimmer, Scott; Kaplan, Gilaad G; Heitman, Steven J; Swain, Mark G; Burak, Kelly W; Quan, Hude; Myers, Robert P

    2015-09-11

    Epidemiologic studies of alcoholic hepatitis (AH) have been hindered by the lack of a validated International Classification of Disease (ICD) coding algorithm for use with administrative data. Our objective was to validate coding algorithms for AH using a hospitalization database. The Hospital Discharge Abstract Database (DAD) was used to identify consecutive adults (≥18 years) hospitalized in the Calgary region with a diagnosis code for AH (ICD-10, K70.1) between 01/2008 and 08/2012. Medical records were reviewed to confirm the diagnosis of AH, defined as a history of heavy alcohol consumption, elevated AST and/or ALT, serum bilirubin >34 μmol/L, and elevated INR. Subgroup analyses were performed according to the diagnosis field in which the code was recorded (primary vs. secondary) and AH severity. Algorithms that incorporated ICD-10 codes for cirrhosis and its complications were also examined. Of 228 potential AH cases, 122 patients had confirmed AH, corresponding to a positive predictive value (PPV) of 54% (95% CI 47-60%). PPV improved when AH was the primary versus a secondary diagnosis (67% vs. 21%). Algorithms that incorporated codes for ascites (PPV 75%; 95% CI 63-86%), cirrhosis (PPV 60%; 47-73%), and gastrointestinal hemorrhage (PPV 62%; 51-73%) had improved performance; however, the prevalence of these diagnoses in confirmed AH cases was low (29-39%). In conclusion, the low PPV of the diagnosis code for AH suggests that caution is necessary if this hospitalization database is used in large-scale epidemiologic studies of this condition.

  1. The 2018 Definition of Periprosthetic Hip and Knee Infection: An Evidence-Based and Validated Criteria.

    Science.gov (United States)

    Parvizi, Javad; Tan, Timothy L; Goswami, Karan; Higuera, Carlos; Della Valle, Craig; Chen, Antonia F; Shohat, Noam

    2018-05-01

    The introduction of the Musculoskeletal Infection Society (MSIS) criteria for periprosthetic joint infection (PJI) in 2011 resulted in improvements in diagnostic confidence and research collaboration. The emergence of new diagnostic tests and the lessons we have learned from the past 7 years using the MSIS definition, prompted us to develop an evidence-based and validated updated version of the criteria. This multi-institutional study of patients undergoing revision total joint arthroplasty was conducted at 3 academic centers. For the development of the new diagnostic criteria, PJI and aseptic patient cohorts were stringently defined: PJI cases were defined using only major criteria from the MSIS definition (n = 684) and aseptic cases underwent one-stage revision for a noninfective indication and did not fail within 2 years (n = 820). Serum C-reactive protein (CRP), D-dimer, erythrocyte sedimentation rate were investigated, as well as synovial white blood cell count, polymorphonuclear percentage, leukocyte esterase, alpha-defensin, and synovial CRP. Intraoperative findings included frozen section, presence of purulence, and isolation of a pathogen by culture. A stepwise approach using random forest analysis and multivariate regression was used to generate relative weights for each diagnostic marker. Preoperative and intraoperative definitions were created based on beta coefficients. The new definition was then validated on an external cohort of 222 patients with PJI who subsequently failed with reinfection and 200 aseptic patients. The performance of the new criteria was compared to the established MSIS and the prior International Consensus Meeting definitions. Two positive cultures or the presence of a sinus tract were considered as major criteria and diagnostic of PJI. The calculated weights of an elevated serum CRP (>1 mg/dL), D-dimer (>860 ng/mL), and erythrocyte sedimentation rate (>30 mm/h) were 2, 2, and 1 points, respectively. Furthermore, elevated
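
    A minimal sketch of how the weighted preoperative criteria quoted above could be tallied. Only the serum thresholds and point weights explicitly given in this record are used; the full 2018 definition also includes synovial and intraoperative criteria and score cut-offs that are truncated here, so this is not the complete rule.

```python
def pji_preoperative_points(serum_crp_mg_dl, d_dimer_ng_ml, esr_mm_h,
                            two_positive_cultures=False, sinus_tract=False):
    """Major criteria are diagnostic; otherwise tally the quoted serum weights."""
    if two_positive_cultures or sinus_tract:
        return "PJI by major criteria"
    score = 0
    score += 2 if serum_crp_mg_dl > 1.0 else 0    # elevated serum CRP: 2 points
    score += 2 if d_dimer_ng_ml > 860 else 0      # elevated D-dimer: 2 points
    score += 1 if esr_mm_h > 30 else 0            # elevated ESR: 1 point
    return score  # compare against the published cut-offs (not reproduced here)

print(pji_preoperative_points(2.4, 900, 45))                  # -> 5
print(pji_preoperative_points(0.5, 400, 10, sinus_tract=True))
```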

  2. Genetic Algorithms for Case Adaptation

    Energy Technology Data Exchange (ETDEWEB)

    Salem, A M [Computer Science Dept, Faculty of Computer and Information Sciences, Ain Shams University, Cairo (Egypt); Mohamed, A H [Solid State Dept., (NCRRT), Cairo (Egypt)

    2008-07-01

    The case-based reasoning (CBR) paradigm has been widely used to provide computer support for recalling and adapting known cases to novel situations. Case adaptation algorithms generally rely on knowledge bases and heuristics in order to change past solutions to solve new problems. However, case adaptation has always been a difficult process to engineer within the CBR cycle. Its difficulties can be attributed to its domain dependency and computational cost. In an effort to solve this problem, this research explores a general-purpose method that applies a genetic algorithm (GA) to CBR adaptation. It can therefore decrease the computational complexity of the search space in problems having a great dependency on their domain knowledge. The proposed model can be used to perform a variety of design tasks on a broad set of application domains; however, it has been implemented for tablet formulation as the domain of application. The proposed system has improved the performance of CBR design systems.

  3. Genetic Algorithms for Case Adaptation

    International Nuclear Information System (INIS)

    Salem, A.M.; Mohamed, A.H.

    2008-01-01

    The case-based reasoning (CBR) paradigm has been widely used to provide computer support for recalling and adapting known cases to novel situations. Case adaptation algorithms generally rely on knowledge bases and heuristics in order to change past solutions to solve new problems. However, case adaptation has always been a difficult process to engineer within the CBR cycle. Its difficulties can be attributed to its domain dependency and computational cost. In an effort to solve this problem, this research explores a general-purpose method that applies a genetic algorithm (GA) to CBR adaptation. It can therefore decrease the computational complexity of the search space in problems having a great dependency on their domain knowledge. The proposed model can be used to perform a variety of design tasks on a broad set of application domains; however, it has been implemented for tablet formulation as the domain of application. The proposed system has improved the performance of CBR design systems.

  4. Development of a Gestational Age-Specific Case Definition for Neonatal Necrotizing Enterocolitis.

    Science.gov (United States)

    Battersby, Cheryl; Longford, Nick; Costeloe, Kate; Modi, Neena

    2017-03-01

    Necrotizing enterocolitis (NEC) is a major cause of neonatal morbidity and mortality. Preventive and therapeutic research, surveillance, and quality improvement initiatives are hindered by variations in case definitions. To develop a gestational age (GA)-specific case definition for NEC. We conducted a prospective 34-month population study using clinician-recorded findings from the UK National Neonatal Research Database between December 2011 and September 2014 across all 163 neonatal units in England. We split study data into model development and validation data sets and categorized GA into groups (group 1, less than 26 weeks' GA; group 2, 26 to less than 30 weeks' GA; group 3, 30 to less than 37 weeks' GA; group 4, 37 or more weeks' GA). We entered GA, birth weight z score, and clinical and abdominal radiography findings as candidate variables in a logistic regression model, performed model fitting 1000 times, averaged the predictions, and used estimates from the fitted model to develop an ordinal NEC score and cut points to develop a dichotomous case definition based on the highest area under the receiver operating characteristic curves [AUCs] and positive predictive values [PPVs]. Abdominal radiography performed to investigate clinical concerns. Ordinal NEC likelihood score, dichotomous case definition, and GA-specific probability plots. Of the 3866 infants, the mean (SD) birth weight was 2049.1 (1941.7) g and mean (SD) GA was 32 (5) weeks; 2032 of 3663 (55.5%) were male. The total included 2978 infants (77.0%) without NEC and 888 (23.0%) with NEC. Infants with NEC in group 1 were less likely to present with pneumatosis (31.1% vs 47.2%; P = .01) or blood in stool (11.8% vs 29.6%) than infants in the other GA groups. Cut points of the NEC score for the dichotomous case definition were 2 or greater for infants in groups 1 and 2, 3 or greater for infants in group 3, and 4 or greater for infants in group 4. The ordinal NEC score and dichotomous case definition discriminated well between infants with (AUC, 87%) and without (AUC, 80%) NEC. The case
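
    A minimal scikit-learn sketch of the modelling step described above: fit a logistic regression for NEC status from gestational age, birth-weight z score, and clinical/radiographic indicators, then summarize discrimination with the area under the ROC curve. The data are synthetic, and the authors' full candidate-variable set, 1000-fold model averaging, and score cut-points are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
ga_weeks = rng.uniform(23, 41, n)            # gestational age
bw_z = rng.normal(0, 1, n)                   # birth-weight z score
pneumatosis = rng.integers(0, 2, n)          # abdominal radiography finding
blood_in_stool = rng.integers(0, 2, n)       # clinical finding

# Synthetic outcome loosely tied to the predictors, for illustration only.
logit = -2.0 - 0.15 * (ga_weeks - 30) - 0.4 * bw_z + 1.2 * pneumatosis + 0.8 * blood_in_stool
nec = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([ga_weeks, bw_z, pneumatosis, blood_in_stool])
X_dev, X_val, y_dev, y_val = train_test_split(X, nec, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
print("validation AUC:", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))
```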

  5. Validating a pragmatic definition of shock in adult patients presenting to the ED.

    Science.gov (United States)

    Li, Yan-ling; Chan, Cangel Pui-yee; Sin, King-keung; Chan, Stewart S W; Lin, Pei-yi; Chen, Xiao-hui; Smith, Brendan E; Joynt, Gavin M; Graham, Colin A; Rainer, Timothy H

    2014-11-01

    The importance of the early recognition of shock in patients presenting to emergency departments is well recognized, but at present, there is no agreed practical definition for undifferentiated shock. The main aim of this study was to validate an a priori clinical definition of shock against 28-day mortality. This prospective, observational, cross-sectional, single-center study was conducted in Hong Kong, China. Data were collected between July 1, 2012, and January 31, 2013. An a priori definition of shock was designed, whereby patients admitted to the resuscitation room or high dependency area of the emergency department were divided into 1 of 3 groups: no shock, possible shock, and shock. The primary outcome was 28-day mortality. Secondary outcomes were in-hospital mortality or admission to the intensive or coronary care unit. A total of 111 patients (mean age, 67.2 ± 17.1 years; male = 69 [62%]) were recruited, of which 22 were classified as no shock, 54 as possible shock, and 35 as shock. Systolic blood pressure, mean arterial pressure, lactate, and base deficit correlated well with the shock classifications. A pragmatic definition of undifferentiated shock has been proposed and validated in a group of patients presenting to an emergency department in Hong Kong. This definition needs further validation in a larger population and other settings. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. A simple derivation and analysis of a helical cone beam tomographic algorithm for long object imaging via a novel definition of region of interest

    International Nuclear Information System (INIS)

    Hu Jicun; Tam, Kwok; Johnson, Roger H

    2004-01-01

    We derive and analyse a simple algorithm first proposed by Kudo et al (2001 Proc. 2001 Meeting on Fully 3D Image Reconstruction in Radiology and Nuclear Medicine (Pacific Grove, CA) pp 7-10) for long object imaging from truncated helical cone beam data via a novel definition of region of interest (ROI). Our approach is based on the theory of short object imaging by Kudo et al (1998 Phys. Med. Biol. 43 2885-909). One of the key findings in their work is that filtering of the truncated projection can be divided into two parts: one, finite in the axial direction, results from ramp filtering the data within the Tam window. The other, infinite in the z direction, results from unbounded filtering of ray sums over PI lines only. We show that for an ROI defined by PI lines emanating from the initial and final source positions on a helical segment, the boundary data which would otherwise contaminate the reconstruction of the ROI can be completely excluded. This novel definition of the ROI leads to a simple algorithm for long object imaging. The overscan of the algorithm is analytically calculated and it is the same as that of the zero boundary method. The reconstructed ROI can be divided into two regions: one is minimally contaminated by the portion outside the ROI, while the other is reconstructed free of contamination. We validate the algorithm with a 3D Shepp-Logan phantom and a disc phantom

  7. Case-Mix for Performance Management: A Risk Algorithm Based on ICD-10-CM.

    Science.gov (United States)

    Gao, Jian; Moran, Eileen; Almenoff, Peter L

    2018-06-01

    Accurate risk adjustment is the key to a reliable comparison of cost and quality performance among providers and hospitals. However, the existing case-mix algorithms based on age, sex, and diagnoses can only explain up to 50% of the cost variation. More accurate risk adjustment is desired for provider performance assessment and improvement. To develop a case-mix algorithm that hospitals and payers can use to measure and compare cost and quality performance of their providers. All 6,048,895 patients with valid diagnoses and cost recorded in the US Veterans health care system in fiscal year 2016 were included in this study. The dependent variable was total cost at the patient level, and the explanatory variables were age, sex, and comorbidities represented by 762 clinically homogeneous groups, which were created by expanding the 283 categories from Clinical Classifications Software based on ICD-10-CM codes. The split-sample method was used to assess model overfitting and coefficient stability. The predictive power of the algorithms was ascertained by comparing the R², mean absolute percentage error, root mean square error, predictive ratios, and c-statistics. The expansion of the Clinical Classifications Software categories resulted in higher predictive power. The R² reached 0.72 and 0.52 for the transformed and raw scale cost, respectively. The case-mix algorithm we developed based on age, sex, and diagnoses outperformed the existing case-mix models reported in the literature. The method developed in this study can be used by other health systems to produce tailored risk models for their specific purpose.
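
    A minimal sketch of the split-sample performance check described above: fit a cost model on one half of the data and report R², mean absolute percentage error, and RMSE on the held-out half. The data are synthetic; the 762 clinical groups and the VA cost model itself are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
age = rng.uniform(20, 90, n)
sex = rng.integers(0, 2, n)
comorbidity_count = rng.poisson(2, n)        # stand-in for comorbidity groups
# Synthetic transformed (log-scale) cost, for illustration only.
log_cost = 6 + 0.01 * age + 0.1 * sex + 0.35 * comorbidity_count + rng.normal(0, 0.8, n)

X = np.column_stack([age, sex, comorbidity_count])
X_a, X_b, y_a, y_b = train_test_split(X, log_cost, test_size=0.5, random_state=1)

model = LinearRegression().fit(X_a, y_a)
pred = model.predict(X_b)
print("R^2 :", r2_score(y_b, pred))
print("RMSE:", mean_squared_error(y_b, pred) ** 0.5)
print("MAPE:", np.mean(np.abs((y_b - pred) / y_b)) * 100, "%")
```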

  8. Validity and reliability of three definitions of hip osteoarthritis: cross sectional and longitudinal approach.

    Science.gov (United States)

    Reijman, M; Hazes, J M W; Pols, H A P; Bernsen, R M D; Koes, B W; Bierma-Zeinstra, S M A

    2004-11-01

    To compare the reliability and validity in a large open population of three frequently used radiological definitions of hip osteoarthritis (OA): Kellgren and Lawrence grade, minimal joint space (MJS), and Croft grade; and to investigate whether the validity of the three definitions of hip OA is sex dependent. Subjects from the Rotterdam study (aged ≥55 years, n = 3585) were evaluated. The inter-rater reliability was tested in a random set of 148 x rays. The validity was expressed as the ability to identify patients who show clinical symptoms of hip OA (construct validity) and as the ability to predict total hip replacement (THR) at follow up (predictive validity). Inter-rater reliability was similar for the Kellgren and Lawrence grade and MJS (kappa statistics 0.68 and 0.62, respectively) but lower for Croft's grade (kappa statistic, 0.51). The Kellgren and Lawrence grade and MJS showed the strongest associations with clinical symptoms of hip OA. Sex appeared to be an effect modifier for Kellgren and Lawrence and MJS definitions, women showing a stronger association between grading and symptoms than men. However, the sex dependency was attributed to differences in height between women and men. The Kellgren and Lawrence grade showed the highest predictive value for THR at follow up. Based on these findings, Kellgren and Lawrence still appears to be a useful OA definition for epidemiological studies focusing on the presence of hip OA.

  9. Computer-assisted expert case definition in electronic health records.

    Science.gov (United States)

    Walker, Alexander M; Zhou, Xiaofeng; Ananthakrishnan, Ashwin N; Weiss, Lisa S; Shen, Rongjun; Sobel, Rachel E; Bate, Andrew; Reynolds, Robert F

    2016-02-01

    To describe how computer-assisted presentation of case data can lead experts to infer machine-implementable rules for case definition in electronic health records. As an illustration the technique has been applied to obtain a definition of acute liver dysfunction (ALD) in persons with inflammatory bowel disease (IBD). The technique consists of repeatedly sampling new batches of case candidates from an enriched pool of persons meeting presumed minimal inclusion criteria, classifying the candidates by a machine-implementable candidate rule and by a human expert, and then updating the rule so that it captures new distinctions introduced by the expert. Iteration continues until an update results in an acceptably small number of changes to form a final case definition. The technique was applied to structured data and terms derived by natural language processing from text records in 29,336 adults with IBD. Over three rounds the technique led to rules with increasing predictive value, as the experts identified exceptions, and increasing sensitivity, as the experts identified missing inclusion criteria. In the final rule inclusion and exclusion terms were often keyed to an ALD onset date. When compared against clinical review in an independent test round, the derived final case definition had a sensitivity of 92% and a positive predictive value of 79%. An iterative technique of machine-supported expert review can yield a case definition that accommodates available data, incorporates pre-existing medical knowledge, is transparent and is open to continuous improvement. The expert updates to rules may be informative in themselves. In this limited setting, the final case definition for ALD performed better than previous, published attempts using expert definitions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  10. Algorithms for worst-case tolerance optimization

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans; Madsen, Kaj

    1979-01-01

    New algorithms are presented for the solution of optimum tolerance assignment problems. The problems considered are defined mathematically as a worst-case problem (WCP), a fixed tolerance problem (FTP), and a variable tolerance problem (VTP). The basic optimization problem without tolerances is denoted the zero tolerance problem (ZTP). For solution of the WCP we suggest application of interval arithmetic and also alternative methods. For solution of the FTP an algorithm is suggested which is conceptually similar to algorithms previously developed by the authors for the ZTP. Finally, the VTP is solved by a double-iterative algorithm in which the inner iteration is performed by the FTP algorithm. The application of the algorithm is demonstrated by means of relatively simple numerical examples. Basic properties, such as convergence properties, are displayed based on the examples.
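
    A minimal sketch of the interval-arithmetic idea mentioned for the worst-case problem: propagate parameter tolerances through a response expression to obtain an enclosure of its worst-case values. The Interval class and the example expression are illustrative, not the authors' algorithm.

```python
class Interval:
    """Closed interval [lo, hi] with just the arithmetic needed for the example."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))
    def __repr__(self):
        return f"[{self.lo:.4g}, {self.hi:.4g}]"

# Nominal parameter values with +/-5% tolerances (illustrative response expression).
p1 = Interval(100 * 0.95, 100 * 1.05)
p2 = Interval(220 * 0.95, 220 * 1.05)
response = p1 * p2 + p1          # enclosure of all values the response can take
print(response)                  # worst case is bounded by response.lo / response.hi
```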

  11. An administrative data validation study of the accuracy of algorithms for identifying rheumatoid arthritis: the influence of the reference standard on algorithm performance.

    Science.gov (United States)

    Widdifield, Jessica; Bombardier, Claire; Bernatsky, Sasha; Paterson, J Michael; Green, Diane; Young, Jacqueline; Ivers, Noah; Butt, Debra A; Jaakkimainen, R Liisa; Thorne, J Carter; Tu, Karen

    2014-06-23

    We have previously validated administrative data algorithms to identify patients with rheumatoid arthritis (RA) using rheumatology clinic records as the reference standard. Here we reassessed the accuracy of the algorithms using primary care records as the reference standard. We performed a retrospective chart abstraction study using a random sample of 7500 adult patients under the care of 83 family physicians contributing to the Electronic Medical Record Administrative data Linked Database (EMRALD) in Ontario, Canada. Using physician-reported diagnoses as the reference standard, we computed and compared the sensitivity, specificity, and predictive values for over 100 administrative data algorithms for RA case ascertainment. We identified 69 patients with RA for a lifetime RA prevalence of 0.9%. All algorithms had excellent specificity (>97%). However, sensitivity varied (75-90%) among physician billing algorithms. Despite the low prevalence of RA, most algorithms had adequate positive predictive value (PPV; 51-83%). The algorithm of "[1 hospitalization RA diagnosis code] or [3 physician RA diagnosis codes with ≥1 by a specialist over 2 years]" had a sensitivity of 78% (95% CI 69-88), specificity of 100% (95% CI 100-100), PPV of 78% (95% CI 69-88) and NPV of 100% (95% CI 100-100). Administrative data algorithms for detecting RA patients achieved a high degree of accuracy amongst the general population. However, results varied slightly from our previous report, which can be attributed to differences in the reference standards with respect to disease prevalence, spectrum of disease, and type of comparator group.
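
    A minimal sketch of the four accuracy measures computed from the 2x2 cross-tabulation of algorithm status against the physician-reported reference standard (the counts below are made up).

```python
def accuracy_measures(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 validation table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts: algorithm-positive/-negative vs. chart-confirmed RA status.
print(accuracy_measures(tp=54, fp=15, fn=15, tn=7416))
```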

  12. Validity and reliability of three definitions of hip osteoarthritis: cross sectional and longitudinal approach

    OpenAIRE

    Reijman, Max; Hazes, Mieke; Pols, Huib; Bernsen, Roos; Koes, Bart; Bierma-Zeinstra, Sita

    2004-01-01

    OBJECTIVES: To compare the reliability and validity in a large open population of three frequently used radiological definitions of hip osteoarthritis (OA): Kellgren and Lawrence grade, minimal joint space (MJS), and Croft grade; and to investigate whether the validity of the three definitions of hip OA is sex dependent. METHODS: Subjects from the Rotterdam study (aged ≥55 years, n = 3585) were evaluated. The inter-rater reliability was tested in a random set of 148 x rays. ...

  13. Development and validation of algorithms to differentiate ductal carcinoma in situ from invasive breast cancer within administrative claims data.

    Science.gov (United States)

    Hirth, Jacqueline M; Hatch, Sandra S; Lin, Yu-Li; Giordano, Sharon H; Silva, H Colleen; Kuo, Yong-Fang

    2018-04-18

    Overtreatment is a common concern for patients with ductal carcinoma in situ (DCIS), but this entity is difficult to distinguish from invasive breast cancers in administrative claims data sets because DCIS often is coded as invasive breast cancer. Therefore, the authors developed and validated algorithms to select DCIS cases from administrative claims data to enable outcomes research in this type of data. This retrospective cohort using invasive breast cancer and DCIS cases included women aged 66 to 70 years in the 2004 through 2011 Texas Cancer Registry (TCR) data linked to Medicare administrative claims data. TCR records were used as "gold" standards to evaluate the sensitivity, specificity, and positive predictive value (PPV) of 2 algorithms. Women with a biopsy enrolled in Medicare parts A and B at 12 months before and 6 months after their first biopsy without a second incident diagnosis of DCIS or invasive breast cancer within 12 months in the TCR were included. Women in 2010 Medicare data were selected to test the algorithms in a general sample. In the TCR data set, a total of 6907 cases met inclusion criteria, with 1244 DCIS cases. The first algorithm had a sensitivity of 79%, a specificity of 89%, and a PPV of 62%. The second algorithm had a sensitivity of 50%, a specificity of 97%, and a PPV of 77%. Among women in the general sample, the specificity was high and the sensitivity was similar for both algorithms. However, the PPV was approximately 6% to 7% lower. DCIS frequently is miscoded as invasive breast cancer, and thus the proposed algorithms are useful to examine DCIS outcomes using data sets not linked to cancer registries. Cancer 2018. © 2018 American Cancer Society.

  14. From patient care to research: a validation study examining the factors contributing to data quality in a primary care electronic medical record database.

    Science.gov (United States)

    Coleman, Nathan; Halas, Gayle; Peeler, William; Casaclang, Natalie; Williamson, Tyler; Katz, Alan

    2015-02-05

    Electronic Medical Records (EMRs) are increasingly used in the provision of primary care and have been compiled into databases which can be utilized for surveillance, research and informing practice. The primary purpose of these records is for the provision of individual patient care; validation and examination of underlying limitations is crucial for use for research and data quality improvement. This study examines and describes the validity of chronic disease case definition algorithms and factors affecting data quality in a primary care EMR database. A retrospective chart audit of an age stratified random sample was used to validate and examine diagnostic algorithms applied to EMR data from the Manitoba Primary Care Research Network (MaPCReN), part of the Canadian Primary Care Sentinel Surveillance Network (CPCSSN). The presence of diabetes, hypertension, depression, osteoarthritis and chronic obstructive pulmonary disease (COPD) was determined by review of the medical record and compared to algorithm identified cases to identify discrepancies and describe the underlying contributing factors. The algorithm for diabetes had high sensitivity, specificity and positive predictive value (PPV) with all scores being over 90%. Specificities of the algorithms were greater than 90% for all conditions except for hypertension at 79.2%. The largest deficits in algorithm performance included poor PPV for COPD at 36.7% and limited sensitivity for COPD, depression and osteoarthritis at 72.0%, 73.3% and 63.2% respectively. Main sources of discrepancy included missing coding, alternative coding, inappropriate diagnosis detection based on medications used for alternate indications, inappropriate exclusion due to comorbidity and loss of data. Comparison to medical chart review shows that at MaPCReN the CPCSSN case finding algorithms are valid with a few limitations. This study provides the basis for the validated data to be utilized for research and informs users of its

  15. Smoothed analysis: analysis of algorithms beyond worst case

    NARCIS (Netherlands)

    Manthey, Bodo; Röglin, Heiko

    2011-01-01

    Many algorithms perform very well in practice, but have a poor worst-case performance. The reason for this discrepancy is that worst-case analysis is often a way too pessimistic measure for the performance of an algorithm. In order to provide a more realistic performance measure that can explain the

  16. Reliability and validity of four alternative definitions of rapid-cycling bipolar disorder.

    Science.gov (United States)

    Maj, M; Pirozzi, R; Formicola, A M; Tortorella, A

    1999-09-01

    This study tested the reliability and validity of four definitions of rapid cycling. Two trained psychiatrists, using the Schedule for Affective Disorders and Schizophrenia, independently assessed 210 patients with bipolar disorder. They checked whether each patient met four definitions of rapid cycling: one consistent with DSM-IV criteria, one waiving criteria for duration of affective episodes, one waiving such criteria and requiring at least one switch from mania to depression or vice versa during the reference year, and one waiving duration criteria and requiring at least 8 weeks of fully symptomatic affective illness during the reference year. The interrater reliability was calculated by Cohen's kappa statistic. Patients who met each definition according to both psychiatrists were compared to those who did not meet any definition (nonrapid-cycling group) on demographic and clinical variables. All patients were followed up for 1 year. Kappa values were 0.93, 0.73, 0.75, and 0.80, respectively, for the four definitions of rapid cycling. The groups meeting the second and third definitions included significantly more female and bipolar II patients than did the nonrapid-cycling group. Those two groups also had the lowest proportion of patients with a favorable lithium prophylaxis outcome and the highest stability of the rapid-cycling pattern on follow-up. The four groups of rapid-cycling patients did not differ significantly among themselves on any of the assessed variables. The expression "rapid cycling" encompasses a spectrum of conditions. The DSM-IV definition, although quite reliable, covers only part of this spectrum, and the conditions that are excluded are very typical in terms of key validators and are relatively stable over time.
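
    A minimal sketch of the interrater agreement statistic used here: Cohen's kappa between the two raters' yes/no classifications for one rapid-cycling definition. The ratings below are made up; scikit-learn's cohen_kappa_score computes the same statistic.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters giving categorical ratings to the same subjects."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    freq1, freq2 = Counter(rater1), Counter(rater2)
    expected = sum(freq1[c] * freq2[c] for c in set(rater1) | set(rater2)) / n**2
    return (observed - expected) / (1 - expected)

r1 = ["RC", "RC", "no", "no", "RC", "no", "no", "RC", "no", "no"]
r2 = ["RC", "RC", "no", "RC", "RC", "no", "no", "RC", "no", "no"]
print(round(cohens_kappa(r1, r2), 2))  # 0.8 for these illustrative ratings
```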

  17. Validating the LASSO algorithm by unmixing spectral signatures in multicolor phantoms

    Science.gov (United States)

    Samarov, Daniel V.; Clarke, Matthew; Lee, Ji Yoon; Allen, David; Litorja, Maritoni; Hwang, Jeeseong

    2012-03-01

    As hyperspectral imaging (HSI) sees increased implementation into the biological and medical fields, it becomes increasingly important that the algorithms being used to analyze the corresponding output be validated. While certainly important under any circumstance, as this technology begins to see a transition from benchtop to bedside, ensuring that the measurements being given to medical professionals are accurate and reproducible is critical. In order to address these issues, work has been done in generating a collection of datasets which could act as a test bed for algorithm validation. Using a microarray spot printer, a collection of three food color dyes, acid red 1 (AR), brilliant blue R (BBR) and erioglaucine (EG), are mixed together at different concentrations in varying proportions at different locations on a microarray chip. With the concentration and mixture proportions known at each location, using HSI an algorithm should in principle, based on estimates of abundances, be able to determine the concentrations and proportions of each dye at each location on the chip. These types of data are particularly important in the context of medical measurements as the resulting estimated abundances will be used to make critical decisions which can have a serious impact on an individual's health. In this paper we present a novel algorithm for processing and analyzing HSI data based on the LASSO algorithm (similar to "basis pursuit"). The LASSO is a statistical method for simultaneously performing model estimation and variable selection. In the context of estimating abundances in an HSI scene, these so-called "sparse" representations provided by the LASSO are appropriate, as not every pixel will be expected to contain every endmember. The algorithm we present takes the general framework of the LASSO algorithm a step further and incorporates the rich spatial information which is available in HSI to further improve the estimates of abundance. We show our algorithm's improvement
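
    A minimal scikit-learn sketch of the plain (non-spatial) LASSO step underlying this approach: estimate non-negative, sparse abundances of known endmember spectra from a measured pixel spectrum. The spectra are synthetic and the paper's spatially regularized extension is not shown.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_bands, n_endmembers = 60, 3
E = np.abs(rng.normal(1.0, 0.3, size=(n_bands, n_endmembers)))  # endmember spectra (columns)

true_abundance = np.array([0.7, 0.0, 0.3])                      # sparse mixture
pixel = E @ true_abundance + rng.normal(0, 0.01, n_bands)       # measured spectrum

lasso = Lasso(alpha=0.001, positive=True, max_iter=10000)
lasso.fit(E, pixel)
print("estimated abundances:", np.round(lasso.coef_, 3))
```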

  18. A Case Definition for Children with Myalgic Encephalomyelitis/Chronic Fatigue Syndrome

    OpenAIRE

    Leonard A. Jason; Nicole Porter; Elizabeth Shelleby; David S. Bell; Charles W. Lapp; Kathy Rowe; Kenny De Meirleir

    2008-01-01

    The case definition for chronic fatigue syndrome was developed for adults (Fukuda et al. 1994), and this case definition may not be appropriate for use with children and adolescents. The lack of application of a consistent pediatric definition for this illness and the lack of a reliable instrument to assess it might lead to studies which lack sensitivity and specificity. In this article, a case definition is presented that has been endorsed by the International Association of ME/CFS.

  19. Using virtual environment for autonomous vehicle algorithm validation

    Science.gov (United States)

    Levinskis, Aleksandrs

    2018-04-01

    This paper describes the possible use of a modern game engine for validating and proving the concept of an algorithm design. As a result, a simple visual odometry algorithm is provided to show the concept and go over all workflow stages. Some of the stages involve using a Kalman filter in such a way that it estimates optical flow velocity as well as the position of a moving camera located on the vehicle body. In particular, the Unreal Engine 4 game engine is used for generating optical flow patterns and the ground truth path. For optical flow determination, the Horn and Schunck method is applied. As a result, it is shown that such a method can estimate the position of the camera attached to the vehicle with a certain displacement error with respect to ground truth, depending on the optical flow pattern. The displacement RMS error is calculated between the estimated and actual positions.
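
    A minimal NumPy sketch of the Horn and Schunck iteration mentioned above, estimating a dense flow field between two grayscale frames. The regularization weight and iteration count are illustrative, and the paper's Unreal Engine data generation and Kalman filtering are not shown.

```python
import numpy as np
from scipy.ndimage import correlate

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Dense optical flow (u, v) between two grayscale frames via Horn-Schunck."""
    im1, im2 = im1.astype(float), im2.astype(float)
    # Spatial and temporal derivative estimators (Horn-Schunck finite differences).
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    kt = np.ones((2, 2)) * 0.25
    Ix = correlate(im1, kx) + correlate(im2, kx)
    Iy = correlate(im1, ky) + correlate(im2, ky)
    It = correlate(im2, kt) - correlate(im1, kt)

    # Averaging kernel for the neighbourhood mean of the flow field.
    avg = np.array([[1/12, 1/6, 1/12], [1/6, 0, 1/6], [1/12, 1/6, 1/12]])
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_bar = correlate(u, avg)
        v_bar = correlate(v, avg)
        update = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_bar - Ix * update
        v = v_bar - Iy * update
    return u, v

# Toy frames: a bright square shifted one pixel to the right between frames.
f1 = np.zeros((32, 32)); f1[10:20, 10:20] = 1.0
f2 = np.zeros((32, 32)); f2[10:20, 11:21] = 1.0
u, v = horn_schunck(f1, f2)
print("mean |u| near square:", np.abs(u[10:20, 10:21]).mean(),
      "| background:", np.abs(u[:5, :5]).mean())
```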

  20. Validation and Intercomparison of Ocean Color Algorithms for Estimating Particulate Organic Carbon in the Oceans

    Directory of Open Access Journals (Sweden)

    Hayley Evers-King

    2017-08-01

    Full Text Available Particulate Organic Carbon (POC) plays a vital role in the ocean carbon cycle. Though relatively small compared with other carbon pools, the POC pool is responsible for large fluxes and is linked to many important ocean biogeochemical processes. The satellite ocean-color signal is influenced by particle composition, size, and concentration and provides a way to observe variability in the POC pool at a range of temporal and spatial scales. To provide accurate estimates of POC concentration from satellite ocean color data requires algorithms that are well validated, with uncertainties characterized. Here, a number of algorithms to derive POC using different optical variables are applied to merged satellite ocean color data provided by the Ocean Color Climate Change Initiative (OC-CCI) and validated against the largest database of in situ POC measurements currently available. The results of this validation exercise indicate satisfactory levels of performance from several algorithms (the highest performance was observed for the algorithms of Loisel et al., 2002 and Stramski et al., 2008) and uncertainties that are within the requirements of the user community. Estimates of the standing stock of POC can be made by applying these algorithms, and yield an estimated mixed-layer integrated global stock of POC between 0.77 and 1.3 Pg C. Performance of the algorithms varies regionally, suggesting that blending of region-specific algorithms may provide the best way forward for generating global POC products.

  1. Intrusion-Aware Alert Validation Algorithm for Cooperative Distributed Intrusion Detection Schemes of Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Young-Jae Song

    2009-07-01

    Full Text Available Existing anomaly and intrusion detection schemes of wireless sensor networks have mainly focused on the detection of intrusions. Once an intrusion is detected, an alert or claim will be generated. However, any unidentified malicious nodes in the network could send faulty anomaly and intrusion claims about the legitimate nodes to the other nodes. Verifying the validity of such claims is a critical and challenging issue that is not considered in the existing cooperative-based distributed anomaly and intrusion detection schemes of wireless sensor networks. In this paper, we propose a validation algorithm that addresses this problem. This algorithm utilizes the concept of intrusion-aware reliability that helps to provide adequate reliability at a modest communication cost. In this paper, we also provide a security resiliency analysis of the proposed intrusion-aware alert validation algorithm.

  2. Positive predictive value of a case definition for diabetes mellitus using automated administrative health data in children and youth exposed to antipsychotic drugs or control medications: a Tennessee Medicaid study.

    Science.gov (United States)

    Bobo, William V; Cooper, William O; Stein, C Michael; Olfson, Mark; Mounsey, Jackie; Daugherty, James; Ray, Wayne A

    2012-08-24

    We developed and validated an automated database case definition for diabetes in children and youth to facilitate pharmacoepidemiologic investigations of medications and the risk of diabetes. The present study was part of an in-progress retrospective cohort study of antipsychotics and diabetes in Tennessee Medicaid enrollees aged 6-24 years. Diabetes was identified from diabetes-related medical care encounters: hospitalizations, outpatient visits, and filled prescriptions. The definition required either a primary inpatient diagnosis or at least two other encounters of different types, most commonly an outpatient diagnosis with a prescription. Type 1 diabetes was defined by insulin prescriptions with at most one oral hypoglycemic prescription; other cases were considered type 2 diabetes. The definition was validated for cohort members in the 15 county region geographically proximate to the investigators. Medical records were reviewed and adjudicated for cases that met the automated database definition as well as for a sample of persons with other diabetes-related medical care encounters. The study included 64 cases that met the automated database definition. Records were adjudicated for 46 (71.9%), of which 41 (89.1%) met clinical criteria for newly diagnosed diabetes. The positive predictive value for type 1 diabetes was 80.0%. For type 2 and unspecified diabetes combined, the positive predictive value was 83.9%. The estimated sensitivity of the definition, based on adjudication for a sample of 30 cases not meeting the automated database definition, was 64.8%. These results suggest that the automated database case definition for diabetes may be useful for pharmacoepidemiologic studies of medications and diabetes.

  3. Validity and reliability of three definitions of hip osteoarthritis: Cross sectional and longitudinal approach

    NARCIS (Netherlands)

    M. Reijman (Max); J.M.W. Hazes (Mieke); H.A.P. Pols (Huib); R.M.D. Bernsen (Roos); B.W. Koes (Bart); S.M. Bierma-Zeinstra (Sita)

    2004-01-01

    Objectives: To compare the reliability and validity in a large open population of three frequently used radiological definitions of hip osteoarthritis (OA): Kellgren and Lawrence grade, minimal joint space (MJS), and Croft grade; and to investigate whether the validity of the three

  4. Wind turbines and health: An examination of a proposed case definition.

    Science.gov (United States)

    McCunney, Robert J; Morfeld, Peter; Colby, W David; Mundt, Kenneth A

    2015-01-01

    Renewable energy demands have increased the need for new wind farms. In turn, concerns have been raised about potential adverse health effects on nearby residents. A case definition has been proposed to diagnose "Adverse Health Effects in the Environs of Industrial Wind Turbines" (AHE/IWT); initially in 2011 and then with an update in 2014. The authors invited commentary and in turn, we assessed its scientific merits by quantitatively evaluating its proposed application. We used binomial coefficients to quantitatively assess the potential of obtaining a diagnosis of AHE/IWT. We also reviewed the methodology and process of the development of the case definition by contrasting it with guidelines on case definition criteria of the USA Institute of Medicine. The case definition allows at least 3,264 and up to 400,000 possibilities for meeting second- and third-order criteria, once the limited first-order criteria are met. IOM guidelines for clinical case definitions were not followed. The case definition has virtually no specificity and lacks scientific support from peer-reviewed literature. If applied as proposed, its application will lead to substantial potential for false-positive assessments and missed diagnoses. Virtually any new illness that develops or any prevalent illness that worsens after the installation of wind turbines within 10 km of a residence could be considered AHE/IWT if the patient feels better away from home. The use of this case definition in the absence of a thorough medical evaluation with appropriate diagnostic studies poses risks to patients in that treatable disorders would be overlooked. The case definition has significant potential to mislead patients and its use cannot be recommended for application in any health-care or decision-making setting.
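
    A minimal sketch of the counting argument used above: with math.comb, count how many distinct symptom combinations satisfy an "at least k of n" second- or third-order criterion. The criterion sizes below are illustrative, not the exact ones in the proposed case definition.

```python
from math import comb

def ways_to_meet(n_criteria, k_required):
    """Number of distinct subsets of n criteria that satisfy 'at least k of n'."""
    return sum(comb(n_criteria, k) for k in range(k_required, n_criteria + 1))

# e.g. 'at least 3 of 12' second-order symptoms combined with 'at least 2 of 8'
# third-order symptoms (illustrative sizes) multiply into thousands of combinations.
second_order = ways_to_meet(12, 3)
third_order = ways_to_meet(8, 2)
print(second_order, third_order, second_order * third_order)
```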

  5. Wind turbines and health: An examination of a proposed case definition

    Directory of Open Access Journals (Sweden)

    Robert J McCunney

    2015-01-01

    Full Text Available Renewable energy demands have increased the need for new wind farms. In turn, concerns have been raised about potential adverse health effects on nearby residents. A case definition has been proposed to diagnose "Adverse Health Effects in the Environs of Industrial Wind Turbines" (AHE/IWT); initially in 2011 and then with an update in 2014. The authors invited commentary and in turn, we assessed its scientific merits by quantitatively evaluating its proposed application. We used binomial coefficients to quantitatively assess the potential of obtaining a diagnosis of AHE/IWT. We also reviewed the methodology and process of the development of the case definition by contrasting it with guidelines on case definition criteria of the USA Institute of Medicine. The case definition allows at least 3,264 and up to 400,000 possibilities for meeting second- and third-order criteria, once the limited first-order criteria are met. IOM guidelines for clinical case definitions were not followed. The case definition has virtually no specificity and lacks scientific support from peer-reviewed literature. If applied as proposed, its application will lead to substantial potential for false-positive assessments and missed diagnoses. Virtually any new illness that develops or any prevalent illness that worsens after the installation of wind turbines within 10 km of a residence could be considered AHE/IWT if the patient feels better away from home. The use of this case definition in the absence of a thorough medical evaluation with appropriate diagnostic studies poses risks to patients in that treatable disorders would be overlooked. The case definition has significant potential to mislead patients and its use cannot be recommended for application in any health-care or decision-making setting.

  6. Wind turbines and health: An examination of a proposed case definition

    Science.gov (United States)

    McCunney, Robert J.; Morfeld, Peter; Colby, W. David; Mundt, Kenneth A.

    2015-01-01

    Renewable energy demands have increased the need for new wind farms. In turn, concerns have been raised about potential adverse health effects on nearby residents. A case definition has been proposed to diagnose “Adverse Health Effects in the Environs of Industrial Wind Turbines” (AHE/IWT); initially in 2011 and then with an update in 2014. The authors invited commentary and in turn, we assessed its scientific merits by quantitatively evaluating its proposed application. We used binomial coefficients to quantitatively assess the potential of obtaining a diagnosis of AHE/IWT. We also reviewed the methodology and process of the development of the case definition by contrasting it with guidelines on case definition criteria of the USA Institute of Medicine. The case definition allows at least 3,264 and up to 400,000 possibilities for meeting second- and third-order criteria, once the limited first-order criteria are met. IOM guidelines for clinical case definitions were not followed. The case definition has virtually no specificity and lacks scientific support from peer-reviewed literature. If applied as proposed, its application will lead to substantial potential for false-positive assessments and missed diagnoses. Virtually any new illness that develops or any prevalent illness that worsens after the installation of wind turbines within 10 km of a residence could be considered AHE/IWT if the patient feels better away from home. The use of this case definition in the absence of a thorough medical evaluation with appropriate diagnostic studies poses risks to patients in that treatable disorders would be overlooked. The case definition has significant potential to mislead patients and its use cannot be recommended for application in any health-care or decision-making setting. PMID:26168947
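    The counting argument above rests on binomial coefficients: if a criterion tier is satisfied by any subset of at least k out of n listed symptoms, the number of qualifying symptom combinations is a sum of binomial coefficients. A minimal sketch of that calculation is shown below; the criterion counts used in the example are hypothetical, since the record does not list the actual numbers of second- and third-order symptoms.

    ```python
    from math import comb

    def ways_to_meet(n_criteria, min_required):
        """Number of distinct symptom subsets satisfying an 'at least min_required
        of n_criteria' rule, as a sum of binomial coefficients."""
        return sum(comb(n_criteria, k) for k in range(min_required, n_criteria + 1))

    # Hypothetical tiers: at least 3 of 10 second-order symptoms and at least 1 of 5
    # third-order symptoms; the product is the number of qualifying combinations.
    second_order = ways_to_meet(10, 3)   # 968
    third_order = ways_to_meet(5, 1)     # 31
    print(second_order * third_order)
    ```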

  7. Case definition and classification of leukodystrophies and leukoencephalopathies

    NARCIS (Netherlands)

    Vanderver, Adeline; Prust, Morgan; Tonduti, Davide; Mochel, Fanny; Hussey, Heather M.; Helman, Guy; Garbern, James; Eichler, Florian; Labauge, Pierre; Aubourg, Patrick; Rodriguez, Diana; Patterson, Marc C.; van Hove, Johan L. K.; Schmidt, Johanna; Wolf, Nicole I.; Boespflug-Tanguy, Odile; Schiffmann, Raphael; van der Knaap, Marjo S.

    2015-01-01

    An approved definition of the term leukodystrophy does not currently exist. The lack of a precise case definition hampers efforts to study the epidemiology and the relevance of genetic white matter disorders to public health. Thirteen experts at multiple institutions participated in iterative

  8. Case definition and classification of leukodystrophies and leukoencephalopathies

    NARCIS (Netherlands)

    Vanderver, A.; Prust, M.; Tonduti, D.; Mochel, F.; Hussey, H.M.; Helman, G.; Garbern, J.; Eichler, F.; Labauge, P.; Aubourg, P.; Rodriguez, D.; Patterson, M.C.; van Hove, J.LK.; Schmidt, J; Wolf, N.I.; Boespflug-Tanguy, O.; Schiffmann, R.; van der Knaap, M.S.

    2015-01-01

    Objective: An approved definition of the term leukodystrophy does not currently exist. The lack of a precise case definition hampers efforts to study the epidemiology and the relevance of genetic white matter disorders to public health. Method: Thirteen experts at multiple institutions participated

  9. Influenza outbreak during Sydney World Youth Day 2008: the utility of laboratory testing and case definitions on mass gathering outbreak containment.

    Directory of Open Access Journals (Sweden)

    Sebastiaan J van Hal

    Full Text Available BACKGROUND: Influenza causes annual epidemics and often results in extensive outbreaks in closed communities. To minimize transmission, a range of interventions have been suggested. For these to be effective, an accurate and timely diagnosis of influenza is required. This is confirmed by a positive laboratory test result in an individual whose symptoms are consistent with a predefined clinical case definition. However, the utility of these clinical case definitions and laboratory testing in mass gathering outbreaks remains unknown. METHODS AND RESULTS: An influenza outbreak was identified during World Youth Day 2008 in Sydney. From the data collected on pilgrims presenting to a single clinic, a Markov model was developed and validated against the actual epidemic curve. Simulations were performed to examine the utility of different clinical case definitions and laboratory testing strategies for containment of influenza outbreaks. Clinical case definitions were found to have the greatest impact on averting further cases, with no added benefit when combined with any laboratory test. Although nucleic acid testing (NAT) demonstrated higher utility than indirect immunofluorescence antigen or on-site point-of-care testing, this effect was lost when laboratory NAT turnaround times were included. The main benefit of laboratory confirmation was limited to identification of true influenza cases amenable to interventions such as antiviral therapy. CONCLUSIONS: Continuous re-evaluation of case definitions and laboratory testing strategies is essential for effective management of influenza outbreaks during mass gatherings.

  10. Stakeholder Perceptions of Cyberbullying Cases: Application of the Uniform Definition of Bullying.

    Science.gov (United States)

    Moreno, Megan A; Suthamjariya, Nina; Selkie, Ellen

    2018-04-01

    The Uniform Definition of Bullying was developed to address bullying and cyberbullying, and to promote consistency in measurement and policy. The purpose of this study was to understand community stakeholder perceptions of typical cyberbullying cases, and to evaluate how these case descriptions align with the Uniform Definition. In this qualitative case analysis we recruited stakeholders commonly involved in cyberbullying. We used purposeful sampling to identify and recruit adolescents and young adults, parents, and professionals representing education and health care. Participants were asked to write a typical case of cyberbullying and descriptors in the context of a group discussion. We applied content analysis to case excerpts using inductive and deductive approaches, and chi-squared tests for mixed methods analyses. A total of 68 participants contributed; participants included 73% adults and 27% adolescents and young adults. A total of 650 excerpts were coded from participants' example cases and 362 (55.6%) were consistent with components of the Uniform Definition. The most frequently mentioned component of the Uniform Definition was Aggressive Behavior (n = 218 excerpts), whereas Repeated was mentioned infrequently (n = 19). Most participants included two to three components of the Uniform Definition within an example case; none of the example cases included all components of the Uniform Definition. We found that most participants described cyberbullying cases using few components of the Uniform Definition. Findings can be applied toward considering refinement of the Uniform Definition to ensure stakeholders find it applicable to cyberbullying. Copyright © 2017 The Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  11. Algorithm for definition of stones components at kidney-stones illness using two-energetic digital roentgen-graphic method

    International Nuclear Information System (INIS)

    Nedavnij, O.I.; Osipov, S.P.

    2001-01-01

    The paper presents an algorithm for determining stone composition in kidney-stone disease using two-energy digital radiography. The values of the information parameter p were calculated for the main types of stones within the 40-150 keV energy range. The dependence of p on energy was shown to be weak (maximum deviation 3.5%), while the value of p for different chemical compositions of kidney stones ranged from 15% (calcium phosphate and calcium oxalate) up to 70% (calcium lactate and calcium oxalate). These results support the conclusion that the material forming the core of kidney stones can be identified using two-energy digital radiography. The paper includes recommendations on the selection of optimal energy values [ru

  12. Positive predictive value of a case definition for diabetes mellitus using automated administrative health data in children and youth exposed to antipsychotic drugs or control medications: a Tennessee Medicaid study

    Directory of Open Access Journals (Sweden)

    Bobo William V

    2012-08-01

    Full Text Available Abstract Background We developed and validated an automated database case definition for diabetes in children and youth to facilitate pharmacoepidemiologic investigations of medications and the risk of diabetes. Methods The present study was part of an in-progress retrospective cohort study of antipsychotics and diabetes in Tennessee Medicaid enrollees aged 6–24 years. Diabetes was identified from diabetes-related medical care encounters: hospitalizations, outpatient visits, and filled prescriptions. The definition required either a primary inpatient diagnosis or at least two other encounters of different types, most commonly an outpatient diagnosis with a prescription. Type 1 diabetes was defined by insulin prescriptions with at most one oral hypoglycemic prescription; other cases were considered type 2 diabetes. The definition was validated for cohort members in the 15 county region geographically proximate to the investigators. Medical records were reviewed and adjudicated for cases that met the automated database definition as well as for a sample of persons with other diabetes-related medical care encounters. Results The study included 64 cases that met the automated database definition. Records were adjudicated for 46 (71.9%), of which 41 (89.1%) met clinical criteria for newly diagnosed diabetes. The positive predictive value for type 1 diabetes was 80.0%. For type 2 and unspecified diabetes combined, the positive predictive value was 83.9%. The estimated sensitivity of the definition, based on adjudication for a sample of 30 cases not meeting the automated database definition, was 64.8%. Conclusion These results suggest that the automated database case definition for diabetes may be useful for pharmacoepidemiologic studies of medications and diabetes.

  13. Positive predictive value of a case definition for diabetes mellitus using automated administrative health data in children and youth exposed to antipsychotic drugs or control medications: a Tennessee Medicaid study

    Science.gov (United States)

    2012-01-01

    Background We developed and validated an automated database case definition for diabetes in children and youth to facilitate pharmacoepidemiologic investigations of medications and the risk of diabetes. Methods The present study was part of an in-progress retrospective cohort study of antipsychotics and diabetes in Tennessee Medicaid enrollees aged 6–24 years. Diabetes was identified from diabetes-related medical care encounters: hospitalizations, outpatient visits, and filled prescriptions. The definition required either a primary inpatient diagnosis or at least two other encounters of different types, most commonly an outpatient diagnosis with a prescription. Type 1 diabetes was defined by insulin prescriptions with at most one oral hypoglycemic prescription; other cases were considered type 2 diabetes. The definition was validated for cohort members in the 15 county region geographically proximate to the investigators. Medical records were reviewed and adjudicated for cases that met the automated database definition as well as for a sample of persons with other diabetes-related medical care encounters. Results The study included 64 cases that met the automated database definition. Records were adjudicated for 46 (71.9%), of which 41 (89.1%) met clinical criteria for newly diagnosed diabetes. The positive predictive value for type 1 diabetes was 80.0%. For type 2 and unspecified diabetes combined, the positive predictive value was 83.9%. The estimated sensitivity of the definition, based on adjudication for a sample of 30 cases not meeting the automated database definition, was 64.8%. Conclusion These results suggest that the automated database case definition for diabetes may be useful for pharmacoepidemiologic studies of medications and diabetes. PMID:22920280
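    The validation measures reported here (and throughout these records) follow directly from the two-by-two table of the case definition against the gold standard. A minimal sketch of that calculation, with illustrative cell counts rather than the study's actual numbers:

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        """Standard validation measures for a case-definition algorithm compared
        against a gold standard such as adjudicated medical records."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # Illustrative cell counts only (not the study's actual numbers).
    print(diagnostic_metrics(tp=40, fp=10, fn=20, tn=130))
    ```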

  14. Seismotectonic models and CN algorithm: The case of Italy

    International Nuclear Information System (INIS)

    Costa, G.; Orozova Stanishkova, I.; Panza, G.F.; Rotwain, I.M.

    1995-07-01

    The CN algorithm is used here both for intermediate-term earthquake prediction and to validate the seismotectonic model of the Italian territory. Based on the results of the CN analysis and on the seismotectonic model, three areas are defined: one for Northern Italy, one for Central Italy and one for Southern Italy. Two transition areas between the three main areas are also delineated. The earthquakes which occurred in these two transition areas contribute to the precursor phenomena identified by the CN algorithm in each main area. (author). 26 refs, 6 figs, 2 tabs

  15. Concordance between European and US case definitions of healthcare-associated infections

    Science.gov (United States)

    2012-01-01

    Background Surveillance of healthcare-associated infections (HAI) is a valuable measure to decrease infection rates. Across Europe, inter-country comparisons of HAI rates seem limited because some countries use US definitions from the US Centers for Disease Control and Prevention (CDC/NHSN) while other countries use European definitions from the Hospitals in Europe Link for Infection Control through Surveillance (HELICS/IPSE) project. In this study, we analyzed the concordance between US and European definitions of HAI. Methods An international working group of experts from seven European countries was set up to identify differences between US and European definitions and then conduct surveillance using both sets of definitions during a three-month period (March 1st -May 31st, 2010). Concordance between case definitions was estimated with Cohen’s kappa statistic (κ). Results Differences in HAI definitions were found for bloodstream infection (BSI), pneumonia (PN), urinary tract infection (UTI) and the two key terms “intensive care unit (ICU)-acquired infection” and “mechanical ventilation”. Concordance was analyzed for these definitions and key terms with the exception of UTI. Surveillance was performed in 47 ICUs and 6,506 patients were assessed. One hundred and eighty PN and 123 BSI cases were identified. When all PN cases were considered, concordance for PN was κ = 0.99 [CI 95%: 0.98-1.00]. When PN cases were divided into subgroups, concordance was κ = 0.90 (CI 95%: 0.86-0.94) for clinically defined PN and κ = 0.72 (CI 95%: 0.63-0.82) for microbiologically defined PN. Concordance for BSI was κ = 0.73 [CI 95%: 0.66-0.80]. However, BSI cases secondary to another infection site (42% of all BSI cases) are excluded when using US definitions and concordance for BSI was κ = 1.00 when only primary BSI cases, i.e. Europe-defined BSI with ”catheter” or “unknown” origin and US-defined laboratory-confirmed BSI (LCBI), were

  16. Concordance between European and US case definitions of healthcare-associated infections

    Directory of Open Access Journals (Sweden)

    Hansen Sonja

    2012-08-01

    Full Text Available Abstract Background Surveillance of healthcare-associated infections (HAI) is a valuable measure to decrease infection rates. Across Europe, inter-country comparisons of HAI rates seem limited because some countries use US definitions from the US Centers for Disease Control and Prevention (CDC/NHSN) while other countries use European definitions from the Hospitals in Europe Link for Infection Control through Surveillance (HELICS/IPSE) project. In this study, we analyzed the concordance between US and European definitions of HAI. Methods An international working group of experts from seven European countries was set up to identify differences between US and European definitions and then conduct surveillance using both sets of definitions during a three-month period (March 1st - May 31st, 2010). Concordance between case definitions was estimated with Cohen’s kappa statistic (κ). Results Differences in HAI definitions were found for bloodstream infection (BSI), pneumonia (PN), urinary tract infection (UTI) and the two key terms “intensive care unit (ICU)-acquired infection” and “mechanical ventilation”. Concordance was analyzed for these definitions and key terms with the exception of UTI. Surveillance was performed in 47 ICUs and 6,506 patients were assessed. One hundred and eighty PN and 123 BSI cases were identified. When all PN cases were considered, concordance for PN was κ = 0.99 [CI 95%: 0.98-1.00]. When PN cases were divided into subgroups, concordance was κ = 0.90 (CI 95%: 0.86-0.94) for clinically defined PN and κ = 0.72 (CI 95%: 0.63-0.82) for microbiologically defined PN. Concordance for BSI was κ = 0.73 [CI 95%: 0.66-0.80]. However, BSI cases secondary to another infection site (42% of all BSI cases) are excluded when using US definitions and concordance for BSI was κ = 1.00 when only primary BSI cases, i.e. Europe-defined BSI with “catheter” or “unknown” origin and US-defined laboratory-confirmed BSI
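    Concordance in both versions of this record is summarised with Cohen's kappa, which compares the observed agreement between the two sets of definitions with the agreement expected by chance. A minimal sketch, using invented patient classifications rather than the study data:

    ```python
    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        """Cohen's kappa for two classification systems applied to the same cases."""
        n = len(labels_a)
        observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        expected = sum(freq_a[c] * freq_b[c] for c in set(labels_a) | set(labels_b)) / n ** 2
        return (observed - expected) / (1 - expected)

    # Invented example: US versus European classification of ten patients.
    us = ["PN", "PN", "none", "BSI", "none", "PN", "none", "BSI", "PN", "none"]
    eu = ["PN", "PN", "none", "BSI", "PN", "PN", "none", "none", "PN", "none"]
    print(round(cohens_kappa(us, eu), 2))
    ```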

  17. Validation of asthma recording in electronic health records: protocol for a systematic review.

    Science.gov (United States)

    Nissen, Francis; Quint, Jennifer K; Wilkinson, Samantha; Mullerova, Hana; Smeeth, Liam; Douglas, Ian J

    2017-05-29

    Asthma is a common, heterogeneous disease with significant morbidity and mortality worldwide. It can be difficult to define in epidemiological studies using electronic health records as the diagnosis is based on non-specific respiratory symptoms and spirometry, neither of which are routinely registered. Electronic health records can nonetheless be valuable to study the epidemiology, management, healthcare use and control of asthma. For health databases to be useful sources of information, asthma diagnoses should ideally be validated. The primary objectives are to provide an overview of the methods used to validate asthma diagnoses in electronic health records and summarise the results of the validation studies. EMBASE and MEDLINE will be systematically searched using appropriate search terms. The searches will cover all studies in these databases up to October 2016 with no start date and will yield studies that have validated algorithms or codes for the diagnosis of asthma in electronic health records. At least one test validation measure (sensitivity, specificity, positive predictive value, negative predictive value or other) is necessary for inclusion. In addition, we require the validated algorithms to be compared with an external gold standard, such as a manual review, a questionnaire or an independent second database. We will summarise key data including author, year of publication, country, time period, date, data source, population, case characteristics, clinical events, algorithms, gold standard and validation statistics in a uniform table. This study is a synthesis of previously published studies and, therefore, no ethical approval is required. The results will be submitted to a peer-reviewed journal for publication. Results from this systematic review can be used to study outcome research on asthma and can be used to identify case definitions for asthma. CRD42016041798.

  18. Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight

    Science.gov (United States)

    Suorsa, Raymond; Sridhar, Banavar

    1991-01-01

    A validation facility in use at the NASA Ames Research Center is described which is aimed at testing vision-based obstacle detection and range estimation algorithms suitable for low-level helicopter flight. The facility is capable of processing hundreds of frames of calibrated multicamera 6 degree-of-freedom motion image sequences, generating calibrated multicamera laboratory images using convenient window-based software, and viewing range estimation results from different algorithms along with truth data using powerful window-based visualization software.

  19. Validation of a current definition of early allograft dysfunction in liver transplant recipients and analysis of risk factors.

    Science.gov (United States)

    Olthoff, Kim M; Kulik, Laura; Samstein, Benjamin; Kaminski, Mary; Abecassis, Michael; Emond, Jean; Shaked, Abraham; Christie, Jason D

    2010-08-01

    Translational studies in liver transplantation often require an endpoint of graft function or dysfunction beyond graft loss. Prior definitions of early allograft dysfunction (EAD) vary, and none have been validated in a large multicenter population in the Model for End-Stage Liver Disease (MELD) era. We examined an updated definition of EAD to validate previously used criteria, and correlated this definition with graft and patient outcome. We performed a cohort study of 300 deceased donor liver transplants at 3 U.S. programs. EAD was defined as the presence of one or more of the following previously defined postoperative laboratory analyses reflective of liver injury and function: bilirubin ≥10 mg/dL on day 7, international normalized ratio ≥1.6 on day 7, and alanine or aspartate aminotransferases >2000 IU/L within the first 7 days. To assess predictive validity, the EAD definition was tested for association with graft and patient survival. Risk factors for EAD were assessed using multivariable logistic regression. Overall incidence of EAD was 23.2%. Most grafts met the definition with increased bilirubin at day 7 or high levels of aminotransferases. Of recipients meeting the EAD definition, 18.8% died, as opposed to 1.8% of recipients without EAD (relative risk = 10.7 [95% confidence interval: 3.6, 31.9]). A definition of EAD using objective posttransplant criteria identified a 23% incidence, and was highly associated with graft loss and patient mortality, validating previously published criteria. This definition can be used as an endpoint in translational studies aiming to identify mechanistic pathways leading to a subgroup of liver grafts with clinical expression of suboptimal function. (c) 2010 AASLD.
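    Because the EAD criteria are stated explicitly, they can be expressed directly as a small function. The sketch below mirrors the definition quoted above; the parameter names and example values are illustrative, and units are assumed to be mg/dL for bilirubin and IU/L for aminotransferases.

    ```python
    def early_allograft_dysfunction(bilirubin_day7, inr_day7, peak_alt_or_ast_week1):
        """EAD per the definition above: bilirubin >= 10 mg/dL on day 7, or INR >= 1.6
        on day 7, or ALT/AST > 2000 IU/L within the first 7 days."""
        return (bilirubin_day7 >= 10.0
                or inr_day7 >= 1.6
                or peak_alt_or_ast_week1 > 2000.0)

    # Illustrative recipient: day-7 bilirubin 4 mg/dL, INR 1.7, peak AST 900 IU/L.
    print(early_allograft_dysfunction(4.0, 1.7, 900.0))  # True, via the INR criterion
    ```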

  20. An algorithm to compute the square root of 3x3 positive definite matrix

    International Nuclear Information System (INIS)

    Franca, L.P.

    1988-06-01

    An efficient closed form to compute the square root of a 3 x 3 positive definite matrix is presented. The derivation employs the Cayley-Hamilton theorem, avoiding calculation of eigenvectors. We show that evaluation of one eigenvalue of the square root matrix is needed and cannot be circumvented. The algorithm is robust and efficient. (author) [pt
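    The record does not reproduce the closed form itself, so the sketch below is not the authors' algorithm; it is an eigendecomposition-based reference (the very approach the paper avoids) that can be used to check a closed-form implementation on 3 x 3 symmetric positive definite matrices.

    ```python
    import numpy as np

    def spd_sqrt(a):
        """Square root of a symmetric positive definite matrix via eigendecomposition.
        Reference computation only; the cited paper derives a closed form that avoids
        computing eigenvectors."""
        w, v = np.linalg.eigh(a)
        if np.any(w <= 0):
            raise ValueError("matrix is not positive definite")
        return v @ np.diag(np.sqrt(w)) @ v.T

    a = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    b = spd_sqrt(a)
    print(np.allclose(b @ b, a))  # True
    ```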

  1. A prediction algorithm for first onset of major depression in the general population: development and validation.

    Science.gov (United States)

    Wang, JianLi; Sareen, Jitender; Patten, Scott; Bolton, James; Schmitz, Norbert; Birney, Arden

    2014-05-01

    Prediction algorithms are useful for making clinical decisions and for population health planning. However, such prediction algorithms for first onset of major depression do not exist. The objective of this study was to develop and validate a prediction algorithm for first onset of major depression in the general population. Longitudinal study design with approximate 3-year follow-up. The study was based on data from a nationally representative sample of the US general population. A total of 28 059 individuals who participated in Waves 1 and 2 of the US National Epidemiologic Survey on Alcohol and Related Conditions and who had not had major depression at Wave 1 were included. The prediction algorithm was developed using logistic regression modelling in 21 813 participants from three census regions. The algorithm was validated in participants from the 4th census region (n=6246). Major depression occurred since Wave 1 of the National Epidemiologic Survey on Alcohol and Related Conditions, assessed by the Alcohol Use Disorder and Associated Disabilities Interview Schedule-diagnostic and statistical manual for mental disorders IV. A prediction algorithm containing 17 unique risk factors was developed. The algorithm had good discriminative power (C statistics=0.7538, 95% CI 0.7378 to 0.7699) and excellent calibration (F-adjusted test=1.00, p=0.448) with the weighted data. In the validation sample, the algorithm had a C statistic of 0.7259 and excellent calibration (Hosmer-Lemeshow χ(2)=3.41, p=0.906). The developed prediction algorithm has good discrimination and calibration capacity. It can be used by clinicians, mental health policy-makers and service planners and the general public to predict future risk of having major depression. The application of the algorithm may lead to increased personalisation of treatment, better clinical decisions and more optimal mental health service planning.
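    The discrimination of such a prediction algorithm is summarised by the C statistic, i.e. the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case. A minimal sketch of that computation with invented risks and outcomes:

    ```python
    from itertools import product

    def c_statistic(risks, outcomes):
        """C statistic (area under the ROC curve): the probability that a randomly
        chosen case has a higher predicted risk than a randomly chosen non-case,
        counting ties as one half."""
        cases = [r for r, y in zip(risks, outcomes) if y == 1]
        noncases = [r for r, y in zip(risks, outcomes) if y == 0]
        pairs = [(c > n) + 0.5 * (c == n) for c, n in product(cases, noncases)]
        return sum(pairs) / len(pairs)

    # Invented predicted risks of first-onset major depression and observed outcomes.
    risks    = [0.02, 0.10, 0.35, 0.08, 0.12, 0.15, 0.03, 0.40]
    outcomes = [0,    0,    1,    0,    1,    0,    0,    1]
    print(c_statistic(risks, outcomes))
    ```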

  2. GenClust: A genetic algorithm for clustering gene expression data

    Directory of Open Access Journals (Sweden)

    Raimondi Alessandra

    2005-12-01

    Full Text Available Abstract Background Clustering is a key step in the analysis of gene expression data, and in fact, many classical clustering algorithms are used, or more innovative ones have been designed and validated for the task. Despite the widespread use of artificial intelligence techniques in bioinformatics and, more generally, data analysis, there are very few clustering algorithms based on the genetic paradigm, yet that paradigm has great potential in finding good heuristic solutions to a difficult optimization problem such as clustering. Results GenClust is a new genetic algorithm for clustering gene expression data. It has two key features: (a) a novel coding of the search space that is simple, compact and easy to update; (b) it can be used naturally in conjunction with data driven internal validation methods. We have experimented with the FOM methodology, specifically conceived for validating clusters of gene expression data. The validity of GenClust has been assessed experimentally on real data sets, both with the use of validation measures and in comparison with other algorithms, i.e., Average Link, Cast, Click and K-means. Conclusion Experiments show that none of the algorithms we have used is markedly superior to the others across data sets and validation measures; i.e., in many cases the observed differences between the worst and best performing algorithm may be statistically insignificant and they could be considered equivalent. However, there are cases in which an algorithm may be better than others and therefore worthwhile. In particular, experiments for GenClust show that, although simple in its data representation, it converges very rapidly to a local optimum and that its ability to identify meaningful clusters is comparable, and sometimes superior, to that of more sophisticated algorithms. In addition, it is well suited for use in conjunction with data driven internal validation measures and, in particular, the FOM methodology.
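    The record does not spell out GenClust's coding of the search space, so the sketch below is a generic genetic algorithm for clustering rather than GenClust itself: chromosomes are vectors of cluster labels, fitness is the within-cluster sum of squares, and the usual selection, crossover and mutation steps are applied.

    ```python
    import random
    import numpy as np

    def wcss(data, labels, k):
        """Within-cluster sum of squares; lower is better."""
        total = 0.0
        for c in range(k):
            members = data[labels == c]
            if len(members):
                total += ((members - members.mean(axis=0)) ** 2).sum()
        return total

    def ga_cluster(data, k, pop_size=30, generations=100, mutation_rate=0.05, seed=0):
        """Generic genetic algorithm for clustering: chromosomes are label vectors."""
        rng = random.Random(seed)
        n = len(data)
        pop = [np.array([rng.randrange(k) for _ in range(n)]) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda lab: wcss(data, lab, k))
            survivors = pop[: pop_size // 2]
            children = []
            while len(survivors) + len(children) < pop_size:
                p1, p2 = rng.sample(survivors, 2)
                cut = rng.randrange(1, n)                 # one-point crossover
                child = np.concatenate([p1[:cut], p2[cut:]])
                for i in range(n):                        # random-reassignment mutation
                    if rng.random() < mutation_rate:
                        child[i] = rng.randrange(k)
                children.append(child)
            pop = survivors + children
        return min(pop, key=lambda lab: wcss(data, lab, k))

    # Toy "expression" matrix with two obvious groups of samples.
    data = np.array([[0.1, 0.2], [0.0, 0.1], [0.2, 0.0],
                     [5.0, 5.1], [5.2, 4.9], [4.8, 5.0]])
    print(ga_cluster(data, k=2))
    ```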

  3. Validation of the GCOM-W SCA and JAXA soil moisture algorithms

    Science.gov (United States)

    Satellite-based remote sensing of soil moisture has matured over the past decade as a result of the Global Change Observation Mission - Water (GCOM-W) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...

  4. Development and validation of an online interactive, multimedia wound care algorithms program.

    Science.gov (United States)

    Beitz, Janice M; van Rijswijk, Lia

    2012-01-01

    To provide education based on evidence-based and validated wound care algorithms we designed and implemented an interactive, Web-based learning program for teaching wound care. A mixed methods quantitative pilot study design with qualitative components was used to test and ascertain the ease of use, validity, and reliability of the online program. A convenience sample of 56 RN wound experts (formally educated, certified in wound care, or both) participated. The interactive, online program consists of a user introduction, interactive assessment of 15 acute and chronic wound photos, user feedback about the percentage correct, partially correct, or incorrect algorithm and dressing choices and a user survey. After giving consent, participants accessed the online program, provided answers to the demographic survey, and completed the assessment module and photographic test, along with a posttest survey. The construct validity of the online interactive program was strong. Eighty-five percent (85%) of algorithm and 87% of dressing choices were fully correct even though some programming design issues were identified. Online study results were consistently better than previously conducted comparable paper-pencil study results. Using a 5-point Likert-type scale, participants rated the program's value and ease of use as 3.88 (valuable to very valuable) and 3.97 (easy to very easy), respectively. Similarly the research process was described qualitatively as "enjoyable" and "exciting." This digital program was well received indicating its "perceived benefits" for nonexpert users, which may help reduce barriers to implementing safe, evidence-based care. Ongoing research using larger sample sizes may help refine the program or algorithms while identifying clinician educational needs. Initial design imperfections and programming problems identified also underscored the importance of testing all paper and Web-based programs designed to educate health care professionals or guide

  5. Algorithms to identify colonic ischemia, complications of constipation and irritable bowel syndrome in medical claims data: development and validation.

    Science.gov (United States)

    Sands, Bruce E; Duh, Mei-Sheng; Cali, Clorinda; Ajene, Anuli; Bohn, Rhonda L; Miller, David; Cole, J Alexander; Cook, Suzanne F; Walker, Alexander M

    2006-01-01

    A challenge in the use of insurance claims databases for epidemiologic research is accurate identification and verification of medical conditions. This report describes the development and validation of claims-based algorithms to identify colonic ischemia, hospitalized complications of constipation, and irritable bowel syndrome (IBS). From the research claims databases of a large healthcare company, we selected at random 120 potential cases of IBS and 59 potential cases each of colonic ischemia and hospitalized complications of constipation. We sought the written medical records and were able to abstract 107, 57, and 51 records, respectively. We established a 'true' case status for each subject by applying standard clinical criteria to the available chart data. Comparing the insurance claims histories to the assigned case status, we iteratively developed, tested, and refined claims-based algorithms that would capture the diagnoses obtained from the medical records. We set goals of high specificity for colonic ischemia and hospitalized complications of constipation, and high sensitivity for IBS. The resulting algorithms substantially improved on the accuracy achievable from a naïve acceptance of the diagnostic codes attached to insurance claims. The specificities for colonic ischemia and serious complications of constipation were 87.2 and 92.7%, respectively, and the sensitivity for IBS was 98.9%. U.S. commercial insurance claims data appear to be usable for the study of colonic ischemia, IBS, and serious complications of constipation. (c) 2005 John Wiley & Sons, Ltd.

  6. Evaluation of algorithms to identify incident cancer cases by using French health administrative databases.

    Science.gov (United States)

    Ajrouche, Aya; Estellat, Candice; De Rycke, Yann; Tubach, Florence

    2017-08-01

    Administrative databases are increasingly being used in cancer observational studies. Identifying incident cancer in these databases is crucial. This study aimed to develop algorithms to estimate cancer incidence by using health administrative databases and to examine the accuracy of the algorithms in terms of national cancer incidence rates estimated from registries. We identified a cohort of 463 033 participants on 1 January 2012 in the Echantillon Généraliste des Bénéficiaires (EGB; a representative sample of the French healthcare insurance system). The EGB contains data on long-term chronic disease (LTD) status, reimbursed outpatient treatments and procedures, and hospitalizations (including discharge diagnoses, and costly medical procedures and drugs). After excluding cases of prevalent cancer, we applied 15 algorithms to estimate the cancer incidence rates separately for men and women in 2012 and compared them to the national cancer incidence rates estimated from French registries by indirect age and sex standardization. The most accurate algorithm for men combined information from LTD status, outpatient anticancer drugs, radiotherapy sessions and primary or related discharge diagnosis of cancer, although it underestimated the cancer incidence (standardized incidence ratio (SIR) 0.85 [0.80-0.90]). For women, the best algorithm used the same definition of the algorithm for men but restricted hospital discharge to only primary or related diagnosis with an additional inpatient procedure or drug reimbursement related to cancer and gave comparable estimates to those from registries (SIR 1.00 [0.94-1.06]). The algorithms proposed could be used for cancer incidence monitoring and for future etiological cancer studies involving French healthcare databases. Copyright © 2017 John Wiley & Sons, Ltd.
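    The comparison with registry data uses indirect age- and sex-standardization, summarised as a standardized incidence ratio (SIR): observed cases in the claims cohort divided by the cases expected if registry rates applied to the cohort's person-time. A minimal sketch with invented strata, rates and counts:

    ```python
    def standardized_incidence_ratio(observed_cases, person_years_by_stratum,
                                     reference_rates_by_stratum):
        """SIR = observed / expected cases, where expected cases come from applying
        reference (registry) rates to the study population's person-years."""
        expected = sum(person_years_by_stratum[s] * reference_rates_by_stratum[s]
                       for s in person_years_by_stratum)
        return observed_cases / expected

    # Invented age-sex strata: person-years in the claims cohort and registry rates.
    person_years = {"M40-59": 50_000, "M60+": 30_000, "F40-59": 55_000, "F60+": 35_000}
    registry_rates = {"M40-59": 0.004, "M60+": 0.012, "F40-59": 0.003, "F60+": 0.008}
    observed = 980
    print(round(standardized_incidence_ratio(observed, person_years, registry_rates), 2))
    ```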

  7. Validation of a numerical algorithm based on transformed equations

    International Nuclear Information System (INIS)

    Xu, H.; Barron, R.M.; Zhang, C.

    2003-01-01

    Generally, a typical equation governing a physical process, such as fluid flow or heat transfer, has three types of terms that involve partial derivatives, namely, the transient term, the convective terms and the diffusion terms. The major difficulty in obtaining numerical solutions of these partial differential equations is the discretization of the convective terms. The transient term is usually discretized using the first-order forward or backward differencing scheme. The diffusion terms are usually discretized using the central differencing scheme and no difficulty arises since these terms involve second-order spatial derivatives of the flow variables. The convective terms are non-linear and contain first-order spatial derivatives. The main difference between various numerical algorithms is the discretization of the convective terms. In the present study, an alternative approach to discretizing the governing equations is presented. In this algorithm, the governing equations are first transformed by introducing an exponential function to eliminate the convective terms in the equations. The proposed algorithm is applied to simulate some fluid flows with exact solutions to validate the proposed algorithm. The fluid flows used in this study are a self-designed quasi-fluid flow problem, stagnation in plane flow (Hiemenz flow), and flow between two concentric cylinders. The comparisons with the power-law scheme indicate that the proposed scheme exhibits better performance. (author)
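    The record does not reproduce the transformation itself. As an illustration of the general idea, the following standard manipulation (not necessarily the exact form used by the authors) shows how an exponential substitution removes the convective term from a one-dimensional steady convection-diffusion equation with constant coefficients:

    $$\rho u\,\frac{d\phi}{dx}=\Gamma\,\frac{d^{2}\phi}{dx^{2}},\qquad \phi(x)=\psi(x)\,e^{\rho u x/(2\Gamma)}\quad\Longrightarrow\quad \frac{d^{2}\psi}{dx^{2}}-\Bigl(\frac{\rho u}{2\Gamma}\Bigr)^{2}\psi=0,$$

    so the transformed unknown $\psi$ satisfies an equation with no first-order (convective) derivative, which can be discretized with central differences alone.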

  8. Validities of three multislice algorithms for quantitative low-energy transmission electron microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Ming, W.Q.; Chen, J.H., E-mail: jhchen123@hnu.edu.cn

    2013-11-15

    Three different types of multislice algorithms, namely the conventional multislice (CMS) algorithm, the propagator-corrected multislice (PCMS) algorithm and the fully-corrected multislice (FCMS) algorithm, have been evaluated in comparison with respect to the accelerating voltages in transmission electron microscopy. Detailed numerical calculations have been performed to test their validities. The results show that the three algorithms are equivalent for accelerating voltage above 100 kV. However, below 100 kV, the CMS algorithm will introduce significant errors, not only for higher-order Laue zone (HOLZ) reflections but also for zero-order Laue zone (ZOLZ) reflections. The differences between the PCMS and FCMS algorithms are negligible and mainly appear in HOLZ reflections. Nonetheless, when the accelerating voltage is further lowered to 20 kV or below, the PCMS algorithm will also yield results deviating from the FCMS results. The present study demonstrates that the propagation of the electron wave from one slice to the next slice is actually cross-correlated with the crystal potential in a complex manner, such that when the accelerating voltage is lowered to 10 kV, the accuracy of the algorithms is dependent of the scattering power of the specimen. - Highlights: • Three multislice algorithms for low-energy transmission electron microscopy are evaluated. • The propagator-corrected algorithm is a good alternative for voltages down to 20 kV. • Below 20 kV, a fully-corrected algorithm has to be employed for quantitative simulations.

  9. Validities of three multislice algorithms for quantitative low-energy transmission electron microscopy

    International Nuclear Information System (INIS)

    Ming, W.Q.; Chen, J.H.

    2013-01-01

    Three different types of multislice algorithms, namely the conventional multislice (CMS) algorithm, the propagator-corrected multislice (PCMS) algorithm and the fully-corrected multislice (FCMS) algorithm, have been evaluated in comparison with respect to the accelerating voltages in transmission electron microscopy. Detailed numerical calculations have been performed to test their validities. The results show that the three algorithms are equivalent for accelerating voltage above 100 kV. However, below 100 kV, the CMS algorithm will introduce significant errors, not only for higher-order Laue zone (HOLZ) reflections but also for zero-order Laue zone (ZOLZ) reflections. The differences between the PCMS and FCMS algorithms are negligible and mainly appear in HOLZ reflections. Nonetheless, when the accelerating voltage is further lowered to 20 kV or below, the PCMS algorithm will also yield results deviating from the FCMS results. The present study demonstrates that the propagation of the electron wave from one slice to the next slice is actually cross-correlated with the crystal potential in a complex manner, such that when the accelerating voltage is lowered to 10 kV, the accuracy of the algorithms is dependent of the scattering power of the specimen. - Highlights: • Three multislice algorithms for low-energy transmission electron microscopy are evaluated. • The propagator-corrected algorithm is a good alternative for voltages down to 20 kV. • Below 20 kV, a fully-corrected algorithm has to be employed for quantitative simulations
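    For context on what all three variants iterate, the conventional multislice recursion can be written in its standard textbook form (not quoted from this paper); the PCMS and FCMS algorithms modify the free-space propagation step:

    $$\psi_{n+1}(\mathbf{r})=p_{\Delta z}(\mathbf{r})\otimes\bigl[t_{n}(\mathbf{r})\,\psi_{n}(\mathbf{r})\bigr],\qquad t_{n}(\mathbf{r})=\exp\Bigl[i\sigma\int_{z_{n}}^{z_{n}+\Delta z}V(\mathbf{r},z)\,dz\Bigr],$$

    where $p_{\Delta z}$ is the Fresnel free-space propagator over the slice thickness $\Delta z$, $\otimes$ denotes two-dimensional convolution, $\sigma$ is the interaction constant and $V$ is the specimen potential.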

  10. Revision of clinical case definitions: influenza-like illness and severe acute respiratory infection

    Science.gov (United States)

    Qasmieh, Saba; Mounts, Anthony Wayne; Alexander, Burmaa; Besselaar, Terry; Briand, Sylvie; Brown, Caroline; Clark, Seth; Dueger, Erica; Gross, Diane; Hauge, Siri; Hirve, Siddhivinayak; Jorgensen, Pernille; Katz, Mark A; Mafi, Ali; Malik, Mamunur; McCarron, Margaret; Meerhoff, Tamara; Mori, Yuichiro; Mott, Joshua; Olivera, Maria Teresa da Costa; Ortiz, Justin R; Palekar, Rakhee; Rebelo-de-Andrade, Helena; Soetens, Loes; Yahaya, Ali Ahmed; Zhang, Wenqing; Vandemaele, Katelijn

    2018-01-01

    Abstract The formulation of accurate clinical case definitions is an integral part of an effective process of public health surveillance. Although such definitions should, ideally, be based on a standardized and fixed collection of defining criteria, they often require revision to reflect new knowledge of the condition involved and improvements in diagnostic testing. Optimal case definitions also need to have a balance of sensitivity and specificity that reflects their intended use. After the 2009–2010 H1N1 influenza pandemic, the World Health Organization (WHO) initiated a technical consultation on global influenza surveillance. This prompted improvements in the sensitivity and specificity of the case definition for influenza – i.e. a respiratory disease that lacks uniquely defining symptomology. The revision process not only modified the definition of influenza-like illness, to include a simplified list of the criteria shown to be most predictive of influenza infection, but also clarified the language used for the definition, to enhance interpretability. To capture severe cases of influenza that required hospitalization, a new case definition was also developed for severe acute respiratory infection in all age groups. The new definitions have been found to capture more cases without compromising specificity. Despite the challenge still posed in the clinical separation of influenza from other respiratory infections, the global use of the new WHO case definitions should help determine global trends in the characteristics and transmission of influenza viruses and the associated disease burden. PMID:29403115

  11. Tuberculous meningitis: a uniform case definition for use in clinical research.

    Science.gov (United States)

    Marais, Suzaan; Thwaites, Guy; Schoeman, Johan F; Török, M Estée; Misra, Usha K; Prasad, Kameshwar; Donald, Peter R; Wilkinson, Robert J; Marais, Ben J

    2010-11-01

    Tuberculous meningitis causes substantial mortality and morbidity in children and adults. More research is urgently needed to better understand the pathogenesis of disease and to improve its clinical management and outcome. A major stumbling block is the absence of standardised diagnostic criteria. The different case definitions used in various studies makes comparison of research findings difficult, prevents the best use of existing data, and limits the management of disease. To address this problem, a 3-day tuberculous meningitis workshop took place in Cape Town, South Africa, and was attended by 41 international participants experienced in the research or management of tuberculous meningitis. During the meeting, diagnostic criteria were assessed and discussed, after which a writing committee was appointed to finalise a consensus case definition for tuberculous meningitis for use in future clinical research. We present the consensus case definition together with the rationale behind the recommendations. This case definition is applicable irrespective of the patient's age, HIV infection status, or the resources available in the research setting. Consistent use of the proposed case definition will aid comparison of studies, improve scientific communication, and ultimately improve care. Copyright © 2010 Elsevier Ltd. All rights reserved.

  12. Reliability of case definitions for public health surveillance assessed by Round-Robin test methodology

    Directory of Open Access Journals (Sweden)

    Claus Hermann

    2006-05-01

    Full Text Available Abstract Background Case definitions have been recognized to be important elements of public health surveillance systems. They are to assure comparability and consistency of surveillance data and have crucial impact on the sensitivity and the positive predictive value of a surveillance system. The reliability of case definitions has rarely been investigated systematically. Methods We conducted a Round-Robin test by asking all 425 local health departments (LHD) and the 16 state health departments (SHD) in Germany to classify a selection of 68 case examples using case definitions. By multivariate analysis we investigated factors linked to classification agreement with a gold standard, which was defined by an expert panel. Results A total of 7870 classifications were done by 396 LHD (93%) and all SHD. Reporting sensitivity was 90.0%, positive predictive value 76.6%. Polio case examples had the lowest reporting precision, salmonellosis case examples the highest (OR = 0.008; CI: 0.005–0.013). Case definitions with a check-list format of clinical criteria resulted in higher reporting precision than case definitions with a narrative description (OR = 3.08; CI: 2.47–3.83). Reporting precision was higher among SHD compared to LHD (OR = 1.52; CI: 1.14–2.02). Conclusion Our findings led to a systematic revision of the German case definitions and build the basis for general recommendations for the creation of case definitions. These include, among others, that testable yes/no criteria in a check-list format is likely to improve reliability, and that software used for data transmission should be designed in strict accordance with the case definitions. The findings of this study are largely applicable to case definitions in many other countries or international networks as they share the same structural and editorial characteristics of the case definitions evaluated in this study before their revision.

  13. Enhancement of RWSN Lifetime via Firework Clustering Algorithm Validated by ANN

    Directory of Open Access Journals (Sweden)

    Ahmad Ali

    2018-03-01

    Full Text Available Wireless power transfer is now widely used in wireless rechargeable sensor networks (WSNs). Energy limitation remains a grave concern for these networks, and lifetime enhancement is a challenging task that needs to be resolved. To address this issue, wireless charging vehicles are an emerging technology for improving overall network efficiency. The present study focuses on enhancing the overall network lifetime of the rechargeable wireless sensor network. To resolve the issues mentioned above, we propose a swarm-intelligence-based hard clustering approach using the fireworks algorithm with an adaptive transfer function (FWA-ATF). In this work, a virtual clustering method that utilizes the fireworks optimization algorithm is applied in the routing process. To date, the FWA-ATF algorithm has not been applied to RWSNs. Furthermore, a validation study of the proposed method using an artificial neural network (ANN) backpropagation algorithm is incorporated in the present study. Several algorithms are applied to evaluate the performance of the proposed technique, which gives the best results in this comparison. Numerical results indicate that our method outperforms existing methods and yields improvements of up to 80% in energy consumption and in the vacation time of the wireless charging vehicle.

  14. Definitions Are Important: The Case of Linear Algebra

    Science.gov (United States)

    Berman, Abraham; Shvartsman, Ludmila

    2016-01-01

    In this paper we describe an experiment in a linear algebra course. The aim of the experiment was to promote the students' understanding of the studied concepts focusing on their definitions. It seems to be a given that students should understand concepts' definitions before working substantially with them. Unfortunately, in many cases they do…

  15. Pretest probability of a normal echocardiography: validation of a simple and practical algorithm for routine use.

    Science.gov (United States)

    Hammoudi, Nadjib; Duprey, Matthieu; Régnier, Philippe; Achkar, Marc; Boubrit, Lila; Preud'homme, Gisèle; Healy-Brucker, Aude; Vignalou, Jean-Baptiste; Pousset, Françoise; Komajda, Michel; Isnard, Richard

    2014-02-01

    Management of increased referrals for transthoracic echocardiography (TTE) examinations is a challenge. Patients with normal TTE examinations take less time to explore than those with heart abnormalities. A reliable method for assessing pretest probability of a normal TTE may optimize management of requests. To establish and validate, based on requests for examinations, a simple algorithm for defining pretest probability of a normal TTE. In a retrospective phase, factors associated with normality were investigated and an algorithm was designed. In a prospective phase, patients were classified in accordance with the algorithm as being at high or low probability of having a normal TTE. In the retrospective phase, 42% of 618 examinations were normal. In multivariable analysis, age and absence of cardiac history were associated with normality. Low pretest probability of normal TTE was defined by known cardiac history or, in case of doubt about cardiac history, by age > 70 years. In the prospective phase, the prevalences of normality were 72% and 25% in high (n=167) and low (n=241) pretest probability of normality groups, respectively. The mean duration of normal examinations was significantly shorter than that of abnormal examinations (13.8 ± 9.2 min vs 17.6 ± 11.1 min; P=0.0003). A simple algorithm can classify patients referred for TTE as being at high or low pretest probability of having a normal examination. This algorithm might help to optimize management of requests in routine practice. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
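    The decision rule in this abstract is simple enough to state as code; a minimal sketch, with "doubt about cardiac history" handled as an explicit flag (an assumption about how missing history would be encoded).

    ```python
    def low_probability_of_normal_tte(has_cardiac_history, history_uncertain, age_years):
        """Low pretest probability of a normal echocardiogram, per the abstract above:
        known cardiac history, or (if the history is in doubt) age > 70 years."""
        if has_cardiac_history:
            return True
        return history_uncertain and age_years > 70

    # Example: a 75-year-old referred without a usable cardiac history.
    print(low_probability_of_normal_tte(False, True, 75))  # True -> low probability of a normal TTE
    ```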

  16. Methods for Geometric Data Validation of 3d City Models

    Science.gov (United States)

    Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2015-12-01

    Geometric quality of 3D city models is crucial for data analysis and simulation tasks, which are part of modern applications of the data (e.g. potential heating energy consumption of city quarters, solar potential, etc.). Geometric quality in these contexts is however a different concept than it is for 2D maps. In the latter case, aspects such as positional or temporal accuracy and correctness represent typical quality metrics of the data. They are defined in ISO 19157 and should be mentioned as part of the metadata. 3D data have a far wider range of aspects which influence their quality, plus the idea of quality itself is application dependent. Thus, concepts for definition of quality are needed, including methods to validate these definitions. Quality in this sense means internal validation and detection of inconsistent or wrong geometry according to a predefined set of rules. A useful starting point would be to have correct geometry in accordance with ISO 19107. A valid solid should consist of planar faces which touch their neighbours exclusively in defined corner points and edges. No gaps between them are allowed, and the whole feature must be 2-manifold. In this paper, we present methods to validate common geometric requirements for building geometry. Different checks based on several algorithms have been implemented to validate a set of rules derived from the solid definition mentioned above (e.g. water tightness of the solid or planarity of its polygons), as they were developed for the software tool CityDoctor. The method of each check is specified, with a special focus on the discussion of tolerance values where they are necessary. The checks include polygon level checks to validate the correctness of each polygon, i.e. closure of the bounding linear ring and planarity. On the solid level, which is only validated if the polygons have passed validation, correct polygon orientation is checked, after self-intersections outside of defined corner points and edges
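    As an illustration of one of the polygon-level checks mentioned (planarity), the sketch below fits a best plane through a polygon's vertices and tests the largest deviation against a tolerance; the tolerance value is an arbitrary placeholder, not the one used in CityDoctor.

    ```python
    import numpy as np

    def is_planar(vertices, tolerance=0.01):
        """Planarity check for a 3D polygon: fit a best plane through the vertices
        (via SVD of the centred coordinates) and test the largest normal deviation."""
        pts = np.asarray(vertices, dtype=float)
        centred = pts - pts.mean(axis=0)
        # The right singular vector with the smallest singular value is the plane normal.
        _, _, vt = np.linalg.svd(centred)
        normal = vt[-1]
        return float(np.abs(centred @ normal).max()) <= tolerance

    # A warped quadrilateral (one vertex lifted by 10 cm, coordinates in metres).
    quad = [[0, 0, 0], [4, 0, 0], [4, 3, 0.1], [0, 3, 0]]
    print(is_planar(quad, tolerance=0.01))  # False with a 1 cm tolerance
    ```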

  17. Experimental Validation of Advanced Dispersed Fringe Sensing (ADFS) Algorithm Using Advanced Wavefront Sensing and Correction Testbed (AWCT)

    Science.gov (United States)

    Wang, Xu; Shi, Fang; Sigrist, Norbert; Seo, Byoung-Joon; Tang, Hong; Bikkannavar, Siddarayappa; Basinger, Scott; Lay, Oliver

    2012-01-01

    Large aperture telescopes commonly feature segmented mirrors, and a coarse phasing step is needed to bring these individual segments into the fine phasing capture range. Dispersed Fringe Sensing (DFS) is a powerful coarse phasing technique and a variant of it is currently being used for JWST. An Advanced Dispersed Fringe Sensing (ADFS) algorithm was recently developed to improve the performance and robustness of previous DFS algorithms with better accuracy and a unique solution. The first part of the paper introduces the basic ideas and the essential features of the ADFS algorithm and presents some algorithm sensitivity study results. The second part of the paper describes the full details of the algorithm validation process through the advanced wavefront sensing and correction testbed (AWCT): first, the optimization of the DFS hardware of AWCT to ensure data accuracy and reliability is illustrated. Then, a few carefully designed algorithm validation experiments are implemented, and the corresponding data analysis results are shown. Finally the fiducial calibration using the Range-Gate-Metrology technique is carried out and a <10 nm or <1% algorithm accuracy is demonstrated.

  18. Sensitivity and Specificity of Suspected Case Definition Used during West Africa Ebola Epidemic.

    Science.gov (United States)

    Hsu, Christopher H; Champaloux, Steven W; Keïta, Sakoba; Martel, Lise; Bilivogui, Pepe; Knust, Barbara; McCollum, Andrea M

    2018-01-01

    Rapid early detection and control of Ebola virus disease (EVD) is contingent on accurate case definitions. Using an epidemic surveillance dataset from Guinea, we analyzed an EVD case definition developed by the World Health Organization (WHO) and used in Guinea. We used the surveillance dataset (March-October 2014; n = 2,847 persons) to identify patients who satisfied or did not satisfy case definition criteria. Laboratory confirmation distinguished cases from noncases, and we calculated sensitivity, specificity and predictive values. The sensitivity of the definition was 68.9%, and the specificity of the definition was 49.6%. The presence of epidemiologic risk factors (i.e., recent contact with a known or suspected EVD case-patient) had the highest sensitivity (74.7%), and unexplained deaths had the highest specificity (92.8%). Results for case definition analyses were statistically significant. Addition of epidemiologic risk factors to the suspected case definition used in Guinea contributed to improved overall sensitivity and specificity.

  19. High-definition computed tomography for coronary artery stents imaging: Initial evaluation of the optimal reconstruction algorithm.

    Science.gov (United States)

    Cui, Xiaoming; Li, Tao; Li, Xin; Zhou, Weihua

    2015-05-01

    The aim of this study was to evaluate the in vivo performance of four image reconstruction algorithms in a high-definition CT (HDCT) scanner with improved spatial resolution for the evaluation of coronary artery stents and intrastent lumina. Thirty-nine consecutive patients with a total of 71 implanted coronary stents underwent coronary CT angiography (CCTA) on a HDCT (Discovery CT 750 HD; GE Healthcare) with the high-resolution scanning mode. Four different reconstruction algorithms (HD-stand, HD-detail; HD-stand-plus; HD-detail-plus) were applied to reconstruct the stented coronary arteries. Image quality for stent characterization was assessed. Image noise and intrastent luminal diameter were measured. The relationship between the measurement of inner stent diameter (ISD) and the true stent diameter (TSD) and stent type were analysed. The stent-dedicated kernel (HD-detail) offered the highest percentage (53.5%) of good image quality for stent characterization and the highest ratio (68.0±8.4%) of visible stent lumen/true stent lumen for luminal diameter measurement at the expense of an increased overall image noise. The Pearson correlation coefficient between the ISD and TSD measurement and spearman correlation coefficient between the ISD measurement and stent type were 0.83 and 0.48, respectively. Compared with standard reconstruction algorithms, high-definition CT imaging technique with dedicated high-resolution reconstruction algorithm provides more accurate stent characterization and intrastent luminal diameter measurement. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  20. SU-F-T-431: Dosimetric Validation of Acuros XB Algorithm for Photon Dose Calculation in Water

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, L [Rajiv Gandhi Cancer Institute & Research Center, New Delhi, Delhi (India); Yadav, G; Kishore, V [Bundelkhand Institute of Engineering & Technology, Jhansi, Uttar pradesh (India); Bhushan, M; Samuvel, K; Suhail, M [Rajiv Gandhi Cancer Institute and Research Centre, New Delhi, Delhi (India)

    2016-06-15

    Purpose: To validate the Acuros XB algorithm implemented in the Eclipse treatment planning system version 11 (Varian Medical Systems, Inc., Palo Alto, CA, USA) for photon dose calculation. Methods: Acuros XB is a linear Boltzmann transport equation (LBTE) solver that solves the LBTE explicitly and gives results equivalent to Monte Carlo. A 6 MV photon beam from a Varian Clinac-iX (2300CD) was used for dosimetric validation of Acuros XB. Percentage depth dose (PDD) and profile (at dmax, 5, 10, 20 and 30 cm) measurements were performed in water for field sizes of 2×2, 4×4, 6×6, 10×10, 20×20, 30×30 and 40×40 cm². Acuros XB results were compared against measurements and the anisotropic analytical algorithm (AAA). Results: Acuros XB results show good agreement with measurements and were comparable to the AAA algorithm. Results for PDD and profiles show less than one percent difference from measurements, and from the PDD and profiles calculated by the AAA algorithm, for all field sizes. For the TPS-calculated gamma error histograms, average gamma errors in PDD curves before and after dmax were 0.28 and 0.15 for Acuros XB and 0.24 and 0.17 for AAA, respectively; average gamma errors in profile curves in the central region, penumbra region and outside-field region were 0.17, 0.21 and 0.42 for Acuros XB and 0.10, 0.22 and 0.35 for AAA, respectively. Conclusion: The dosimetric validation of the Acuros XB algorithm in a water medium was satisfactory. The Acuros XB algorithm has the potential to perform photon dose calculation with high accuracy, which is desirable in the modern radiotherapy environment.

  1. Administrative Algorithms to identify Avascular necrosis of bone among patients undergoing upper or lower extremity magnetic resonance imaging: a validation study.

    Science.gov (United States)

    Barbhaiya, Medha; Dong, Yan; Sparks, Jeffrey A; Losina, Elena; Costenbader, Karen H; Katz, Jeffrey N

    2017-06-19

    Studies of the epidemiology and outcomes of avascular necrosis (AVN) require accurate case-finding methods. The aim of this study was to evaluate performance characteristics of a claims-based algorithm designed to identify AVN cases in administrative data. Using a centralized patient registry from a US academic medical center, we identified all adults aged ≥18 years who underwent magnetic resonance imaging (MRI) of an upper/lower extremity joint during the 1.5 year study period. A radiologist report confirming AVN on MRI served as the gold standard. We examined the sensitivity, specificity, positive predictive value (PPV) and positive likelihood ratio (LR+) of four algorithms (A-D) using International Classification of Diseases, 9th edition (ICD-9) codes for AVN. The algorithms ranged from least stringent (Algorithm A, requiring ≥1 ICD-9 code for AVN [733.4X]) to most stringent (Algorithm D, requiring ≥3 ICD-9 codes, each at least 30 days apart). Among the 8200 patients who underwent MRI, 83 (1.0% [95% CI 0.78-1.22]) had AVN by the gold standard. Algorithm A yielded the highest sensitivity (81.9%, 95% CI 72.0-89.5), with a PPV of 66.0% (95% CI 56.0-75.1). The PPV of Algorithm D increased to 82.2% (95% CI 67.9-92.0), although sensitivity decreased to 44.6% (95% CI 33.7-55.9). All four algorithms had specificities >99%. An algorithm that uses a single billing code to screen for AVN among those who had MRI has the highest sensitivity and is best suited for studies in which further medical record review confirming AVN is feasible. Algorithms using multiple billing codes are recommended for use in administrative databases when further AVN validation is not feasible.
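    For readers who want to reproduce such accuracy measures, the sketch below derives sensitivity, specificity, PPV, NPV and LR+ from a 2x2 table of algorithm flag versus gold-standard MRI-confirmed AVN; the counts are hypothetical and are not the study's data.

```python
def validation_metrics(tp, fp, fn, tn):
    """Standard diagnostic accuracy measures from a 2x2 validation table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    lr_pos = sensitivity / (1.0 - specificity)   # positive likelihood ratio
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv, "lr_pos": lr_pos}

# Hypothetical counts for a lenient algorithm (>=1 ICD-9 code for AVN):
print(validation_metrics(tp=68, fp=35, fn=15, tn=8082))
```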

  2. Competing Definitions of Schizophrenia: What Can Be Learned From Polydiagnostic Studies?

    DEFF Research Database (Denmark)

    Jansson, Lennart Bertil; Parnas, Josef

    2007-01-01

    The contemporary diagnoses of schizophrenia (sz)—Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) and International Classification of Diseases, 10th Revision (ICD-10)—are widely considered as important scientific achievements. However, these algorithms were not a product of explicit conceptual analyses and empirical studies but defined through consensus with the purpose of improving reliability. The validity status of current definitions and of their predecessors remains unclear. The so-called "polydiagnostic approach" applies different definitions of a disorder...

  3. High-definition computed tomography for coronary artery stents imaging: Initial evaluation of the optimal reconstruction algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaoming, E-mail: mmayzy2008@126.com; Li, Tao, E-mail: litaofeivip@163.com; Li, Xin, E-mail: lx0803@sina.com.cn; Zhou, Weihua, E-mail: wangxue0606@gmail.com

    2015-05-15

    Highlights: • The high-resolution scan mode is appropriate for imaging coronary stents. • The HD-detail reconstruction algorithm is a stent-dedicated kernel. • Intrastent lumen visibility also depends on stent diameter and material. - Abstract: Objective: The aim of this study was to evaluate the in vivo performance of four image reconstruction algorithms in a high-definition CT (HDCT) scanner with improved spatial resolution for the evaluation of coronary artery stents and intrastent lumina. Materials and methods: Thirty-nine consecutive patients with a total of 71 implanted coronary stents underwent coronary CT angiography (CCTA) on a HDCT (Discovery CT 750 HD; GE Healthcare) with the high-resolution scanning mode. Four different reconstruction algorithms (HD-stand, HD-detail, HD-stand-plus, HD-detail-plus) were applied to reconstruct the stented coronary arteries. Image quality for stent characterization was assessed. Image noise and intrastent luminal diameter were measured. The relationships between the measured inner stent diameter (ISD), the true stent diameter (TSD) and the stent type were analysed. Results: The stent-dedicated kernel (HD-detail) offered the highest percentage (53.5%) of good image quality for stent characterization and the highest ratio (68.0 ± 8.4%) of visible stent lumen/true stent lumen for luminal diameter measurement, at the expense of increased overall image noise. The Pearson correlation coefficient between ISD and TSD and the Spearman correlation coefficient between ISD and stent type were 0.83 and 0.48, respectively. Conclusions: Compared with standard reconstruction algorithms, the high-definition CT imaging technique with a dedicated high-resolution reconstruction algorithm provides more accurate stent characterization and intrastent luminal diameter measurement.

  4. Claims-based definition of death in Japanese claims database: validity and implications.

    Science.gov (United States)

    Ooba, Nobuhiro; Setoguchi, Soko; Ando, Takashi; Sato, Tsugumichi; Yamaguchi, Takuhiro; Mochizuki, Mayumi; Kubota, Kiyoshi

    2013-01-01

    For the pending National Claims Database in Japan, researchers will not have access to death information in the enrollment files. We developed and evaluated a claims-based definition of death. We used healthcare claims and enrollment data between January 2005 and August 2009 for 195,193 beneficiaries aged 20 to 74 in 3 private health insurance unions. We developed claims-based definitions of death using discharge or disease status and Charlson comorbidity index (CCI). We calculated sensitivity, specificity and positive predictive values (PPVs) using the enrollment data as a gold standard in the overall population and subgroups divided by demographic and other factors. We also assessed bias and precision in two example studies where an outcome was death. The definition based on the combination of discharge/disease status and CCI provided moderate sensitivity (around 60%) and high specificity (99.99%) and high PPVs (94.8%). In most subgroups, sensitivity of the preferred definition was also around 60% but varied from 28 to 91%. In an example study comparing death rates between two anticancer drug classes, the claims-based definition provided valid and precise hazard ratios (HRs). In another example study comparing two classes of anti-depressants, the HR with the claims-based definition was biased and had lower precision than that with the gold standard definition. The claims-based definitions of death developed in this study had high specificity and PPVs while sensitivity was around 60%. The definitions will be useful in future studies when used with attention to the possible fluctuation of sensitivity in some subpopulations.

  5. Simple example of definitions of truth, validity, consistency, and completeness in quantum mechanics

    International Nuclear Information System (INIS)

    Benioff, P.

    1999-01-01

    Besides their use for efficient computation, quantum computers and quantum robots form a base for studying quantum systems that create valid physical theories using mathematics and physics. If quantum mechanics is universally applicable, then quantum mechanics must describe its own validation by these quantum systems. An essential part of this process is the development of a coherent theory of mathematics and quantum-mechanics together. It is expected that such a theory will include a coherent combination of mathematical logical concepts with quantum mechanics. That this might be possible is shown here by defining truth, validity, consistency, and completeness for a quantum-mechanical version of a simple (classical) expression enumeration machine described by Smullyan. Some of the expressions are chosen as sentences denoting the presence or absence of other expressions in the enumeration. Two of the sentences are self-referential. It is seen that, for an interpretation based on a Feynman path sum over expression paths, truth, consistency, and completeness for the quantum system have different properties than for the classical system. For instance, the truth of a sentence S is defined only on those paths containing S. It is undefined elsewhere. Also S and its negation can both be true provided they appear on separate paths. This satisfies the definition of consistency. The definitions of validity and completeness connect the dynamics of the system to the truth of the sentences. It is proved that validity implies consistency. It is seen that the requirements of validity and maximal completeness strongly restrict the allowable dynamics for the quantum system. Aspects of the existence of a valid, maximally complete dynamics are discussed. An exponentially efficient quantum computer is described that is also valid and complete for the set of sentences considered here. copyright 1999 The American Physical Society

  6. IMPLANT-ASSOCIATED PATHOLOGY: AN ALGORITHM FOR IDENTIFYING PARTICLES IN HISTOPATHOLOGIC SYNOVIALIS/SLIM DIAGNOSTICS

    Directory of Open Access Journals (Sweden)

    V. Krenn

    2014-01-01

    Full Text Available In histopathologic SLIM diagnostics (synovial-like interface membrane, SLIM), apart from diagnosing periprosthetic infection, particle identification has an important role to play. The differences in particle pathogenesis and the variability of materials in endoprosthetics explain the particle heterogeneity that hampers the diagnostic identification of particles. For this reason, a histopathological particle algorithm has been developed. With minimal methodological complexity, this histopathological particle algorithm offers a guide to prosthesis material particle identification. Light-microscopic morphological as well as enzyme-histochemical characteristics and polarization-optical properties have been set out, and particles are defined by size (microparticles, macroparticles and supra-macroparticles) and definitively characterized in accordance with a dichotomous principle. Based on these criteria, identification and validation of the particles were carried out in 120 joint endoprosthesis pathology cases. A histopathological particle score (HPS) is proposed that summarizes the most important information for the orthopedist, material scientist and histopathologist concerning particle identification in the SLIM.

  7. Content validation of a standardized algorithm for ostomy care.

    Science.gov (United States)

    Beitz, Janice; Gerlach, Mary; Ginsburg, Pat; Ho, Marianne; McCann, Eileen; Schafer, Vickie; Scott, Vera; Stallings, Bobbie; Turnbull, Gwen

    2010-10-01

    The number of ostomy care clinician experts is limited and the majority of ostomy care is provided by non-specialized clinicians or unskilled caregivers and family. The purpose of this study was to obtain content validation data for a new standardized algorithm for ostomy care developed by expert wound ostomy continence nurse (WOCN) clinicians. After face validity was established using overall review and suggestions from WOCN experts, 166 WOCNs self-identified as having expertise in ostomy care were surveyed online for 6 weeks in 2009. Using a cross-sectional, mixed methods study design and a 30-item instrument with a 4-point Likert-type scale, the participants were asked to quantify the degree of validity of the Ostomy Algorithm's decisions and components. Participants' open-ended comments also were thematically analyzed. Using a scale of 1 to 4, the mean score of the entire algorithm was 3.8 (4 = relevant/very relevant). The algorithm's content validity index (CVI) was 0.95 (out of 1.0). Individual component mean scores ranged from 3.59 to 3.91. Individual CVIs ranged from 0.90 to 0.98. Qualitative data analysis revealed themes of difficulty associated with algorithm formatting, especially orientation and use of the Studio Alterazioni Cutanee Stomali (Study on Peristomal Skin Lesions [SACS™ Instrument]) and the inability of algorithms to capture all individual patient attributes affecting ostomy care. Positive themes included content thoroughness and the helpful clinical photos. Suggestions were offered for algorithm improvement. Study results support the strong content validity of the algorithm and research to ascertain its construct validity and effect on care outcomes is warranted.
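    The abstract reports item- and scale-level content validity indices from 4-point relevance ratings. As an illustration of how such indices are commonly computed (a sketch under the usual CVI convention, not the study's own analysis; the ratings are fabricated), consider:

```python
import numpy as np

# Rows = expert raters, columns = algorithm components (4-point scale:
# 1 = not relevant ... 4 = very relevant). Fabricated example ratings.
ratings = np.array([
    [4, 4, 3, 4],
    [3, 4, 4, 4],
    [4, 3, 4, 2],
    [4, 4, 4, 4],
])

item_cvi = (ratings >= 3).mean(axis=0)   # proportion rating each item 3 or 4
scale_cvi = item_cvi.mean()              # average-based scale-level CVI
item_means = ratings.mean(axis=0)        # mean relevance score per item

print("item CVIs:", item_cvi)
print("scale CVI:", round(float(scale_cvi), 2))
print("item means:", item_means)
```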

  8. Can PSA Reflex Algorithm be a valid alternative to other PSA-based prostate cancer screening strategies?

    Science.gov (United States)

    Caldarelli, G; Troiano, G; Rosadini, D; Nante, N

    2017-01-01

    The available laboratory tests for the differential diagnosis of prostate cancer are the total PSA, the free PSA, and the free/total PSA ratio. In Italy, most doctors tend to request both total and free PSA for their patients even in cases where the total PSA does not justify a further request for free PSA, with a consequent growth in costs for the National Health System. The aim of our study was to predict the savings in euros (for reagents) and the reduction in free PSA tests achievable by applying the "PSA Reflex" algorithm. We calculated the number of total PSA and free PSA exams performed in 2014 in the Hospital of Grosseto and, simulating the application of the "PSA Reflex" algorithm in the same year, we calculated the decrease in the number of free PSA requests and estimated the savings in reagents obtained from this reduction. In 2014, 25,955 total PSA tests were performed in the Hospital of Grosseto: 3,631 (14%) were greater than 10 ng/mL; 7,686 (29.6%) were between 2 and 10 ng/mL; 14,638 (56.4%) were lower than 2 ng/mL. A total of 16,904 free PSA tests were performed. Simulating the use of the "PSA Reflex" algorithm, free PSA tests would be performed only in cases with total PSA values between 2 and 10 ng/mL, with a saving of 54.5% of free PSA exams and of 8,971 euros for reagents alone. Our study showed that the "PSA Reflex" algorithm is a valid alternative that reduces costs. The estimated intra-laboratory savings on reagents seem modest; however, they are accompanied by additional savings in the other diagnostic processes for prostate cancer.
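    A minimal sketch of the reflex rule described above - free PSA is added only when total PSA lies between 2 and 10 ng/mL - together with the corresponding reduction in free PSA tests; the per-test reagent cost below is a placeholder assumption, not a figure from the study.

```python
def needs_free_psa(total_psa_ng_ml: float) -> bool:
    """PSA Reflex rule: run free PSA only for total PSA in the 2-10 ng/mL window."""
    return 2.0 <= total_psa_ng_ml <= 10.0

# Reported 2014 workload: 16,904 free PSA tests actually performed,
# 7,686 total PSA results falling in the 2-10 ng/mL reflex window.
performed_free_psa = 16904
reflex_free_psa = 7686
avoided = performed_free_psa - reflex_free_psa
print(f"free PSA tests avoided: {avoided} "
      f"({100 * avoided / performed_free_psa:.1f}% reduction)")

cost_per_test_eur = 0.97   # placeholder reagent cost per test (assumption)
print(f"estimated reagent saving: {avoided * cost_per_test_eur:.0f} EUR")
```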

  9. Hybrid Genetic Algorithm Optimization for Case Based Reasoning Systems

    International Nuclear Information System (INIS)

    Mohamed, A.H.

    2008-01-01

    The success of a CBR system largely depends on effective retrieval of useful prior cases for the problem. Nearest neighbour and induction are the main CBR retrieval algorithms, and each can be more suitable in different situations. Integrating the two retrieval algorithms can capture the advantages of both. However, the induction retrieval component still has limitations when dealing with noisy data, a large number of irrelevant features, and different data types. This research applies a hybrid approach that uses genetic algorithms (GAs) for case-based induction retrieval within the integrated nearest neighbour-induction algorithm, in an attempt to overcome these limitations and increase the overall classification accuracy. GAs can be used to search the space of all possible subsets of the feature set, handling irrelevant and noisy features while still achieving a significant improvement in retrieval accuracy. Therefore, the proposed CBR-GA introduces an effective general-purpose retrieval algorithm that can improve the performance of CBR systems and can be applied in many application areas. CBR-GA has proven successful when applied to different real-life problems.
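    The following sketch illustrates the general idea of GA-driven feature-subset selection for nearest-neighbour case retrieval on a small synthetic case base; it demonstrates the technique rather than the paper's implementation, and the data, fitness choice and GA settings are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic case base: 2 informative features, 8 pure-noise features.
n_cases = 200
X_inf = rng.normal(size=(n_cases, 2))
y = (X_inf[:, 0] + X_inf[:, 1] > 0).astype(int)
X = np.hstack([X_inf, rng.normal(size=(n_cases, 8))])

def knn_accuracy(mask):
    """Leave-one-out 1-NN accuracy using only the features selected by mask."""
    if mask.sum() == 0:
        return 0.0
    Xm = X[:, mask.astype(bool)]
    d = np.linalg.norm(Xm[:, None, :] - Xm[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    return float(np.mean(y[d.argmin(axis=1)] == y))

def evolve(pop_size=30, n_gen=25, p_mut=0.05):
    pop = rng.integers(0, 2, size=(pop_size, X.shape[1]))
    for _ in range(n_gen):
        fit = np.array([knn_accuracy(ind) for ind in pop])
        parents = pop[np.argsort(fit)[-pop_size // 2:]]      # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, X.shape[1])                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(X.shape[1]) < p_mut             # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.array(children)
    fit = np.array([knn_accuracy(ind) for ind in pop])
    return pop[fit.argmax()], fit.max()

best_mask, best_acc = evolve()
print("selected features:", np.flatnonzero(best_mask), "LOO accuracy:", best_acc)
```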

  10. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASAs Space Launch System

    Science.gov (United States)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    ) early in the development lifecycle for the SLS program, NASA formed the M&FM team as part of the Integrated Systems Health Management and Automation Branch under the Spacecraft Vehicle Systems Department at the Marshall Space Flight Center (MSFC). To support the development of the FM algorithms, the VMET developed by the M&FM team provides the ability to integrate the algorithms, perform test cases, and integrate vendor-supplied physics-based launch vehicle (LV) subsystem models. Additionally, the team has developed processes for implementing and validating the M&FM algorithms for concept validation and risk reduction. The flexibility of the VMET capabilities enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS, GNC, and others. One of the principal functions of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software test and validation processes. In any software development process there is inherent risk in the interpretation and implementation of concepts from requirements and test cases into flight software compounded with potential human errors throughout the development and regression testing lifecycle. Risk reduction is addressed by the M&FM group but in particular by the Analysis Team working with other organizations such as S&MA, Structures and Environments, GNC, Orion, Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission (LOM) and Loss of Crew (LOC) probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses to be tested in VMET to ensure reliable failure

  11. Evaluation of the WHO clinical case definition for pediatric HIV infection in Bloemfontein, South Africa.

    Science.gov (United States)

    van Gend, Christine L; Haadsma, Maaike L; Sauer, Pieter J J; Schoeman, Cornelius J

    2003-06-01

    The WHO clinical case definition for pediatric HIV infection has been designed to be used in countries where diagnostic laboratory resources are limited. We evaluated the WHO case definition to determine whether it is a useful instrument to discriminate between HIV-positive and HIV-negative children. In addition, clinical features not included in this case definition were recorded. We recorded clinical data from 300 consecutively admitted children in a state hospital in Bloemfontein, South Africa, and tested these children for HIV infection. A total of 222 children were included in the study; 69 children (31.1 per cent) were HIV positive. The sensitivity of the WHO case definition in this study was 14.5 per cent, the specificity was 98.6 per cent. Apart from weight loss and generalized dermatitis, the signs of the WHO case definition were significantly more often seen in HIV-positive than in HIV-negative children. Of the clinical signs not included in the WHO case definition, marasmus and hepatosplenomegaly especially occurred more frequently in HIV-positive children. Based on these findings we composed a new case definition consisting of four signs: marasmus, hepatosplenomegaly, oropharyngeal candidiasis, and generalized lymphadenopathy. HIV infection is suspected in a child presenting with at least two of these four signs. The sensitivity of this case definition was 63.2 per cent, the specificity was 96.0 per cent. We conclude that in this study the WHO case definition was not a useful instrument to discriminate between HIV-positive and HIV-negative children, mainly because its sensitivity was strikingly low. The simplified case definition we propose, proved to be more sensitive than the WHO case definition (63.2 vs. 14.5 per cent), whilst its specificity remained high.

  12. Validation of treatment escalation as a definition of atopic eczema flares.

    Directory of Open Access Journals (Sweden)

    Kim S Thomas

    Full Text Available Atopic eczema (AE) is a chronic disease with flares and remissions. Long-term control of AE flares has been identified as a core outcome domain for AE trials. However, it is unclear how flares should be defined and measured. To validate two concepts of AE flares based on daily reports of topical medication use: (i) escalation of treatment and (ii) days of topical anti-inflammatory medication use (topical corticosteroids and/or calcineurin inhibitors). Data from two published AE studies (studies A (n=336) and B (n=60)) were analysed separately. Validity and feasibility of flare definitions were assessed using daily global bother (scale 0 to 10) as the reference standard. Intra-class correlations were reported for continuous variables, and odds ratios and area under the receiver operator characteristic (ROC) curve for binary outcome measures. Good agreement was found between both AE flare definitions and change in global bother: area under the ROC curve for treatment escalation of 0.70 and 0.73 in studies A and B respectively, and area under the ROC curve of 0.69 for topical anti-inflammatory medication use (Study A only). Significant positive relationships were found between validated severity scales (POEM, SASSAD, TIS) and the duration of AE flares occurring in the previous week - POEM and SASSAD rose by half a point for each unit increase in number of days in flare. Smaller increases were observed on the TIS scale. Completeness of daily diaries was 95% for Study A and 60% for Study B over 16 weeks. Both definitions were good proxy indicators of AE flares. We found no evidence that 'escalation of treatment' was a better measure of AE flares than 'use of topical anti-inflammatory medications'. Capturing disease flares in AE trials through daily recording of medication use is feasible and appears to be a good indicator of long-term control. Current Controlled Trials ISRCTN71423189 (Study A).

  13. Claims-Based Definition of Death in Japanese Claims Database: Validity and Implications

    Science.gov (United States)

    Ooba, Nobuhiro; Setoguchi, Soko; Ando, Takashi; Sato, Tsugumichi; Yamaguchi, Takuhiro; Mochizuki, Mayumi; Kubota, Kiyoshi

    2013-01-01

    Background For the pending National Claims Database in Japan, researchers will not have access to death information in the enrollment files. We developed and evaluated a claims-based definition of death. Methodology/Principal Findings We used healthcare claims and enrollment data between January 2005 and August 2009 for 195,193 beneficiaries aged 20 to 74 in 3 private health insurance unions. We developed claims-based definitions of death using discharge or disease status and Charlson comorbidity index (CCI). We calculated sensitivity, specificity and positive predictive values (PPVs) using the enrollment data as a gold standard in the overall population and subgroups divided by demographic and other factors. We also assessed bias and precision in two example studies where an outcome was death. The definition based on the combination of discharge/disease status and CCI provided moderate sensitivity (around 60%) and high specificity (99.99%) and high PPVs (94.8%). In most subgroups, sensitivity of the preferred definition was also around 60% but varied from 28 to 91%. In an example study comparing death rates between two anticancer drug classes, the claims-based definition provided valid and precise hazard ratios (HRs). In another example study comparing two classes of anti-depressants, the HR with the claims-based definition was biased and had lower precision than that with the gold standard definition. Conclusions/Significance The claims-based definitions of death developed in this study had high specificity and PPVs while sensitivity was around 60%. The definitions will be useful in future studies when used with attention to the possible fluctuation of sensitivity in some subpopulations. PMID:23741526

  14. Ecodriver. D23.1: Report on test scenarios for validation of on-line vehicle algorithms

    NARCIS (Netherlands)

    Seewald, P.; Ivens, T.W.T.; Spronkmans, S.

    2014-01-01

    This deliverable provides a description of the test scenarios that will be used for validation of WP22's on-line vehicle algorithms. These algorithms consist of the two modules VE³ (Vehicle Energy and Environment Estimator) and RSG (Reference Signal Generator) and will be tested using the

  15. Are Validity and Reliability "Relevant" in Qualitative Evaluation Research?

    Science.gov (United States)

    Goodwin, Laura D.; Goodwin, William L.

    1984-01-01

    The views of prominent qualitative methodologists on the appropriateness of validity and reliability estimation for the measurement strategies employed in qualitative evaluations are summarized. A case is made for the relevance of validity and reliability estimation. Definitions of validity and reliability for qualitative measurement are presented…

  16. Dosimetric validation of the anisotropic analytical algorithm for photon dose calculation: fundamental characterization in water

    International Nuclear Information System (INIS)

    Fogliata, Antonella; Nicolini, Giorgia; Vanetti, Eugenio; Clivio, Alessandro; Cozzi, Luca

    2006-01-01

    In July 2005 a new algorithm was released by Varian Medical Systems for the Eclipse planning system and installed in our institute. It is the anisotropic analytical algorithm (AAA) for photon dose calculations, a convolution/superposition model for the first time implemented in a Varian planning system. It was therefore necessary to perform validation studies at different levels with a wide investigation approach. To validate the basic performances of the AAA, a detailed analysis of data computed by the AAA configuration algorithm was carried out and data were compared against measurements. To better appraise the performance of AAA and the capability of its configuration to tailor machine-specific characteristics, data obtained from the pencil beam convolution (PBC) algorithm implemented in Eclipse were also added in the comparison. Since the purpose of the paper is to address the basic performances of the AAA and of its configuration procedures, only data relative to measurements in water will be reported. Validation was carried out for three beams: 6 MV and 15 MV from a Clinac 2100C/D and 6 MV from a Clinac 6EX. Generally AAA calculations reproduced very well measured data, and small deviations were observed, on average, for all the quantities investigated for open and wedged fields. In particular, percentage depth-dose curves showed on average differences between calculation and measurement smaller than 1% or 1 mm, and computed profiles in the flattened region matched measurements with deviations smaller than 1% for all beams, field sizes, depths and wedges. Percentage differences in output factors were observed as small as 1% on average (with a range smaller than ±2%) for all conditions. Additional tests were carried out for enhanced dynamic wedges with results comparable to previous results. The basic dosimetric validation of the AAA was therefore considered satisfactory

  17. Surveillance case definitions for work related upper limb pain syndromes

    OpenAIRE

    Harrington, J. M.; Carter, J. T.; Birrell, L.; Gompertz, D.

    1998-01-01

    OBJECTIVES: To establish consensus case definitions for several common work related upper limb pain syndromes for use in surveillance or studies of the aetiology of these conditions. METHODS: A group of healthcare professionals from the disciplines interested in the prevention and management of upper limb disorders were recruited for a Delphi exercise. A questionnaire was used to establish case definitions from the participants, followed by a consensus conference involving the core grou...

  18. Cross-Cultural Validation of the Definition of Multimorbidity in the Bulgarian Language.

    Science.gov (United States)

    Assenova, Radost S; Le Reste, Jean Yves; Foreva, Gergana H; Mileva, Daniela S; Czachowski, Slawomir; Sowinska, Agnieszka; Nabbe, Patrice; Argyriadou, Stella; Lazic, Djurdjica; Hasaganic, Melida; Lingner, Heidrun; Lygidakis, Harris; Muñoz, Miguel-Angel; Claveria, Ana; Doerr, Chista; Van Marwijk, Harm; Van Royen, Paul; Lietard, Claire

    2015-01-01

    Multimorbidity is a health issue of growing importance. During the last few decades the populations of most countries in the world have been ageing rapidly. Bulgaria is affected by this issue because of its large ageing population with multiple chronic conditions. The aim of the present study was to validate the definition of multimorbidity translated from English into the Bulgarian language. The present study is part of an international project involving 8 national groups. We performed a forward and backward translation of the original English definition of multimorbidity using a Delphi consensus procedure. The physicians involved accepted the definition with a high percentage of agreement in the first round. The backward translation was accepted by the scientific committee using the Nominal group technique. Some of the GPs commented on the linguistic expressions in order to improve understanding in Bulgarian. The remarks were not relevant to the content. The conclusion of the discussion, using a meta-ethnographic approach, was that the differences were acceptable and no further changes were required. A native version of the published English multimorbidity definition has been finalized. This definition is a prerequisite for better management of multimorbidity by clinicians, researchers and policy makers.

  19. Comparison of different criteria for periodontitis case definition in head and neck cancer individuals.

    Science.gov (United States)

    Bueno, Audrey Cristina; Ferreira, Raquel Conceição; Cota, Luis Otávio Miranda; Silva, Guilherme Carvalho; Magalhães, Cláudia Silami; Moreira, Allyson Nogueira

    2015-09-01

    Different periodontitis case definitions have been used in clinical research and epidemiology. The aim of this study was to determine the most accurate criterion for defining mild and moderate periodontitis cases in head and neck cancer individuals before radiotherapy. The frequency of periodontitis in a sample of 84 individuals was determined according to different diagnostic criteria: (1) Lopez et al. (2002); (2) Hujoel et al. (2006); (3) Beck et al. (1990); (4) Machtei et al. (1992); (5) Tonetti and Claffey (2005); and (6) Page and Eke (2007). All diagnoses were based on the clinical parameters obtained by a single calibrated examiner (Kw = 0.71). The individuals were evaluated before radiotherapy. They received oral hygiene instructions, and the cases diagnosed with periodontitis (Page and Eke 2007) were treated. The gold standard was definition 6, and the others were compared by means of agreement, sensitivity (SS), specificity (SP), and the area under the ROC curve. The kappa test evaluated the agreement between definitions. The frequency of periodontitis at baseline was 53.6% (definition 1), 81.0% (definition 2), 40.5% (definition 3), 26.2% (definition 4), 13.1% (definition 5), and 70.2% (definition 6). The kappa test showed moderate agreement between definitions 6 and 2 (59.0%) and definitions 6 and 1 (56.0%). The criterion with the highest SS (0.92) and SP (0.73) was definition 1. Definition 1 was the most accurate criterion for periodontitis case definition in head and neck cancer individuals.

  20. Chiari malformation Type I surgery in pediatric patients. Part 1: validation of an ICD-9-CM code search algorithm.

    Science.gov (United States)

    Ladner, Travis R; Greenberg, Jacob K; Guerrero, Nicole; Olsen, Margaret A; Shannon, Chevis N; Yarbrough, Chester K; Piccirillo, Jay F; Anderson, Richard C E; Feldstein, Neil A; Wellons, John C; Smyth, Matthew D; Park, Tae Sung; Limbrick, David D

    2016-05-01

    OBJECTIVE Administrative billing data may facilitate large-scale assessments of treatment outcomes for pediatric Chiari malformation Type I (CM-I). Validated International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) code algorithms for identifying CM-I surgery are critical prerequisites for such studies but are currently only available for adults. The objective of this study was to validate two ICD-9-CM code algorithms using hospital billing data to identify pediatric patients undergoing CM-I decompression surgery. METHODS The authors retrospectively analyzed the validity of two ICD-9-CM code algorithms for identifying pediatric CM-I decompression surgery performed at 3 academic medical centers between 2001 and 2013. Algorithm 1 included any discharge diagnosis code of 348.4 (CM-I), as well as a procedure code of 01.24 (cranial decompression) or 03.09 (spinal decompression or laminectomy). Algorithm 2 restricted this group to the subset of patients with a primary discharge diagnosis of 348.4. The positive predictive value (PPV) and sensitivity of each algorithm were calculated. RESULTS Among 625 first-time admissions identified by Algorithm 1, the overall PPV for CM-I decompression was 92%. Among the 581 admissions identified by Algorithm 2, the PPV was 97%. The PPV for Algorithm 1 was lower in one center (84%) compared with the other centers (93%-94%), whereas the PPV of Algorithm 2 remained high (96%-98%) across all subgroups. The sensitivity of Algorithms 1 (91%) and 2 (89%) was very good and remained so across subgroups (82%-97%). CONCLUSIONS An ICD-9-CM algorithm requiring a primary diagnosis of CM-I has excellent PPV and very good sensitivity for identifying CM-I decompression surgery in pediatric patients. These results establish a basis for utilizing administrative billing data to assess pediatric CM-I treatment outcomes.
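    The two algorithms translate directly into code over claims records. The sketch below encodes them for a toy admission structure; the dict layout and the assumption that the first listed code is the primary discharge diagnosis are illustrative choices, while the ICD-9 diagnosis and procedure codes are those given in the abstract.

```python
CM1_DX = "348.4"                           # Chiari malformation Type I
DECOMPRESSION_PROCS = {"01.24", "03.09"}   # cranial / spinal decompression

def algorithm_1(admission):
    """Any discharge diagnosis of 348.4 plus a decompression procedure code."""
    return (CM1_DX in admission["dx_codes"]
            and bool(DECOMPRESSION_PROCS & set(admission["proc_codes"])))

def algorithm_2(admission):
    """Algorithm 1 restricted to a primary discharge diagnosis of 348.4
    (here assumed to be the first listed diagnosis code)."""
    return (admission["dx_codes"][:1] == [CM1_DX]
            and bool(DECOMPRESSION_PROCS & set(admission["proc_codes"])))

admissions = [
    {"dx_codes": ["348.4", "784.0"], "proc_codes": ["01.24"]},  # primary CM-I
    {"dx_codes": ["784.0", "348.4"], "proc_codes": ["03.09"]},  # secondary CM-I
    {"dx_codes": ["348.4"],          "proc_codes": ["88.91"]},  # no decompression
]
print([algorithm_1(a) for a in admissions])   # [True, True, False]
print([algorithm_2(a) for a in admissions])   # [True, False, False]
```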

  1. Automation of RELAP5 input calibration and code validation using genetic algorithm

    International Nuclear Information System (INIS)

    Phung, Viet-Anh; Kööp, Kaspar; Grishchenko, Dmitry; Vorobyev, Yury; Kudinov, Pavel

    2016-01-01

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the
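    To make the role of normalization and weighting factors concrete, the following sketch shows one plausible form of a multi-SRQ fitness function, exercised with a stand-in simulator and a crude random search in place of the genetic algorithm; the SRQ names, weights and toy model are assumptions and carry no RELAP5 specifics.

```python
import numpy as np

def fitness(params, experiment, weights, scales, simulate):
    """Lower is better: weighted, normalized misfit summed over all SRQs."""
    prediction = simulate(params)            # dict: SRQ name -> simulated value
    return sum(weights[srq] * abs(prediction[srq] - measured) / scales[srq]
               for srq, measured in experiment.items())

def toy_simulator(params):
    """Stand-in model mapping two uncertain inputs onto two SRQs."""
    k_loss, heater_w = params
    return {"max_flow_kg_s": 2.0 / (1.0 + k_loss),
            "osc_period_s": 10.0 + 0.002 * heater_w}

experiment = {"max_flow_kg_s": 1.25, "osc_period_s": 14.0}
weights = {"max_flow_kg_s": 1.0, "osc_period_s": 1.0}
scales = {srq: value for srq, value in experiment.items()}  # normalize by measurement

rng = np.random.default_rng(1)
candidates = rng.uniform([0.0, 0.0], [2.0, 5000.0], size=(500, 2))
best = min(candidates,
           key=lambda p: fitness(p, experiment, weights, scales, toy_simulator))
print("best (k_loss, heater W):", best)
```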

  2. Automation of RELAP5 input calibration and code validation using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Phung, Viet-Anh, E-mail: vaphung@kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Kööp, Kaspar, E-mail: kaspar@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Grishchenko, Dmitry, E-mail: dmitry@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Vorobyev, Yury, E-mail: yura3510@gmail.com [National Research Center “Kurchatov Institute”, Kurchatov square 1, Moscow 123182 (Russian Federation); Kudinov, Pavel, E-mail: pavel@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden)

    2016-04-15

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the

  3. Surveillance of paediatric traumatic brain injuries using the NEISS: choosing an appropriate case definition.

    Science.gov (United States)

    Thompson, Meghan C; Wheeler, Krista K; Shi, Junxin; Smith, Gary A; Groner, Jonathan I; Haley, Kathryn J; Xiang, Huiyun

    2014-01-01

    To evaluate the definition of traumatic brain injury (TBI) in the National Electronic Injury Surveillance System (NEISS) and compare TBI case ascertainment using NEISS vs. ICD-9-CM diagnosis coding. Two data samples from a NEISS participating emergency department (ED) in 2008 were compared: (1) NEISS records meeting the recommended NEISS TBI definition and (2) Hospital ED records meeting the ICD-9-CM CDC recommended TBI definition. The sensitivity and positive predictive value were calculated for the NEISS definition using the ICD-9-CM definition as the gold standard. Further analyses were performed to describe cases characterized as TBIs in both datasets and to determine why some cases were not classified as TBIs in both datasets. There were 1834 TBI cases captured by the NEISS and 1836 TBI cases captured by the ICD-9-CM coded ED record, but only 1542 were eligible for inclusion in NEISS. There were 1403 cases classified as TBIs by both the NEISS and ICD-9-CM diagnosis codes. The NEISS TBI definition had a sensitivity of 91.0% (95% CI = 89.6-92.4%) and positive predictive value of 76.5% (95% CI = 74.6-78.4%). Using the NEISS TBI definition presented in this paper would standardize and improve the accuracy of TBI research using the NEISS.

  4. Derivation and validation of the automated search algorithms to identify cognitive impairment and dementia in electronic health records.

    Science.gov (United States)

    Amra, Sakusic; O'Horo, John C; Singh, Tarun D; Wilson, Gregory A; Kashyap, Rahul; Petersen, Ronald; Roberts, Rosebud O; Fryer, John D; Rabinstein, Alejandro A; Gajic, Ognjen

    2017-02-01

    Long-term cognitive impairment is a common and important problem in survivors of critical illness. We developed electronic search algorithms to identify cognitive impairment and dementia from the electronic medical records (EMRs) that provide opportunity for big data analysis. Eligible patients met 2 criteria. First, they had a formal cognitive evaluation by The Mayo Clinic Study of Aging. Second, they were hospitalized in intensive care unit at our institution between 2006 and 2014. The "criterion standard" for diagnosis was formal cognitive evaluation supplemented by input from an expert neurologist. Using all available EMR data, we developed and improved our algorithms in the derivation cohort and validated them in the independent validation cohort. Of 993 participants who underwent formal cognitive testing and were hospitalized in intensive care unit, we selected 151 participants at random to form the derivation and validation cohorts. The automated electronic search algorithm for cognitive impairment was 94.3% sensitive and 93.0% specific. The search algorithms for dementia achieved respective sensitivity and specificity of 97% and 99%. EMR search algorithms significantly outperformed International Classification of Diseases codes. Automated EMR data extractions for cognitive impairment and dementia are reliable and accurate and can serve as acceptable and efficient alternatives to time-consuming manual data review. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Validity and reliability of three definitions of hip osteoarthritis: cross sectional and longitudinal approach

    NARCIS (Netherlands)

    M. Reijman (Max); J.M.W. Hazes (Mieke); H.A.P. Pols (Huib); R.M.D. Bernsen (Roos); B.W. Koes (Bart); S.M. Bierma-Zeinstra (Sita)

    2004-01-01

    textabstractOBJECTIVES: To compare the reliability and validity in a large open population of three frequently used radiological definitions of hip osteoarthritis (OA): Kellgren and Lawrence grade, minimal joint space (MJS), and Croft grade; and to investigate whether the

  6. Quantitative validation of a new coregistration algorithm

    International Nuclear Information System (INIS)

    Pickar, R.D.; Esser, P.D.; Pozniakoff, T.A.; Van Heertum, R.L.; Stoddart, H.A. Jr.

    1995-01-01

    A new coregistration software package, Neuro900 Image Coregistration software, has been developed specifically for nuclear medicine. With this algorithm, the correlation coefficient is maximized between volumes generated from sets of transaxial slices. No localization markers or segmented surfaces are needed. The coregistration program was evaluated for translational and rotational registration accuracy. A Tc-99m HM-PAO split-dose study (0.53 mCi low dose, L, and 1.01 mCi high dose, H) was simulated with a Hoffman Brain Phantom with five fiducial markers. Translation error was determined by a shift in image centroid, and rotation error was determined by a simplified two-axis approach. Changes in registration accuracy were measured with respect to: (1) slice spacing, using the four different combinations LL, LH, HL and HH; (2) translational and rotational misalignment before coregistration; and (3) changes in the step size of the iterative parameters. In all cases the algorithm converged with only small differences in translation offset and in the two rotation angles. At 6 mm slice spacing, translational errors ranged from 0.9 to 2.8 mm (system resolution at 100 mm, 6.8 mm). The converged parameters showed little sensitivity to count density. In addition, the correlation coefficient increased with decreasing iterative step size, as expected. From these experiments, the authors found that this algorithm, based on the maximization of the correlation coefficient between studies, was an accurate way to coregister SPECT brain images.
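    As an illustration of registration by maximizing the correlation coefficient (the stated principle, not the commercial package's actual optimizer), the sketch below recovers an integer shift along one axis of a synthetic volume by exhaustive search.

```python
import numpy as np

rng = np.random.default_rng(2)
reference = rng.random((32, 32, 32))
# "Low-dose" study: the reference shifted by 3 voxels along z, plus noise.
moving = np.roll(reference, 3, axis=2) + 0.1 * rng.standard_normal((32, 32, 32))

def corr_coeff(a, b):
    """Pearson correlation coefficient between two volumes."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

shifts = range(-8, 9)
scores = {s: corr_coeff(reference, np.roll(moving, -s, axis=2)) for s in shifts}
best_shift = max(scores, key=scores.get)
print("estimated z shift (voxels):", best_shift)   # expected: 3
```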

  7. Case definition for Ebola and Marburg haemorrhagic fevers: a complex challenge for epidemiologists and clinicians.

    Science.gov (United States)

    Pittalis, Silvia; Fusco, Francesco Maria; Lanini, Simone; Nisii, Carla; Puro, Vincenzo; Lauria, Francesco Nicola; Ippolito, Giuseppe

    2009-10-01

    Viral haemorrhagic fevers (VHFs) represent a challenge for public health because of their epidemic potential, and their possible use as bioterrorism agents poses particular concern. In 1999 the World Health Organization (WHO) proposed a case definition for VHFs, subsequently adopted by other international institutions with the aim of early detection of initial cases/outbreaks in western countries. We applied this case definition to reports of Ebola and Marburg virus infections to estimate its sensitivity to detect cases of the disease. We analyzed clinical descriptions of 795 reported cases of Ebola haemorrhagic fever: only 58.5% of patients met the proposed case definition. A similar figure was obtained reviewing 169 cases of Marburg diseases, of which only 64.5% were in accordance with the case definition. In conclusion, the WHO case definition for hemorrhagic fevers is too specific and has poor sensitivity both for case finding during Ebola or Marburg outbreaks, and for early detection of suspected cases in western countries. It can lead to a hazardous number of false negatives and its use should be discouraged for early detection of cases.

  8. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Science.gov (United States)

    Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen

    2015-01-01

    integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. Additionally, the team has developed processes for implementing and validating these algorithms for concept validation and risk reduction for the SLS program. The flexibility of the Vehicle Management End-to-end Testbed (VMET) enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS. The intent of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software development infrastructure and its related testing entities. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test cases into flight software compounded with potential human errors throughout the development lifecycle. Risk reduction is addressed by the M&FM analysis group working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses that can be tested in VMET to ensure that failures can be detected, and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW

  9. Implementation of Chaid Algorithm: A Hotel Case

    Directory of Open Access Journals (Sweden)

    Celal Hakan Kagnicioglu

    2016-01-01

    Full Text Available Today, companies plan their activities based on efficiency and effectiveness. In order to plan future activities they need historical data coming from outside and inside the company. However, this data comes in amounts too huge to understand easily. Since this huge amount of data creates complexity in business for many industries, such as the hospitality industry, reliable, accurate and fast access to this data is one of the greatest problems. Besides, management of this data is another big problem. In order to analyze this huge amount of data, Data Mining (DM) tools can be used effectively. In this study, after a brief definition of the fundamentals of data mining, the Chi-Squared Automatic Interaction Detection (CHAID) algorithm, one of the most widely used DM tools, is introduced. Using the CHAID algorithm, the most frequently used materials in the room cleaning process, and the relations of these materials, are determined from the data of a five-star hotel. At the end of the analysis, it is seen that while some variables have a strong relation with the number of rooms cleaned in the hotel, others have a weak relation or none.
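    To make the CHAID idea concrete, the sketch below performs the algorithm's core step - choosing the split variable whose cross-tabulation with the target is most significant by a chi-squared test - on a small invented housekeeping dataset; it is an illustration of the technique, not the study's analysis.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Invented example data: categorical predictors and a categorical target.
data = pd.DataFrame({
    "rooms_cleaned": ["high", "high", "low", "low", "high", "low", "high", "low"],
    "detergent":     ["A", "A", "B", "B", "A", "B", "A", "A"],
    "shift":         ["day", "day", "night", "day", "night", "night", "day", "night"],
})

def best_split(df, target):
    """Return the predictor with the smallest chi-squared p-value vs. the target."""
    p_values = {}
    for col in df.columns.drop(target):
        table = pd.crosstab(df[col], df[target])
        _, p, _, _ = chi2_contingency(table)
        p_values[col] = p
    return min(p_values, key=p_values.get), p_values

split_var, p_values = best_split(data, "rooms_cleaned")
print("p-values:", p_values)
print("CHAID would split first on:", split_var)
```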

  10. Implementation of Chaid Algorithm: A Hotel Case

    Directory of Open Access Journals (Sweden)

    Celal Hakan Kağnicioğlu

    2014-11-01

    Full Text Available Today, companies plan their activities based on efficiency and effectiveness. In order to plan future activities they need historical data coming from outside and inside the company. However, this data comes in amounts too huge to understand easily. Since this huge amount of data creates complexity in business for many industries, such as the hospitality industry, reliable, accurate and fast access to this data is one of the greatest problems. Besides, management of this data is another big problem. In order to analyze this huge amount of data, Data Mining (DM) tools can be used effectively. In this study, after a brief definition of the fundamentals of data mining, the Chi-Squared Automatic Interaction Detection (CHAID) algorithm, one of the most widely used DM tools, is introduced. Using the CHAID algorithm, the most frequently used materials in the room cleaning process, and the relations of these materials, are determined from the data of a five-star hotel. At the end of the analysis, it is seen that while some variables have a strong relation with the number of rooms cleaned in the hotel, others have a weak relation or none.

  11. Assessment of severe malaria in a multicenter, phase III, RTS, S/AS01 malaria candidate vaccine trial: case definition, standardization of data collection and patient care.

    Science.gov (United States)

    Vekemans, Johan; Marsh, Kevin; Greenwood, Brian; Leach, Amanda; Kabore, William; Soulanoudjingar, Solange; Asante, Kwaku Poku; Ansong, Daniel; Evans, Jennifer; Sacarlal, Jahit; Bejon, Philip; Kamthunzi, Portia; Salim, Nahya; Njuguna, Patricia; Hamel, Mary J; Otieno, Walter; Gesase, Samwel; Schellenberg, David

    2011-08-04

    An effective malaria vaccine, deployed in conjunction with other malaria interventions, is likely to substantially reduce the malaria burden. Efficacy against severe malaria will be a key driver for decisions on implementation. An initial study of an RTS, S vaccine candidate showed promising efficacy against severe malaria in children in Mozambique. Further evidence of its protective efficacy will be gained in a pivotal, multi-centre, phase III study. This paper describes the case definitions of severe malaria used in this study and the programme for standardized assessment of severe malaria according to the case definition. Case definitions of severe malaria were developed from a literature review and a consensus meeting of expert consultants and the RTS, S Clinical Trial Partnership Committee, in collaboration with the World Health Organization and the Malaria Clinical Trials Alliance. The same groups, with input from an Independent Data Monitoring Committee, developed and implemented a programme for standardized data collection. The case definitions developed reflect the typical presentations of severe malaria in African hospitals. Markers of disease severity were chosen on the basis of their association with poor outcome, occurrence in a significant proportion of cases and on an ability to standardize their measurement across research centres. For the primary case definition, one or more clinical and/or laboratory markers of disease severity have to be present, four major co-morbidities (pneumonia, meningitis, bacteraemia or gastroenteritis with severe dehydration) are excluded, and a Plasmodium falciparum parasite density threshold is introduced, in order to maximize the specificity of the case definition. Secondary case definitions allow inclusion of co-morbidities and/or allow for the presence of parasitaemia at any density. The programmatic implementation of standardized case assessment included a clinical algorithm for evaluating seriously sick children

  12. Clinical validation of a body-fixed 3D accelerometer and algorithm for activity monitoring in orthopaedic patients

    Directory of Open Access Journals (Sweden)

    Matthijs Lipperts

    2017-10-01

    Conclusion: Activity monitoring of orthopaedic patients by counting and timing a large set of relevant daily life events is feasible in a user- and patient-friendly way and at high clinical validity using a generic three-dimensional accelerometer and algorithms based on empirical and physical methods. The algorithms performed well for healthy individuals as well as patients recovering after total joint replacement in a challenging validation set-up. With such a simple and transparent method real-life activity parameters can be collected in orthopaedic practice for diagnostics, treatments, outcome assessment, or biofeedback.
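    One basic ingredient of such accelerometer algorithms is thresholding the gravity-removed signal magnitude per epoch to separate active from inactive time. The sketch below illustrates this on synthetic data; the sampling rate, epoch length and threshold are arbitrary placeholders, not the validated parameters of the study.

```python
import numpy as np

fs = 50                                    # samples per second (assumed)
t = np.arange(0, 60, 1 / fs)               # one minute of synthetic data
walking = 0.4 * np.sin(2 * np.pi * 2.0 * t)            # ~2 Hz gait-like signal
az = 1.0 + np.where(t < 30, walking, 0.0)              # active half, resting half (g)
ax = 0.05 * np.random.default_rng(3).standard_normal(len(t))
ay = 0.05 * np.random.default_rng(4).standard_normal(len(t))

magnitude = np.sqrt(ax**2 + ay**2 + az**2) - 1.0       # remove static gravity
epoch = fs * 5                                         # 5-second epochs
n_epochs = len(t) // epoch
activity = [np.mean(np.abs(magnitude[i * epoch:(i + 1) * epoch]))
            for i in range(n_epochs)]
active_epochs = sum(a > 0.1 for a in activity)         # assumed activity threshold
print(f"{active_epochs} of {n_epochs} epochs classified as active")
```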

  13. Data Mining: Comparing the Empiric CFS to the Canadian ME/CFS Case Definition

    OpenAIRE

    Jason, Leonard A.; Skendrovic, Beth; Furst, Jacob; Brown, Abigail; Weng, Angela; Bronikowski, Christine

    2011-01-01

    This article contrasts two case definitions for Myalgic Encephalomyelitis/chronic fatigue syndrome (ME/CFS). We compared the empiric CFS case definition (Reeves et al., 2005) and the Canadian ME/CFS Clinical case definition (Carruthers et al., 2003) with a sample of individuals with CFS versus those without. Data mining with decision trees was used to identify the best items to identify patients with CFS. Data mining is a statistical technique that was used to help determine which of the surv...

  14. Case studies can support definitions of workplace innovation in practice

    NARCIS (Netherlands)

    Vaas, F.; Žiauberytė-Jakštienė, R.; Oeij, P.R.

    2017-01-01

    Many practitioners find it problematic to understand and describe workplace innovation (WPI). Whereas there are well-known definitions of WPI, these remain highly abstract. We argue that, for practitioners, case examples of WPI best practices can be a valuable addition to these definitions. In this

  15. Update of the Case Definitions for Population-Based Surveillance of Periodontitis

    Science.gov (United States)

    Eke, Paul I.; Page, Roy C.; Wei, Liang; Thornton-Evans, Gina; Genco, Robert J.

    2018-01-01

    Background This report adds a new definition for mild periodontitis that allows for better descriptions of the overall prevalence of periodontitis in populations. In 2007, the Centers for Disease Control and Prevention in partnership with the American Academy of Periodontology developed and reported standard case definitions for surveillance of moderate and severe periodontitis based on measurements of probing depth (PD) and clinical attachment loss (AL) at interproximal sites. However, combined cases of moderate and severe periodontitis are insufficient to determine the total prevalence of periodontitis in populations. Methods The authors proposed a definition for mild periodontitis as ≥2 interproximal sites with AL ≥3 mm and ≥2 interproximal sites with PD ≥4 mm (not on the same tooth) or one site with PD ≥5 mm. The effect of the proposed definition on the total burden of periodontitis was assessed in a convenience sample of 456 adults ≥35 years old and compared with other previously reported definitions for similar categories of periodontitis. Results Addition of mild periodontitis increases the total prevalence of periodontitis by ≈31% in this sample when compared with the prevalence of severe and moderate disease. Conclusion Total periodontitis using the case definitions in this study should be based on the sum of mild, moderate, and severe periodontitis. PMID:22420873
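    The proposed mild-periodontitis definition can be written directly as a classification rule over site-level measurements, as in the sketch below; the site-record layout is assumed for illustration, while the thresholds come from the definition quoted above.

```python
def is_mild_periodontitis(sites):
    """>=2 interproximal sites with AL >=3 mm AND (>=2 interproximal sites with
    PD >=4 mm on different teeth OR >=1 site with PD >=5 mm)."""
    interprox = [s for s in sites if s["interproximal"]]
    al_sites = [s for s in interprox if s["al_mm"] >= 3]
    pd4_teeth = {s["tooth"] for s in interprox if s["pd_mm"] >= 4}
    pd5_any = any(s["pd_mm"] >= 5 for s in interprox)
    return len(al_sites) >= 2 and (len(pd4_teeth) >= 2 or pd5_any)

patient_sites = [
    {"tooth": 14, "interproximal": True, "al_mm": 3, "pd_mm": 4},
    {"tooth": 19, "interproximal": True, "al_mm": 4, "pd_mm": 4},
    {"tooth": 30, "interproximal": True, "al_mm": 2, "pd_mm": 3},
]
print(is_mild_periodontitis(patient_sites))   # True: 2 AL sites, PD >=4 mm on 2 teeth
```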

  16. A Systematic Review of Validated Methods for Identifying Cerebrovascular Accident or Transient Ischemic Attack Using Administrative Data

    Science.gov (United States)

    Andrade, Susan E.; Harrold, Leslie R.; Tjia, Jennifer; Cutrona, Sarah L.; Saczynski, Jane S.; Dodd, Katherine S.; Goldberg, Robert J.; Gurwitz, Jerry H.

    2012-01-01

    Purpose: To perform a systematic review of the validity of algorithms for identifying cerebrovascular accidents (CVAs) or transient ischemic attacks (TIAs) using administrative and claims data. Methods: PubMed and Iowa Drug Information Service (IDIS) searches of the English language literature were performed to identify studies published between 1990 and 2010 that evaluated the validity of algorithms for identifying CVAs (ischemic and hemorrhagic strokes, intracranial hemorrhage and subarachnoid hemorrhage) and/or TIAs in administrative data. Two study investigators independently reviewed the abstracts and articles to determine relevant studies according to pre-specified criteria. Results: A total of 35 articles met the criteria for evaluation. Of these, 26 articles provided data to evaluate the validity of stroke, 7 reported the validity of TIA, 5 reported the validity of intracranial bleeds (intracerebral hemorrhage and subarachnoid hemorrhage), and 10 studies reported the validity of algorithms to identify the composite endpoints of stroke/TIA or cerebrovascular disease. Positive predictive values (PPVs) varied depending on the specific outcomes and algorithms evaluated. Specific algorithms to evaluate the presence of stroke and intracranial bleeds were found to have high PPVs (80% or greater). Algorithms to evaluate TIAs in adult populations were generally found to have PPVs of 70% or greater. Conclusions: The algorithms and definitions to identify CVAs and TIAs using administrative and claims data differ greatly in the published literature. The choice of the algorithm employed should be determined by the stroke subtype of interest. PMID:22262598

  17. Algorithm for Wireless Sensor Networks Based on Grid Management

    Directory of Open Access Journals (Sweden)

    Geng Zhang

    2014-05-01

    Full Text Available This paper analyzes the key issues in trust models for wireless sensor networks and describes how such a model can be built, covering the definition of trust for wireless sensor networks, its computation, and the application of a credibility-based trust model. To address the vulnerability of nodes to attack, the paper proposes a grid-based trust algorithm that deepens the trust model within a credit-management framework. The algorithm screens nodes for reliability and schedules their rotation so that trusted nodes cover the area in parallel. The analysis shows that the size of the trust threshold has a strong influence on safety and on the quality of coverage across the whole covered area. Simulation tests confirm the validity and correctness of the algorithm.

  18. Appendix F. Developmental enforcement algorithm definition document : predictive braking enforcement algorithm definition document.

    Science.gov (United States)

    2012-05-01

    The purpose of this document is to fully define and describe the logic flow and mathematical equations for a predictive braking enforcement algorithm intended for implementation in a Positive Train Control (PTC) system.

  19. Robust Object Tracking Using Valid Fragments Selection.

    Science.gov (United States)

    Zheng, Jin; Li, Bo; Tian, Peng; Luo, Gang

    Local features are widely used in visual tracking to improve robustness in cases of partial occlusion, deformation and rotation. This paper proposes a local fragment-based object tracking algorithm. Unlike many existing fragment-based algorithms that allocate the weights to each fragment, this method firstly defines discrimination and uniqueness for local fragment, and builds an automatic pre-selection of useful fragments for tracking. Then, a Harris-SIFT filter is used to choose the current valid fragments, excluding occluded or highly deformed fragments. Based on those valid fragments, fragment-based color histogram provides a structured and effective description for the object. Finally, the object is tracked using a valid fragment template combining the displacement constraint and similarity of each valid fragment. The object template is updated by fusing feature similarity and valid fragments, which is scale-adaptive and robust to partial occlusion. The experimental results show that the proposed algorithm is accurate and robust in challenging scenarios.

  20. Detecting free-living steps and walking bouts: validating an algorithm for macro gait analysis.

    Science.gov (United States)

    Hickey, Aodhán; Del Din, Silvia; Rochester, Lynn; Godfrey, Alan

    2017-01-01

    Research suggests wearables and not instrumented walkways are better suited to quantify gait outcomes in clinic and free-living environments, providing a more comprehensive overview of walking due to continuous monitoring. Numerous validation studies in controlled settings exist, but few have examined the validity of wearables and associated algorithms for identifying and quantifying step counts and walking bouts in uncontrolled (free-living) environments. Studies which have examined free-living step and bout count validity found limited agreement due to variations in walking speed, changing terrain or task. Here we present a gait segmentation algorithm to define free-living step count and walking bouts from an open-source, high-resolution, accelerometer-based wearable (AX3, Axivity). Ten healthy participants (20-33 years) wore two portable gait measurement systems; a wearable accelerometer on the lower-back and a wearable body-mounted camera (GoPro HERO) on the chest, for 1 h on two separate occasions (24 h apart) during free-living activities. Step count and walking bouts were derived for both measurement systems and compared. For all participants during a total of almost 20 h of uncontrolled and unscripted free-living activity data, excellent relative (rho ⩾ 0.941) and absolute (ICC(2,1) ⩾ 0.975) agreement with no presence of bias were identified for step count compared to the camera (gold standard reference). Walking bout identification showed excellent relative (rho ⩾ 0.909) and absolute agreement (ICC(2,1) ⩾ 0.941) but demonstrated significant bias. The algorithm employed for identifying and quantifying steps and bouts from a single wearable accelerometer worn on the lower-back has been demonstrated to be valid and could be used for pragmatic gait analysis in prolonged uncontrolled free-living environments.
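
    For readers unfamiliar with the two agreement statistics quoted above, the sketch below computes Spearman's rho (relative agreement) and a two-way random, single-measure ICC(2,1) (absolute agreement) for hypothetical per-participant step counts from a wearable and a camera reference; the numbers are invented for illustration.

    ```python
    # Illustrative agreement statistics: Spearman's rho and ICC(2,1) (Shrout & Fleiss two-way random, single measure).
    import numpy as np
    from scipy.stats import spearmanr

    def icc_2_1(x, y):
        data = np.column_stack([x, y]).astype(float)
        n, k = data.shape
        grand = data.mean()
        ms_rows = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)   # between-subjects mean square
        ms_cols = n * np.sum((data.mean(axis=0) - grand) ** 2) / (k - 1)   # between-raters mean square
        ss_err = np.sum((data - data.mean(axis=1, keepdims=True)
                         - data.mean(axis=0, keepdims=True) + grand) ** 2)
        ms_err = ss_err / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

    wearable = np.array([5321, 4810, 6002, 5530, 4975])   # hypothetical step counts
    camera = np.array([5290, 4850, 5950, 5601, 5010])     # hypothetical reference counts
    rho, _ = spearmanr(wearable, camera)
    print(f"rho={rho:.3f}, ICC(2,1)={icc_2_1(wearable, camera):.3f}")
    ```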

  1. Indications for spine surgery: validation of an administrative coding algorithm to classify degenerative diagnoses

    Science.gov (United States)

    Lurie, Jon D.; Tosteson, Anna N.A.; Deyo, Richard A.; Tosteson, Tor; Weinstein, James; Mirza, Sohail K.

    2014-01-01

    Study Design: Retrospective analysis of Medicare claims linked to a multi-center clinical trial. Objective: The Spine Patient Outcomes Research Trial (SPORT) provided a unique opportunity to examine the validity of a claims-based algorithm for grouping patients by surgical indication. SPORT enrolled patients for lumbar disc herniation, spinal stenosis, and degenerative spondylolisthesis. We compared the surgical indication derived from Medicare claims to that provided by SPORT surgeons, the “gold standard”. Summary of Background Data: Administrative data are frequently used to report procedure rates, surgical safety outcomes, and costs in the management of spinal surgery. However, the accuracy of using diagnosis codes to classify patients by surgical indication has not been examined. Methods: Medicare claims were linked to beneficiaries enrolled in SPORT. The sensitivity and specificity of three claims-based approaches to group patients based on surgical indications were examined: 1) using the first listed diagnosis; 2) using all diagnoses independently; and 3) using a diagnosis hierarchy based on the support for fusion surgery. Results: Medicare claims were obtained from 376 SPORT participants, including 21 with disc herniation, 183 with spinal stenosis, and 172 with degenerative spondylolisthesis. The hierarchical coding algorithm was the most accurate approach for classifying patients by surgical indication, with sensitivities of 76.2%, 88.1%, and 84.3% for disc herniation, spinal stenosis, and degenerative spondylolisthesis cohorts, respectively. The specificity was 98.3% for disc herniation, 83.2% for spinal stenosis, and 90.7% for degenerative spondylolisthesis. Misclassifications were primarily due to codes attributing more complex pathology to the case. Conclusion: Standardized approaches for using claims data to accurately group patients by surgical indications have widespread interest. We found that a hierarchical coding approach correctly classified over 90
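
    The hierarchical approach described above can be pictured as "map each listed diagnosis to an indication group, then let the highest-ranked group win". The sketch below is a hedged illustration only: the ICD-9 code lists and the precedence order are placeholders, not the study's actual algorithm.

    ```python
    # Hedged sketch of a hierarchical claims-coding classifier; codes and precedence are placeholders.
    HIERARCHY = ["degenerative_spondylolisthesis", "spinal_stenosis", "disc_herniation"]  # assumed precedence
    CODE_GROUPS = {
        "degenerative_spondylolisthesis": {"738.4"},   # placeholder ICD-9 code lists
        "spinal_stenosis": {"724.02"},
        "disc_herniation": {"722.10"},
    }

    def classify_indication(claim_diagnosis_codes):
        """Return the highest-ranked indication matched by any listed diagnosis code, or None."""
        matched = {group for group, codes in CODE_GROUPS.items() if codes & set(claim_diagnosis_codes)}
        for group in HIERARCHY:
            if group in matched:
                return group
        return None
    ```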

  2. Validation test case generation based on safety analysis ontology

    International Nuclear Information System (INIS)

    Fan, Chin-Feng; Wang, Wen-Shing

    2012-01-01

    Highlights: ► Current practice in validation test case generation for nuclear systems is mainly ad hoc. ► This study designs a systematic approach to generate validation test cases from a Safety Analysis Report. ► It is based on a domain-specific ontology. ► Test coverage criteria have been defined and satisfied. ► A computerized toolset has been implemented to assist the proposed approach. - Abstract: Validation tests in the current nuclear industry practice are typically performed in an ad hoc fashion. This study presents a systematic and objective method of generating validation test cases from a Safety Analysis Report (SAR). A domain-specific ontology was designed and used to mark up a SAR; relevant information was then extracted from the marked-up document for use in automatically generating validation test cases that satisfy the proposed test coverage criteria; namely, single parameter coverage, use case coverage, abnormal condition coverage, and scenario coverage. The novelty of this technique is its systematic rather than ad hoc test case generation from a SAR to achieve high test coverage.

  3. Definition and initial validation of a Lupus Low Disease Activity State (LLDAS).

    Science.gov (United States)

    Franklyn, Kate; Lau, Chak Sing; Navarra, Sandra V; Louthrenoo, Worawit; Lateef, Aisha; Hamijoyo, Laniyati; Wahono, C Singgih; Chen, Shun Le; Jin, Ou; Morton, Susan; Hoi, Alberta; Huq, Molla; Nikpour, Mandana; Morand, Eric F

    2016-09-01

    Treating to low disease activity is routine in rheumatoid arthritis, but no comparable goal has been defined for systemic lupus erythematosus (SLE). We sought to define and validate a Lupus Low Disease Activity State (LLDAS). A consensus definition of LLDAS was generated using Delphi and nominal group techniques. Criterion validity was determined by measuring the ability of LLDAS attainment, in a single-centre SLE cohort, to predict non-accrual of irreversible organ damage, measured using the Systemic Lupus International Collaborating Clinics Damage Index (SDI). Consensus methodology led to the following definition of LLDAS: (1) SLE Disease Activity Index (SLEDAI)-2K ≤4, with no activity in major organ systems (renal, central nervous system (CNS), cardiopulmonary, vasculitis, fever) and no haemolytic anaemia or gastrointestinal activity; (2) no new lupus disease activity compared with the previous assessment; (3) a Safety of Estrogens in Lupus Erythematosus National Assessment (SELENA)-SLEDAI physician global assessment (scale 0-3) ≤1; (4) a current prednisolone (or equivalent) dose ≤7.5 mg daily; and (5) well tolerated standard maintenance doses of immunosuppressive drugs and approved biological agents. Achievement of LLDAS was determined in 191 patients followed for a mean of 3.9 years. Patients who spent greater than 50% of their observed time in LLDAS had significantly reduced organ damage accrual compared with patients who spent less than 50% of their time in LLDAS (p=0.0007) and were significantly less likely to have an increase in SDI of ≥1 (relative risk 0.47, 95% CI 0.28 to 0.79, p=0.005). A definition of LLDAS has been generated, and preliminary validation demonstrates its attainment to be associated with improved outcomes in SLE. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
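
    Because the abstract enumerates the LLDAS criteria explicitly, they can be expressed as a simple conjunction. The sketch below is a minimal, hedged rendering of those five criteria for a single assessment; the field names are assumptions, and the "no activity in major organ systems" item is collapsed into one flag for brevity.

    ```python
    # Minimal sketch of the five LLDAS criteria listed in the abstract (field names assumed).
    def meets_lldas(a):
        return (
            a["sledai_2k"] <= 4
            and not a["major_organ_or_special_activity"]     # renal, CNS, cardiopulmonary, vasculitis,
                                                             # fever, haemolytic anaemia, GI activity
            and not a["new_activity_since_last_visit"]
            and a["physician_global_assessment"] <= 1        # SELENA-SLEDAI PGA on a 0-3 scale
            and a["prednisolone_mg_per_day"] <= 7.5
            and a["maintenance_therapy_well_tolerated"]
        )

    assessment = {  # hypothetical visit
        "sledai_2k": 2, "major_organ_or_special_activity": False, "new_activity_since_last_visit": False,
        "physician_global_assessment": 1.0, "prednisolone_mg_per_day": 5.0,
        "maintenance_therapy_well_tolerated": True,
    }
    print(meets_lldas(assessment))  # True
    ```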

  4. Validation of deformable image registration algorithms on CT images of ex vivo porcine bladders with fiducial markers

    Energy Technology Data Exchange (ETDEWEB)

    Wognum, S., E-mail: s.wognum@gmail.com; Heethuis, S. E.; Bel, A. [Department of Radiation Oncology, Academic Medical Center, Meibergdreef 9, 1105 AZ Amsterdam (Netherlands); Rosario, T. [Department of Radiation Oncology, VU University Medical Center, De Boelelaan 1117, 1081 HZ Amsterdam (Netherlands); Hoogeman, M. S. [Department of Radiation Oncology, Erasmus MC Cancer Institute, Erasmus Medical Center, Groene Hilledijk 301, 3075 EA Rotterdam (Netherlands)

    2014-07-15

    Purpose: The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations. Excised porcine bladders, exhibiting similar filling and voiding behavior as human bladders, provide such an environment. The aim of this study was to determine the spatial accuracy of different DIR algorithms on CT images of ex vivo porcine bladders with radiopaque fiducial markers applied to the outer surface, for a range of bladder volumes, using various accuracy metrics. Methods: Five excised porcine bladders with a grid of 30–40 radiopaque fiducial markers attached to the outer wall were suspended inside a water-filled phantom. The bladder was filled with a controlled amount of water with added contrast medium for a range of filling volumes (100–400 ml in steps of 50 ml) using a luer lock syringe, and CT scans were acquired at each filling volume. DIR was performed for each data set, with the 100 ml bladder as the reference image. Six intensity-based algorithms (optical flow or demons-based) implemented in the MATLAB platform DIRART, a b-spline algorithm implemented in the commercial software package VelocityAI, and a structure-based algorithm (Symmetric Thin Plate Spline Robust Point Matching) were validated, using adequate parameter settings according to values previously published. The resulting deformation vector field from each registration was applied to the contoured bladder structures and to the marker coordinates for spatial error calculation. The quality of the algorithms was assessed by comparing the different error metrics across the different algorithms, and by comparing the effect of deformation magnitude (bladder volume difference) per algorithm, using the Independent Samples Kruskal-Wallis test. Results: The authors found good structure

  5. Validation of deformable image registration algorithms on CT images of ex vivo porcine bladders with fiducial markers.

    Science.gov (United States)

    Wognum, S; Heethuis, S E; Rosario, T; Hoogeman, M S; Bel, A

    2014-07-01

    The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations. Excised porcine bladders, exhibiting similar filling and voiding behavior as human bladders, provide such an environment. The aim of this study was to determine the spatial accuracy of different DIR algorithms on CT images of ex vivo porcine bladders with radiopaque fiducial markers applied to the outer surface, for a range of bladder volumes, using various accuracy metrics. Five excised porcine bladders with a grid of 30-40 radiopaque fiducial markers attached to the outer wall were suspended inside a water-filled phantom. The bladder was filled with a controlled amount of water with added contrast medium for a range of filling volumes (100-400 ml in steps of 50 ml) using a luer lock syringe, and CT scans were acquired at each filling volume. DIR was performed for each data set, with the 100 ml bladder as the reference image. Six intensity-based algorithms (optical flow or demons-based) implemented in the MATLAB platform DIRART, a b-spline algorithm implemented in the commercial software package VelocityAI, and a structure-based algorithm (Symmetric Thin Plate Spline Robust Point Matching) were validated, using adequate parameter settings according to values previously published. The resulting deformation vector field from each registration was applied to the contoured bladder structures and to the marker coordinates for spatial error calculation. The quality of the algorithms was assessed by comparing the different error metrics across the different algorithms, and by comparing the effect of deformation magnitude (bladder volume difference) per algorithm, using the Independent Samples Kruskal-Wallis test. The authors found good structure accuracy without dependency on
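
    The marker-based part of the validation reduces to warping the marker coordinates with the deformation vector field and measuring the residual distance to the corresponding reference markers. The sketch below shows that computation under simplifying assumptions (nearest-voxel lookup of the field, matched marker ordering); it is not the published evaluation code.

    ```python
    # Hedged sketch of marker-based spatial error after deformable registration.
    import numpy as np

    def marker_registration_error(markers_moving_mm, markers_reference_mm, dvf_mm, voxel_size_mm):
        """
        markers_*: (n_markers, 3) coordinates in mm, with corresponding rows being the same marker.
        dvf_mm:    (nx, ny, nz, 3) displacement field in mm defined on the moving-image grid (assumed layout).
        """
        idx = np.round(markers_moving_mm / voxel_size_mm).astype(int)   # nearest voxel per marker
        displacements = dvf_mm[idx[:, 0], idx[:, 1], idx[:, 2]]         # local displacement vectors
        warped = markers_moving_mm + displacements
        return np.linalg.norm(warped - markers_reference_mm, axis=1)    # per-marker error in mm
    ```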

  6. Validation of deformable image registration algorithms on CT images of ex vivo porcine bladders with fiducial markers

    International Nuclear Information System (INIS)

    Wognum, S.; Heethuis, S. E.; Bel, A.; Rosario, T.; Hoogeman, M. S.

    2014-01-01

    Purpose: The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations. Excised porcine bladders, exhibiting similar filling and voiding behavior as human bladders, provide such an environment. The aim of this study was to determine the spatial accuracy of different DIR algorithms on CT images of ex vivo porcine bladders with radiopaque fiducial markers applied to the outer surface, for a range of bladder volumes, using various accuracy metrics. Methods: Five excised porcine bladders with a grid of 30–40 radiopaque fiducial markers attached to the outer wall were suspended inside a water-filled phantom. The bladder was filled with a controlled amount of water with added contrast medium for a range of filling volumes (100–400 ml in steps of 50 ml) using a luer lock syringe, and CT scans were acquired at each filling volume. DIR was performed for each data set, with the 100 ml bladder as the reference image. Six intensity-based algorithms (optical flow or demons-based) implemented in the MATLAB platform DIRART, a b-spline algorithm implemented in the commercial software package VelocityAI, and a structure-based algorithm (Symmetric Thin Plate Spline Robust Point Matching) were validated, using adequate parameter settings according to values previously published. The resulting deformation vector field from each registration was applied to the contoured bladder structures and to the marker coordinates for spatial error calculation. The quality of the algorithms was assessed by comparing the different error metrics across the different algorithms, and by comparing the effect of deformation magnitude (bladder volume difference) per algorithm, using the Independent Samples Kruskal-Wallis test. Results: The authors found good structure

  7. A novel validation algorithm allows for automated cell tracking and the extraction of biologically meaningful parameters.

    Directory of Open Access Journals (Sweden)

    Daniel H Rapoport

    Full Text Available Automated microscopy is currently the only method to non-invasively and label-free observe complex multi-cellular processes, such as cell migration, cell cycle, and cell differentiation. Extracting biological information from a time-series of micrographs requires each cell to be recognized and followed through sequential microscopic snapshots. Although recent attempts to automatize this process resulted in ever improving cell detection rates, manual identification of identical cells is still the most reliable technique. However, its tedious and subjective nature prevented tracking from becoming a standardized tool for the investigation of cell cultures. Here, we present a novel method to accomplish automated cell tracking with a reliability comparable to manual tracking. Previously, automated cell tracking could not rival the reliability of manual tracking because, in contrast to the human way of solving this task, none of the algorithms had an independent quality control mechanism; they missed validation. Thus, instead of trying to improve the cell detection or tracking rates, we proceeded from the idea to automatically inspect the tracking results and accept only those of high trustworthiness, while rejecting all other results. This validation algorithm works independently of the quality of cell detection and tracking through a systematic search for tracking errors. It is based only on very general assumptions about the spatiotemporal contiguity of cell paths. While traditional tracking often aims to yield genealogic information about single cells, the natural outcome of a validated cell tracking algorithm turns out to be a set of complete, but often unconnected cell paths, i.e. records of cells from mitosis to mitosis. This is a consequence of the fact that the validation algorithm takes complete paths as the unit of rejection/acceptance. The resulting set of complete paths can be used to automatically extract important biological parameters

  8. Development of a Surveillance Definition for United States-Mexico Binational Cases of Tuberculosis.

    Science.gov (United States)

    Woodruff, Rachel S Yelk; Miner, Mark C; Miramontes, Roque

    Consistently collected binational surveillance data are important in advocating for resources to manage and treat binational cases of tuberculosis (TB). The objective of this study was to develop a surveillance definition for binational (United States-Mexico) cases of TB to assess the burden on US TB program resources. We collaborated with state and local TB program staff members in the United States to identify characteristics associated with binational cases of TB. We collected data on all cases of TB from 9 pilot sites in 5 states (Arizona, California, Colorado, New Mexico, and Texas) during January 1-June 30, 2014, that had at least 1 binational characteristic (eg, "crossed border while on TB treatment" and "received treatment in another country, coordinated by an established, US-funded, binational TB program"). A workgroup of US state, local, and federal partners reviewed results and used them to develop a practical surveillance definition. The pilot sites reported 87 cases of TB with at least 1 binational characteristic during the project period. The workgroup drafted a proposed surveillance definition to include 2 binational characteristics: "crossed border while on TB treatment" (34 of 87 cases, 39%) and "received treatment in another country, coordinated by an established, US-funded, binational TB program" (26 of 87 cases, 30%). Applying the new proposed definition, 39 of 87 pilot cases of TB (45%) met the definition of binational. Input from partners who were responsible for the care and treatment of patients who cross the United States-Mexico border was crucial in defining a binational case of TB.

  9. Validation and Application of the Modified Satellite-Based Priestley-Taylor Algorithm for Mapping Terrestrial Evapotranspiration

    Directory of Open Access Journals (Sweden)

    Yunjun Yao

    2014-01-01

    Full Text Available Satellite-based vegetation indices (VIs) and Apparent Thermal Inertia (ATI) derived from temperature change provide valuable information for estimating evapotranspiration (LE) and detecting the onset and severity of drought. The modified satellite-based Priestley-Taylor (MS-PT) algorithm that we developed earlier, coupling both VI and ATI, is validated based on observed data from 40 flux towers distributed across the world on all continents. The validation results illustrate that the daily LE can be estimated with the Root Mean Square Error (RMSE) varying from 10.7 W/m2 to 87.6 W/m2, and with the square of correlation coefficient (R2) from 0.41 to 0.89 (p < 0.01). Compared with the Priestley-Taylor-based LE (PT-JPL) algorithm, the MS-PT algorithm improves the LE estimates at most flux tower sites. Importantly, the MS-PT algorithm is also satisfactory in reproducing the inter-annual variability at flux tower sites with at least five years of data. The R2 between measured and predicted annual LE anomalies is 0.42 (p = 0.02). The MS-PT algorithm is then applied to detect the variations of long-term terrestrial LE over Three-North Shelter Forest Region of China and to monitor global land surface drought. The MS-PT algorithm described here demonstrates the ability to map regional terrestrial LE and identify global soil moisture stress, without requiring precipitation information.
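
    The two headline statistics, RMSE and the squared correlation coefficient, are straightforward to reproduce; the snippet below shows the calculation on hypothetical daily LE values (it is not the study's data or code).

    ```python
    # Illustration of RMSE and R^2 between observed and estimated daily LE (hypothetical values).
    import numpy as np

    def rmse(observed, predicted):
        observed, predicted = np.asarray(observed), np.asarray(predicted)
        return float(np.sqrt(np.mean((observed - predicted) ** 2)))

    def r_squared(observed, predicted):
        return float(np.corrcoef(observed, predicted)[0, 1] ** 2)

    obs = [65.0, 80.2, 55.4, 90.1, 70.3]    # hypothetical tower-observed LE, W/m2
    pred = [60.5, 85.0, 50.2, 95.7, 68.0]   # hypothetical MS-PT estimates, W/m2
    print(rmse(obs, pred), r_squared(obs, pred))
    ```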

  10. An Ontology to Improve Transparency in Case Definition and Increase Case Finding of Infectious Intestinal Disease: Database Study in English General Practice.

    Science.gov (United States)

    de Lusignan, Simon; Shinneman, Stacy; Yonova, Ivelina; van Vlymen, Jeremy; Elliot, Alex J; Bolton, Frederick; Smith, Gillian E; O'Brien, Sarah

    2017-09-28

    Infectious intestinal disease (IID) has considerable health impact; there are 2 billion cases worldwide resulting in 1 million deaths and 78.7 million disability-adjusted life years lost. Reported IID incidence rates vary and this is partly because terms such as "diarrheal disease" and "acute infectious gastroenteritis" are used interchangeably. Ontologies provide a method of transparently comparing case definitions and disease incidence rates. This study sought to show how differences in case definition in part account for variation in incidence estimates for IID and how an ontological approach provides greater transparency to IID case finding. We compared three IID case definitions: (1) Royal College of General Practitioners Research and Surveillance Centre (RCGP RSC) definition based on mapping to the Ninth International Classification of Disease (ICD-9), (2) newer ICD-10 definition, and (3) ontological case definition. We calculated incidence rates and examined the contribution of four supporting concepts related to IID: symptoms, investigations, process of care (eg, notification to public health authorities), and therapies. We created a formal ontology using ontology Web language. The ontological approach identified 5712 more cases of IID than the ICD-10 definition and 4482 more than the RCGP RSC definition from an initial cohort of 1,120,490. Weekly incidence using the ontological definition was 17.93/100,000 (95% CI 15.63-20.41), whereas for the ICD-10 definition the rate was 8.13/100,000 (95% CI 6.70-9.87), and for the RSC definition the rate was 10.24/100,000 (95% CI 8.55-12.12). Codes from the four supporting concepts were generally consistent across our three IID case definitions: 37.38% (3905/10,448) (95% CI 36.16-38.5) for the ontological definition, 38.33% (2287/5966) (95% CI 36.79-39.93) for the RSC definition, and 40.82% (1933/4736) (95% CI 39.03-42.66) for the ICD-10 definition. The proportion of laboratory results associated with a positive test
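
    The weekly incidence rates above are of the form "cases per 100,000 per week with a 95% CI". One standard way to produce such an interval (the paper does not state its exact method) is an exact Poisson interval on the case count, as sketched below with hypothetical numbers.

    ```python
    # Hedged sketch: incidence per 100,000 with an exact Poisson 95% CI (method assumed, numbers hypothetical).
    from scipy.stats import chi2

    def incidence_per_100k(cases, person_weeks, alpha=0.05):
        rate = cases / person_weeks * 1e5
        lower = (chi2.ppf(alpha / 2, 2 * cases) / 2) / person_weeks * 1e5 if cases > 0 else 0.0
        upper = (chi2.ppf(1 - alpha / 2, 2 * (cases + 1)) / 2) / person_weeks * 1e5
        return rate, lower, upper

    print(incidence_per_100k(cases=200, person_weeks=1_115_000))
    ```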

  11. Algorithmic randomness and physical entropy

    International Nuclear Information System (INIS)

    Zurek, W.H.

    1989-01-01

    Algorithmic randomness provides a rigorous, entropylike measure of disorder of an individual, microscopic, definite state of a physical system. It is defined by the size (in binary digits) of the shortest message specifying the microstate uniquely up to the assumed resolution. Equivalently, algorithmic randomness can be expressed as the number of bits in the smallest program for a universal computer that can reproduce the state in question (for instance, by plotting it with the assumed accuracy). In contrast to the traditional definitions of entropy, algorithmic randomness can be used to measure disorder without any recourse to probabilities. Algorithmic randomness is typically very difficult to calculate exactly but relatively easy to estimate. In large systems, probabilistic ensemble definitions of entropy (e.g., coarse-grained entropy of Gibbs and Boltzmann's entropy H=lnW, as well as Shannon's information-theoretic entropy) provide accurate estimates of the algorithmic entropy of an individual system or its average value for an ensemble. One is thus able to rederive much of thermodynamics and statistical mechanics in a setting very different from the usual. Physical entropy, I suggest, is a sum of (i) the missing information measured by Shannon's formula and (ii) of the algorithmic information content---algorithmic randomness---present in the available data about the system. This definition of entropy is essential in describing the operation of thermodynamic engines from the viewpoint of information gathering and using systems. These Maxwell demon-type entities are capable of acquiring and processing information and therefore can ''decide'' on the basis of the results of their measurements and computations the best strategy for extracting energy from their surroundings. From their internal point of view the outcome of each measurement is definite
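
    One compact way to write the decomposition of physical entropy described above, with notation assumed here rather than quoted from the paper (d is the available data record, \rho_d the conditional ensemble it defines, and K the algorithmic information content):

    ```latex
    \[
      S_d \;=\; H(\rho_d) \;+\; K(d),
      \qquad
      H(\rho_d) \;=\; -\sum_i p_{i\mid d}\,\log_2 p_{i\mid d},
    \]
    ```

    so the physical entropy given the data is the Shannon missing information plus the algorithmic randomness of the data itself, both measured in bits.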

  12. The semianalytical cloud retrieval algorithm for SCIAMACHY I. The validation

    Directory of Open Access Journals (Sweden)

    A. A. Kokhanovsky

    2006-01-01

    Full Text Available A recently developed cloud retrieval algorithm for the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY) is briefly presented and validated using independent and well tested cloud retrieval techniques based on the look-up-table approach for MODerate Resolution Imaging Spectroradiometer (MODIS) data. The results of the cloud top height retrievals using measurements in the oxygen A-band by an airborne crossed Czerny-Turner spectrograph and the Global Ozone Monitoring Experiment (GOME) instrument are compared with those obtained from airborne dual photography and retrievals using data from the Along Track Scanning Radiometer (ATSR-2), respectively.

  13. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    Science.gov (United States)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of the calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing both efficient means to stage and process an input data set, to override static calibration coefficient look-up-tables (LUT) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm are automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  14. Accuracy of Zika virus disease case definition during simultaneous Dengue and Chikungunya epidemics.

    Science.gov (United States)

    Braga, José Ueleres; Bressan, Clarisse; Dalvi, Ana Paula Razal; Calvet, Guilherme Amaral; Daumas, Regina Paiva; Rodrigues, Nadia; Wakimoto, Mayumi; Nogueira, Rita Maria Ribeiro; Nielsen-Saines, Karin; Brito, Carlos; Bispo de Filippis, Ana Maria; Brasil, Patrícia

    2017-01-01

    Zika is a new disease in the American continent and its surveillance is of utmost importance, especially because of its ability to cause neurological manifestations such as Guillain-Barré syndrome and serious congenital malformations through vertical transmission. The detection of suspected cases by the surveillance system depends on the case definition adopted. As the laboratory diagnosis of Zika infection still relies on the use of expensive and complex molecular techniques with low sensitivity due to a narrow window of detection, most suspected cases are not confirmed by laboratory tests, mainly reserved for pregnant women and newborns. In this context, an accurate definition of a suspected Zika case is crucial in order for the surveillance system to gauge the magnitude of an epidemic. We evaluated the accuracy of various Zika case definitions in a scenario where Dengue and Chikungunya viruses co-circulate. Signs and symptoms that best discriminated PCR confirmed Zika from other laboratory confirmed febrile or exanthematic diseases were identified to propose and test predictive models for Zika infection based on these clinical features. Our derived score prediction model had the best performance because it demonstrated the highest sensitivity and specificity, 86·6% and 78·3%, respectively. This Zika case definition also had the highest values for auROC (0·903) and R2 (0·417), and the lowest Brier score (0·096). In areas where multiple arboviruses circulate, the presence of rash with pruritus or conjunctival hyperemia, without any other general clinical manifestations such as fever, petechia or anorexia, is the best Zika case definition.
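
    The performance measures quoted for the score-based case definition (sensitivity, specificity, auROC and Brier score) can all be computed from labels and predicted probabilities; the sketch below uses invented values and scikit-learn purely to make the metrics concrete.

    ```python
    # Illustration of sensitivity, specificity, auROC and Brier score on hypothetical data.
    import numpy as np
    from sklearn.metrics import roc_auc_score, brier_score_loss, confusion_matrix

    y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1])                        # 1 = PCR-confirmed Zika (hypothetical)
    y_prob = np.array([0.9, 0.7, 0.2, 0.4, 0.8, 0.1, 0.6, 0.3, 0.2, 0.85])   # score-model probabilities (hypothetical)
    y_pred = (y_prob >= 0.5).astype(int)                                     # "meets the case definition" at a 0.5 cut-off

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(sensitivity, specificity,
          roc_auc_score(y_true, y_prob),      # auROC
          brier_score_loss(y_true, y_prob))   # Brier score (lower is better)
    ```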

  15. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Science.gov (United States)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The development of the Space Launch System (SLS) launch vehicle requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The characteristics of these systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large complex systems engineering challenge being addressed in part by focusing on the specific subsystems handling of off-nominal mission and fault tolerance. Using traditional model based system and software engineering design principles from the Unified Modeling Language (UML), the Mission and Fault Management (M&FM) algorithms are crafted and vetted in specialized Integrated Development Teams composed of multiple development disciplines. NASA also has formed an M&FM team for addressing fault management early in the development lifecycle. This team has developed a dedicated Vehicle Management End-to-End Testbed (VMET) that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. The flexibility of VMET enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the algorithms utilizing actual subsystem models. The intent is to validate the algorithms and substantiate them with performance baselines for each of the vehicle subsystems in an independent platform exterior to flight software test processes. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test processes. Risk reduction is addressed by working with other organizations such as S

  16. Sample size calculation to externally validate scoring systems based on logistic regression models.

    Directory of Open Access Journals (Sweden)

    Antonio Palazón-Bru

    Full Text Available A sample size containing at least 100 events and 100 non-events has been suggested to validate a predictive model, regardless of the model being validated and that certain factors can influence calibration of the predictive model (discrimination, parameterization and incidence). Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure to determine the lack of calibration (estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, to determine mortality in intensive care units. In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.
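
    To make the bootstrap idea concrete, the sketch below checks how stable the AUC is across simulated validation samples of a candidate size n under an assumed logistic model; it deliberately omits the smoothed calibration curves and the estimated calibration index used by the published algorithm, so it is a simplified stand-in rather than a reimplementation.

    ```python
    # Simplified, hedged sketch of bootstrap-style assessment of a candidate validation sample size.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    def auc_spread(n, coef, intercept, n_boot=500):
        """Width of the 2.5th-97.5th percentile interval of AUC across simulated samples of size n."""
        aucs = []
        while len(aucs) < n_boot:
            x = rng.normal(size=(n, len(coef)))                 # assumed predictor distribution
            p = 1.0 / (1.0 + np.exp(-(intercept + x @ coef)))   # model-implied event probabilities
            y = rng.binomial(1, p)
            if 0 < y.sum() < n:                                 # need both events and non-events for AUC
                aucs.append(roc_auc_score(y, p))
        lo, hi = np.percentile(aucs, [2.5, 97.5])
        return hi - lo

    print(auc_spread(n=200, coef=np.array([0.8, -0.5]), intercept=-1.5))
    ```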

  17. A systematic literature review of US definitions, scoring systems and validity according to the OMERACT filter for tendon lesion in RA and other inflammatory joint diseases.

    Science.gov (United States)

    Alcalde, María; D'Agostino, Maria Antonietta; Bruyn, George A W; Möller, Ingrid; Iagnocco, Annamaria; Wakefield, Richard J; Naredo, Esperanza

    2012-07-01

    To present the published data concerning the US assessment of tendon lesions as well as the US metric properties investigated in inflammatory arthritis. A systematic literature search of PubMed, Embase and the Cochrane Library was performed. Selection criteria were original articles in the English language reporting US, Doppler, tenosynovitis and other tendon lesions in patients with RA and other inflammatory arthritis. Data extraction focused on the definition and quantification of US-detected tenosynovitis and other tendon abnormalities and the metric properties of US according to the OMERACT filter for evaluating the above tendon lesions. Thirty-three of 192 identified articles were included in the review. Most articles were case series (42%) or case-control (33%) studies describing hand and/or foot tenosynovitis in RA patients. The majority of older articles used only B-mode, whereas the most recent studies have incorporated Doppler mode. Definition of tenosynovitis or other tendon lesion was provided in 70% of the evaluated studies. Most of the studies (61%) used a binary score for evaluating tendon abnormalities. Concerning the OMERACT filter, 24 (73%) articles dealt with construct validity. The comparator most commonly used was clinical assessment and MRI. There were few studies assessing criterion validity. Some studies evaluated reliability (36%), responsiveness (21%) and feasibility (12%). US seems a promising tool for evaluating inflammatory tendon lesions. However, further validation is necessary for implementation in clinical practice and trials.

  18. Optimization of active distribution networks: Design and analysis of significative case studies for enabling control actions of real infrastructure

    Directory of Open Access Journals (Sweden)

    Moneta Diana

    2014-01-01

    Full Text Available The diffusion of Distributed Generation (DG) based on Renewable Energy Sources (RES) requires new strategies to ensure reliable and economic operation of the distribution networks and to support the diffusion of DG itself. An advanced algorithm (DISCoVER – DIStribution Company VoltagE Regulator) is being developed to optimize the operation of active network by means of an advanced voltage control based on several regulations. Starting from forecasted load and generation, real on-field measurements, technical constraints and costs for each resource, the algorithm generates for each time period a set of commands for controllable resources that guarantees achievement of technical goals minimizing the overall cost. Before integrating the controller into the telecontrol system of the real networks, and in order to validate the proper behaviour of the algorithm and to identify possible critical conditions, a complete simulation phase has started. The first step is concerning the definition of a wide range of “case studies”, that are the combination of network topology, technical constraints and targets, load and generation profiles and “costs” of resources that define a valid context to test the algorithm, with particular focus on battery and RES management. First results achieved from simulation activity on test networks (based on real MV grids) and actual battery characteristics are given, together with prospective performance on real case applications.

  19. Optimization of active distribution networks: Design and analysis of significative case studies for enabling control actions of real infrastructure

    Science.gov (United States)

    Moneta, Diana; Mora, Paolo; Viganò, Giacomo; Alimonti, Gianluca

    2014-12-01

    The diffusion of Distributed Generation (DG) based on Renewable Energy Sources (RES) requires new strategies to ensure reliable and economic operation of the distribution networks and to support the diffusion of DG itself. An advanced algorithm (DISCoVER - DIStribution Company VoltagE Regulator) is being developed to optimize the operation of active network by means of an advanced voltage control based on several regulations. Starting from forecasted load and generation, real on-field measurements, technical constraints and costs for each resource, the algorithm generates for each time period a set of commands for controllable resources that guarantees achievement of technical goals minimizing the overall cost. Before integrating the controller into the telecontrol system of the real networks, and in order to validate the proper behaviour of the algorithm and to identify possible critical conditions, a complete simulation phase has started. The first step is concerning the definition of a wide range of "case studies", that are the combination of network topology, technical constraints and targets, load and generation profiles and "costs" of resources that define a valid context to test the algorithm, with particular focus on battery and RES management. First results achieved from simulation activity on test networks (based on real MV grids) and actual battery characteristics are given, together with prospective performance on real case applications.

  20. Simulation algorithm for spiral case structure in hydropower station

    Directory of Open Access Journals (Sweden)

    Xin-yong Xu

    2013-04-01

    Full Text Available In this study, the damage-plasticity model for concrete that was verified by the model experiment was used to calculate the damage to a spiral case structure based on the damage mechanics theory. The concrete structure surrounding the spiral case was simulated with a three-dimensional finite element model. Then, the distribution and evolution of the structural damage were studied. Based on investigation of the change of gap openings between the steel liner and concrete structure, the impact of the non-uniform variation of gaps on the load-bearing ratio between the steel liner and concrete structure was analyzed. The comparison of calculated results of the simplified and simulation algorithms shows that the simulation algorithm is a feasible option for the calculation of spiral case structures. In addition, the shell-spring model was introduced for optimization analysis, and the results were reasonable.

  1. a South African case study

    African Journals Online (AJOL)

    User

    learn different algorithms to solve problems, but in many cases cannot solve .... centre of Piaget's work is a fundamental cognitive process, which he termed ..... concept definition of continuity in calculus through collaborative instructional ...

  2. Evaluation of the WHO clinical case definition of AIDS among children in India.

    Science.gov (United States)

    Gurprit, Grover; Tripti, Pensi; Gadpayle, A K; Tanushree, Banerjee

    2008-03-01

    The need for a clinical case definition (CCD) for Acquired Immunodeficiency Syndrome (AIDS) was felt by public health agencies to monitor diseases resulting from human immunodeficiency virus (HIV) infection. To test the statistical significance of the existing World Health Organization (WHO) CCD for the diagnosis of AIDS in areas where diagnostic resources are limited in India, a prospective study was conducted in the Paediatrics department at Dr. Ram Manohar Lohia Hospital, New Delhi. 360 cases aged 18 months to 12 years satisfying WHO case definitions of AIDS were included in the study group. Informed consent was taken from the parents. The serum of patients was subjected to ELISA to confirm the diagnosis of HIV infection. Our study detected an HIV prevalence of 16.66% (60/360) among children visiting the paediatric outpatient clinic. 20% of cases manifested 3 major and 2 minor signs. This definition had a sensitivity of 73.33%, specificity of 90.66%, positive predictive value (PPV) of 61.11% and negative predictive value (NPV) of 94.44%. On stepwise logistic regression analysis, weight loss, chronic fever > 1 month and a total lymphocyte count of less than 1500 cells/mm3 emerged as important predictors. 86 cases (23.89%) showed 2 major and 2 minor signs, with a sensitivity and specificity of 86.66% and 88.66%, respectively. Based on these findings, we propose a clinical case definition based on 13 clinical signs and symptoms for paediatric AIDS in India with better sensitivity and PPV than the WHO case definition but with almost similar specificity. Thus, further multicentric studies are required to refine these criteria in the Indian setting.
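
    The reported PPV and NPV follow from the reported sensitivity, specificity and prevalence via Bayes' rule; the worked example below plugs in the abstract's own figures for the 3-major/2-minor criterion and reproduces the published predictive values up to rounding.

    ```python
    # Worked example: predictive values from sensitivity, specificity and prevalence (Bayes' rule).
    def predictive_values(sensitivity, specificity, prevalence):
        ppv = sensitivity * prevalence / (sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
        npv = specificity * (1 - prevalence) / ((1 - sensitivity) * prevalence + specificity * (1 - prevalence))
        return ppv, npv

    ppv, npv = predictive_values(sensitivity=0.7333, specificity=0.9066, prevalence=0.1666)
    print(f"PPV={ppv:.1%}, NPV={npv:.1%}")   # approximately 61.1% and 94.4%, matching the abstract
    ```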

  3. Accuracy of Zika virus disease case definition during simultaneous Dengue and Chikungunya epidemics.

    Directory of Open Access Journals (Sweden)

    José Ueleres Braga

    Full Text Available Zika is a new disease in the American continent and its surveillance is of utmost importance, especially because of its ability to cause neurological manifestations such as Guillain-Barré syndrome and serious congenital malformations through vertical transmission. The detection of suspected cases by the surveillance system depends on the case definition adopted. As the laboratory diagnosis of Zika infection still relies on the use of expensive and complex molecular techniques with low sensitivity due to a narrow window of detection, most suspected cases are not confirmed by laboratory tests, mainly reserved for pregnant women and newborns. In this context, an accurate definition of a suspected Zika case is crucial in order for the surveillance system to gauge the magnitude of an epidemic. We evaluated the accuracy of various Zika case definitions in a scenario where Dengue and Chikungunya viruses co-circulate. Signs and symptoms that best discriminated PCR confirmed Zika from other laboratory confirmed febrile or exanthematic diseases were identified to propose and test predictive models for Zika infection based on these clinical features. Our derived score prediction model had the best performance because it demonstrated the highest sensitivity and specificity, 86·6% and 78·3%, respectively. This Zika case definition also had the highest values for auROC (0·903) and R2 (0·417), and the lowest Brier score (0·096). In areas where multiple arboviruses circulate, the presence of rash with pruritus or conjunctival hyperemia, without any other general clinical manifestations such as fever, petechia or anorexia, is the best Zika case definition.

  4. Definition of an Ontology Matching Algorithm for Context Integration in Smart Cities.

    Science.gov (United States)

    Otero-Cerdeira, Lorena; Rodríguez-Martínez, Francisco J; Gómez-Rodríguez, Alma

    2014-12-08

    In this paper we describe a novel proposal in the field of smart cities: using an ontology matching algorithm to guarantee the automatic information exchange between the agents and the smart city. A smart city is composed by different types of agents that behave as producers and/or consumers of the information in the smart city. In our proposal, the data from the context is obtained by sensor and device agents while users interact with the smart city by means of user or system agents. The knowledge of each agent, as well as the smart city's knowledge, is semantically represented using different ontologies. To have an open city, that is fully accessible to any agent and therefore to provide enhanced services to the users, there is the need to ensure a seamless communication between agents and the city, regardless of their inner knowledge representations, i.e., ontologies. To meet this goal we use ontology matching techniques, specifically we have defined a new ontology matching algorithm called OntoPhil to be deployed within a smart city, which has never been done before. OntoPhil was tested on the benchmarks provided by the well known evaluation initiative, Ontology Alignment Evaluation Initiative, and also compared to other matching algorithms, although these algorithms were not specifically designed for smart cities. Additionally, specific tests involving a smart city's ontology and different types of agents were conducted to validate the usefulness of OntoPhil in the smart city environment.

  5. Severe versus Moderate Criteria for the New Pediatric Case Definition for ME/CFS

    Science.gov (United States)

    Jason, Leonard; Porter, Nicole; Shelleby, Elizabeth; Till, Lindsay; Bell, David S.; Lapp, Charles W.; Rowe, Kathy; De Meirleir, Kenny

    2009-01-01

    The new diagnostic criteria for pediatric ME/CFS are structurally based on the Canadian Clinical Adult case definition, and have more required specific symptoms than the (Fukuda et al. Ann Intern Med 121:953-959, 1994) adult case definition. Physicians specializing in pediatric ME/CFS referred thirty-three pediatric patients with ME/CFS and 21…

  6. Validity of administrative database code algorithms to identify vascular access placement, surgical revisions, and secondary patency.

    Science.gov (United States)

    Al-Jaishi, Ahmed A; Moist, Louise M; Oliver, Matthew J; Nash, Danielle M; Fleet, Jamie L; Garg, Amit X; Lok, Charmaine E

    2018-03-01

    We assessed the validity of physician billing codes and hospital admission using International Classification of Diseases 10th revision codes to identify vascular access placement, secondary patency, and surgical revisions in administrative data. We included adults (≥18 years) with a vascular access placed between 1 April 2004 and 31 March 2013 at the University Health Network, Toronto. Our reference standard was a prospective vascular access database (VASPRO) that contains information on vascular access type and dates of placement, dates for failure, and any revisions. We used VASPRO to assess the validity of different administrative coding algorithms by calculating the sensitivity, specificity, and positive predictive values of vascular access events. The sensitivity (95% confidence interval) of the best performing algorithm to identify arteriovenous access placement was 86% (83%, 89%) and specificity was 92% (89%, 93%). The corresponding numbers to identify catheter insertion were 84% (82%, 86%) and 84% (80%, 87%), respectively. The sensitivity of the best performing coding algorithm to identify arteriovenous access surgical revisions was 81% (67%, 90%) and specificity was 89% (87%, 90%). The algorithm capturing arteriovenous access placement and catheter insertion had a positive predictive value greater than 90% and arteriovenous access surgical revisions had a positive predictive value of 20%. The duration of arteriovenous access secondary patency was on average 578 (553, 603) days in VASPRO and 555 (530, 580) days in administrative databases. Administrative data algorithms have fair to good operating characteristics to identify vascular access placement and arteriovenous access secondary patency. Low positive predictive values for surgical revisions algorithm suggest that administrative data should only be used to rule out the occurrence of an event.

  7. Cascaded face alignment via intimacy definition feature

    Science.gov (United States)

    Li, Hailiang; Lam, Kin-Man; Chiu, Man-Yau; Wu, Kangheng; Lei, Zhibin

    2017-09-01

    Recent years have witnessed the emerging popularity of regression-based face aligners, which directly learn mappings between facial appearance and shape-increment manifolds. We propose a random-forest based, cascaded regression model for face alignment by using a locally lightweight feature, namely intimacy definition feature. This feature is more discriminative than the pose-indexed feature, more efficient than the histogram of oriented gradients feature and the scale-invariant feature transform feature, and more compact than the local binary feature (LBF). Experimental validation of our algorithm shows that our approach achieves state-of-the-art performance when testing on some challenging datasets. Compared with the LBF-based algorithm, our method achieves about twice the speed, 20% improvement in terms of alignment accuracy and saves an order of magnitude on memory requirement.

  8. The quest for a universal definition of polytrauma: a trauma registry-based validation study.

    Science.gov (United States)

    Butcher, Nerida E; D'Este, Catherine; Balogh, Zsolt J

    2014-10-01

    A pilot validation recommended defining polytrauma as patients with an Abbreviated Injury Scale (AIS) score greater than 2 in at least two Injury Severity Score (ISS) body regions (2 × AIS score > 2). This study aimed to validate this definition on a larger data set. We hypothesized that patients defined by the 2 × AIS score > 2 cutoff have worse outcomes and use more resources than those without 2 × AIS score > 2 and that this would therefore be a better definition of polytrauma. Patients aged 16 years and older, injured between 2009 and 2011, with complete documentation of AIS in the New South Wales Trauma Registry, were selected. Age and sex were obtained in addition to outcomes of ISS, hospital length of stay (LOS), intensive care unit (ICU) admission, ICU LOS, and mortality. We compared demographic characteristics and outcomes between patients with ISS greater than 15 who did and did not meet the 2 × AIS score > 2 definition. We then undertook regression analyses (logistic regression for binary outcomes [ICU admission and death] and linear regression for hospital and ICU LOS) to compare outcomes for patients with and without 2 × AIS score > 2, adjusting for sex and age categories. In the adjusted analyses, patients with 2 × AIS score > 2 had twice the odds of being admitted to the ICU compared with those without 2 × AIS score > 2 (odds ratio, 2.5; 95% confidence interval [CI], 2.2-2.8) and 1.7 times the odds of dying (95% CI, 1.4-2.0). Patients with 2 × AIS score > 2 also had a mean difference of 1.5 days longer stay in the hospital compared with those without 2 × AIS score > 2 (95% CI, 1.4-1.7) and 1.6 days longer ICU stay (95% CI, 1.4-1.8). Patients with 2 × AIS score > 2 had higher mortality, more frequent ICU admissions, and longer hospital and ICU stays than those without 2 × AIS score > 2, and the 2 × AIS score > 2 cutoff represents a superior definition to the definitions for polytrauma currently in use. Diagnostic test/criteria, level III.

  9. Comparing definitions in guidelines and written standards - a case study: 'Trueness'

    International Nuclear Information System (INIS)

    Pavese, F

    2010-01-01

    This paper describes the structure of a repository initiated by IMEKO TC21 to allow the comparison of different definitions and use of the same term or concept in written standards and guidelines available internationally. The method used is illustrated for a case study: the critical concept of 'trueness' and its definitions.

  10. Class hierarchical test case generation algorithm based on expanded EMDPN model

    Institute of Scientific and Technical Information of China (English)

    LI Jun-yi; GONG Hong-fang; HU Ji-ping; ZOU Bei-ji; SUN Jia-guang

    2006-01-01

    An event- and message-driven Petri network (EMDPN) model, based on the characteristics of class interaction for message passing between two objects, was extended. Using the EMDPN interaction graph, a class hierarchical test-case generation algorithm with cooperated paths (copaths) was proposed, which can be used to solve problems arising from the class inheritance mechanism in object-oriented software testing, such as oracles, message transfer errors, and unreachable statements. Finally, testing sufficiency was analyzed with the ordered sequence testing criterion (OSC). The results indicate that the test cases produced by the newly proposed automatic copath-generation algorithm satisfy the synchronization message sequence testing criteria, and the proposed copath-generation algorithm therefore achieves a good coverage rate.

  11. Acute respiratory infection case definitions for young children: a systematic review of community-based epidemiologic studies in South Asia.

    Science.gov (United States)

    Roth, Daniel E; Gaffey, Michelle F; Smith-Romero, Evelyn; Fitzpatrick, Tiffany; Morris, Shaun K

    2015-12-01

    To explore the variability in childhood acute respiratory infection case definitions for research in low-income settings where there is limited access to laboratory or radiologic investigations. We conducted a systematic review of community-based, longitudinal studies in South Asia published from January 1990 to August 2013, in which childhood acute respiratory infection outcomes were reported. Case definitions were classified by their label (e.g. pneumonia, acute lower respiratory infection) and clinical content 'signatures' (array of clinical features that would be always present, conditionally present or always absent among cases). Case definition heterogeneity was primarily assessed by the number of unique case definitions overall and by label. We also compared case definition-specific acute respiratory infection incidence rates for studies reporting incidence rates for multiple case definitions. In 56 eligible studies, we found 124 acute respiratory infection case definitions. Of 90 case definitions for which clinical content was explicitly defined, 66 (73%) were unique. There was a high degree of content heterogeneity among case definitions with the same label, and some content signatures were assigned multiple labels. Within studies for which incidence rates were reported for multiple case definitions, variation in content was always associated with a change in incidence rate, even when the content differed by a single clinical feature. There has been a wide variability in case definition label and content combinations to define acute upper and lower respiratory infections in children in community-based studies in South Asia over the past two decades. These inconsistencies have important implications for the synthesis and translation of knowledge regarding the prevention and treatment of childhood acute respiratory infection. © 2015 John Wiley & Sons Ltd.

  12. An Automated Defect Prediction Framework using Genetic Algorithms: A Validation of Empirical Studies

    Directory of Open Access Journals (Sweden)

    Juan Murillo-Morera

    2016-05-01

    Today, it is common for software projects to collect measurement data through development processes. With these data, defect prediction software can try to estimate the defect proneness of a software module, with the objective of assisting and guiding software practitioners. With timely and accurate defect predictions, practitioners can focus their limited testing resources on higher-risk areas. This paper reports the results of three empirical studies that use an automated genetic defect prediction framework. This framework generates and compares different learning schemes (preprocessing + attribute selection + learning algorithms) and selects the best one using a genetic algorithm, with the objective of estimating the defect proneness of a software module. The first empirical study is a performance comparison of our framework with the most important framework in the literature. The second empirical study is a performance and runtime comparison between our framework and an exhaustive framework. The third empirical study is a sensitivity analysis and is our main contribution in this paper. Performance of the software defect prediction models (measured using AUC, the area under the curve) was validated using the NASA-MDP and PROMISE data sets. Seventeen data sets from NASA-MDP (13) and PROMISE (4) projects were analyzed by running an N×M-fold cross-validation. A genetic algorithm was used to select the components of the learning schemes automatically, and to assess and report the results. Our results show similar performance between frameworks, with our framework requiring less runtime than the exhaustive framework. Finally, we report the best configuration according to the sensitivity analysis.

  13. Development of a meta-algorithm for guiding primary care encounters for patients with multimorbidity using evidence-based and case-based guideline development methodology.

    Science.gov (United States)

    Muche-Borowski, Cathleen; Lühmann, Dagmar; Schäfer, Ingmar; Mundt, Rebekka; Wagner, Hans-Otto; Scherer, Martin

    2017-06-22

    The study aimed to develop a comprehensive algorithm (meta-algorithm) for primary care encounters of patients with multimorbidity. We used a novel, case-based and evidence-based procedure to overcome methodological difficulties in guideline development for patients with complex care needs. Systematic guideline development methodology including systematic evidence retrieval (guideline synopses), expert opinions and informal and formal consensus procedures. Primary care. The meta-algorithm was developed in six steps: 1. Designing 10 case vignettes of patients with multimorbidity (common, epidemiologically confirmed disease patterns and/or particularly challenging health care needs) in a multidisciplinary workshop. 2. Based on the main diagnoses, a systematic guideline synopsis of evidence-based and consensus-based clinical practice guidelines was prepared; the recommendations were prioritised according to the clinical and psychosocial characteristics of the case vignettes. 3. Case vignettes along with the respective guideline recommendations were validated and specifically commented on by an external panel of practicing general practitioners (GPs). 4. Guideline recommendations and experts' opinions were summarised as case-specific management recommendations (N-of-one guidelines). 5. Healthcare preferences of patients with multimorbidity were elicited from a systematic literature review and supplemented with information from qualitative interviews. 6. All N-of-one guidelines were analysed using pattern recognition to identify common decision nodes and care elements; these elements were put together to form a generic meta-algorithm. The resulting meta-algorithm reflects the logic of a GP's encounter with a patient with multimorbidity regarding decision-making situations, communication needs and priorities. It can be filled with the complex problems of individual patients and thereby offer guidance to the practitioner. Contrary to simple, symptom-oriented algorithms, the meta-algorithm

  14. Guidelines for Interactive Reliability-Based Structural Optimization using Quasi-Newton Algorithms

    DEFF Research Database (Denmark)

    Pedersen, C.; Thoft-Christensen, Palle

    Guidelines for interactive reliability-based structural optimization problems are outlined in terms of modifications of standard quasi-Newton algorithms. The proposed modifications minimize the condition number of the approximate Hessian matrix in each iteration, restrict the relative and absolute increase of the condition number, and preserve positive definiteness without discarding previously obtained information. All proposed modifications are also valid for non-interactive optimization problems. Heuristic rules from various optimization problems concerning when and how to impose interactions ...

  15. Validating hierarchical verbal autopsy expert algorithms in a large data set with known causes of death.

    Science.gov (United States)

    Kalter, Henry D; Perin, Jamie; Black, Robert E

    2016-06-01

    Physician assessment historically has been the most common method of analyzing verbal autopsy (VA) data. Recently, the World Health Organization endorsed two automated methods, Tariff 2.0 and InterVA-4, which promise greater objectivity and lower cost. A disadvantage of the Tariff method is that it requires a training data set from a prior validation study, while InterVA relies on clinically specified conditional probabilities. We undertook to validate the hierarchical expert algorithm analysis of VA data, an automated, intuitive, deterministic method that does not require a training data set. Using Population Health Metrics Research Consortium study hospital source data, we compared the primary causes of 1629 neonatal and 1456 1-59 month-old child deaths from VA expert algorithms arranged in a hierarchy to their reference standard causes. The expert algorithms were held constant, while five prior and one new "compromise" neonatal hierarchy, and three former child hierarchies were tested. For each comparison, the reference standard data were resampled 1000 times within the range of cause-specific mortality fractions (CSMF) for one of three approximated community scenarios in the 2013 WHO global causes of death, plus one random mortality cause proportions scenario. We utilized CSMF accuracy to assess overall population-level validity, and the absolute difference between VA and reference standard CSMFs to examine particular causes. Chance-corrected concordance (CCC) and Cohen's kappa were used to evaluate individual-level cause assignment. Overall CSMF accuracy for the best-performing expert algorithm hierarchy was 0.80 (range 0.57-0.96) for neonatal deaths and 0.76 (0.50-0.97) for child deaths. Performance for particular causes of death varied, with fairly flat estimated CSMF over a range of reference values for several causes. Performance at the individual diagnosis level was also less favorable than that for overall CSMF (neonatal: best CCC = 0.23, range 0
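
    The population-level metric used here, CSMF accuracy, has a standard closed form in the verbal autopsy validation literature: one minus the summed absolute CSMF errors, divided by twice (1 − the smallest true CSMF). A short sketch under that definition, with made-up cause fractions:

```python
# CSMF accuracy as commonly defined for verbal autopsy validation:
# 1 - sum_j |true_j - pred_j| / (2 * (1 - min_j true_j)).
# The cause fractions below are made up for illustration.

def csmf_accuracy(true_csmf: dict, pred_csmf: dict) -> float:
    causes = set(true_csmf) | set(pred_csmf)
    abs_error = sum(abs(true_csmf.get(c, 0.0) - pred_csmf.get(c, 0.0))
                    for c in causes)
    return 1.0 - abs_error / (2.0 * (1.0 - min(true_csmf.values())))

true_fracs = {"pneumonia": 0.30, "diarrhoea": 0.25, "malaria": 0.25, "other": 0.20}
pred_fracs = {"pneumonia": 0.35, "diarrhoea": 0.20, "malaria": 0.25, "other": 0.20}
print(round(csmf_accuracy(true_fracs, pred_fracs), 3))  # 0.938 for these toy values
```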

  16. Algorithmic Mechanism Design of Evolutionary Computation.

    Science.gov (United States)

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to definitely achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results present the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in evolutionary computation algorithm.

  17. Controlling for Frailty in Pharmacoepidemiologic Studies of Older Adults: Validation of an Existing Medicare Claims-based Algorithm.

    Science.gov (United States)

    Cuthbertson, Carmen C; Kucharska-Newton, Anna; Faurot, Keturah R; Stürmer, Til; Jonsson Funk, Michele; Palta, Priya; Windham, B Gwen; Thai, Sydney; Lund, Jennifer L

    2018-07-01

    Frailty is a geriatric syndrome characterized by weakness and weight loss and is associated with adverse health outcomes. It is often an unmeasured confounder in pharmacoepidemiologic and comparative effectiveness studies using administrative claims data. Among the Atherosclerosis Risk in Communities (ARIC) Study Visit 5 participants (2011-2013; n = 3,146), we conducted a validation study to compare a Medicare claims-based algorithm of dependency in activities of daily living (or dependency) developed as a proxy for frailty with a reference standard measure of phenotypic frailty. We applied the algorithm to the ARIC participants' claims data to generate a predicted probability of dependency. Using the claims-based algorithm, we estimated the C-statistic for predicting phenotypic frailty. We further categorized participants by their predicted probability of dependency (<5%, 5% to <20%, and ≥20%) and estimated associations with difficulties in physical abilities, falls, and mortality. The claims-based algorithm showed good discrimination of phenotypic frailty (C-statistic = 0.71; 95% confidence interval [CI] = 0.67, 0.74). Participants classified with a high predicted probability of dependency (≥20%) had higher prevalence of falls and difficulty in physical ability, and a greater risk of 1-year all-cause mortality (hazard ratio = 5.7 [95% CI = 2.5, 13]) than participants classified with a low predicted probability (<5%). Sensitivity and specificity varied across predicted probability of dependency thresholds. The Medicare claims-based algorithm showed good discrimination of phenotypic frailty and high predictive ability with adverse health outcomes. This algorithm can be used in future Medicare claims analyses to reduce confounding by frailty and improve study validity.

  18. Criteria of the validation of experimental and evaluated covariance data

    International Nuclear Information System (INIS)

    Badikov, S.

    2008-01-01

    The criteria for the validation of experimental and evaluated covariance data are reviewed, in particular: a) the criterion of positive definiteness for covariance matrices, b) the relationship between the 'integral' experimental and estimated uncertainties, c) the validity of the statistical invariants, and d) the restrictions imposed on correlations between experimental errors. The application of these criteria in nuclear data evaluation is considered, and four particular points are examined. First, preserving the positive definiteness of covariance matrices under arbitrary transformations of a random vector is considered, and the properties of covariance matrices in operations widely used in neutron and reactor physics (splitting and collapsing energy groups, averaging physical values over energy groups, and estimating parameters from measurements by the generalized least squares method) are studied. Second, an algorithm for comparing experimental and estimated 'integral' uncertainties is developed; the square root of the determinant of a covariance matrix is recommended for use in nuclear data evaluation as a measure of the 'integral' uncertainty of vectors of experimental and estimated values. Third, a set of statistical invariants, i.e. values that are preserved in statistical processing, is presented. Fourth, an inequality is given that signals correlations between experimental errors leading to unphysical values. An application is given concerning the cross section of the (n,t) reaction on ⁶Li for incident neutron energies between 1 and 100 keV.
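
    The first criterion, positive definiteness, is also the easiest to check numerically: a symmetric matrix is positive definite exactly when its Cholesky factorization exists. A minimal sketch (the example matrix is arbitrary):

```python
# Criterion (a): a valid covariance matrix must be symmetric and positive
# definite. A Cholesky factorization succeeds exactly for symmetric positive
# definite matrices, so its success or failure serves as the test.
import numpy as np

def is_positive_definite(cov: np.ndarray, tol: float = 1e-12) -> bool:
    if not np.allclose(cov, cov.T, atol=tol):
        return False                      # not symmetric
    try:
        np.linalg.cholesky(cov)
        return True
    except np.linalg.LinAlgError:
        return False                      # has a non-positive eigenvalue

cov = np.array([[4.0, 1.2],
                [1.2, 9.0]])              # arbitrary 2x2 example
print(is_positive_definite(cov))          # True
```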

  19. A validation of the new definition of drug-resistant epilepsy by the International League Against Epilepsy.

    Science.gov (United States)

    Téllez-Zenteno, Jose F; Hernández-Ronquillo, Lizbeth; Buckley, Samantha; Zahagun, Ricardo; Rizvi, Syed

    2014-06-01

    To establish applicability, the recently proposed International League Against Epilepsy (ILAE) consensus on drug-resistant epilepsy (DRE) requires testing in clinical and research settings. This study evaluates the reliability and validity of these criteria in a clinical population. In phase I, two independent evaluators reviewed 97 randomly selected medical records of patients with epilepsy at two separate intervals. Both the ILAE consensus and standard diagnostic criteria were employed. Kappa, weighted kappa, and the intraclass correlation coefficient (ICC) were used to determine interobserver and intraobserver variability. In phase II, the ILAE consensus criteria were applied to 250 patients with epilepsy to determine risk factors associated with the development of DRE and to calculate point prevalence. The interobserver agreement of the four definitions was as follows: Berg (0.56), Kwan and Brodie (0.58), Camfield and Camfield (0.69), and ILAE (0.77). The intraobserver agreement of the four definitions was as follows: Berg (0.81), Kwan and Brodie (0.82), Camfield and Camfield (0.72), and ILAE (0.82). The prevalence of DRE was 28.4% with Berg's definition, 34% with Kwan and Brodie, 37% with Camfield and Camfield, and 33% with the ILAE definition. This is the first study to establish the reliability and validity of the ILAE criteria for the diagnosis of DRE. The new definition compares favorably with previously established constructs, which continue to retain clinical significance. Wiley Periodicals, Inc. © 2014 International League Against Epilepsy.

  20. Monte Carlo evaluation of the convolution/superposition algorithm of Hi-Art tomotherapy in heterogeneous phantoms and clinical cases

    International Nuclear Information System (INIS)

    Sterpin, E.; Salvat, F.; Olivera, G.; Vynckier, S.

    2009-01-01

    The reliability of the convolution/superposition (C/S) algorithm of the Hi-Art tomotherapy system is evaluated by using the Monte Carlo model TomoPen, which has been already validated for homogeneous phantoms. The study was performed in three stages. First, measurements with EBT Gafchromic film for a 1.25 × 2.5 cm² field in a heterogeneous phantom consisting of two slabs of polystyrene separated with Styrofoam were compared to simulation results from TomoPen. The excellent agreement found in this comparison justifies the use of TomoPen as the reference for the remaining parts of this work. Second, to allow analysis and interpretation of the results in clinical cases, dose distributions calculated with TomoPen and C/S were compared for a similar phantom geometry, with multiple slabs of various densities. Even in conditions of lack of lateral electronic equilibrium, overall good agreement was obtained between C/S and TomoPen results, with deviations within 3%/2 mm, showing that the C/S algorithm accounts for modifications in secondary electron transport due to the presence of a low density medium. Finally, calculations were performed with TomoPen and C/S of dose distributions in various clinical cases, from large bilateral head and neck tumors to small lung tumors with diameter of <3 cm. To ensure a 'fair' comparison, identical dose calculation grid and dose-volume histogram calculator were used. Very good agreement was obtained for most of the cases, with no significant differences between the DVHs obtained from both calculations. However, deviations of up to 4% for the dose received by 95% of the target volume were found for the small lung tumors. Therefore, the approximations in the C/S algorithm slightly influence the accuracy in small lung tumors even though the C/S algorithm of the tomotherapy system shows very good overall behavior.

  1. Hadoop The Definitive Guide

    CERN Document Server

    White, Tom

    2009-01-01

    Hadoop: The Definitive Guide helps you harness the power of your data. Ideal for processing large datasets, the Apache Hadoop framework is an open source implementation of the MapReduce algorithm on which Google built its empire. This comprehensive resource demonstrates how to use Hadoop to build reliable, scalable, distributed systems: programmers will find details for analyzing large datasets, and administrators will learn how to set up and run Hadoop clusters. Complete with case studies that illustrate how Hadoop solves specific problems, this book helps you: Use the Hadoop Distributed

  2. Validation of near infrared satellite based algorithms to relative atmospheric water vapour content over land

    International Nuclear Information System (INIS)

    Serpolla, A.; Bonafoni, S.; Basili, P.; Biondi, R.; Arino, O.

    2009-01-01

    This paper presents validation results for the ENVISAT MERIS and TERRA MODIS retrieval algorithms for atmospheric Water Vapour Content (WVC) estimation over land in clear-sky conditions. The MERIS algorithm exploits the radiance ratio of the absorbing channel at 900 nm to the almost absorption-free reference at 890 nm, while the MODIS algorithm is based on the ratio of measurements centred near 0.905, 0.936, and 0.94 μm to the atmospheric window reflectances at 0.865 and 1.24 μm. The first test was performed in the Mediterranean area using WVC provided by both ECMWF and AERONET. As a second step, the performance of the algorithms was tested using WVC computed from radio soundings (RAOBs) in North East Australia. The comparisons against reference WVC values showed an overestimation of WVC by MODIS (root mean square error percentage greater than 20%) and an acceptable performance of the MERIS algorithms (root mean square error percentage around 10%).

  3. A micro-hydrology computation ordering algorithm

    Science.gov (United States)

    Croley, Thomas E.

    1980-11-01

    Discrete-distributed-parameter models are essential for watershed modelling where practical consideration of spatial variations in watershed properties and inputs is desired. Such modelling is necessary for analysis of detailed hydrologic impacts from management strategies and land-use effects. Trade-offs between model validity and model complexity exist in resolution of the watershed. Once these are determined, the watershed is then broken into sub-areas which each have essentially spatially-uniform properties. Lumped-parameter (micro-hydrology) models are applied to these sub-areas and their outputs are combined through the use of a computation ordering technique, as illustrated by many discrete-distributed-parameter hydrology models. Manual ordering of these computations requires fore-thought, and is tedious, error prone, sometimes storage intensive and least adaptable to changes in watershed resolution. A programmable algorithm for ordering micro-hydrology computations is presented that enables automatic ordering of computations within the computer via an easily understood and easily implemented "node" definition, numbering and coding scheme. This scheme and the algorithm are detailed in logic flow-charts and an example application is presented. Extensions and modifications of the algorithm are easily made for complex geometries or differing microhydrology models. The algorithm is shown to be superior to manual ordering techniques and has potential use in high-resolution studies.
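
    The abstract does not reproduce the node numbering and coding scheme itself; as a hedged illustration of the general idea only, ordering sub-area computations so that every upstream contribution is available before a downstream node is evaluated amounts to a topological sort of the drainage network:

```python
# Hedged sketch, not the paper's scheme: order sub-area (node) computations so
# that each node is evaluated after all of its upstream contributors, using
# Kahn's topological sort. The drainage graph below is invented for illustration.
from collections import deque

def computation_order(drains_to: dict[str, list[str]]) -> list[str]:
    """drains_to maps each node to the downstream nodes it feeds."""
    indegree = {node: 0 for node in drains_to}
    for targets in drains_to.values():
        for t in targets:
            indegree[t] = indegree.get(t, 0) + 1
    ready = deque(n for n, d in indegree.items() if d == 0)  # headwater nodes
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for t in drains_to.get(node, []):
            indegree[t] -= 1
            if indegree[t] == 0:
                ready.append(t)
    return order

# Two headwater sub-areas draining through a channel node to the outlet.
print(computation_order({"A": ["C"], "B": ["C"], "C": ["outlet"], "outlet": []}))
```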

  4. A micro-hydrology computation ordering algorithm

    International Nuclear Information System (INIS)

    Croley, T.E. II

    1980-01-01

    Discrete-distributed-parameter models are essential for watershed modelling where practical consideration of spatial variations in watershed properties and inputs is desired. Such modelling is necessary for analysis of detailed hydrologic impacts from management strategies and land-use effects. Trade-offs between model validity and model complexity exist in resolution of the watershed. Once these are determined, the watershed is then broken into sub-areas which each have essentially spatially-uniform properties. Lumped-parameter (micro-hydrology) models are applied to these sub-areas and their outputs are combined through the use of a computation ordering technique, as illustrated by many discrete-distributed-parameter hydrology models. Manual ordering of these computations requires fore-thought, and is tedious, error prone, sometimes storage intensive and least adaptable to changes in watershed resolution. A programmable algorithm for ordering micro-hydrology computations is presented that enables automatic ordering of computations within the computer via an easily understood and easily implemented node definition, numbering and coding scheme. This scheme and the algorithm are detailed in logic flow-charts and an example application is presented. Extensions and modifications of the algorithm are easily made for complex geometries or differing micro-hydrology models. The algorithm is shown to be superior to manual ordering techniques and has potential use in high-resolution studies. (orig.)

  5. Development and validation of an algorithm for identifying urinary retention in a cohort of patients with epilepsy in a large US administrative claims database.

    Science.gov (United States)

    Quinlan, Scott C; Cheng, Wendy Y; Ishihara, Lianna; Irizarry, Michael C; Holick, Crystal N; Duh, Mei Sheng

    2016-04-01

    The aim of this study was to develop and validate an insurance claims-based algorithm for identifying urinary retention (UR) in epilepsy patients receiving antiepileptic drugs to facilitate safety monitoring. Data from the HealthCore Integrated Research Database(SM) in 2008-2011 (retrospective) and 2012-2013 (prospective) were used to identify epilepsy patients with UR. During the retrospective phase, three algorithms identified potential UR: (i) UR diagnosis code with a catheterization procedure code; (ii) UR diagnosis code alone; or (iii) diagnosis with UR-related symptoms. Medical records for 50 randomly selected patients satisfying ≥1 algorithm were reviewed by urologists to ascertain UR status. Positive predictive value (PPV) and 95% confidence intervals (CI) were calculated for the three component algorithms and the overall algorithm (defined as satisfying ≥1 component algorithms). Algorithms were refined using urologist review notes. In the prospective phase, the UR algorithm was refined using medical records for an additional 150 cases. In the retrospective phase, the PPV of the overall algorithm was 72.0% (95%CI: 57.5-83.8%). Algorithm 3 performed poorly and was dropped. Algorithm 1 was unchanged; urinary incontinence and cystitis were added as exclusionary diagnoses to Algorithm 2. The PPV for the modified overall algorithm was 89.2% (74.6-97.0%). In the prospective phase, the PPV for the modified overall algorithm was 76.0% (68.4-82.6%). Upon adding overactive bladder, nocturia and urinary frequency as exclusionary diagnoses, the PPV for the final overall algorithm was 81.9% (73.7-88.4%). The current UR algorithm yielded a PPV > 80% and could be used for more accurate identification of UR among epilepsy patients in a large claims database. Copyright © 2016 John Wiley & Sons, Ltd.
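
    The way such component algorithms combine can be sketched schematically; the code sets and claim fields below are hypothetical placeholders, not the study's actual code lists:

```python
# Hedged sketch of the claims logic described above: flag urinary retention
# (UR) when a UR diagnosis co-occurs with a catheterization procedure
# (component algorithm 1), or when a UR diagnosis appears without any
# exclusionary diagnosis (refined component algorithm 2). All code sets are
# placeholders, not the study's actual code lists.

UR_DX = {"UR_DX"}                             # placeholder UR diagnosis codes
CATH_PROC = {"CATH_PROC"}                     # placeholder catheterization codes
EXCLUSION_DX = {"INCONTINENCE", "CYSTITIS"}   # placeholder exclusionary diagnoses

def flags_ur(diagnoses: set[str], procedures: set[str]) -> bool:
    algo1 = bool(diagnoses & UR_DX) and bool(procedures & CATH_PROC)
    algo2 = bool(diagnoses & UR_DX) and not (diagnoses & EXCLUSION_DX)
    return algo1 or algo2                     # overall algorithm: >= 1 component

print(flags_ur({"UR_DX"}, {"CATH_PROC"}))      # True via component algorithm 1
print(flags_ur({"UR_DX", "CYSTITIS"}, set()))  # False: excluded, no procedure
```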

  6. Performance of the Tariff Method: validation of a simple additive algorithm for analysis of verbal autopsies

    Directory of Open Access Journals (Sweden)

    Murray Christopher JL

    2011-08-01

    Background: Verbal autopsies provide valuable information for studying mortality patterns in populations that lack reliable vital registration data. Methods for transforming verbal autopsy results into meaningful information for health workers and policymakers, however, are often costly or complicated to use. We present a simple additive algorithm, the Tariff Method (termed Tariff), which can be used for assigning individual cause of death and for determining cause-specific mortality fractions (CSMFs) from verbal autopsy data. Methods: Tariff calculates a score, or "tariff," for each cause, for each sign/symptom, across a pool of validated verbal autopsy data. The tariffs are summed for a given response pattern in a verbal autopsy, and this sum (score) provides the basis for predicting the cause of death in a dataset. We implemented this algorithm and evaluated the method's predictive ability, both in terms of chance-corrected concordance at the individual cause assignment level and in terms of CSMF accuracy at the population level. The analysis was conducted separately for adult, child, and neonatal verbal autopsies across 500 pairs of train-test validation verbal autopsy data. Results: Tariff is capable of outperforming physician-certified verbal autopsy in most cases. In terms of chance-corrected concordance, the method achieves 44.5% in adults, 39% in children, and 23.9% in neonates. CSMF accuracy was 0.745 in adults, 0.709 in children, and 0.679 in neonates. Conclusions: Verbal autopsies can be an efficient means of obtaining cause of death data, and Tariff provides an intuitive, reliable method for generating individual cause assignments and CSMFs. The method is transparent and flexible and can be readily implemented by users without training in statistics or computer science.
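
    The additive step at the heart of the method is simple enough to sketch directly; the tariff values below are invented for illustration (real tariffs are learned from validated verbal autopsy data):

```python
# Sketch of the additive Tariff step: for each cause, sum the tariffs of the
# signs/symptoms endorsed in a verbal autopsy and predict the highest-scoring
# cause. The tariff values here are invented for illustration only.

TARIFFS = {
    "drowning":  {"found_in_water": 9.0, "fever": -1.0, "cough": -0.5},
    "pneumonia": {"found_in_water": -2.0, "fever": 4.0, "cough": 5.0},
}

def tariff_scores(symptoms: set[str]) -> dict[str, float]:
    return {cause: sum(t for sign, t in sign_tariffs.items() if sign in symptoms)
            for cause, sign_tariffs in TARIFFS.items()}

def predict_cause(symptoms: set[str]) -> str:
    scores = tariff_scores(symptoms)
    return max(scores, key=scores.get)

print(predict_cause({"fever", "cough"}))  # 'pneumonia' with these toy tariffs
```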

  7. Research on Cigarettes Customer Needs Importance Algorithm Based on KJ / RAHP / KANO

    Directory of Open Access Journals (Sweden)

    Ni Xiong-Jun

    2017-01-01

    To express the ambiguity and uncertainty of customer needs importance, an algorithm integrating the KJ method, the rough analytic hierarchy process (RAHP), and the KANO model is proposed. The algorithm calculates the importance of customer needs within a rough-set framework. A case study on the importance of cigarette customer needs illustrates the feasibility and validity of the algorithm.

  8. GOCI Yonsei Aerosol Retrieval (YAER) Algorithm and Validation During the DRAGON-NE Asia 2012 Campaign

    Science.gov (United States)

    Choi, Myungje; Kim, Jhoon; Lee, Jaehwa; Kim, Mijin; Park, Young-Je; Jeong, Ukkyo; Kim, Woogyung; Hong, Hyunkee; Holben, Brent; Eck, Thomas F.

    2016-01-01

    The Geostationary Ocean Color Imager (GOCI) onboard the Communication, Ocean, and Meteorological Satellite (COMS) is the first multi-channel ocean color imager in geostationary orbit. Hourly GOCI top-of-atmosphere radiance has been available for the retrieval of aerosol optical properties over East Asia since March 2011. This study presents improvements made to the GOCI Yonsei Aerosol Retrieval (YAER) algorithm together with validation results during the Distributed Regional Aerosol Gridded Observation Networks - Northeast Asia 2012 campaign (DRAGONNE Asia 2012 campaign). The evaluation during the spring season over East Asia is important because of high aerosol concentrations and diverse types of Asian dust and haze. Optical properties of aerosol are retrieved from the GOCI YAER algorithm including aerosol optical depth (AOD) at 550 nm, fine-mode fraction (FMF) at 550 nm, single-scattering albedo (SSA) at 440 nm, Angstrom exponent (AE) between 440 and 860 nm, and aerosol type. The aerosol models are created based on a global analysis of the Aerosol Robotic Networks (AERONET) inversion data, and covers a broad range of size distribution and absorptivity, including nonspherical dust properties. The Cox-Munk ocean bidirectional reflectance distribution function (BRDF) model is used over ocean, and an improved minimum reflectance technique is used over land. Because turbid water is persistent over the Yellow Sea, the land algorithm is used for such cases. The aerosol products are evaluated against AERONET observations and MODIS Collection 6 aerosol products retrieved from Dark Target (DT) and Deep Blue (DB) algorithms during the DRAGON-NE Asia 2012 campaign conducted from March to May 2012. Comparison of AOD from GOCI and AERONET resulted in a Pearson correlation coefficient of 0.881 and a linear regression equation with GOCI AOD = 1.083 x AERONET AOD - 0.042. The correlation between GOCI and MODIS AODs is higher over ocean than land. GOCI AOD shows better agreement

  9. GOCI Yonsei Aerosol Retrieval (YAER) algorithm and validation during the DRAGON-NE Asia 2012 campaign

    Science.gov (United States)

    Choi, Myungje; Kim, Jhoon; Lee, Jaehwa; Kim, Mijin; Park, Young-Je; Jeong, Ukkyo; Kim, Woogyung; Hong, Hyunkee; Holben, Brent; Eck, Thomas F.; Song, Chul H.; Lim, Jae-Hyun; Song, Chang-Keun

    2016-04-01

    The Geostationary Ocean Color Imager (GOCI) onboard the Communication, Ocean, and Meteorological Satellite (COMS) is the first multi-channel ocean color imager in geostationary orbit. Hourly GOCI top-of-atmosphere radiance has been available for the retrieval of aerosol optical properties over East Asia since March 2011. This study presents improvements made to the GOCI Yonsei Aerosol Retrieval (YAER) algorithm together with validation results during the Distributed Regional Aerosol Gridded Observation Networks - Northeast Asia 2012 campaign (DRAGON-NE Asia 2012 campaign). The evaluation during the spring season over East Asia is important because of high aerosol concentrations and diverse types of Asian dust and haze. Optical properties of aerosol are retrieved from the GOCI YAER algorithm including aerosol optical depth (AOD) at 550 nm, fine-mode fraction (FMF) at 550 nm, single-scattering albedo (SSA) at 440 nm, Ångström exponent (AE) between 440 and 860 nm, and aerosol type. The aerosol models are created based on a global analysis of the Aerosol Robotic Networks (AERONET) inversion data, and covers a broad range of size distribution and absorptivity, including nonspherical dust properties. The Cox-Munk ocean bidirectional reflectance distribution function (BRDF) model is used over ocean, and an improved minimum reflectance technique is used over land. Because turbid water is persistent over the Yellow Sea, the land algorithm is used for such cases. The aerosol products are evaluated against AERONET observations and MODIS Collection 6 aerosol products retrieved from Dark Target (DT) and Deep Blue (DB) algorithms during the DRAGON-NE Asia 2012 campaign conducted from March to May 2012. Comparison of AOD from GOCI and AERONET resulted in a Pearson correlation coefficient of 0.881 and a linear regression equation with GOCI AOD = 1.083 × AERONET AOD - 0.042. The correlation between GOCI and MODIS AODs is higher over ocean than land. GOCI AOD shows better

  10. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Directory of Open Access Journals (Sweden)

    Dazhi Jiang

    2015-01-01

    At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is “are there any algorithms that can design evolutionary algorithms automatically?” A more complete formulation of the question is “can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?” In this paper, a novel evolutionary algorithm based on the automatic design of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space, as most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems were conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which indicates that algorithms designed automatically by computers can compete with algorithms designed by human beings.

  11. Algorithmic approach to diagram techniques

    International Nuclear Information System (INIS)

    Ponticopoulos, L.

    1980-10-01

    An algorithmic approach to diagram techniques of elementary particles is proposed. The definition and axiomatics of the theory of algorithms are presented, followed by the list of instructions of an algorithm formalizing the construction of graphs and the assignment of mathematical objects to them. (T.A.)

  12. Development and validation of a prediction algorithm for the onset of common mental disorders in a working population.

    Science.gov (United States)

    Fernandez, Ana; Salvador-Carulla, Luis; Choi, Isabella; Calvo, Rafael; Harvey, Samuel B; Glozier, Nicholas

    2018-01-01

    Common mental disorders are the most common reason for long-term sickness absence in most developed countries. Prediction algorithms for the onset of common mental disorders may help target indicated work-based prevention interventions. We aimed to develop and validate a risk algorithm to predict the onset of common mental disorders at 12 months in a working population. We conducted a secondary analysis of the Household, Income and Labour Dynamics in Australia Survey, a longitudinal, nationally representative household panel in Australia. Data from the 6189 working participants who did not meet the criteria for a common mental disorder at baseline were non-randomly split into training and validation databases, based on state of residence. Common mental disorders were assessed with the mental component score of the 36-Item Short Form Health Survey questionnaire (score ⩽45). Risk algorithms were constructed following recommendations made by the Transparent Reporting of a multivariable prediction model for Prevention Or Diagnosis statement. Different risk factors were identified among women and men for the final risk algorithms. In the training data, the model for women had a C-index of 0.73 and an effect size (Hedges' g) of 0.91. In men, the C-index was 0.76 and the effect size was 1.06. In the validation data, the C-index was 0.66 for women and 0.73 for men, with positive predictive values of 0.28 and 0.26, respectively. Conclusion: It is possible to develop an algorithm with good discrimination for the onset of common mental disorders among working men, identifying overall and modifiable risks. Such models have the potential to change the way that prevention of common mental disorders at the workplace is conducted, but different models may be required for women.

  13. The ANACONDA algorithm for deformable image registration in radiotherapy

    International Nuclear Information System (INIS)

    Weistrand, Ola; Svensson, Stina

    2015-01-01

    Purpose: The purpose of this work was to describe a versatile algorithm for deformable image registration with applications in radiotherapy and to validate it on thoracic 4DCT data as well as CT/cone beam CT (CBCT) data. Methods: ANAtomically CONstrained Deformation Algorithm (ANACONDA) combines image information (i.e., intensities) with anatomical information as provided by contoured image sets. The registration problem is formulated as a nonlinear optimization problem and solved with an in-house developed solver, tailored to this problem. The objective function, which is minimized during optimization, is a linear combination of four nonlinear terms: 1. image similarity term; 2. grid regularization term, which aims at keeping the deformed image grid smooth and invertible; 3. a shape based regularization term which works to keep the deformation anatomically reasonable when regions of interest are present in the reference image; and 4. a penalty term which is added to the optimization problem when controlling structures are used, aimed at deforming the selected structure in the reference image to the corresponding structure in the target image. Results: To validate ANACONDA, the authors have used 16 publically available thoracic 4DCT data sets for which target registration errors from several algorithms have been reported in the literature. On average for the 16 data sets, the target registration error is 1.17 ± 0.87 mm, Dice similarity coefficient is 0.98 for the two lungs, and image similarity, measured by the correlation coefficient, is 0.95. The authors have also validated ANACONDA using two pelvic cases and one head and neck case with planning CT and daily acquired CBCT. Each image has been contoured by a physician (radiation oncologist) or experienced radiation therapist. The results are an improvement with respect to rigid registration. However, for the head and neck case, the sample set is too small to show statistical significance. Conclusions: ANACONDA
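
    Schematically, the objective described above is a weighted sum of the four terms; the weights and symbols below are generic placeholders rather than the authors' notation:

```latex
% Schematic form of the combined registration objective; the weights \alpha_i
% and the symbol u for the deformation field are generic placeholders, not the
% authors' notation.
\[
E(u) = \alpha_1 E_{\mathrm{image}}(u) + \alpha_2 E_{\mathrm{grid}}(u)
     + \alpha_3 E_{\mathrm{shape}}(u) + \alpha_4 E_{\mathrm{control}}(u)
\]
```

    Here the fourth term is active only when controlling structures are selected, matching the description above.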

  14. Enhancing case definitions for surveillance of human monkeypox in the Democratic Republic of Congo.

    Directory of Open Access Journals (Sweden)

    Lynda Osadebe

    2017-09-01

    Human monkeypox (MPX) occurs at appreciable rates in the Democratic Republic of Congo (DRC). Infection with varicella zoster virus (VZV) has a similar presentation to that of MPX, and in areas where MPX is endemic these two illnesses are commonly mistaken. This study evaluated the diagnostic utility of two surveillance case definitions for MPX and specific clinical characteristics associated with laboratory-confirmed MPX cases. Data from a cohort of suspected MPX cases from the DRC (identified by surveillance over the course of a 42-month period during 2009-2014) were used; real-time PCR diagnostic test results were used to establish MPX and VZV diagnoses. A total of 333 laboratory-confirmed MPX cases, 383 laboratory-confirmed VZV cases, and 36 cases that were determined to be neither MPX nor VZV were included in the analyses. Significant (p<0.05) differences between laboratory-confirmed MPX and VZV cases were noted for several signs/symptoms, including key rash characteristics. Both surveillance case definitions had high sensitivity and low specificity for individuals with suspected MPX virus infections. Using 12 signs/symptoms with high sensitivity and/or specificity values, a receiver operating characteristic analysis showed that models for MPX cases requiring the presence of 'fever before rash' plus at least 7 or 8 of the 12 signs/symptoms demonstrated a more balanced performance between sensitivity and specificity. Laboratory-confirmed MPX and VZV cases presented with many of the same signs and symptoms, and the analysis here emphasizes the utility of including 12 specific signs/symptoms when investigating MPX cases. In order to document and detect endemic human MPX cases, a surveillance case definition with more specificity is needed for accurate case detection. In the absence of a more specific case definition, continued emphasis on confirmatory laboratory-based diagnostics is warranted.

  15. Cross-validation pitfalls when selecting and assessing regression and classification models.

    Science.gov (United States)

    Krstajic, Damjan; Buturovic, Ljubomir J; Leahy, David E; Thomas, Simon

    2014-03-29

    We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error.
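
    The structure of repeated nested cross-validation is easy to misread in prose, so a compact sketch may help; the dataset, estimator and parameter grid below are placeholders chosen for brevity and do not reproduce the paper's exact protocol:

```python
# Hedged sketch of repeated nested cross-validation: an inner grid-search CV
# tunes hyperparameters, an outer CV estimates prediction error, and the whole
# procedure is repeated over different random splits. Dataset, estimator and
# grid are placeholders, not the paper's protocol.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
param_grid = {"C": [0.1, 1.0, 10.0], "gamma": ["scale", 0.01]}

scores = []
for repeat in range(5):                                    # repeated splits
    inner = KFold(n_splits=5, shuffle=True, random_state=repeat)
    outer = KFold(n_splits=5, shuffle=True, random_state=100 + repeat)
    tuned = GridSearchCV(SVC(), param_grid, cv=inner)      # inner loop: tuning
    scores.extend(cross_val_score(tuned, X, y, cv=outer))  # outer loop: assessment

print(f"accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```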

  16. Clinical application and validation of an iterative forward projection matching algorithm for permanent brachytherapy seed localization from conebeam-CT x-ray projections

    Energy Technology Data Exchange (ETDEWEB)

    Pokhrel, Damodar; Murphy, Martin J.; Todor, Dorin A.; Weiss, Elisabeth; Williamson, Jeffrey F. [Department of Radiation Oncology, School of Medicine, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)

    2010-09-15

    Purpose: To experimentally validate a new algorithm for reconstructing the 3D positions of implanted brachytherapy seeds from postoperatively acquired 2D conebeam-CT (CBCT) projection images. Methods: The iterative forward projection matching (IFPM) algorithm finds the 3D seed geometry that minimizes the sum of the squared intensity differences between computed projections of an initial estimate of the seed configuration and radiographic projections of the implant. In-house machined phantoms, containing arrays of 12 and 72 seeds, respectively, are used to validate this method. Also, four ¹⁰³Pd postimplant patients are scanned using an ACUITY digital simulator. Three to ten x-ray images are selected from the CBCT projection set and processed to create binary seed-only images. To quantify IFPM accuracy, the reconstructed seed positions are forward projected and overlaid on the measured seed images to find the nearest-neighbor distance between measured and computed seed positions for each image pair. Also, the estimated 3D seed coordinates are compared to known seed positions in the phantom and clinically obtained VariSeed planning coordinates for the patient data. Results: For the phantom study, seed localization error is (0.58 ± 0.33) mm. For all four patient cases, the mean registration error is better than 1 mm while compared against the measured seed projections. IFPM converges in 20-28 iterations, with a computation time of about 1.9-2.8 min/iteration on a 1 GHz processor. Conclusions: The IFPM algorithm avoids the need to match corresponding seeds in each projection as required by standard back-projection methods. The authors' results demonstrate ≈1 mm accuracy in reconstructing the 3D positions of brachytherapy seeds from the measured 2D projections. This algorithm also successfully localizes overlapping clustered and highly migrated seeds in the implant.

  17. Clinical application and validation of an iterative forward projection matching algorithm for permanent brachytherapy seed localization from conebeam-CT x-ray projections.

    Science.gov (United States)

    Pokhrel, Damodar; Murphy, Martin J; Todor, Dorin A; Weiss, Elisabeth; Williamson, Jeffrey F

    2010-09-01

    To experimentally validate a new algorithm for reconstructing the 3D positions of implanted brachytherapy seeds from postoperatively acquired 2D conebeam-CT (CBCT) projection images. The iterative forward projection matching (IFPM) algorithm finds the 3D seed geometry that minimizes the sum of the squared intensity differences between computed projections of an initial estimate of the seed configuration and radiographic projections of the implant. In-house machined phantoms, containing arrays of 12 and 72 seeds, respectively, are used to validate this method. Also, four 103Pd postimplant patients are scanned using an ACUITY digital simulator. Three to ten x-ray images are selected from the CBCT projection set and processed to create binary seed-only images. To quantify IFPM accuracy, the reconstructed seed positions are forward projected and overlaid on the measured seed images to find the nearest-neighbor distance between measured and computed seed positions for each image pair. Also, the estimated 3D seed coordinates are compared to known seed positions in the phantom and clinically obtained VariSeed planning coordinates for the patient data. For the phantom study, seed localization error is (0.58 +/- 0.33) mm. For all four patient cases, the mean registration error is better than 1 mm while compared against the measured seed projections. IFPM converges in 20-28 iterations, with a computation time of about 1.9-2.8 min/ iteration on a 1 GHz processor. The IFPM algorithm avoids the need to match corresponding seeds in each projection as required by standard back-projection methods. The authors' results demonstrate approximately 1 mm accuracy in reconstructing the 3D positions of brachytherapy seeds from the measured 2D projections. This algorithm also successfully localizes overlapping clustered and highly migrated seeds in the implant.

  18. Generic project definitions for improvement of health care delivery: a case-based approach.

    Science.gov (United States)

    Niemeijer, Gerard C; Does, Ronald J M M; de Mast, Jeroen; Trip, Albert; van den Heuvel, Jaap

    2011-01-01

    The purpose of this article is to create actionable knowledge, making the definition of process improvement projects in health care delivery more effective. This study is a retrospective analysis of process improvement projects in hospitals, facilitating a case-based reasoning approach to project definition. Data sources were project documentation and hospital-performance statistics of 271 Lean Six Sigma health care projects from 2002 to 2009 of general, teaching, and academic hospitals in the Netherlands and Belgium. Objectives and operational definitions of improvement projects in the sample, analyzed and structured in a uniform format and terminology. Extraction of reusable elements of earlier project definitions, presented in the form of 9 templates called generic project definitions. These templates function as exemplars for future process improvement projects, making the selection, definition, and operationalization of similar projects more efficient. Each template includes an explicated rationale, an operationalization in the form of metrics, and a prototypical example. Thus, a process of incremental and sustained learning based on case-based reasoning is facilitated. The quality of project definitions is a crucial success factor in pursuits to improve health care delivery. We offer 9 tried and tested improvement themes related to patient safety, patient satisfaction, and business-economic performance of hospitals.

  19. Development and validation of a simple algorithm for initiation of CPAP in neonates with respiratory distress in Malawi.

    Science.gov (United States)

    Hundalani, Shilpa G; Richards-Kortum, Rebecca; Oden, Maria; Kawaza, Kondwani; Gest, Alfred; Molyneux, Elizabeth

    2015-07-01

    Low-cost bubble continuous positive airway pressure (bCPAP) systems have been shown to improve survival in neonates with respiratory distress, in developing countries including Malawi. District hospitals in Malawi implementing CPAP requested simple and reliable guidelines to enable healthcare workers with basic skills and minimal training to determine when treatment with CPAP is necessary. We developed and validated TRY (T: Tone is good, R: Respiratory Distress and Y=Yes) CPAP, a simple algorithm to identify neonates with respiratory distress who would benefit from CPAP. To validate the TRY CPAP algorithm for neonates with respiratory distress in a low-resource setting. We constructed an algorithm using a combination of vital signs, tone and birth weight to determine the need for CPAP in neonates with respiratory distress. Neonates admitted to the neonatal ward of Queen Elizabeth Central Hospital, in Blantyre, Malawi, were assessed in a prospective, cross-sectional study. Nurses and paediatricians-in-training assessed neonates to determine whether they required CPAP using the TRY CPAP algorithm. To establish the accuracy of the TRY CPAP algorithm in evaluating the need for CPAP, their assessment was compared with the decision of a neonatologist blinded to the TRY CPAP algorithm findings. 325 neonates were evaluated over a 2-month period; 13% were deemed to require CPAP by the neonatologist. The inter-rater reliability with the algorithm was 0.90 for nurses and 0.97 for paediatricians-in-training using the neonatologist's assessment as the reference standard. The TRY CPAP algorithm has the potential to be a simple and reliable tool to assist nurses and clinicians in identifying neonates who require treatment with CPAP in low-resource settings. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  20. A two-domain real-time algorithm for optimal data reduction: A case study on accelerator magnet measurements

    CERN Document Server

    Arpaia, P; Inglese, V

    2010-01-01

    A real-time data reduction algorithm, based on the combination of two lossy techniques specifically optimized for high-rate magnetic measurements in two domains (e.g. time and space), is proposed. The first technique exploits an adaptive sampling rule based on the power estimation of the flux increments in order to optimize the information to be gathered for magnetic field analysis in real time. The tracking condition is defined by the target noise level in the Nyquist band required by the post-processing procedure of magnetic analysis. The second technique uses a data reduction algorithm in order to improve the compression ratio while preserving the consistency of the measured signal. The allowed loss is set equal to the random noise level in the signal in order to force the loss and the noise to cancel rather than to add, thereby improving the signal-to-noise ratio. Numerical analysis and experimental results of on-field performance characterization and validation are reported for two case studies of magnetic measurement systems...

  1. Feasibility of a semi-automated contrast-oriented algorithm for tumor segmentation in retrospectively gated PET images: phantom and clinical validation

    Science.gov (United States)

    Carles, Montserrat; Fechter, Tobias; Nemer, Ursula; Nanko, Norbert; Mix, Michael; Nestle, Ursula; Schaefer, Andrea

    2015-12-01

    PET/CT plays an important role in radiotherapy planning for lung tumors. Several segmentation algorithms have been proposed for PET tumor segmentation. However, most of them do not take into account respiratory motion and are not well validated. The aim of this work was to evaluate a semi-automated contrast-oriented algorithm (COA) for PET tumor segmentation adapted to retrospectively gated (4D) images. The evaluation involved a wide set of 4D-PET/CT acquisitions of dynamic experimental phantoms and lung cancer patients. In addition, segmentation accuracy of 4D-COA was compared with four other state-of-the-art algorithms. In phantom evaluation, the physical properties of the objects defined the gold standard. In clinical evaluation, the ground truth was estimated by the STAPLE (Simultaneous Truth and Performance Level Estimation) consensus of three manual PET contours by experts. Algorithm evaluation with phantoms resulted in: (i) no statistically significant diameter differences for different targets and movements (Δφ = 0.3 ± 1.6 mm); (ii) reproducibility for heterogeneous and irregular targets independent of user initial interaction; and (iii) good segmentation agreement for irregular targets compared to manual CT delineation in terms of Dice Similarity Coefficient (DSC = 0.66 ± 0.04), Positive Predictive Value (PPV = 0.81 ± 0.06) and Sensitivity (Sen. = 0.49 ± 0.05). In clinical evaluation, the segmented volume was in reasonable agreement with the consensus volume (difference in volume (%Vol) = 40 ± 30, DSC = 0.71 ± 0.07 and PPV = 0.90 ± 0.13). High accuracy in target tracking position (ΔME) was obtained for experimental and clinical data (ΔME_exp = 0 ± 3 mm; ΔME_clin = 0.3 ± 1.4 mm). In the comparison with other lung segmentation methods, 4D-COA has shown the highest volume accuracy in both experimental and clinical data. In conclusion, the accuracy in volume
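
    The overlap metrics quoted above follow directly from voxel-wise true and false positives between the segmented and reference volumes. A minimal sketch with arbitrary toy masks:

```python
# DSC, PPV and sensitivity between a segmented binary mask and a reference
# mask, computed from voxel-wise overlaps. The masks below are arbitrary toys,
# not study data.
import numpy as np

def overlap_metrics(seg: np.ndarray, ref: np.ndarray) -> dict:
    seg, ref = seg.astype(bool), ref.astype(bool)
    tp = np.logical_and(seg, ref).sum()
    fp = np.logical_and(seg, ~ref).sum()
    fn = np.logical_and(~seg, ref).sum()
    return {
        "DSC": float(2 * tp / (2 * tp + fp + fn)),
        "PPV": float(tp / (tp + fp)),
        "Sensitivity": float(tp / (tp + fn)),
    }

seg = np.zeros((4, 4), dtype=bool); seg[1:3, 1:3] = True  # 4 segmented voxels
ref = np.zeros((4, 4), dtype=bool); ref[1:3, 1:4] = True  # 6 reference voxels
print(overlap_metrics(seg, ref))  # DSC = 0.8, PPV = 1.0, Sensitivity ~= 0.67
```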

  2. Validation Study of a Predictive Algorithm to Evaluate Opioid Use Disorder in a Primary Care Setting

    Science.gov (United States)

    Sharma, Maneesh; Lee, Chee; Kantorovich, Svetlana; Tedtaotao, Maria; Smith, Gregory A.

    2017-01-01

    Background: Opioid abuse in chronic pain patients is a major public health issue. Primary care providers are frequently the first to prescribe opioids to patients suffering from pain, yet do not always have the time or resources to adequately evaluate the risk of opioid use disorder (OUD). Purpose: This study seeks to determine the predictability of aberrant opioid behavior using a comprehensive scoring algorithm (“profile”) incorporating phenotypic and, more uniquely, genotypic risk factors. Methods and Results: In a validation study with 452 participants diagnosed with OUD and 1237 controls, the algorithm successfully categorized patients at high and moderate risk of OUD with 91.8% sensitivity. Regardless of changes in the prevalence of OUD, sensitivity of the algorithm remained >90%. Conclusion: The algorithm correctly stratifies primary care patients into low-, moderate-, and high-risk categories to appropriately identify patients in need of additional guidance, monitoring, or treatment changes. PMID:28890908

  3. Evaluation of an expanded case definition for vaccine-modified measles in a school outbreak in South Korea in 2010.

    Science.gov (United States)

    Choe, Young June; Hu, Jae Kyung; Song, Kyung Min; Cho, Heeyeon; Yoon, Hee Sook; Kim, Seung Tae; Lee, Han Jung; Kim, Kisoon; Bae, Geun-Ryang; Lee, Jong-Koo

    2012-01-01

    In this study, we have described the clinical characteristics of vaccine-modified measles to assess the performance of an expanded case definition in a school outbreak that occurred in 2010. The sensitivity, specificity, and the positive and negative predictive values were evaluated. Among 74 cases of vaccine-modified measles, 47 (64%) met the original case definition. Fever and rash were observed in 73% (54/74); fever was the most common (96%, 71/74) presenting symptom, and rash was noted in 77% (57/74) of the cases. The original case definition showed an overall sensitivity of 63.5% and a specificity of 100.0%. The expanded case definition combining fever and rash showed a higher sensitivity (72.9%) but a lower specificity (88.2%) than the original. The presence of fever and one or more of cough, coryza, or conjunctivitis scored the highest sensitivity among the combinations of signs and symptoms (77.0%), but scored the lowest specificity (52.9%). The expanded case definition was sensitive in identifying suspected cases of vaccine-modified measles. We suggest using this expanded definition for outbreak investigation in a closed community, and consider further discussions on expanding the case definition of measles for routine surveillance in South Korea.

  4. Validating module network learning algorithms using simulated data.

    Science.gov (United States)

    Michoel, Tom; Maere, Steven; Bonnet, Eric; Joshi, Anagha; Saeys, Yvan; Van den Bulcke, Tim; Van Leemput, Koenraad; van Remortel, Piet; Kuiper, Martin; Marchal, Kathleen; Van de Peer, Yves

    2007-05-03

    In recent years, several authors have used probabilistic graphical models to learn expression modules and their regulatory programs from gene expression data. Despite the demonstrated success of such algorithms in uncovering biologically relevant regulatory relations, further developments in the area are hampered by a lack of tools to compare the performance of alternative module network learning strategies. Here, we demonstrate the use of the synthetic data generator SynTReN for the purpose of testing and comparing module network learning algorithms. We introduce a software package for learning module networks, called LeMoNe, which incorporates a novel strategy for learning regulatory programs. Novelties include the use of a bottom-up Bayesian hierarchical clustering to construct the regulatory programs, and the use of a conditional entropy measure to assign regulators to the regulation program nodes. Using SynTReN data, we test the performance of LeMoNe in a completely controlled situation and assess the effect of the methodological changes we made with respect to an existing software package, namely Genomica. Additionally, we assess the effect of various parameters, such as the size of the data set and the amount of noise, on the inference performance. Overall, application of Genomica and LeMoNe to simulated data sets gave comparable results. However, LeMoNe offers some advantages, one of them being that the learning process is considerably faster for larger data sets. Additionally, we show that the location of the regulators in the LeMoNe regulation programs and their conditional entropy may be used to prioritize regulators for functional validation, and that the combination of the bottom-up clustering strategy with the conditional entropy-based assignment of regulators improves the handling of missing or hidden regulators. We show that data simulators such as SynTReN are very well suited for the purpose of developing, testing and improving module network

  5. Algorithm Design and Validation for Adaptive Nonlinear Control Enhancement (ADVANCE) Technology Development for Resilient Flight Control, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — SSCI proposes to develop and test a framework referred to as the ADVANCE (Algorithm Design and Validation for Adaptive Nonlinear Control Enhancement), within which...

  6. Aggressive periodontitis: case definition and diagnostic criteria.

    Science.gov (United States)

    Albandar, Jasim M

    2014-06-01

    Aggressive periodontitis is a destructive disease characterized by the following: the involvement of multiple teeth with a distinctive pattern of periodontal tissue loss; a high rate of disease progression; an early age of onset; and the absence of systemic diseases. In some patients periodontal tissue loss may commence before puberty, whereas in most patients the age of onset is during or somewhat after the circumpubertal period. Besides infection with specific microorganisms, a host predisposition seems to play a key role in the pathogenesis of aggressive periodontitis, as evidenced by the familial aggregation of the disease. In this article we review the historical background of the diagnostic criteria of aggressive periodontitis, present a contemporary case definition and describe the clinical parameters of the disease. At present, the diagnosis of aggressive periodontitis is achieved using case history, clinical examination and radiographic evaluation. The data gathered using these methods are prone to relatively high measurement errors. Besides, this diagnostic approach measures past disease history and may not reliably measure existing disease activity or accurately predict future tissue loss. A diagnosis is often made years after the onset of the disease, partly because current assessment methods detect established disease more readily and reliably than they detect incipient or initial lesions where the tissue loss is minimal and usually below the detection threshold of present examination methods. Future advancements in understanding the pathogenesis of this disease may contribute to an earlier diagnosis. Insofar, future case definitions may involve the identification of key etiologic and risk factors, combined with high-precision methodologies that enable the early detection of initial lesions. This may significantly enhance the predictive value of these tests and detect cases of aggressive periodontitis before significant tissue loss develops. © 2014

  7. The metabolic syndrome: validity and utility of clinical definitions for cardiovascular disease and diabetes risk prediction.

    Science.gov (United States)

    Cameron, Adrian

    2010-02-01

    The purpose of clinical definitions of the metabolic syndrome is frequently misunderstood. While the metabolic syndrome as a physiological process describes a clustering of numerous age-related metabolic abnormalities that together increase the risk for cardiovascular disease and type 2 diabetes, clinical definitions include obesity which is thought to be a cause rather than a consequence of metabolic disturbance, and several elements that are routinely measured in clinical practice, including high blood pressure, high blood glucose and dyslipidaemia. Obesity is frequently a central player in the development of the metabolic syndrome and should be considered a key component of clinical definitions. Previous clinical definitions have differed in the priority given to obesity. Perhaps more importantly than its role in a clinical definition, however, is obesity in isolation before the hallmarks of metabolic dysfunction that typify the syndrome have developed. This should be treated seriously as an opportunity to prevent the consequences of the global diabetes epidemic now apparent. Clinical definitions were designed to identify a population at high lifetime CVD and type 2 diabetes risk, but in the absence of several major risk factors for each condition, are not optimal risk prediction devices for either. Despite this, the metabolic syndrome has several properties that make it a useful construct, in conjunction with short-term risk prediction algorithms and sound clinical judgement, for the identification of those at high lifetime risk of CVD and diabetes. A recently published consensus definition provides some much needed clarity about what a clinical definition entails. Even this, however, remains a work in progress until more evidence becomes available, particularly in the area of ethnicity-specific waist cut-points. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.

  8. Experimental validation of thermo-chemical algorithm for a simulation of pultrusion processes

    Science.gov (United States)

    Barkanov, E.; Akishin, P.; Miazza, N. L.; Galvez, S.; Pantelelis, N.

    2018-04-01

    To provide a better understanding of pultrusion processes with or without temperature control and to support pultrusion tooling design, an algorithm based on a mixed time integration scheme and the nodal control volumes method has been developed. In the present study its experimental validation is carried out with the developed cure sensors, which measure the electrical resistivity and temperature on the profile surface. Through this verification process the set of initial data used for a simulation of the pultrusion process with a rod profile has been successfully corrected and finally defined.

  9. Validation of an automated seizure detection algorithm for term neonates

    Science.gov (United States)

    Mathieson, Sean R.; Stevenson, Nathan J.; Low, Evonne; Marnane, William P.; Rennie, Janet M.; Temko, Andrey; Lightbody, Gordon; Boylan, Geraldine B.

    2016-01-01

    Objective The objective of this study was to validate the performance of a seizure detection algorithm (SDA) developed by our group, on previously unseen, prolonged, unedited EEG recordings from 70 babies from 2 centres. Methods EEGs of 70 babies (35 seizure, 35 non-seizure) were annotated for seizures by experts as the gold standard. The SDA was tested on the EEGs at a range of sensitivity settings. Annotations from the expert and SDA were compared using event and epoch based metrics. The effect of seizure duration on SDA performance was also analysed. Results Between sensitivity settings of 0.5 and 0.3, the algorithm achieved seizure detection rates of 52.6–75.0%, with false detection (FD) rates of 0.04–0.36 FD/h for event based analysis, which was deemed to be acceptable in a clinical environment. Time based comparison of expert and SDA annotations using Cohen’s Kappa Index revealed a best performing SDA threshold of 0.4 (Kappa 0.630). The SDA showed improved detection performance with longer seizures. Conclusion The SDA achieved promising performance and warrants further testing in a live clinical evaluation. Significance The SDA has the potential to improve seizure detection and provide a robust tool for comparing treatment regimens. PMID:26055336
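
    The time-based comparison above relies on Cohen's kappa between expert and SDA epoch annotations. A minimal sketch of that agreement statistic for two binary label sequences follows; the variable names and toy labels are illustrative, not the authors' code or data.

    ```python
    import numpy as np

    def cohens_kappa(labels_a, labels_b):
        """Cohen's kappa for two equal-length binary (0/1) label sequences,
        e.g. per-epoch seizure/non-seizure decisions."""
        a = np.asarray(labels_a)
        b = np.asarray(labels_b)
        observed = np.mean(a == b)                      # observed agreement
        p_a1, p_b1 = a.mean(), b.mean()
        expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)  # chance agreement
        return (observed - expected) / (1 - expected)

    expert = [0, 0, 1, 1, 1, 0, 0, 1]
    sda    = [0, 0, 1, 1, 0, 0, 0, 1]
    print(round(cohens_kappa(expert, sda), 3))
    ```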

  10. Automated planning volume definition in soft-tissue sarcoma adjuvant brachytherapy

    International Nuclear Information System (INIS)

    Lee, Eva K.; Fung, Albert Y.C.; Zaider, Marco; Brooks, J. Paul

    2002-01-01

    In current practice, the planning volume for adjuvant brachytherapy treatment for soft-tissue sarcoma is either not determined a priori (in this case, seed locations are selected based on isodose curves conforming to a visual estimate of the planning volume), or it is derived via a tedious manual process. In either case, the process is subjective and time consuming, and is highly dependent on the human planner. The focus of the work described herein involves the development of an automated contouring algorithm to outline the planning volume. Such an automatic procedure will save time and provide a consistent and objective method for determining planning volumes. In addition, a definitive representation of the planning volume will allow for sophisticated brachytherapy treatment planning approaches to be applied when designing treatment plans, so as to maximize local tumour control and minimize normal tissue complications. An automated tumour volume contouring algorithm is developed utilizing computational geometry and numerical interpolation techniques in conjunction with an artificial intelligence method. The target volume is defined to be the slab of tissue r cm perpendicularly away from the curvilinear plane defined by the mesh of catheters. We assume that if adjacent catheters are over 2r cm apart, the tissue between the two catheters is part of the tumour bed. Input data consist of the digitized coordinates of the catheter positions in each of several cross-sectional slices of the tumour bed, and the estimated distance r from the catheters to the tumour surface. Mathematically, one can view the planning volume as the volume enclosed within a minimal smoothly-connected surface which contains a set of circles, each circle centred at a given catheter position in a given cross-sectional slice. The algorithm performs local interpolation on consecutive triplets of circles. The effectiveness of the algorithm is evaluated based on its performance on a collection of

  11. Automated planning volume definition in soft-tissue sarcoma adjuvant brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eva K. [Department of Radiation Oncology, Emory University School of Medicine, Atlanta, GA (United States); School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA (United States); Fung, Albert Y.C.; Zaider, Marco [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY (United States); Brooks, J. Paul [School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA (United States)

    2002-06-07

    In current practice, the planning volume for adjuvant brachytherapy treatment for soft-tissue sarcoma is either not determined a priori (in this case, seed locations are selected based on isodose curves conforming to a visual estimate of the planning volume), or it is derived via a tedious manual process. In either case, the process is subjective and time consuming, and is highly dependent on the human planner. The focus of the work described herein involves the development of an automated contouring algorithm to outline the planning volume. Such an automatic procedure will save time and provide a consistent and objective method for determining planning volumes. In addition, a definitive representation of the planning volume will allow for sophisticated brachytherapy treatment planning approaches to be applied when designing treatment plans, so as to maximize local tumour control and minimize normal tissue complications. An automated tumour volume contouring algorithm is developed utilizing computational geometry and numerical interpolation techniques in conjunction with an artificial intelligence method. The target volume is defined to be the slab of tissue r cm perpendicularly away from the curvilinear plane defined by the mesh of catheters. We assume that if adjacent catheters are over 2r cm apart, the tissue between the two catheters is part of the tumour bed. Input data consist of the digitized coordinates of the catheter positions in each of several cross-sectional slices of the tumour bed, and the estimated distance r from the catheters to the tumour surface. Mathematically, one can view the planning volume as the volume enclosed within a minimal smoothly-connected surface which contains a set of circles, each circle centred at a given catheter position in a given cross-sectional slice. The algorithm performs local interpolation on consecutive triplets of circles. The effectiveness of the algorithm is evaluated based on its performance on a collection of

  12. Accuracy of clinical diagnosis versus the World Health Organization case definition in the Amoy Garden SARS cohort.

    Science.gov (United States)

    Wong, W N; Sek, Antonio C H; Lau, Rick F L; Li, K M; Leung, Joe K S; Tse, M L; Ng, Andy H W; Stenstrom, Robert

    2003-11-01

    To compare the diagnostic accuracy of emergency department (ED) physicians with the World Health Organization (WHO) case definition in a large community-based SARS (severe acute respiratory syndrome) cohort. This was a cohort study of all patients from Hong Kong's Amoy Garden complex who presented to an ED SARS screening clinic during a 2-month outbreak. Clinical findings and WHO case definition criteria were recorded, along with ED diagnoses. Final diagnoses were established independently based on relevant diagnostic tests performed after the ED visit. Emergency physician diagnostic accuracy was compared with that of the WHO SARS case definition. Sensitivity, specificity, predictive values and likelihood ratios were calculated using standard formulae. During the study period, 818 patients presented with SARS-like symptoms, including 205 confirmed SARS, 35 undetermined SARS and 578 non-SARS. Sensitivity, specificity and accuracy were 91%, 96% and 94% for ED clinical diagnosis, versus 42%, 86% and 75% for the WHO case definition. Positive likelihood ratios (LR+) were 21.1 for physician judgement and 3.1 for the WHO criteria. Negative likelihood ratios (LR-) were 0.10 for physician judgement and 0.67 for the WHO criteria, indicating that clinician judgement was a much more powerful predictor than the WHO criteria. Physician clinical judgement was more accurate than the WHO case definition. Reliance on the WHO case definition as a SARS screening tool may lead to an unacceptable rate of misdiagnosis. The SARS case definition must be revised if it is to be used as a screening tool in emergency departments and primary care settings.
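
    For reference, the accuracy measures quoted here (sensitivity, specificity, predictive values and likelihood ratios) all follow from a standard 2 x 2 table; a brief sketch with illustrative counts (not the Amoy Garden data):

    ```python
    def diagnostic_accuracy(tp, fp, fn, tn):
        """Standard 2 x 2 table measures used to compare a screening rule
        against a reference diagnosis."""
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        ppv = tp / (tp + fp)
        npv = tn / (tn + fn)
        lr_pos = sens / (1 - spec)
        lr_neg = (1 - sens) / spec
        return {"sensitivity": sens, "specificity": spec, "PPV": ppv,
                "NPV": npv, "LR+": lr_pos, "LR-": lr_neg}

    # Illustrative counts only:
    print(diagnostic_accuracy(tp=180, fp=25, fn=20, tn=550))
    ```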

  13. Simulated annealing algorithm for solving chambering student-case assignment problem

    Science.gov (United States)

    Ghazali, Saadiah; Abdul-Rahman, Syariza

    2015-12-01

    The project assignment problem is a common practical problem. The challenge of solving it rises as the complexity of preferences, the existence of real-world constraints and the problem size increase. This study focuses on solving a chambering student-case assignment problem, classified under the project assignment problem, by using a simulated annealing algorithm. The project assignment problem is considered a hard combinatorial optimization problem, and solving it with a metaheuristic approach is advantageous because it can return a good solution in a reasonable time. The problem of assigning chambering students to cases has never been addressed in the literature before. In this setting, law graduates must complete a period of chambering before they are qualified to practise as legal counsel. Thus, assigning chambering students to cases is critically needed, especially when many preferences are involved. Hence, this study presents a preliminary study of the proposed project assignment problem. The objective is to minimize the total completion time for all students in solving the given cases. The study employed a minimum-cost greedy heuristic to construct a feasible initial solution. The search then proceeds with a simulated annealing algorithm for further improvement of solution quality. The analysis of the obtained results shows that the proposed simulated annealing algorithm greatly improves the solution constructed by the minimum-cost greedy heuristic. Hence, this research demonstrates the advantages of solving the project assignment problem with metaheuristic techniques.
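
    A compact sketch of the simulated annealing loop described above, applied to a toy assignment of cases to students; the cost matrix, neighbourhood move and cooling parameters are invented for illustration and omit the real-world preference constraints.

    ```python
    import math
    import random

    def simulated_annealing(cost, iters=20_000, t0=10.0, alpha=0.9995, seed=1):
        """Assign case j to student assignment[j]; minimise total completion
        time given cost[student][case] (toy neighbourhood: reassign one case)."""
        rng = random.Random(seed)
        n_students, n_cases = len(cost), len(cost[0])
        assign = [rng.randrange(n_students) for _ in range(n_cases)]
        total = lambda a: sum(cost[a[j]][j] for j in range(n_cases))
        cur_cost = total(assign)
        best, best_cost, t = list(assign), cur_cost, t0
        for _ in range(iters):
            j = rng.randrange(n_cases)
            old = assign[j]
            assign[j] = rng.randrange(n_students)
            new_cost = total(assign)
            # Accept improvements always, worsenings with Boltzmann probability.
            if new_cost <= cur_cost or rng.random() < math.exp((cur_cost - new_cost) / t):
                cur_cost = new_cost
                if new_cost < best_cost:
                    best, best_cost = list(assign), new_cost
            else:
                assign[j] = old          # reject the move
            t *= alpha                   # geometric cooling
        return best, best_cost

    # Toy instance: 3 students, 6 cases, random completion times.
    rng = random.Random(0)
    cost = [[rng.randint(1, 9) for _ in range(6)] for _ in range(3)]
    print(simulated_annealing(cost))
    ```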

  14. The definition of radiological signs in gastric ulcer and assessment of their validity by inter-observer variation study.

    Science.gov (United States)

    Schulman, A; Simpkins, K C

    1975-07-01

    The initial aim was to program a computer with information on the frequency of radiological signs in benign and malignant gastric ulcers in order to obtain a percentage probability of benignancy or malignancy in succeeding ulcers in clinical practice. However, only four of the many signs described in gastric ulcer were confirmed to be of validity (i.e. reliable existence) by an inter-observer variation study using two observers and the films from 69 barium meal examinations. These were projection or non-projection of the in-profile ulcer, presence or absence of adjacent mucosal folds, good or poor definition of the in-face ulcer's edge, and extension of radiating folds to the in-face ulcer's edge. A few more remained unassessed due to insufficient numbers of relevant cases. It is concluded that, as defined in the literature, the majority of radiological signs in this field are of uncertain existence, and that the four found to be valid do not fully describe the important appearances that may be seen in benign and malignant ulcers and would be inadequate to differentiate them to a sufficiently high degree of probability.

  15. Revision of clinical case definitions: influenza-like illness and severe acute respiratory infection.

    NARCIS (Netherlands)

    Fitzner, Julia; Qasmieh, Saba; Mounts, Anthony Wayne; Alexander, Burmaa; Besselaar, Terry; Briand, Sylvie; Brown, Caroline; Clark, Seth; Dueger, Erica; Gross, Diane; Hauge, Siri; Hirve, Siddhivinayak; Jorgensen, Pernille; Katz, Mark A; Mafi, Ali; Malik, Mamunur; McCarron, Margaret; Meerhoff, Tamara; Mori, Yuichiro; Mott, Joshua; Olivera, Maria Teresa da Costa; Ortiz, Justin R; Palekar, Rakhee; Rebelo-de-Andrade, Helena; Soetens, Loes; Yahaya, Ali Ahmed; Zhang, Wenqing; Vandemaele, Katelijn

    2018-01-01

    The formulation of accurate clinical case definitions is an integral part of an effective process of public health surveillance. Although such definitions should, ideally, be based on a standardized and fixed collection of defining criteria, they often require revision to reflect new knowledge of

  16. Validation of deformable image registration algorithms on CT images of ex vivo porcine bladders with fiducial markers

    NARCIS (Netherlands)

    Wognum, S.; Heethuis, S. E.; Rosario, T.; Hoogeman, M. S.; Bel, A.

    2014-01-01

    The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations.

  17. Predictive validity of different definitions of hypertension for type 2 diabetes.

    Science.gov (United States)

    Gulliford, Martin C; Charlton, Judith; Latinovic, Radoslav

    2006-01-01

    Models to predict diabetes or pre-diabetes often incorporate the assessment of hypertension, but proposed definitions for 'hypertension' are inconsistent. We compared the classifications obtained using different definitions for 'hypertension'. We compared records for 5158 cases from 181 family practices, who were later diagnosed with diabetes and prescribed oral hypoglycaemic drugs, with 5158 controls, matched for age, sex and family practice, who were never diagnosed with diabetes. We compared classifications obtained using definitions of hypertension based on medical diagnoses, prescription of blood pressure lowering drugs or both. We compared family practices where diagnosis or prescribing varied systematically. Classification of hypertension based on recorded medical diagnoses gave a sensitivity of 32.2% for diabetes (95% confidence interval from 30.4 to 34.1%). Prescription of blood pressure lowering drugs in the 12 months before diagnosis gave a sensitivity of 47.2% (45.7 to 48.7%). Combining either a medical diagnosis or a blood pressure lowering prescription gave a sensitivity of 52.8% (51.3 to 54.3%). In family practices where hypertension was least frequently recorded, a diagnosis of hypertension gave a sensitivity of 19.5% for diabetes (17.4 to 21.6%) compared with 50.8% (46.3 to 55.3%) in the highest quintile. Prescription of blood pressure lowering drugs gave a sensitivity of 36.1% (33.1 to 39.0%) in the lowest prescribing practices but 58.2% (55.5 to 61.0%) in the highest quintile. Misclassification errors depend on the definition of hypertension and its implementation in practice. Definitions of hypertension that depend on access or quality in health care should be avoided.

  18. A systematic review of validated methods to capture acute bronchospasm using administrative or claims data.

    Science.gov (United States)

    Sharifi, Mona; Krishanswami, Shanthi; McPheeters, Melissa L

    2013-12-30

    To identify and assess billing, procedural, or diagnosis code, or pharmacy claim-based algorithms used to identify acute bronchospasm in administrative and claims databases. We searched the MEDLINE database from 1991 to September 2012 using controlled vocabulary and key terms related to bronchospasm, wheeze and acute asthma. We also searched the reference lists of included studies. Two investigators independently assessed the full text of studies against pre-determined inclusion criteria. Two reviewers independently extracted data regarding participant and algorithm characteristics. Our searches identified 677 citations of which 38 met our inclusion criteria. In these 38 studies, the most commonly used ICD-9 code was 493.x. Only 3 studies reported any validation methods for the identification of bronchospasm, wheeze or acute asthma in administrative and claims databases; all were among pediatric populations and only 2 offered any validation statistics. Some of the outcome definitions utilized were heterogeneous and included other disease based diagnoses, such as bronchiolitis and pneumonia, which are typically of an infectious etiology. One study offered the validation of algorithms utilizing Emergency Department triage chief complaint codes to diagnose acute asthma exacerbations with ICD-9 786.07 (wheezing) revealing the highest sensitivity (56%), specificity (97%), PPV (93.5%) and NPV (76%). There is a paucity of studies reporting rigorous methods to validate algorithms for the identification of bronchospasm in administrative data. The scant validated data available are limited in their generalizability to broad-based populations. Copyright © 2013 Elsevier Ltd. All rights reserved.
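
    The 493.x convention mentioned above is a code-prefix match. A small sketch of how such a claims-based case-identification algorithm is typically expressed follows; the encounter records and code lists are abbreviated examples, not a validated case definition.

    ```python
    # Illustrative claims-based case finding: flag encounters whose diagnosis
    # codes match asthma (ICD-9 493.x) or wheezing (786.07). The records and
    # code lists here are examples only, not a validated definition.
    INCLUDE_PREFIXES = ("493.",)      # asthma family (493.x)
    INCLUDE_EXACT = {"786.07"}        # wheezing

    def is_bronchospasm_case(diagnosis_codes):
        return any(code in INCLUDE_EXACT or code.startswith(INCLUDE_PREFIXES)
                   for code in diagnosis_codes)

    encounters = [
        {"id": 1, "dx": ["493.92", "401.9"]},   # asthma with exacerbation -> flagged
        {"id": 2, "dx": ["486"]},               # pneumonia -> not flagged
        {"id": 3, "dx": ["786.07"]},            # wheezing -> flagged
    ]
    flagged = [e["id"] for e in encounters if is_bronchospasm_case(e["dx"])]
    print(flagged)
    ```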

  19. Proposed clinical case definition for cytomegalovirus-immune recovery retinitis.

    Science.gov (United States)

    Ruiz-Cruz, Matilde; Alvarado-de la Barrera, Claudia; Ablanedo-Terrazas, Yuria; Reyes-Terán, Gustavo

    2014-07-15

    Cytomegalovirus (CMV) retinitis has been extensively described in patients with advanced or late human immunodeficiency virus (HIV) disease under ineffective treatment of opportunistic infection and antiretroviral therapy (ART) failure. However, there is limited information about patients who develop active cytomegalovirus retinitis as an immune reconstitution inflammatory syndrome (IRIS) after successful initiation of ART. Therefore, a case definition of cytomegalovirus-immune recovery retinitis (CMV-IRR) is proposed here. We reviewed medical records of 116 HIV-infected patients with CMV retinitis attending our institution during January 2003-June 2012. We retrospectively studied HIV-infected patients who had CMV retinitis on ART initiation or during the subsequent 6 months. Clinical and immunological characteristics of patients with active CMV retinitis were described. Of the 75 patients under successful ART included in the study, 20 had improvement of CMV retinitis. The remaining 55 patients experienced CMV-IRR; 35 of those developed CMV-IRR after ART initiation (unmasking CMV-IRR) and 20 experienced paradoxical clinical worsening of retinitis (paradoxical CMV-IRR). Nineteen patients with CMV-IRR had a CD4 count of ≥50 cells/µL. Six patients with CMV-IRR subsequently developed immune recovery uveitis. There is no case definition for CMV-IRR, although this condition is likely to occur after successful initiation of ART, even in patients with high CD4 T-cell counts. By consequence, we propose the case definitions for paradoxical and unmasking CMV-IRR. We recommend close follow-up of HIV-infected patients following ART initiation. © The Author 2014. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  20. Hybrid Microgrid Configuration Optimization with Evolutionary Algorithms

    Science.gov (United States)

    Lopez, Nicolas

    This dissertation explores the Renewable Energy Integration Problem and proposes a Genetic Algorithm embedded with a Monte Carlo simulation to solve large instances of the problem that are impractical to solve via full enumeration. The Renewable Energy Integration Problem is defined as finding the optimum set of components to supply the electric demand of a hybrid microgrid. The components considered are solar panels, wind turbines, diesel generators, electric batteries, connections to the power grid and converters, which can be inverters and/or rectifiers. The methodology developed is explained, as well as the combinatorial formulation. In addition, two case studies of a single-objective optimization version of the problem are presented, minimizing cost and minimizing global warming potential (GWP), followed by a multi-objective implementation of the proposed methodology using a non-dominated sorting Genetic Algorithm embedded with a Monte Carlo simulation. The method is validated by solving a small instance of the problem with a known solution via a full enumeration algorithm developed by NREL in their software HOMER. The dissertation concludes that evolutionary algorithms embedded with Monte Carlo simulation, namely modified Genetic Algorithms, are an efficient way of solving the problem, by finding approximate solutions in the case of single-objective optimization, and by approximating the true Pareto front in the case of multi-objective optimization of the Renewable Energy Integration Problem.

  1. Dichotomy in the definition of prescriptive information suggests both prescribed data and prescribed algorithms: biosemiotics applications in genomic systems.

    Science.gov (United States)

    D'Onofrio, David J; Abel, David L; Johnson, Donald E

    2012-03-14

    The fields of molecular biology and computer science have cooperated over recent years to create a synergy between the cybernetic and biosemiotic relationship found in cellular genomics to that of information and language found in computational systems. Biological information frequently manifests its "meaning" through instruction or actual production of formal bio-function. Such information is called prescriptive information (PI). PI programs organize and execute a prescribed set of choices. Closer examination of this term in cellular systems has led to a dichotomy in its definition suggesting both prescribed data and prescribed algorithms are constituents of PI. This paper looks at this dichotomy as expressed in both the genetic code and in the central dogma of protein synthesis. An example of a genetic algorithm is modeled after the ribosome, and an examination of the protein synthesis process is used to differentiate PI data from PI algorithms.

  2. Dichotomy in the definition of prescriptive information suggests both prescribed data and prescribed algorithms: biosemiotics applications in genomic systems

    Directory of Open Access Journals (Sweden)

    D'Onofrio David J

    2012-03-01

    Full Text Available Abstract The fields of molecular biology and computer science have cooperated over recent years to create a synergy between the cybernetic and biosemiotic relationship found in cellular genomics to that of information and language found in computational systems. Biological information frequently manifests its "meaning" through instruction or actual production of formal bio-function. Such information is called Prescriptive Information (PI). PI programs organize and execute a prescribed set of choices. Closer examination of this term in cellular systems has led to a dichotomy in its definition suggesting both prescribed data and prescribed algorithms are constituents of PI. This paper looks at this dichotomy as expressed in both the genetic code and in the central dogma of protein synthesis. An example of a genetic algorithm is modeled after the ribosome, and an examination of the protein synthesis process is used to differentiate PI data from PI algorithms.

  3. An improved MODIS standard chlorophyll-a algorithm for Malacca Straits Water

    International Nuclear Information System (INIS)

    Lah, N Z Ab; Reba, M N M; Siswanto, Eko

    2014-01-01

    The Malacca Straits has high nutrient productivity and hence potentially high primary production. Yet, the Moderate Resolution Imaging Spectroradiometer (MODIS) Aqua has shown an overestimation of Chl-a retrieval in the case-2 water of the Malacca Straits. In an update to the previous study, this paper presents a second validation exercise of the MODIS OC3M algorithm using the reprocessed MODIS data (R2013) and locally tunes the algorithm with respect to two in-situ stations located in the northern and southern parts of the Malacca Straits. The results show that OC3M-retrieved Chl-a in the case-2 water (south station) remarkably overestimates in-situ Chl-a, whereas it is underestimated in the case-1 water (north station). Local tuning was performed by iterative regression with a fourth-order polynomial to improve the accuracy of the Chl-a retrieval. As a result, the locally tuned OC3M algorithm gives robust statistical performance and can be applied to both case-1 and case-2 waters in the Malacca Straits.
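
    Local tuning of an OC3M-style algorithm usually means refitting the fourth-order polynomial that maps a log-transformed blue-to-green band ratio onto log Chl-a against in-situ matchups. The sketch below shows such a refit with NumPy; the matchup values are invented placeholders, and the exact bands and iteration scheme of the cited study are not reproduced.

    ```python
    import numpy as np

    # Hypothetical matchups: log10 of the maximum blue-to-green band ratio (x)
    # versus log10 of in-situ Chl-a (y). Values are invented for illustration.
    log_ratio = np.array([-0.30, -0.20, -0.10, 0.00, 0.10, 0.20, 0.30])
    log_chl_insitu = np.array([0.55, 0.38, 0.20, 0.05, -0.12, -0.30, -0.45])

    # Locally tuned fourth-order polynomial, in the same functional form as
    # OC3M-type band-ratio algorithms.
    coeffs = np.polyfit(log_ratio, log_chl_insitu, deg=4)

    def chl_tuned(ratio_max):
        """Chl-a (mg m^-3) from the tuned polynomial, given the max band ratio."""
        return 10 ** np.polyval(coeffs, np.log10(ratio_max))

    print(round(float(chl_tuned(1.25)), 3))
    ```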

  4. Evaluation of the Components of the North Carolina Syndromic Surveillance System Heat Syndrome Case Definition.

    Science.gov (United States)

    Harduar Morano, Laurel; Waller, Anna E

    To improve heat-related illness surveillance, we evaluated and refined North Carolina's heat syndrome case definition. We analyzed North Carolina emergency department (ED) visits during 2012-2014. We evaluated the current heat syndrome case definition (ie, keywords in chief complaint/triage notes or International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM] codes) and additional heat-related inclusion and exclusion keywords. We calculated the positive predictive value and sensitivity of keyword-identified ED visits and manually reviewed ED visits to identify true positives and false positives. The current heat syndrome case definition identified 8928 ED visits; additional inclusion keywords identified another 598 ED visits. Of 4006 keyword-identified ED visits, 3216 (80.3%) were captured by 4 phrases: "heat ex" (n = 1674, 41.8%), "overheat" (n = 646, 16.1%), "too hot" (n = 594, 14.8%), and "heatstroke" (n = 302, 7.5%). Among the 267 ED visits identified by keyword only, a burn diagnosis or the following keywords resulted in a false-positive rate >95%: "burn," "grease," "liquid," "oil," "radiator," "antifreeze," "hot tub," "hot spring," and "sauna." After applying the revised inclusion and exclusion criteria, we identified 9132 heat-related ED visits: 2157 by keyword only, 5493 by ICD-9-CM code only, and 1482 by both (sensitivity = 27.0%, positive predictive value = 40.7%). Cases identified by keywords were strongly correlated with cases identified by ICD-9-CM codes (rho = .94). Refining the heat syndrome case definition through the use of additional inclusion and exclusion criteria substantially improved the accuracy of the surveillance system. Other jurisdictions may benefit from refining their heat syndrome case definition.
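
    A short sketch of a keyword-based case definition of the kind evaluated here, combining inclusion phrases with exclusion terms for burn-related false positives; the keyword lists are abbreviated examples drawn from the abstract, not the complete North Carolina definition, which also uses ICD-9-CM codes.

    ```python
    # Abbreviated keyword lists taken from the abstract; the operational
    # definition is longer and also relies on ICD-9-CM codes.
    INCLUDE = ["heat ex", "overheat", "too hot", "heatstroke"]
    EXCLUDE = ["burn", "grease", "liquid", "oil", "radiator",
               "antifreeze", "hot tub", "hot spring", "sauna"]

    def keyword_heat_case(chief_complaint, triage_note=""):
        text = f"{chief_complaint} {triage_note}".lower()
        if any(term in text for term in EXCLUDE):   # exclusion terms win
            return False
        return any(term in text for term in INCLUDE)

    print(keyword_heat_case("pt overheated working outside"))   # included
    print(keyword_heat_case("hot oil burn to hand"))             # excluded
    ```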

  5. Mapping the EORTC QLQ-C30 onto the EQ-5D-3L: assessing the external validity of existing mapping algorithms.

    Science.gov (United States)

    Doble, Brett; Lorgelly, Paula

    2016-04-01

    To determine the external validity of existing mapping algorithms for predicting EQ-5D-3L utility values from EORTC QLQ-C30 responses and to establish their generalizability in different types of cancer. A main analysis (pooled) sample of 3560 observations (1727 patients) and two disease severity patient samples (496 and 93 patients) with repeated observations over time from Cancer 2015 were used to validate the existing algorithms. Errors were calculated between observed and predicted EQ-5D-3L utility values using a single pooled sample and ten pooled tumour type-specific samples. Predictive accuracy was assessed using mean absolute error (MAE) and standardized root-mean-squared error (RMSE). The association between observed and predicted EQ-5D utility values and other covariates across the distribution was tested using quantile regression. Quality-adjusted life years (QALYs) were calculated using observed and predicted values to test responsiveness. Ten 'preferred' mapping algorithms were identified. Two algorithms estimated via response mapping and ordinary least-squares regression using dummy variables performed well on number of validation criteria, including accurate prediction of the best and worst QLQ-C30 health states, predicted values within the EQ-5D tariff range, relatively small MAEs and RMSEs, and minimal differences between estimated QALYs. Comparison of predictive accuracy across ten tumour type-specific samples highlighted that algorithms are relatively insensitive to grouping by tumour type and affected more by differences in disease severity. Two of the 'preferred' mapping algorithms suggest more accurate predictions, but limitations exist. We recommend extensive scenario analyses if mapped utilities are used in cost-utility analyses.
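
    The predictive-accuracy criteria used above are errors between observed and mapped EQ-5D utilities (MAE and a standardized RMSE). The sketch computes plain MAE and RMSE with placeholder arrays; the authors' standardization is not reproduced.

    ```python
    import numpy as np

    def mapping_errors(observed, predicted):
        """Mean absolute error and root-mean-squared error between observed
        EQ-5D utilities and utilities predicted by a mapping algorithm."""
        observed = np.asarray(observed, float)
        predicted = np.asarray(predicted, float)
        err = predicted - observed
        return {"MAE": float(np.mean(np.abs(err))),
                "RMSE": float(np.sqrt(np.mean(err ** 2)))}

    # Placeholder utilities for illustration only:
    obs  = [0.80, 0.62, 0.71, 1.00, 0.25]
    pred = [0.76, 0.70, 0.69, 0.94, 0.33]
    print(mapping_errors(obs, pred))
    ```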

  6. Generalization of Risch's algorithm to special functions

    International Nuclear Information System (INIS)

    Raab, Clemens G.

    2013-05-01

    Symbolic integration deals with the evaluation of integrals in closed form. We present an overview of Risch's algorithm including recent developments. The algorithms discussed are suited for both indefinite and definite integration. They can also be used to compute linear relations among integrals and to find identities for special functions given by parameter integrals. The aim of this presentation is twofold: to introduce the reader to some basic ideas of differential algebra in the context of integration and to raise awareness in the physics community of computer algebra algorithms for indefinite and definite integration.

  7. An Algorithm for Creating Virtual Controls Using Integrated and Harmonized Longitudinal Data.

    Science.gov (United States)

    Hansen, William B; Chen, Shyh-Huei; Saldana, Santiago; Ip, Edward H

    2018-06-01

    We introduce a strategy for creating virtual control groups-cases generated through computer algorithms that, when aggregated, may serve as experimental comparators where live controls are difficult to recruit, such as when programs are widely disseminated and randomization is not feasible. We integrated and harmonized data from eight archived longitudinal adolescent-focused data sets spanning the decades from 1980 to 2010. Collectively, these studies examined numerous psychosocial variables and assessed past 30-day alcohol, cigarette, and marijuana use. Additional treatment and control group data from two archived randomized control trials were used to test the virtual control algorithm. Both randomized controlled trials (RCTs) assessed intentions, normative beliefs, and values as well as past 30-day alcohol, cigarette, and marijuana use. We developed an algorithm that used percentile scores from the integrated data set to create age- and gender-specific latent psychosocial scores. The algorithm matched treatment case observed psychosocial scores at pretest to create a virtual control case that figuratively "matured" based on age-related changes, holding the virtual case's percentile constant. Virtual controls matched treatment case occurrence, eliminating differential attrition as a threat to validity. Virtual case substance use was estimated from the virtual case's latent psychosocial score using logistic regression coefficients derived from analyzing the treatment group. Averaging across virtual cases created group estimates of prevalence. Two criteria were established to evaluate the adequacy of virtual control cases: (1) virtual control group pretest drug prevalence rates should match those of the treatment group and (2) virtual control group patterns of drug prevalence over time should match live controls. The algorithm successfully matched pretest prevalence for both RCTs. Increases in prevalence were observed, although there were discrepancies between live
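
    A heavily simplified sketch of the virtual-control idea described above: hold each treated case's pretest percentile constant, read off the age-matched reference score at follow-up, and convert that latent score to a use probability with logistic coefficients estimated from the treatment group. The reference table, array names and coefficients are all invented placeholders, not the study's harmonized data or fitted model.

    ```python
    import numpy as np

    # Hypothetical reference distributions of a latent psychosocial score by age
    # (ages 12-15), standing in for the harmonized archival data set.
    rng = np.random.default_rng(0)
    reference = {age: np.sort(rng.normal(loc=0.1 * (age - 12), scale=1.0, size=500))
                 for age in (12, 13, 14, 15)}

    def virtual_followup_score(pretest_score, pretest_age, followup_age):
        """Keep the case's percentile fixed and let the score 'mature' with age."""
        ref_pre = reference[pretest_age]
        percentile = np.searchsorted(ref_pre, pretest_score) / len(ref_pre)
        return np.quantile(reference[followup_age], percentile)

    def use_probability(latent_score, b0=-2.0, b1=0.8):
        """Logistic model with placeholder coefficients (in practice estimated
        from the treatment group) mapping the latent score to past-30-day use."""
        return 1.0 / (1.0 + np.exp(-(b0 + b1 * latent_score)))

    score_13 = virtual_followup_score(pretest_score=0.4, pretest_age=12, followup_age=13)
    print(round(float(use_probability(score_13)), 3))
    ```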

  8. A hybrid reliability algorithm using PSO-optimized Kriging model and adaptive importance sampling

    Science.gov (United States)

    Tong, Cao; Gong, Haili

    2018-03-01

    This paper aims to reduce the computational cost of reliability analysis. A new hybrid algorithm is proposed based on a PSO-optimized Kriging model and an adaptive importance sampling method. Firstly, the particle swarm optimization (PSO) algorithm is used to optimize the parameters of the Kriging model. A typical function is fitted to validate the improvement by comparing results of the PSO-optimized Kriging model with those of the original Kriging model. Secondly, a hybrid algorithm for reliability analysis combining the optimized Kriging model and adaptive importance sampling is proposed. Two cases from the literature are given to validate its efficiency and correctness. Comparison results show that the proposed method is more efficient because it requires only a small number of sample points.
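
    A minimal PSO sketch of the kind used to tune Kriging hyperparameters; here it minimizes a stand-in objective (in practice this would be, for example, the Kriging model's cross-validation error or negative log-likelihood as a function of its correlation parameters). All parameter choices are illustrative, not the authors' settings.

    ```python
    import numpy as np

    def pso_minimize(objective, bounds, n_particles=20, iters=100, seed=0,
                     w=0.7, c1=1.5, c2=1.5):
        """Basic particle swarm optimization over a box-bounded search space."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds).T
        dim = len(bounds)
        x = rng.uniform(lo, hi, size=(n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
        g = pbest[np.argmin(pbest_val)].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            vals = np.array([objective(p) for p in x])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            g = pbest[np.argmin(pbest_val)].copy()
        return g, float(pbest_val.min())

    # Stand-in objective for tuning, e.g., two Kriging correlation length-scales;
    # replace with the model's cross-validation error in a real application.
    stand_in = lambda theta: (theta[0] - 1.2) ** 2 + (theta[1] - 0.4) ** 2
    print(pso_minimize(stand_in, bounds=[(0.01, 5.0), (0.01, 5.0)]))
    ```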

  9. Development and validation of a risk prediction algorithm for the recurrence of suicidal ideation among general population with low mood.

    Science.gov (United States)

    Liu, Y; Sareen, J; Bolton, J M; Wang, J L

    2016-03-15

    Suicidal ideation is one of the strongest predictors of recent and future suicide attempt. This study aimed to develop and validate a risk prediction algorithm for the recurrence of suicidal ideation among a population with low mood. A total of 3035 participants from the U.S. National Epidemiologic Survey on Alcohol and Related Conditions with suicidal ideation at their lowest mood at baseline were included. The Alcohol Use Disorder and Associated Disabilities Interview Schedule, based on the DSM-IV criteria, was used. Logistic regression modeling was conducted to derive the algorithm. Discrimination and calibration were assessed in the development and validation cohorts. In the development data, the proportion of recurrent suicidal ideation over 3 years was 19.5% (95% CI: 17.7, 21.5). The developed algorithm consisted of 6 predictors: age, feelings of emptiness, sudden mood changes, self-harm history, depressed mood in the past 4 weeks, and interference with social activities in the past 4 weeks because of physical health or emotional problems; emptiness was the most important risk factor. The model had good discriminative power (C statistic = 0.8273, 95% CI: 0.8027, 0.8520). The C statistic was 0.8091 (95% CI: 0.7786, 0.8395) in the external validation dataset and 0.8193 (95% CI: 0.8001, 0.8385) in the combined dataset. This study does not apply to people with suicidal ideation who are not depressed. The developed risk algorithm for predicting the recurrence of suicidal ideation has good discrimination and excellent calibration. Clinicians can use this algorithm to stratify the risk of recurrence in patients and thus improve personalized treatment approaches, provide advice, and arrange further intensive monitoring. Copyright © 2016 Elsevier B.V. All rights reserved.
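
    The C statistic reported above is the area under the ROC curve for the predicted risks; a small rank-based sketch with illustrative data only:

    ```python
    def c_statistic(risk_scores, outcomes):
        """Concordance (C) statistic: probability that a randomly chosen case
        with the outcome has a higher predicted risk than one without it.
        Ties count as half-concordant."""
        pos = [r for r, y in zip(risk_scores, outcomes) if y == 1]
        neg = [r for r, y in zip(risk_scores, outcomes) if y == 0]
        concordant = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return concordant / (len(pos) * len(neg))

    # Illustrative predicted risks of recurrent suicidal ideation and outcomes:
    risks    = [0.05, 0.10, 0.22, 0.35, 0.40, 0.55, 0.70, 0.80]
    outcomes = [0,    0,    0,    1,    0,    1,    1,    1]
    print(c_statistic(risks, outcomes))
    ```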

  10. Monte Carlo evaluation of the convolution/superposition algorithm of Hi-Art tomotherapy in heterogeneous phantoms and clinical cases

    Energy Technology Data Exchange (ETDEWEB)

    Sterpin, E.; Salvat, F.; Olivera, G.; Vynckier, S. [Department of Radiotherapy, Saint-Luc University Hospital, Universite Catholique de Louvain, 10 Avenue Hippocrate, 1200 Brussels (Belgium); Facultat de Fisica (ECM), Universitat de Barcelona, Diagonal 647, 08028 Barcelona (Spain); Tomotherapy Inc., 1240 Deming Way, Madison, Wisconsin 53717 and Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States); Department of Radiotherapy, Saint-Luc University Hospital, Universite Catholique de Louvain, 10 Avenue Hippocrate, 1200 Brussels (Belgium)

    2009-05-15

    The reliability of the convolution/superposition (C/S) algorithm of the Hi-Art tomotherapy system is evaluated by using the Monte Carlo model TomoPen, which has been already validated for homogeneous phantoms. The study was performed in three stages. First, measurements with EBT Gafchromic film for a 1.25 x 2.5 cm² field in a heterogeneous phantom consisting of two slabs of polystyrene separated with Styrofoam were compared to simulation results from TomoPen. The excellent agreement found in this comparison justifies the use of TomoPen as the reference for the remaining parts of this work. Second, to allow analysis and interpretation of the results in clinical cases, dose distributions calculated with TomoPen and C/S were compared for a similar phantom geometry, with multiple slabs of various densities. Even in conditions of lack of lateral electronic equilibrium, overall good agreement was obtained between C/S and TomoPen results, with deviations within 3%/2 mm, showing that the C/S algorithm accounts for modifications in secondary electron transport due to the presence of a low density medium. Finally, calculations were performed with TomoPen and C/S of dose distributions in various clinical cases, from large bilateral head and neck tumors to small lung tumors with diameter of <3 cm. To ensure a "fair" comparison, identical dose calculation grid and dose-volume histogram calculator were used. Very good agreement was obtained for most of the cases, with no significant differences between the DVHs obtained from both calculations. However, deviations of up to 4% for the dose received by 95% of the target volume were found for the small lung tumors. Therefore, the approximations in the C/S algorithm slightly influence the accuracy in small lung tumors even though the C/S algorithm of the tomotherapy system shows very good overall behavior.

  11. Genetic Algorithms for Case Adaptation

    International Nuclear Information System (INIS)

    Salem, A.M.; Mohamed, A.H.

    2008-01-01

    Case adaptation is the core of the case based reasoning (CBR) approach, which can modify past solutions to solve new problems. It generally relies on the knowledge base and heuristics in order to achieve the required changes. It has always been a difficult process for designers within the CBR cycle. Its difficulties can be attributed to the large effort and computational analysis needed for acquiring the domain knowledge. To solve these problems, this research explores a new method that applies a genetic algorithm (GA) to CBR adaptation. It can decrease the computational complexity of determining the required changes, especially for problems involving a great amount of domain knowledge. Besides, it can decrease the required time by dividing the design task into subtasks that can be solved at the same time. Therefore, the proposed system can be practically applied for solving complex problems. It can be used to perform a variety of design tasks on a broad set of application domains. It has been implemented for tablet formulation as a domain of application. The proposed system has improved the accuracy performance of CBR design systems.

  12. N-Dimensional LLL Reduction Algorithm with Pivoted Reflection

    Directory of Open Access Journals (Sweden)

    Zhongliang Deng

    2018-01-01

    Full Text Available The Lenstra-Lenstra-Lovász (LLL) lattice reduction algorithm and many of its variants have been widely used in cryptography, multiple-input multiple-output (MIMO) communication systems and carrier phase positioning in global navigation satellite systems (GNSS) to solve the integer least squares (ILS) problem. In this paper, we propose an n-dimensional LLL reduction algorithm (n-LLL), expanding the Lovász condition in the LLL algorithm to n-dimensional space in order to obtain a further reduced basis. We also introduce pivoted Householder reflection into the algorithm to optimize the reduction time. For an m-order positive definite matrix, analysis shows that the n-LLL reduction algorithm will converge within finitely many steps and always produce better results than the original LLL reduction algorithm for n > 2. The simulations clearly show that n-LLL is better than the original LLL in reducing the condition number of an ill-conditioned input matrix, with a 39% improvement on average for typical cases, which can significantly reduce the search space for solving the ILS problem. The simulation results also show that the pivoted reflection significantly reduces the number of swaps in the algorithm, by 57%, making n-LLL a more practical reduction algorithm.
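
    For readers unfamiliar with the baseline that n-LLL extends, the sketch below is a textbook LLL reduction (plain float arithmetic, Gram-Schmidt recomputed at each update, no pivoted reflections); it illustrates the size-reduction step and the Lovász condition that the paper generalizes, not the proposed n-LLL itself.

    ```python
    import numpy as np

    def lll_reduce(basis, delta=0.75):
        """Textbook LLL reduction of a lattice basis given as rows of `basis`.
        Inefficient (full Gram-Schmidt after every change) but easy to follow."""
        B = [np.array(b, dtype=float) for b in basis]
        n = len(B)

        def gram_schmidt():
            Bstar, mu = [], np.zeros((n, n))
            for i in range(n):
                v = B[i].copy()
                for j in range(i):
                    mu[i, j] = B[i] @ Bstar[j] / (Bstar[j] @ Bstar[j])
                    v -= mu[i, j] * Bstar[j]
                Bstar.append(v)
            return Bstar, mu

        Bstar, mu = gram_schmidt()
        k = 1
        while k < n:
            for j in range(k - 1, -1, -1):            # size reduction
                q = round(mu[k, j])
                if q:
                    B[k] = B[k] - q * B[j]
                    Bstar, mu = gram_schmidt()
            lovasz_rhs = (delta - mu[k, k - 1] ** 2) * (Bstar[k - 1] @ Bstar[k - 1])
            if Bstar[k] @ Bstar[k] >= lovasz_rhs:      # Lovász condition holds
                k += 1
            else:                                      # swap and step back
                B[k], B[k - 1] = B[k - 1], B[k]
                Bstar, mu = gram_schmidt()
                k = max(k - 1, 1)
        return np.array(B)

    print(lll_reduce([[201, 37], [1648, 297]]))
    ```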

  13. Confirmed clinical case of chronic kidney disease of nontraditional causes in agricultural communities in Central America: a case definition for surveillance

    Directory of Open Access Journals (Sweden)

    Alejandro Ferreiro

    Full Text Available ABSTRACT Over the last 20 years, many reports have described an excess of cases of chronic kidney disease (CKD in the Pacific coastal area of Central America, mainly affecting male farmworkers and signaling a serious public health problem. Most of these cases are not associated with traditional risk factors for CKD, such as aging, diabetes mellitus, and hypertension. This CKD of nontraditional causes (CKDnT might be linked to environmental and/or occupational exposure or working conditions, limited access to health services, and poverty. In response to a resolution approved by the Directing Council of the Pan American Health Organization (PAHO in 2013, PAHO, the U.S. Centers for Disease Control and Prevention, and the Latin American Society of Nephrology and Hypertension (SLANH organized a consultation process in order to expand knowledge on the epidemic of CKDnT and to develop appropriate surveillance instruments. The Clinical Working Group from SLANH was put in charge of finding a consensus definition of a confirmed clinical case of CKDnT. The resulting definition establishes mandatory criteria and exclusion criteria necessary for classifying a case of CKDnT. The definition includes a combination of universally accepted definitions of CKD and the main clinical manifestations of CKDnT. Based on the best available evidence, the Clinical Working Group also formulated general recommendations about clinical management that apply to any patient with CKDnT. Adhering to the definition of a confirmed clinical case of CKDnT and implementing it appropriately is expected to be a powerful instrument for understanding the prevalence of the epidemic, evaluating the results of interventions, and promoting appropriate advocacy and planning efforts.

  14. Confirmed clinical case of chronic kidney disease of nontraditional causes in agricultural communities in Central America: a case definition for surveillance.

    Science.gov (United States)

    Ferreiro, Alejandro; Álvarez-Estévez, Guillermo; Cerdas-Calderón, Manuel; Cruz-Trujillo, Zulma; Mena, Elio; Reyes, Marina; Sandoval-Diaz, Mabel; Sánchez-Polo, Vicente; Valdés, Régulo; Ordúnez, Pedro

    2016-11-01

    Over the last 20 years, many reports have described an excess of cases of chronic kidney disease (CKD) in the Pacific coastal area of Central America, mainly affecting male farmworkers and signaling a serious public health problem. Most of these cases are not associated with traditional risk factors for CKD, such as aging, diabetes mellitus, and hypertension. This CKD of nontraditional causes (CKDnT) might be linked to environmental and/or occupational exposure or working conditions, limited access to health services, and poverty. In response to a resolution approved by the Directing Council of the Pan American Health Organization (PAHO) in 2013, PAHO, the U.S. Centers for Disease Control and Prevention, and the Latin American Society of Nephrology and Hypertension (SLANH) organized a consultation process in order to expand knowledge on the epidemic of CKDnT and to develop appropriate surveillance instruments. The Clinical Working Group from SLANH was put in charge of finding a consensus definition of a confirmed clinical case of CKDnT. The resulting definition establishes mandatory criteria and exclusion criteria necessary for classifying a case of CKDnT. The definition includes a combination of universally accepted definitions of CKD and the main clinical manifestations of CKDnT. Based on the best available evidence, the Clinical Working Group also formulated general recommendations about clinical management that apply to any patient with CKDnT. Adhering to the definition of a confirmed clinical case of CKDnT and implementing it appropriately is expected to be a powerful instrument for understanding the prevalence of the epidemic, evaluating the results of interventions, and promoting appropriate advocacy and planning efforts.

  15. On König's root finding algorithms

    DEFF Research Database (Denmark)

    Buff, Xavier; Henriksen, Christian

    2003-01-01

    In this paper, we first recall the definition of a family of root-finding algorithms known as König's algorithms. We establish some local and some global properties of those algorithms. We give a characterization of rational maps which arise as König's methods of polynomials with simple roots. We...

  16. Algorithm of imaging modalities in cases of mandibular fractures

    International Nuclear Information System (INIS)

    Mihailova, H.

    2009-01-01

    Mandibular fracture is the most common bone fracture in maxillo-facial trauma. Up to now, the main method for examination of the mandible has been radiography. The aim of this paper is to present an algorithm of imaging modalities for the investigation of patients with mandibular trauma. It consists of a series of X-ray techniques and views of the facial skull, termed the mandibulo-facial series. This standardized mandibulo-facial series includes four exactly determined projections obtained by conventional X-ray techniques: posterior-anterior view of the skull (PA or AP), oblique view of the left mandible, oblique view of the right mandible, and occipito-mental view. Using these four planned radiograms is obligatory for each case of mandibular trauma. Panoramic radiography is obligatory where the apparatus is available; it replaces only the oblique views (left and right). The occipito-mental view of the skull demonstrates the coronoid process of the mandible, the zygomatic complex, the orbital margins and the maxillary sinus better than the Waters projection. Thus the mandibulo-facial series of four planned radiograms serves not only for the diagnosis of mandibular fractures but also as a screening examination for mandibulo-facial trauma. Using this algorithm of imaging modalities in cases of mandibular fracture therefore optimizes the diagnostic process in patients with mandibular trauma. (author)

  17. On Line Validation Exercise (OLIVE: A Web Based Service for the Validation of Medium Resolution Land Products. Application to FAPAR Products

    Directory of Open Access Journals (Sweden)

    Marie Weiss

    2014-05-01

    Full Text Available The OLIVE (On Line Interactive Validation Exercise) platform is dedicated to the validation of global biophysical products such as LAI (Leaf Area Index) and FAPAR (Fraction of Absorbed Photosynthetically Active Radiation). It was developed under the framework of the CEOS (Committee on Earth Observation Satellites) Land Product Validation (LPV) sub-group. OLIVE has three main objectives: (i) to provide consistent and centralized information on the definition of the biophysical variables, as well as a description of the main available products and their performances; (ii) to provide transparency and traceability through an online validation procedure compliant with the CEOS LPV and QA4EO (Quality Assurance for Earth Observation) recommendations; and (iii) to provide a tool to benchmark new products, update product validation results and host new ground measurement sites for accuracy assessment. The functionalities and algorithms of OLIVE are described to provide full transparency of its procedures to the community. The validation process and typical results are illustrated for three FAPAR products: GEOV1 (VEGETATION sensor), MGVIo (MERIS sensor) and MODIS collection 5 FPAR. OLIVE is available on the European Space Agency CAL/VAL portal, including full documentation, validation exercise results, and product extracts.

  18. Definitions and validation criteria for biomarkers and surrogate endpoints: development and testing of a quantitative hierarchical levels of evidence schema

    DEFF Research Database (Denmark)

    Lassere, Marissa N; Johnson, Kent R; Boers, Maarten

    2007-01-01

    endpoints, and leading indicators, a quantitative surrogate validation schema was developed and subsequently evaluated at a stakeholder workshop. RESULTS: The search identified several classification schema and definitions. Components of these were incorporated into a new quantitative surrogate validation...... level of evidence schema that evaluates biomarkers along 4 domains: Target, Study Design, Statistical Strength, and Penalties. Scores derived from 3 domains the Target that the marker is being substituted for, the Design of the (best) evidence, and the Statistical strength are additive. Penalties...... of the National Institutes of Health definitions of biomarker, surrogate endpoint, and clinical endpoint was useful. CONCLUSION: Further development and application of this schema provides incentives and guidance for effective biomarker and surrogate endpoint research, and more efficient drug discovery...

  19. ROBUST-HYBRID GENETIC ALGORITHM FOR A FLOW-SHOP SCHEDULING PROBLEM (A Case Study at PT FSCM Manufacturing Indonesia)

    Directory of Open Access Journals (Sweden)

    Johan Soewanda

    2007-01-01

    Full Text Available This paper discusses the application of the Robust Hybrid Genetic Algorithm to solve a flow-shop scheduling problem. The proposed algorithm attempted to reach the minimum makespan. The case of PT. FSCM Manufacturing Indonesia Plant 4 was used as a test case to evaluate the performance of the proposed algorithm. The proposed algorithm was compared to Ant Colony, Genetic-Tabu, the Hybrid Genetic Algorithm, and the company's algorithm. We found that the Robust Hybrid Genetic Algorithm produces statistically better results than the company's algorithm, but results equivalent to Ant Colony, Genetic-Tabu, and the Hybrid Genetic Algorithm. In addition, the Robust Hybrid Genetic Algorithm required less computational time than the Hybrid Genetic Algorithm.
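
    For orientation, here is a minimal permutation-coded genetic algorithm for flow-shop makespan minimization (order crossover plus swap mutation). It is a generic sketch with made-up processing times, not the Robust Hybrid Genetic Algorithm evaluated in the paper.

```python
# Generic permutation GA for flow-shop makespan (illustrative only).
import random

def makespan(perm, p):
    """Makespan of a permutation flow shop; p[j][m] = time of job j on machine m."""
    n_machines = len(p[0])
    finish = [0.0] * n_machines
    for j in perm:
        for m in range(n_machines):
            prev = finish[m - 1] if m > 0 else 0.0   # completion of job j on machine m-1
            finish[m] = max(finish[m], prev) + p[j][m]
    return finish[-1]

def order_crossover(a, b):
    """OX crossover: copy a slice of parent a, fill the rest in b's order."""
    n = len(a)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = a[i:j]
    fill = [g for g in b if g not in child]
    for k in range(n):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def ga_flowshop(p, pop_size=40, generations=200, mut_rate=0.2):
    n = len(p)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: makespan(s, p))
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            c = order_crossover(a, b)
            if random.random() < mut_rate:           # swap mutation
                i, j = random.sample(range(n), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = survivors + children
    best = min(pop, key=lambda s: makespan(s, p))
    return best, makespan(best, p)

random.seed(0)
times = [[random.randint(1, 10) for _ in range(4)] for _ in range(8)]  # 8 jobs, 4 machines
seq, cmax = ga_flowshop(times)
print(seq, cmax)
```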

  20. INTEGRATING CASE-BASED REASONING, KNOWLEDGE-BASED APPROACH AND TSP ALGORITHM FOR MINIMUM TOUR FINDING

    Directory of Open Access Journals (Sweden)

    Hossein Erfani

    2009-07-01

    Full Text Available Imagine you have traveled to an unfamiliar city. Before you start your daily tour around the city, you need to know a good route. In Network Theory (NT), this is the traveling salesman problem (TSP). A dynamic programming algorithm is often used for solving this problem. However, when the road network of the city is very complicated and dense, which is usually the case, it takes too long for the algorithm to find the shortest path. Furthermore, in reality, things are not as simple as those stated in NT. For instance, the cost of travel for the same part of the city at different times may not be the same. In this project, we have integrated the TSP algorithm with an AI knowledge-based approach and case-based reasoning in solving the problem. With this integration, knowledge about the geographical information and past cases is used to help the TSP algorithm find a solution. This approach dramatically reduces the computation time required for minimum tour finding.
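
    The exact dynamic-programming baseline referred to above is typically the Held-Karp algorithm; a minimal sketch follows, with an illustrative distance matrix. Its O(n²·2ⁿ) cost is exactly why dense city networks motivate the knowledge-based and case-based shortcuts described in the abstract.

```python
# Minimal Held-Karp dynamic program for TSP (exact, exponential in n).
from itertools import combinations

def held_karp(dist):
    n = len(dist)
    # best[(S, j)] = shortest path starting at city 0, visiting set S, ending at j
    best = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                best[(S, j)] = min(
                    best[(S - {j}, k)] + dist[k][j] for k in subset if k != j
                )
    full = frozenset(range(1, n))
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))

# Illustrative asymmetric distance matrix for 4 cities.
dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(held_karp(dist))   # length of the optimal tour
```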

  1. Productivity of Stream Definitions

    NARCIS (Netherlands)

    Endrullis, Jörg; Grabmayer, Clemens; Hendriks, Dimitri; Isihara, Ariya; Klop, Jan

    2007-01-01

    We give an algorithm for deciding productivity of a large and natural class of recursive stream definitions. A stream definition is called ‘productive’ if it can be evaluated continuously in such a way that a uniquely determined stream is obtained as the limit. Whereas productivity is undecidable

  2. Productivity of stream definitions

    NARCIS (Netherlands)

    Endrullis, J.; Grabmayer, C.A.; Hendriks, D.; Isihara, A.; Klop, J.W.

    2008-01-01

    We give an algorithm for deciding productivity of a large and natural class of recursive stream definitions. A stream definition is called ‘productive’ if it can be evaluated continually in such a way that a uniquely determined stream in constructor normal form is obtained as the limit. Whereas

  3. External validation of leukocytosis and neutrophilia as a prognostic marker in anal carcinoma treated with definitive chemoradiation.

    Science.gov (United States)

    Schernberg, Antoine; Huguet, Florence; Moureau-Zabotto, Laurence; Chargari, Cyrus; Rivin Del Campo, Eleonor; Schlienger, Michel; Escande, Alexandre; Touboul, Emmanuel; Deutsch, Eric

    2017-07-01

    To validate the prognostic value of leukocyte disorders in anal squamous cell carcinoma (SCC) patients receiving definitive concurrent chemoradiation. Bi-institutional clinical records from consecutive patients treated between 2001 and 2015 with definitive chemoradiation for anal SCC were retrospectively reviewed. The prognostic value of pretreatment leukocyte disorders was examined, with a focus on patterns of relapse and survival. Leukocytosis and neutrophilia were defined as a leukocyte count exceeding 10 G/L and a neutrophil count exceeding 7 G/L, respectively. We identified 133 patients treated in the two institutions; 8% and 7% displayed baseline leukocytosis and neutrophilia, respectively. Estimated 3-year overall survival (OS) and progression-free survival (PFS) were 88% and 77%, respectively. In univariate analysis, both leukocytosis and neutrophilia were associated with worse OS and PFS (p<0.01), as well as worse locoregional control (LRC) and distant metastasis control (DMC) (p<0.05), also after stratification by institution. In multivariate analysis, leukocytosis and neutrophilia remained independent risk factors associated with poorer OS, PFS, LRC and DMC (p<0.05). This study validates leukocytosis and neutrophilia as independent prognostic factors in anal SCC patients treated with definitive chemoradiation. Although prospective confirmation is warranted, the leukocyte and neutrophil counts appear to be clinically relevant biomarkers to be considered for further clinical investigations. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. A Taylor weak-statement algorithm for hyperbolic conservation laws

    Science.gov (United States)

    Baker, A. J.; Kim, J. W.

    1987-01-01

    Finite element analysis, applied to computational fluid dynamics (CFD) problem classes, presents a formal procedure for establishing the ingredients of a discrete approximation numerical solution algorithm. A classical Galerkin weak-statement formulation, formed on a Taylor series extension of the conservation law system, is developed herein that embeds a set of parameters eligible for constraint according to specification of suitable norms. The derived family of Taylor weak statements is shown to contain, as special cases, over one dozen independently derived CFD algorithms published over the past several decades for the high speed flow problem class. A theoretical analysis is completed that facilitates direct qualitative comparisons. Numerical results for definitive linear and nonlinear test problems permit direct quantitative performance comparisons.
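
    Purely as an illustration of a Taylor-series-based discretization (and not the weak-statement finite element formulation itself), the sketch below applies the classical Lax-Wendroff update to 1D linear advection; treating this scheme as representative of the family's special cases is an assumption made here for the example.

```python
# Lax-Wendroff update for u_t + a u_x = 0 with periodic boundaries
# (illustration of a Taylor-series-based scheme, not the paper's TWS).
import numpy as np

def lax_wendroff(u, courant):
    """One time step; courant = a*dt/dx."""
    up = np.roll(u, -1)    # u_{j+1}
    um = np.roll(u, 1)     # u_{j-1}
    return u - 0.5 * courant * (up - um) + 0.5 * courant**2 * (up - 2 * u + um)

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)     # smooth initial pulse
c = 0.5                                  # stable for |c| <= 1
for _ in range(200):
    u = lax_wendroff(u, c)
print(float(u.max()))                    # pulse advected with little dissipation
```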

  5. Retrieval of Aerosol Microphysical Properties from AERONET Photo-Polarimetric Measurements. 2: A New Research Algorithm and Case Demonstration

    Science.gov (United States)

    Xu, Xiaoguang; Wang, Jun; Zeng, Jing; Spurr, Robert; Liu, Xiong; Dubovik, Oleg; Li, Li; Li, Zhengqiang; Mishchenko, Michael I.; Siniuk, Aliaksandr

    2015-01-01

    A new research algorithm is presented here as the second part of a two-part study to retrieve aerosol microphysical properties from the multispectral and multiangular photopolarimetric measurements taken by the Aerosol Robotic Network's (AERONET's) new-generation Sun photometer. The algorithm uses an advanced UNified and Linearized Vector Radiative Transfer Model and incorporates a statistical optimization approach. While the new algorithm has heritage from the AERONET operational inversion algorithm in constraining a priori and retrieval smoothness, it has two new features. First, the new algorithm retrieves the effective radius, effective variance, and total volume of aerosols associated with a continuous bimodal particle size distribution (PSD) function, while the AERONET operational algorithm retrieves aerosol volume over 22 size bins. Second, our algorithm retrieves complex refractive indices for both fine and coarse modes, while the AERONET operational algorithm assumes a size-independent aerosol refractive index. Mode-resolved refractive indices can improve the estimate of the single-scattering albedo (SSA) for each aerosol mode and thus facilitate the validation of satellite products and chemistry transport models. We applied the algorithm to a suite of real cases over the Beijing_RADI site and found that our retrievals are overall consistent with AERONET operational inversions but can offer mode-resolved refractive index and SSA with acceptable accuracy for aerosols composed of spherical particles. Along with the retrieval using both radiance and polarization, we also performed radiance-only retrieval to demonstrate the improvements gained by adding polarization in the inversion. Contrast analysis indicates that with polarization, retrieval error can be reduced by over 50% in PSD parameters, 10-30% in the refractive index, and 10-40% in SSA, which is consistent with the theoretical analysis presented in the companion paper of this two-part study.

  6. Parallel conjugate gradient algorithms for manipulator dynamic simulation

    Science.gov (United States)

    Fijany, Amir; Scheld, Robert E.

    1989-01-01

    Parallel conjugate gradient algorithms for the computation of multibody dynamics are developed for the specialized case of a robot manipulator. For an n-dimensional positive-definite linear system, the Classical Conjugate Gradient (CCG) algorithms are guaranteed to converge in n iterations, each with a computation cost of O(n); this leads to a total computational cost of O(n^2) on a serial processor. A conjugate gradient algorithm is presented that provides greater efficiency by using a preconditioner, which reduces the number of iterations required, and by exploiting parallelism, which reduces the cost of each iteration. Two Preconditioned Conjugate Gradient (PCG) algorithms are proposed which use, respectively, a diagonal and a tridiagonal matrix, composed of the diagonal and tridiagonal elements of the mass matrix, as preconditioners. Parallel algorithms are developed to compute the preconditioners and their inversions in O(log_2 n) steps using n processors. A parallel algorithm is also presented which, on the same architecture, achieves a computational time of O(log_2 n) for each iteration. Simulation results for a seven-degree-of-freedom manipulator are presented. Variants of the proposed algorithms are also developed which can be efficiently implemented on the Robot Mathematics Processor (RMP).
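
    A serial sketch of the diagonally preconditioned conjugate gradient recurrence is shown below for a small symmetric positive-definite system standing in for a manipulator mass matrix; it illustrates the PCG iteration the paper parallelizes, not the O(log_2 n) parallel scheme itself.

```python
# Jacobi (diagonal) preconditioned CG for a symmetric positive-definite system.
import numpy as np

def pcg_jacobi(A, b, tol=1e-10, max_iter=None):
    n = len(b)
    max_iter = max_iter or n
    Minv = 1.0 / np.diag(A)            # diagonal preconditioner
    x = np.zeros(n)
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test system standing in for a 7-DOF manipulator mass matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((7, 7))
A = B @ B.T + 7 * np.eye(7)
b = rng.standard_normal(7)
x = pcg_jacobi(A, b)
print(np.allclose(A @ x, b))
```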

  7. The case for expanding the definition of 'key populations' to include ...

    African Journals Online (AJOL)

    The case for expanding the definition of 'key populations' to include high-risk groups in the general population ... South African Medical Journal ... to formal housing and services, access to higher education, and broad economic transformation.

  8. Natural history of benign prostatic hyperplasia: Appropriate case definition and estimation of its prevalence in the community

    NARCIS (Netherlands)

    J.L.H.R. Bosch (Ruud); W.C.J. Hop (Wim); W.J. Kirkels (Wim); F.H. Schröder (Fritz)

    1995-01-01

    textabstractThere is no consensus about a case definition of benign prostatic hyperplasia (BPH). In the present study, BPH prevalence rates were determined using various case definitions based on a combination of clinical parameters used to describe the properties of BPH: symptoms of prostatism,

  9. On distribution reduction and algorithm implementation in inconsistent ordered information systems.

    Science.gov (United States)

    Zhang, Yanqin

    2014-01-01

    As one part of our work on ordered information systems, distribution reduction is studied in inconsistent ordered information systems (OISs). Some important properties of distribution reduction are studied and discussed. The dominance matrix is restated for reduction acquisition in dominance-relation-based information systems. A matrix algorithm for distribution reduction acquisition is presented step by step, and a program implementing the algorithm is provided. The approach provides an effective tool for theoretical research and for applications of ordered information systems in practice. For more detailed and valid illustration, cases are employed to explain and verify the algorithm and the program, which shows the effectiveness of the algorithm in complicated information systems.

  10. Sampling algorithms for validation of supervised learning models for Ising-like systems

    Science.gov (United States)

    Portman, Nataliya; Tamblyn, Isaac

    2017-12-01

    In this paper, we build and explore supervised learning models of ferromagnetic system behavior, using Monte-Carlo sampling of the spin configuration space generated by the 2D Ising model. Given the enormous size of the space of all possible Ising model realizations, the question arises as to how to choose a reasonable number of samples that will form physically meaningful and non-intersecting training and testing datasets. Here, we propose a sampling technique called "ID-MH" that uses the Metropolis-Hastings algorithm creating Markov process across energy levels within the predefined configuration subspace. We show that application of this method retains phase transitions in both training and testing datasets and serves the purpose of validation of a machine learning algorithm. For larger lattice dimensions, ID-MH is not feasible as it requires knowledge of the complete configuration space. As such, we develop a new "block-ID" sampling strategy: it decomposes the given structure into square blocks with lattice dimension N ≤ 5 and uses ID-MH sampling of candidate blocks. Further comparison of the performance of commonly used machine learning methods such as random forests, decision trees, k nearest neighbors and artificial neural networks shows that the PCA-based Decision Tree regressor is the most accurate predictor of magnetizations of the Ising model. For energies, however, the accuracy of prediction is not satisfactory, highlighting the need to consider more algorithmically complex methods (e.g., deep learning).
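
    As a baseline for generating labeled spin configurations, the sketch below runs a standard single-spin-flip Metropolis sampler for the 2D Ising model; the paper's "ID-MH" and "block-ID" strategies add energy-level and block structure on top of this and are not reproduced here.

```python
# Standard single-spin-flip Metropolis sampling of the 2D Ising model (J = 1, no field).
import numpy as np

def metropolis_ising(L=16, beta=0.44, sweeps=500, seed=0):
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nb          # energy change of flipping spin (i, j)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1
    return spins

config = metropolis_ising()
print(abs(config.mean()))   # |magnetization| per spin; large near/below T_c (beta ~ 0.44)
```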

  11. Improvements in pencil beam scanning proton therapy dose calculation accuracy in brain tumor cases with a commercial Monte Carlo algorithm.

    Science.gov (United States)

    Widesott, Lamberto; Lorentini, Stefano; Fracchiolla, Francesco; Farace, Paolo; Schwarz, Marco

    2018-05-04

    Purpose: Validation of a commercial Monte Carlo (MC) algorithm (RayStation ver6.0.024) for the treatment of brain tumours with pencil beam scanning (PBS) proton therapy, comparing it via measurements and analytical calculations in clinically realistic scenarios. Methods: For the measurements, a 2D ion chamber array detector (MatriXX PT) was placed underneath the following targets: 1) an anthropomorphic head phantom (with two different thicknesses) and 2) a biological sample (i.e. half a lamb's head). In addition, we compared the MC dose engine vs. the RayStation pencil beam (PB) algorithm clinically implemented so far, in critical conditions such as superficial targets (i.e. in need of a range shifter), different air gaps and gantry angles to simulate both orthogonal and tangential beam arrangements. For every plan the PB and MC dose calculations were compared to measurements using a gamma analysis metric (3%, 3 mm). Results: Regarding the head phantom, the gamma passing rate (GPR) was always >96% and on average >99% for the MC algorithm; the PB algorithm had a GPR ≤90% for all the delivery configurations with a single slab (apart from a 95% GPR at gantry 0° and small air gap), and in the case of two slabs of the head phantom the GPR was >95% only for small air gaps for all three (0°, 45° and 70°) simulated beam gantry angles. Overall the PB algorithm tends to overestimate the dose to the target (up to 25%) and underestimate the dose to the organs at risk (up to 30%). We found similar results (but somewhat worse for the PB algorithm) for the two targets of the lamb's head, where only two beam gantry angles were simulated. Conclusions: Our results suggest that in PBS proton therapy a range shifter (RS) needs to be used with extreme caution when planning the treatment with an analytical algorithm, due to potentially large discrepancies between the planned dose and the dose delivered to the patients, also in the case of brain tumours where this issue could be underestimated. Our results also
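
    For readers unfamiliar with the gamma metric quoted above, the sketch below computes a simplified 1D global gamma (3%, 3 mm) pass rate on synthetic profiles; clinical analyses are performed on 2D detector planes with interpolation and are more involved.

```python
# Simplified 1D global gamma (3%, 3 mm) pass-rate computation on synthetic profiles.
import numpy as np

def gamma_pass_rate(x, d_ref, d_eval, dose_crit=0.03, dta_mm=3.0, threshold=0.1):
    """Fraction of reference points with gamma <= 1 (global normalization)."""
    norm = dose_crit * d_ref.max()
    passed, considered = 0, 0
    for xi, di in zip(x, d_ref):
        if di < threshold * d_ref.max():
            continue                              # ignore low-dose region
        gamma2 = ((x - xi) / dta_mm) ** 2 + ((d_eval - di) / norm) ** 2
        considered += 1
        passed += gamma2.min() <= 1.0
    return passed / considered

x = np.arange(0.0, 100.0, 1.0)                    # positions in mm
ref = np.exp(-((x - 50.0) / 15.0) ** 2)           # "measured" profile
evl = 1.02 * np.exp(-((x - 51.0) / 15.0) ** 2)    # "calculated" profile, shifted and scaled
print(gamma_pass_rate(x, ref, evl))               # close to 1.0 for small deviations
```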

  12. A Multi-Scale Method for Dynamics Simulation in Continuum Solvent Models I: Finite-Difference Algorithm for Navier-Stokes Equation.

    Science.gov (United States)

    Xiao, Li; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray

    2014-11-25

    A multi-scale framework is proposed for more realistic molecular dynamics simulations in continuum solvent models by coupling a molecular mechanics treatment of solute with a fluid mechanics treatment of solvent. This article reports our initial efforts to formulate the physical concepts necessary for coupling the two mechanics and develop a 3D numerical algorithm to simulate the solvent fluid via the Navier-Stokes equation. The numerical algorithm was validated with multiple test cases. The validation shows that the algorithm is effective and stable, with observed accuracy consistent with our design.

  13. FMRQ-A Multiagent Reinforcement Learning Algorithm for Fully Cooperative Tasks.

    Science.gov (United States)

    Zhang, Zhen; Zhao, Dongbin; Gao, Junwei; Wang, Dongqing; Dai, Yujie

    2017-06-01

    In this paper, we propose a multiagent reinforcement learning algorithm dealing with fully cooperative tasks. The algorithm is called frequency of the maximum reward Q-learning (FMRQ). FMRQ aims to achieve one of the optimal Nash equilibria so as to optimize the performance index in multiagent systems. The frequency of obtaining the highest global immediate reward instead of immediate reward is used as the reinforcement signal. With FMRQ each agent does not need the observation of the other agents' actions and only shares its state and reward at each step. We validate FMRQ through case studies of repeated games: four cases of two-player two-action and one case of three-player two-action. It is demonstrated that FMRQ can converge to one of the optimal Nash equilibria in these cases. Moreover, comparison experiments on tasks with multiple states and finite steps are conducted. One is box-pushing and the other one is distributed sensor network problem. Experimental results show that the proposed algorithm outperforms others with higher performance.

  14. Definitions and validation criteria for biomarkers and surrogate endpoints: development and testing of a quantitative hierarchical levels of evidence schema

    DEFF Research Database (Denmark)

    Lassere, Marissa N; Johnson, Kent R; Boers, Maarten

    2007-01-01

    endpoints, and leading indicators, a quantitative surrogate validation schema was developed and subsequently evaluated at a stakeholder workshop. RESULTS: The search identified several classification schema and definitions. Components of these were incorporated into a new quantitative surrogate validation...... of the National Institutes of Health definitions of biomarker, surrogate endpoint, and clinical endpoint was useful. CONCLUSION: Further development and application of this schema provides incentives and guidance for effective biomarker and surrogate endpoint research, and more efficient drug discovery...... are then applied if there is serious counterevidence. A total score (0 to 15) determines the level of evidence, with Level 1 the strongest and Level 5 the weakest. It was proposed that the term "surrogate" be restricted to markers attaining Levels 1 or 2 only. Most stakeholders agreed that this operationalization...

  15. Uniform research case definition criteria differentiate tuberculous and bacterial meningitis in children.

    Science.gov (United States)

    Solomons, Regan S; Wessels, Marie; Visser, Douwe H; Donald, Peter R; Marais, Ben J; Schoeman, Johan F; van Furth, Anne M

    2014-12-01

    Tuberculous meningitis (TBM) research is hampered by low numbers of microbiologically confirmed TBM cases and the fact that they may represent a select part of the disease spectrum. A uniform TBM research case definition was developed to address these limitations, but its ability to differentiate TBM from bacterial meningitis has not been evaluated. We assessed all children treated for TBM from 1985 to 2005 at Tygerberg Children's Hospital, Cape Town, South Africa. For comparative purposes, a group of children with culture-confirmed bacterial meningitis, diagnosed between 2003 and 2009, was identified from the National Health Laboratory Service database. The performance of the proposed case definition was evaluated in culture-confirmed TBM and bacterial meningitis cases. Of 554 children treated for TBM, 66 (11.9%) were classified as "definite TBM," 408 (73.6%) as "probable TBM," and 72 (13.0%) as "possible TBM." "Probable TBM" criteria identified culture-confirmed TBM with a sensitivity of 86% and specificity of 100%; sensitivity was increased but specificity reduced when using "possible TBM" criteria (sensitivity 100%, specificity 56%). "Probable TBM" criteria accurately differentiated TBM from bacterial meningitis and could be considered for use in clinical trials; reduced sensitivity in children with early TBM (stage 1 disease) remains a concern. © The Author 2014. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. Groundwater Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures with the results indicating they are appropriate measures for evaluating model realizations. The use of validation

  17. Surrogate Endpoint Evaluation: Principal Stratification Criteria and the Prentice Definition.

    Science.gov (United States)

    Gilbert, Peter B; Gabriel, Erin E; Huang, Ying; Chan, Ivan S F

    2015-09-01

    A common problem of interest within a randomized clinical trial is the evaluation of an inexpensive response endpoint as a valid surrogate endpoint for a clinical endpoint, where a chief purpose of a valid surrogate is to provide a way to make correct inferences on clinical treatment effects in future studies without needing to collect the clinical endpoint data. Within the principal stratification framework for addressing this problem based on data from a single randomized clinical efficacy trial, a variety of definitions and criteria for a good surrogate endpoint have been proposed, all based on or closely related to the "principal effects" or "causal effect predictiveness (CEP)" surface. We discuss CEP-based criteria for a useful surrogate endpoint, including (1) the meaning and relative importance of proposed criteria including average causal necessity (ACN), average causal sufficiency (ACS), and large clinical effect modification; (2) the relationship between these criteria and the Prentice definition of a valid surrogate endpoint; and (3) the relationship between these criteria and the consistency criterion (i.e., assurance against the "surrogate paradox"). This includes the result that ACN plus a strong version of ACS generally do not imply the Prentice definition nor the consistency criterion, but they do have these implications in special cases. Moreover, the converse does not hold except in a special case with a binary candidate surrogate. The results highlight that assumptions about the treatment effect on the clinical endpoint before the candidate surrogate is measured are influential for the ability to draw conclusions about the Prentice definition or consistency. In addition, we emphasize that in some scenarios that occur commonly in practice, the principal strata sub-populations for inference are identifiable from the observable data, in which cases the principal stratification framework has relatively high utility for the purpose of effect

  18. Surrogate Endpoint Evaluation: Principal Stratification Criteria and the Prentice Definition

    Science.gov (United States)

    Gilbert, Peter B.; Gabriel, Erin E.; Huang, Ying; Chan, Ivan S.F.

    2015-01-01

    A common problem of interest within a randomized clinical trial is the evaluation of an inexpensive response endpoint as a valid surrogate endpoint for a clinical endpoint, where a chief purpose of a valid surrogate is to provide a way to make correct inferences on clinical treatment effects in future studies without needing to collect the clinical endpoint data. Within the principal stratification framework for addressing this problem based on data from a single randomized clinical efficacy trial, a variety of definitions and criteria for a good surrogate endpoint have been proposed, all based on or closely related to the “principal effects” or “causal effect predictiveness (CEP)” surface. We discuss CEP-based criteria for a useful surrogate endpoint, including (1) the meaning and relative importance of proposed criteria including average causal necessity (ACN), average causal sufficiency (ACS), and large clinical effect modification; (2) the relationship between these criteria and the Prentice definition of a valid surrogate endpoint; and (3) the relationship between these criteria and the consistency criterion (i.e., assurance against the “surrogate paradox”). This includes the result that ACN plus a strong version of ACS generally do not imply the Prentice definition nor the consistency criterion, but they do have these implications in special cases. Moreover, the converse does not hold except in a special case with a binary candidate surrogate. The results highlight that assumptions about the treatment effect on the clinical endpoint before the candidate surrogate is measured are influential for the ability to draw conclusions about the Prentice definition or consistency. In addition, we emphasize that in some scenarios that occur commonly in practice, the principal strata sub-populations for inference are identifiable from the observable data, in which cases the principal stratification framework has relatively high utility for the purpose of

  19. Validation of new 3D post processing algorithm for improved maximum intensity projections of MR angiography acquisitions in the brain

    Energy Technology Data Exchange (ETDEWEB)

    Bosmans, H; Verbeeck, R; Vandermeulen, D; Suetens, P; Wilms, G; Maaly, M; Marchal, G; Baert, A L [Louvain Univ. (Belgium)]

    1995-12-01

    The objective of this study was to validate a new post processing algorithm for improved maximum intensity projections (mip) of intracranial MR angiography acquisitions. The core of the post processing procedure is a new brain segmentation algorithm. Two seed areas, background and brain, are automatically detected. A 3D region grower then grows both regions towards each other and this preferentially towards white regions. In this way, the skin gets included into the final `background region` whereas cortical blood vessels and all brain tissues are included in the `brain region`. The latter region is then used for mip. The algorithm runs less than 30 minutes on a full dataset on a Unix workstation. Images from different acquisition strategies including multiple overlapping thin slab acquisition, magnetization transfer (MT) MRA, Gd-DTPA enhanced MRA, normal and high resolution acquisitions and acquisitions from mid field and high field systems were filtered. A series of contrast enhanced MRA acquisitions obtained with identical parameters was filtered to study the robustness of the filter parameters. In all cases, only a minimal manual interaction was necessary to segment the brain. The quality of the mip was significantly improved, especially in post Gd-DTPA acquisitions or using MT, due to the absence of high intensity signals of skin, sinuses and eyes that otherwise superimpose on the angiograms. It is concluded that the filter is a robust technique to improve the quality of MR angiograms.

  20. Validation of new 3D post processing algorithm for improved maximum intensity projections of MR angiography acquisitions in the brain

    International Nuclear Information System (INIS)

    Bosmans, H.; Verbeeck, R.; Vandermeulen, D.; Suetens, P.; Wilms, G.; Maaly, M.; Marchal, G.; Baert, A.L.

    1995-01-01

    The objective of this study was to validate a new post processing algorithm for improved maximum intensity projections (mip) of intracranial MR angiography acquisitions. The core of the post processing procedure is a new brain segmentation algorithm. Two seed areas, background and brain, are automatically detected. A 3D region grower then grows both regions towards each other and this preferentially towards white regions. In this way, the skin gets included into the final 'background region' whereas cortical blood vessels and all brain tissues are included in the 'brain region'. The latter region is then used for mip. The algorithm runs less than 30 minutes on a full dataset on a Unix workstation. Images from different acquisition strategies including multiple overlapping thin slab acquisition, magnetization transfer (MT) MRA, Gd-DTPA enhanced MRA, normal and high resolution acquisitions and acquisitions from mid field and high field systems were filtered. A series of contrast enhanced MRA acquisitions obtained with identical parameters was filtered to study the robustness of the filter parameters. In all cases, only a minimal manual interaction was necessary to segment the brain. The quality of the mip was significantly improved, especially in post Gd-DTPA acquisitions or using MT, due to the absence of high intensity signals of skin, sinuses and eyes that otherwise superimpose on the angiograms. It is concluded that the filter is a robust technique to improve the quality of MR angiograms

  1. Validation of the Saskatoon Falls Prevention Consortium's Falls Screening and Referral Algorithm

    Science.gov (United States)

    Lawson, Sara Nicole; Zaluski, Neal; Petrie, Amanda; Arnold, Cathy; Basran, Jenny

    2013-01-01

    ABSTRACT Purpose: To investigate the concurrent validity of the Saskatoon Falls Prevention Consortium's Falls Screening and Referral Algorithm (FSRA). Method: A total of 29 older adults (mean age 77.7 [SD 4.0] y) residing in an independent-living seniors' complex who met inclusion criteria completed a demographic questionnaire and the components of the FSRA and Berg Balance Scale (BBS). The FSRA consists of the Elderly Fall Screening Test (EFST) and the Multi-factor Falls Questionnaire (MFQ); it is designed to categorize individuals into low, moderate, or high fall-risk categories to determine appropriate management pathways. A predictive model for probability of fall risk, based on previous research, was used to determine the concurrent validity of the FSRA. Results: The FSRA placed 79% of participants into the low-risk category, whereas the predictive model found the probability of fall risk to range from 0.04 to 0.74, with a mean of 0.35 (SD 0.25). No statistically significant correlation was found between the FSRA and the predictive model for probability of fall risk (Spearman's ρ=0.35, p=0.06). Conclusion: The FSRA lacks concurrent validity relative to a previously established model of fall risk and appears to over-categorize individuals into the low-risk group. Further research on the FSRA as an adequate tool to screen community-dwelling older adults for fall risk is recommended. PMID:24381379

  2. On-line experimental validation of a model-based diagnostic algorithm dedicated to a solid oxide fuel cell system

    Science.gov (United States)

    Polverino, Pierpaolo; Esposito, Angelo; Pianese, Cesare; Ludwig, Bastian; Iwanschitz, Boris; Mai, Andreas

    2016-02-01

    In the current energetic scenario, Solid Oxide Fuel Cells (SOFCs) exhibit appealing features which make them suitable for environmental-friendly power production, especially for stationary applications. An example is represented by micro-combined heat and power (μ-CHP) generation units based on SOFC stacks, which are able to produce electric and thermal power with high efficiency and low pollutant and greenhouse gases emissions. However, the main limitations to their diffusion into the mass market consist in high maintenance and production costs and short lifetime. To improve these aspects, the current research activity focuses on the development of robust and generalizable diagnostic techniques, aimed at detecting and isolating faults within the entire system (i.e. SOFC stack and balance of plant). Coupled with appropriate recovery strategies, diagnosis can prevent undesired system shutdowns during faulty conditions, with consequent lifetime increase and maintenance costs reduction. This paper deals with the on-line experimental validation of a model-based diagnostic algorithm applied to a pre-commercial SOFC system. The proposed algorithm exploits a Fault Signature Matrix based on a Fault Tree Analysis and improved through fault simulations. The algorithm is characterized on the considered system and it is validated by means of experimental induction of faulty states in controlled conditions.
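
    To make the Fault Signature Matrix idea concrete, the toy sketch below matches an observed symptom vector against the columns of a small boolean matrix; the symptom and fault names are invented for the example and are not taken from the paper's SOFC system.

```python
# Toy fault isolation with a boolean Fault Signature Matrix (invented entries).
import numpy as np

symptoms = ["stack_voltage_low", "air_flow_low", "stack_temp_high", "fuel_pressure_low"]
faults = ["air_blower_fault", "fuel_leak", "reformer_degradation"]

# FSM[i, j] = 1 if fault j is expected to produce symptom i.
FSM = np.array([
    [1, 1, 1],   # stack_voltage_low
    [1, 0, 0],   # air_flow_low
    [1, 0, 1],   # stack_temp_high
    [0, 1, 0],   # fuel_pressure_low
])

def isolate(observed):
    """Return faults whose signature matches the observed symptom vector exactly."""
    obs = np.asarray(observed)
    return [faults[j] for j in range(FSM.shape[1]) if np.array_equal(FSM[:, j], obs)]

print(isolate([1, 1, 1, 0]))   # -> ['air_blower_fault']
```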

  3. Establishing model credibility involves more than validation

    International Nuclear Information System (INIS)

    Kirchner, T.

    1991-01-01

    One widely used definition of validation is that the quantitative test of the performance of a model through the comparison of model predictions to independent sets of observations from the system being simulated. The ability to show that the model predictions compare well with observations is often thought to be the most rigorous test that can be used to establish credibility for a model in the scientific community. However, such tests are only part of the process used to establish credibility, and in some cases may be either unnecessary or misleading. Naylor and Finger extended the concept of validation to include the establishment of validity for the postulates embodied in the model and the test of assumptions used to select postulates for the model. Validity of postulates is established through concurrence by experts in the field of study that the mathematical or conceptual model contains the structural components and mathematical relationships necessary to adequately represent the system with respect to the goals for the model. This extended definition of validation provides for consideration of the structure of the model, not just its performance, in establishing credibility. Evaluation of a simulation model should establish the correctness of the code and the efficacy of the model within its domain of applicability. (24 refs., 6 figs.)

  4. Research on Bridge Sensor Validation Based on Correlation in Cluster

    Directory of Open Access Journals (Sweden)

    Huang Xiaowei

    2016-01-01

    Full Text Available In order to avoid false alarms and alarm failures caused by sensor malfunction or failure, it is critical to diagnose faults and analyze failures of the sensor measuring systems in major infrastructures. Based on the real-time monitoring of bridges and a study of the correlation probability distribution between the multiple sensors adopted in the fault diagnosis system, a clustering algorithm based on k-medoids is proposed, dividing sensors of the same type into k clusters. Meanwhile, the value of k is optimized by a specially designed evaluation function. Building on the correlation of sensors within the same cluster, this paper then presents the definition and corresponding calculation algorithm of a sensor's validity. The algorithm is applied to the analysis of sensor data from an actual health monitoring system. The results reveal that the algorithm can not only accurately measure the degree of failure and locate the malfunction in the time domain but also quantitatively evaluate the performance of sensors and eliminate diagnostic errors caused by failure of the reference sensor.
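
    A minimal k-medoids clustering of sensor time series with a correlation-based distance is sketched below; it illustrates the grouping step described above, while the paper's evaluation function for choosing k and its validity measure are not reproduced.

```python
# Minimal k-medoids clustering of sensor time series with distance 1 - |Pearson r|.
import numpy as np

def corr_distance(X):
    """Pairwise distance 1 - |corrcoef| between rows of X (one row per sensor)."""
    return 1.0 - np.abs(np.corrcoef(X))

def k_medoids(D, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members):
                costs = D[np.ix_(members, members)].sum(axis=1)
                new_medoids[c] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return labels, medoids

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)
base1, base2 = np.sin(t), np.cos(2 * t)
X = np.vstack([base1 + 0.1 * rng.standard_normal(t.size) for _ in range(4)] +
               [base2 + 0.1 * rng.standard_normal(t.size) for _ in range(4)])
labels, _ = k_medoids(corr_distance(X), k=2)
print(labels)   # sensors driven by the same signal end up in the same cluster
```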

  5. LAI inversion algorithm based on directional reflectance kernels.

    Science.gov (United States)

    Tang, S; Chen, J M; Zhu, Q; Li, X; Chen, M; Sun, R; Zhou, Y; Deng, F; Xie, D

    2007-11-01

    Leaf area index (LAI) is an important ecological and environmental parameter. A new LAI algorithm is developed using the principles of ground LAI measurements based on canopy gap fraction. First, the relationship between LAI and gap fraction at various zenith angles is derived from the definition of LAI. Then, the directional gap fraction is acquired from a remote sensing bidirectional reflectance distribution function (BRDF) product. This acquisition is obtained by using a kernel driven model and a large-scale directional gap fraction algorithm. The algorithm has been applied to estimate a LAI distribution in China in mid-July 2002. The ground data acquired from two field experiments in Changbai Mountain and Qilian Mountain were used to validate the algorithm. To resolve the scale discrepancy between high resolution ground observations and low resolution remote sensing data, two TM images with a resolution approaching the size of ground plots were used to relate the coarse resolution LAI map to ground measurements. First, an empirical relationship between the measured LAI and a vegetation index was established. Next, a high resolution LAI map was generated using the relationship. The LAI value of a low resolution pixel was calculated from the area-weighted sum of high resolution LAIs composing the low resolution pixel. The results of this comparison showed that the inversion algorithm has an accuracy of 82%. Factors that may influence the accuracy are also discussed in this paper.
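
    The core inversion can be illustrated with the familiar Beer's-law form of the gap-fraction relation: for a random canopy with a spherical leaf-angle distribution, P(θ) = exp(-G·LAI/cosθ) with G ≈ 0.5, so LAI follows from a directional gap fraction. This simplification is assumed here for the sketch; the operational kernel-driven retrieval in the paper is more elaborate.

```python
# Beer's-law gap-fraction sketch: invert LAI from a directional gap fraction.
import numpy as np

G = 0.5   # projection coefficient for a spherical leaf-angle distribution (assumed)

def gap_fraction(lai, theta_deg):
    return np.exp(-G * lai / np.cos(np.radians(theta_deg)))

def lai_from_gap(p_gap, theta_deg):
    return -np.log(p_gap) * np.cos(np.radians(theta_deg)) / G

theta = 57.5                      # "hinge" angle often used in ground measurements
true_lai = 3.2
p = gap_fraction(true_lai, theta)
print(p, lai_from_gap(p, theta))  # recovers the assumed LAI
```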

  6. Validation of new CFD release by Ground-Coupled Heat Transfer Test Cases

    Directory of Open Access Journals (Sweden)

    Sehnalek Stanislav

    2017-01-01

    Full Text Available In this article, validation of ANSYS Fluent against IEA BESTEST Task 34 is presented. The article starts with an outlook on the topic; afterwards, the steady-state cases used for validation are described. Thereafter, the implementation of these cases in CFD is described. The article concludes with a presentation of the simulated results and a comparison with results from simulation software already validated by the IEA. This validation shows a high correlation with an older version of the tested ANSYS release as well as with the other main software packages. The paper ends with a discussion and an outline of future research.

  7. Algorithm Development and Validation for Satellite-Derived Distributions of DOC and CDOM in the US Middle Atlantic Bight

    Science.gov (United States)

    Mannino, Antonio; Russ, Mary E.; Hooker, Stanford B.

    2007-01-01

    In coastal ocean waters, distributions of dissolved organic carbon (DOC) and chromophoric dissolved organic matter (CDOM) vary seasonally and interannually due to multiple source inputs and removal processes. We conducted several oceanographic cruises within the continental margin of the U.S. Middle Atlantic Bight (MAB) to collect field measurements in order to develop algorithms to retrieve CDOM and DOC from NASA's MODIS-Aqua and SeaWiFS satellite sensors. In order to develop empirical algorithms for CDOM and DOC, we correlated the CDOM absorption coefficient (a(sub cdom)) with in situ radiometry (remote sensing reflectance, Rrs, band ratios) and then correlated DOC to Rrs band ratios through the CDOM to DOC relationships. Our validation analyses demonstrate successful retrieval of DOC and CDOM from coastal ocean waters using the MODIS-Aqua and SeaWiFS satellite sensors with mean absolute percent differences from field measurements of cdom)(355)1,6 % for a(sub cdom)(443), and 12% for the CDOM spectral slope. To our knowledge, the algorithms presented here represent the first validated algorithms for satellite retrieval of a(sub cdom) DOC, and CDOM spectral slope in the coastal ocean. The satellite-derived DOC and a(sub cdom) products demonstrate the seasonal net ecosystem production of DOC and photooxidation of CDOM from spring to fall. With accurate satellite retrievals of CDOM and DOC, we will be able to apply satellite observations to investigate interannual and decadal-scale variability in surface CDOM and DOC within continental margins and monitor impacts of climate change and anthropogenic activities on coastal ecosystems.

  8. Optimization of vitamin K antagonist drug dose finding by replacement of the international normalized ratio by a bidirectional factor : validation of a new algorithm

    NARCIS (Netherlands)

    Beinema, M J; van der Meer, F J M; Brouwers, J R B J; Rosendaal, F R

    2016-01-01

    UNLABELLED: Essentials We developed a new algorithm to optimize vitamin K antagonist dose finding. Validation was by comparing actual dosing to algorithm predictions. Predicted and actual dosing of well performing centers were highly associated. The method is promising and should be tested in a

  9. Monte Carlo tests of the ELIPGRID-PC algorithm

    International Nuclear Information System (INIS)

    Davidson, J.R.

    1995-04-01

    The standard tool for calculating the probability of detecting pockets of contamination, called hot spots, has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
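
    The kind of Monte Carlo check described above can be sketched as follows: place an elliptical hot spot at a random position and orientation relative to a square sampling grid and count how often a grid node falls inside it. Grid spacing and hot-spot dimensions below are illustrative, and this is not the ELIPGRID algorithm itself.

```python
# Monte Carlo estimate of the hot-spot detection probability of a square grid.
import numpy as np

def hit_probability(grid_spacing, semi_major, shape, n_trials=50_000, seed=0):
    """shape = semi-minor / semi-major axis ratio of the elliptical hot spot."""
    rng = np.random.default_rng(seed)
    a, b = semi_major, shape * semi_major
    hits = 0
    for _ in range(n_trials):
        # Random hot-spot centre within one grid cell, random orientation.
        cx, cy = rng.uniform(0.0, grid_spacing, size=2)
        phi = rng.uniform(0.0, np.pi)
        found = False
        # Only grid nodes within the semi-major axis of the centre can be hit.
        i_range = range(int(np.floor((cx - a) / grid_spacing)),
                        int(np.ceil((cx + a) / grid_spacing)) + 1)
        j_range = range(int(np.floor((cy - a) / grid_spacing)),
                        int(np.ceil((cy + a) / grid_spacing)) + 1)
        for i in i_range:
            for j in j_range:
                dx, dy = i * grid_spacing - cx, j * grid_spacing - cy
                # Rotate the node into the ellipse frame and test membership.
                u = dx * np.cos(phi) + dy * np.sin(phi)
                v = -dx * np.sin(phi) + dy * np.cos(phi)
                if (u / a) ** 2 + (v / b) ** 2 <= 1.0:
                    found = True
                    break
            if found:
                break
        hits += found
    return hits / n_trials

print(hit_probability(grid_spacing=10.0, semi_major=4.0, shape=0.5))
```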

  10. A two-domain real-time algorithm for optimal data reduction: a case study on accelerator magnet measurements

    International Nuclear Information System (INIS)

    Arpaia, Pasquale; Buzio, Marco; Inglese, Vitaliano

    2010-01-01

    A real-time algorithm of data reduction, based on the combination of two lossy techniques specifically optimized for high-rate magnetic measurements in two domains (e.g. time and space), is proposed. The first technique exploits an adaptive sampling rule based on the power estimation of the flux increments in order to optimize the information to be gathered for magnetic field analysis in real time. The tracking condition is defined by the target noise level in the Nyquist band required by the post-processing procedure of magnetic analysis. The second technique uses a data reduction algorithm in order to improve the compression ratio while preserving the consistency of the measured signal. The allowed loss is set equal to the random noise level in the signal in order to force the loss and the noise to cancel rather than to add, by improving the signal-to-noise ratio. Numerical analysis and experimental results of on-field performance characterization and validation for two case studies of magnetic measurement systems for testing magnets of the Large Hadron Collider at the European Organization for Nuclear Research (CERN) are reported

  11. Case-finding for common mental disorders of anxiety and depression in primary care: an external validation of routinely collected data.

    Science.gov (United States)

    John, Ann; McGregor, Joanne; Fone, David; Dunstan, Frank; Cornish, Rosie; Lyons, Ronan A; Lloyd, Keith R

    2016-03-15

    The robustness of epidemiological research using routinely collected primary care electronic data to support policy and practice for common mental disorders (CMD) anxiety and depression would be greatly enhanced by appropriate validation of diagnostic codes and algorithms for data extraction. We aimed to create a robust research platform for CMD using population-based, routinely collected primary care electronic data. We developed a set of Read code lists (diagnosis, symptoms, treatments) for the identification of anxiety and depression in the General Practice Database (GPD) within the Secure Anonymised Information Linkage Databank at Swansea University, and assessed 12 algorithms for Read codes to define cases according to various criteria. Annual incidence rates were calculated per 1000 person years at risk (PYAR) to assess recording practice for these CMD between January 1(st) 2000 and December 31(st) 2009. We anonymously linked the 2799 MHI-5 Caerphilly Health and Social Needs Survey (CHSNS) respondents aged 18 to 74 years to their routinely collected GP data in SAIL. We estimated the sensitivity, specificity and positive predictive value of the various algorithms using the MHI-5 as the gold standard. The incidence of combined depression/anxiety diagnoses remained stable over the ten-year period in a population of over 500,000 but symptoms increased from 6.5 to 20.7 per 1000 PYAR. A 'historical' GP diagnosis for depression/anxiety currently treated plus a current diagnosis (treated or untreated) resulted in a specificity of 0.96, sensitivity 0.29 and PPV 0.76. Adding current symptom codes improved sensitivity (0.32) with a marginal effect on specificity (0.95) and PPV (0.74). We have developed an algorithm with a high specificity and PPV of detecting cases of anxiety and depression from routine GP data that incorporates symptom codes to reflect GP coding behaviour. We have demonstrated that using diagnosis and current treatment alone to identify cases for
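
    The validation indices quoted above follow directly from the 2x2 cross-tabulation of gold-standard and algorithm case flags; a generic helper is sketched below with a tiny made-up example, purely to show the arithmetic.

```python
# Sensitivity, specificity, PPV and NPV from per-person gold-standard vs. algorithm flags.
def validation_indices(gold, algo):
    tp = sum(g and a for g, a in zip(gold, algo))
    fp = sum((not g) and a for g, a in zip(gold, algo))
    fn = sum(g and (not a) for g, a in zip(gold, algo))
    tn = sum((not g) and (not a) for g, a in zip(gold, algo))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Tiny made-up example: 10 people, gold-standard CMD status vs. algorithm case flag.
gold = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
algo = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]
print(validation_indices(gold, algo))
```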

  12. 76 FR 38051 - Defense Federal Acquisition Regulation Supplement; Definition of Sexual Assault (DFARS Case 2010...

    Science.gov (United States)

    2011-06-29

    ... Sexual Assault/Harassment Involving DoD Contractors During Contingency Operations,'' dated April 16, 2010... Federal Acquisition Regulation Supplement; Definition of Sexual Assault (DFARS Case 2010-D023) AGENCY... employees accompanying U.S. Armed Forces are made aware of the DoD definition of sexual assault as defined...

  13. Evaluation of amplitude-based sorting algorithm to reduce lung tumor blurring in PET images using 4D NCAT phantom.

    Science.gov (United States)

    Wang, Jiali; Byrne, James; Franquiz, Juan; McGoron, Anthony

    2007-08-01

    To develop and validate a PET sorting algorithm based on the respiratory amplitude to correct for abnormal respiratory cycles. Using the 4D NCAT phantom model, 3D PET images were simulated in the lung and other structures at different times within a respiratory cycle, and noise was added. To validate the amplitude binning algorithm, the NCAT phantom was used to simulate one case with five different respiratory periods and another case with five respiratory periods together with five respiratory amplitudes. Comparisons were performed between gated and un-gated images and between the new amplitude binning algorithm and the time binning algorithm by calculating the mean number of counts in the ROI (region of interest). An average improvement of 8.87+/-5.10% was reported for a total of 16 tumors with different tumor sizes and different T/B (tumor to background) ratios using the new sorting algorithm. As both the T/B ratio and tumor size decrease, image degradation due to respiration increases. The greater benefit for smaller tumor diameters and lower T/B ratios indicates a potential improvement in detecting the more problematic tumors.
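
    The amplitude-binning step can be sketched as assigning each sample of the respiratory trace to one of several equal-width amplitude bins, rather than to a phase (time) bin; the waveform and bin count below are illustrative and not taken from the NCAT setup.

```python
# Amplitude-based binning of a respiratory trace (illustrative waveform).
import numpy as np

def amplitude_bins(signal, n_bins):
    """Assign each sample to one of n_bins equal-width amplitude bins."""
    lo, hi = signal.min(), signal.max()
    edges = np.linspace(lo, hi, n_bins + 1)
    idx = np.clip(np.digitize(signal, edges[1:-1]), 0, n_bins - 1)
    return idx

t = np.linspace(0, 30, 3000)                     # 30 s of respiratory trace
# Irregular breathing: drifting amplitude and period.
signal = (1.0 + 0.3 * np.sin(0.1 * t)) * np.sin(2 * np.pi * t / (4.0 + 0.5 * np.sin(0.2 * t)))
bins = amplitude_bins(signal, n_bins=5)
for b in range(5):
    print(b, int((bins == b).sum()), "samples")  # events grouped by amplitude, not time
```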

  14. An analytic parton shower. Algorithms, implementation and validation

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, Sebastian

    2012-06-15

    The realistic simulation of particle collisions is an indispensable tool to interpret the data measured at high-energy colliders, for example the now running Large Hadron Collider at CERN. These collisions at these colliders are usually simulated in the form of exclusive events. This thesis focuses on the perturbative QCD part involved in the simulation of these events, particularly parton showers and the consistent combination of parton showers and matrix elements. We present an existing parton shower algorithm for emissions off final state partons along with some major improvements. Moreover, we present a new parton shower algorithm for emissions off incoming partons. The aim of these particular algorithms, called analytic parton shower algorithms, is to be able to calculate the probabilities for branchings and for whole events after the event has been generated. This allows a reweighting procedure to be applied after the events have been simulated. We show a detailed description of the algorithms, their implementation and the interfaces to the event generator WHIZARD. Moreover we discuss the implementation of a MLM-type matching procedure and an interface to the shower and hadronization routines from PYTHIA. Finally, we compare several predictions by our implementation to experimental measurements at LEP, Tevatron and LHC, as well as to predictions obtained using PYTHIA. (orig.)

  15. An analytic parton shower. Algorithms, implementation and validation

    International Nuclear Information System (INIS)

    Schmidt, Sebastian

    2012-06-01

    The realistic simulation of particle collisions is an indispensable tool to interpret the data measured at high-energy colliders, for example the now running Large Hadron Collider at CERN. These collisions at these colliders are usually simulated in the form of exclusive events. This thesis focuses on the perturbative QCD part involved in the simulation of these events, particularly parton showers and the consistent combination of parton showers and matrix elements. We present an existing parton shower algorithm for emissions off final state partons along with some major improvements. Moreover, we present a new parton shower algorithm for emissions off incoming partons. The aim of these particular algorithms, called analytic parton shower algorithms, is to be able to calculate the probabilities for branchings and for whole events after the event has been generated. This allows a reweighting procedure to be applied after the events have been simulated. We show a detailed description of the algorithms, their implementation and the interfaces to the event generator WHIZARD. Moreover we discuss the implementation of a MLM-type matching procedure and an interface to the shower and hadronization routines from PYTHIA. Finally, we compare several predictions by our implementation to experimental measurements at LEP, Tevatron and LHC, as well as to predictions obtained using PYTHIA. (orig.)

  16. A Human Proximity Operations System test case validation approach

    Science.gov (United States)

    Huber, Justin; Straub, Jeremy

    A Human Proximity Operations System (HPOS) poses numerous risks in a real-world environment. These risks range from mundane tasks such as avoiding walls and fixed obstacles to the critical need to keep people and processes safe in the context of the HPOS's situation-specific decision making. Validating the performance of an HPOS, which must operate in a real-world environment, is an ill-posed problem due to the complexity introduced by erratic (non-computer) actors. In order to prove the HPOS's usefulness, test cases must be generated to simulate possible actions of these actors, so that the HPOS can be shown to perform safely in the environments where it will be operated. The HPOS must demonstrate its ability to be as safe as a human across a wide range of foreseeable circumstances. This paper evaluates the use of test cases to validate HPOS performance and utility. It considers an HPOS's safe performance in the context of a common human activity, moving through a crowded corridor, and extrapolates from this to the suitability of using test cases for AI validation in other areas of prospective application.

  17. Diagnosis of measles by clinical case definition in dengue-endemic areas: implications for measles surveillance and control.

    OpenAIRE

    Dietz, V. J.; Nieburg, P.; Gubler, D. J.; Gomez, I.

    1992-01-01

    In many countries, measles surveillance relies heavily on the use of a standard clinical case definition; however, the clinical signs and symptoms of measles are similar to those of dengue. For example, during 1985, in Puerto Rico, 22 (23%) of 94 cases of illnesses with rashes that met the measles clinical case definition were serologically confirmed as measles, but 32 (34%) others were serologically confirmed as dengue. Retrospective analysis at the San Juan Laboratories of the Centers for D...

  18. Comparing yield and relative costs of WHO TB screening algorithms in selected risk groups among people aged 65 years and over in China, 2013.

    Directory of Open Access Journals (Sweden)

    Canyou Zhang

    Full Text Available To calculate the yield and cost per diagnosed tuberculosis (TB) case for three World Health Organization screening algorithms and one using the Chinese National TB program (NTP) TB suspect definitions, using data from a TB prevalence survey of people aged 65 years and over in China, 2013. This was an analytic study using data from the above survey. Risk groups were defined and the prevalence of new TB cases in each group calculated. Costs of each screening component were used to give indicative costs per case detected. Yield, number needed to screen (NNS) and cost per case were used to assess the algorithms. The prevalence survey identified 172 new TB cases in 34,250 participants. Prevalence varied greatly in different groups, from 131/100,000 to 4,651/100,000. Two groups were chosen to compare the algorithms. The medium-risk group (living in a rural area: men, or previous TB case, or close contact, or a BMI <18.5, or tobacco user) had an appreciably higher cost per case (USD 221, 298 and 963 in the three algorithms) than the high-risk group (all previous TB cases, all close contacts; USD 72, 108 and 309), but detected two to four times more TB cases in the population. Using a chest X-ray as the initial screening tool in the medium-risk group cost the most (USD 963) and detected 67% of all the new cases. Using the NTP definition of TB suspects made little difference. To "End TB", many more TB cases have to be identified. Screening only the highest-risk groups identified under 14% of the undetected cases. To "End TB", medium-risk groups will need to be screened. Using a CXR for initial screening results in a much higher yield, at what should be an acceptable cost.
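
    The screening metrics used in this record (yield, number needed to screen, and cost per detected case) reduce to simple arithmetic on counts and unit costs. The following minimal Python sketch illustrates the calculation; all figures and cost parameters in the example are hypothetical, not the survey's data.

```python
# Minimal sketch (not the study's actual cost model): yield, number needed to
# screen (NNS), and cost per detected TB case for one screening algorithm
# applied to a defined risk group. All example figures are hypothetical.

def screening_metrics(n_screened, n_detected, cost_per_person_screened,
                      cost_per_confirmatory_test, n_confirmatory_tests):
    """Return yield per 100,000 screened, NNS, and cost per detected case."""
    yield_per_100k = 1e5 * n_detected / n_screened
    nns = n_screened / n_detected if n_detected else float("inf")
    total_cost = (n_screened * cost_per_person_screened
                  + n_confirmatory_tests * cost_per_confirmatory_test)
    cost_per_case = total_cost / n_detected if n_detected else float("inf")
    return yield_per_100k, nns, cost_per_case

# Hypothetical example: 10,000 people screened by symptom questionnaire,
# 1,200 referred for confirmatory testing, 25 cases confirmed.
print(screening_metrics(n_screened=10_000, n_detected=25,
                        cost_per_person_screened=0.5,
                        cost_per_confirmatory_test=5.0,
                        n_confirmatory_tests=1_200))
```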

  19. 75 FR 73997 - Defense Federal Acquisition Regulation Supplement; Definition of Sexual Assault (DFARS Case 2010...

    Science.gov (United States)

    2010-11-30

    ... Inspector General audit D-2010-052, entitled ``Efforts to Prevent Sexual Assault/Harassment Involving DoD... Acquisition Regulation Supplement; Definition of Sexual Assault (DFARS Case 2010-D023) AGENCY: Defense..., to ensure contractor employees are aware of the DoD definition of ``sexual assault'' as defined in Do...

  20. Soil moisture mapping using Sentinel 1 images: the proposed approach and its preliminary validation carried out in view of an operational product

    Science.gov (United States)

    Paloscia, S.; Pettinato, S.; Santi, E.; Pierdicca, N.; Pulvirenti, L.; Notarnicola, C.; Pace, G.; Reppucci, A.

    2011-11-01

    The main objective of this research is to develop, test and validate a soil moisture (SMC) algorithm for the GMES Sentinel-1 characteristics, within the framework of an ESA project. The SMC product, to be generated from Sentinel-1 data, requires an algorithm able to run operationally in near-real-time and deliver the product to the GMES services within 3 hours of observation. Two complementary approaches have been proposed: an Artificial Neural Network (ANN), which represents the best compromise between retrieval accuracy and processing time, thus allowing compliance with the timeliness requirements; and a Bayesian multi-temporal approach, which increases the retrieval accuracy, especially in cases where little ancillary data are available, at the cost of computational efficiency, taking advantage of the frequent revisit time achieved by Sentinel-1. The algorithm was validated in several test areas in Italy, the US and Australia, and finally in Spain with a 'blind' validation. The multi-temporal Bayesian algorithm was validated in Central Italy. The validation results are in all cases very much in line with the requirements. However, the blind validation results were penalized by the availability of only VV-polarization SAR images and low-resolution MODIS NDVI, although the RMS error is only slightly above 4%.

  1. Puffy skin disease (PSD) in rainbow trout, Oncorhynchus mykiss (Walbaum): a case definition.

    Science.gov (United States)

    Maddocks, C E; Nolan, E T; Feist, S W; Crumlish, M; Richards, R H; Williams, C F

    2015-07-01

    Puffy skin disease (PSD) is a disease that causes skin pathology in rainbow trout, Oncorhynchus mykiss (Walbaum). The incidence of PSD in UK fish farms and fisheries has increased sharply in the last decade, with growing concern from both industry sectors. This paper provides the first comprehensive case definition of PSD, combining clinical and pathological observations of diseased rainbow trout from both fish farms and fisheries. The defining features of PSD, as summarized in the case definition, were focal lateral flank skin lesions that appeared as cutaneous swelling with pigment loss and petechiae. These were associated with lethargy, poor body condition, inappetence and low-level mortality. Epidermal hyperplasia and spongiosis, oedema of the dermis stratum spongiosum and a mild diffuse inflammatory cellularity were typical in histopathology of skin. A specific pathogen or aetiology was not identified. Prevalence and severity of skin lesions were greatest during late summer and autumn, with the highest prevalence being 95%. Atypical lesions seen in winter and spring were suggestive of clinical resolution. PSD holds important implications for both trout aquaculture and still-water trout fisheries. This case definition will aid future diagnosis, help avoid confusion with other skin conditions and promote prompt and consistent reporting. © 2014 John Wiley & Sons Ltd.

  2. Bridging Ground Validation and Algorithms: Using Scattering and Integral Tables to Incorporate Observed DSD Correlations into Satellite Algorithms

    Science.gov (United States)

    Williams, C. R.

    2012-12-01

    The NASA Global Precipitation Mission (GPM) raindrop size distribution (DSD) Working Group is composed of NASA PMM Science Team members and is charged to "investigate the correlations between DSD parameters using Ground Validation (GV) data sets that support, or guide, the assumptions used in satellite retrieval algorithms." Correlations between DSD parameters can be used to constrain the unknowns and reduce the degrees of freedom in under-constrained satellite algorithms. Over the past two years, the GPM DSD Working Group has analyzed GV data and has found a correlation between the mass-weighted mean raindrop diameter (Dm) and the mass distribution standard deviation (Sm) that follows a power-law relationship. This Dm-Sm power-law relationship appears to be robust and has been observed in surface disdrometer and vertically pointing radar observations. One benefit of a Dm-Sm power-law relationship is that a three-parameter DSD can be modeled with just two parameters: Dm, and Nw, which determines the DSD amplitude. In order to incorporate observed DSD correlations into satellite algorithms, the GPM DSD Working Group is developing scattering and integral tables that can be used by satellite algorithms. Scattering tables describe the interaction of electromagnetic waves with individual particles, generating cross sections for backscattering, extinction, and scattering. Scattering tables are independent of the distribution of particles. Integral tables combine scattering table outputs with DSD parameters and DSD correlations to generate integrated normalized reflectivity, attenuation, scattering, emission, and asymmetry coefficients. Integral tables contain both frequency-dependent scattering properties and cloud microphysics. The GPM DSD Working Group has developed scattering tables for raindrops at both Dual-frequency Precipitation Radar (DPR) frequencies and at all GMI radiometer frequencies less than 100 GHz. Scattering tables include Mie and T-matrix scattering with H- and V
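
    The modeling benefit described here (a three-parameter DSD reduced to Dm and Nw via a Dm-Sm constraint) can be illustrated with a normalized gamma DSD whose shape parameter is fixed by a power law. The sketch below assumes a hypothetical power law sigma_m = a * Dm**b; the coefficients a and b are placeholders, not the Working Group's fitted values.

```python
import numpy as np
from math import gamma

def dsd_from_dm_nw(D, Dm, Nw, a=0.3, b=1.1):
    """Normalized gamma drop size distribution N(D) [mm^-1 m^-3] whose shape
    parameter mu is constrained by a hypothetical power law
    sigma_m = a * Dm**b (mass-spectrum standard deviation, mm).

    For a gamma DSD, sigma_m = Dm / sqrt(4 + mu), so mu = (Dm/sigma_m)**2 - 4.
    D and Dm are in mm; Nw is in mm^-1 m^-3.
    """
    sigma_m = a * Dm**b
    mu = (Dm / sigma_m) ** 2 - 4.0
    # Normalization factor of the normalized gamma form (Testud-type).
    f_mu = (6.0 / 4.0**4) * (4.0 + mu) ** (mu + 4.0) / gamma(mu + 4.0)
    return Nw * f_mu * (D / Dm) ** mu * np.exp(-(4.0 + mu) * D / Dm)

D = np.linspace(0.1, 6.0, 60)            # drop diameters, mm
N = dsd_from_dm_nw(D, Dm=1.5, Nw=8000)   # hypothetical Dm and Nw
print(N[:5])
```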

  3. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs.

    Science.gov (United States)

    Gulshan, Varun; Peng, Lily; Coram, Marc; Stumpe, Martin C; Wu, Derek; Narayanaswamy, Arunachalam; Venugopalan, Subhashini; Widner, Kasumi; Madams, Tom; Cuadros, Jorge; Kim, Ramasamy; Raman, Rajiv; Nelson, Philip C; Mega, Jessica L; Webster, Dale R

    2016-12-13

    Deep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation. To apply deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs. A specific type of neural network optimized for image classification called a deep convolutional neural network was trained using a retrospective development data set of 128 175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists and ophthalmology senior residents between May and December 2015. The resultant algorithm was validated in January and February 2016 using 2 separate data sets, both graded by at least 7 US board-certified ophthalmologists with high intragrader consistency. Deep learning-trained algorithm. The sensitivity and specificity of the algorithm for detecting referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both, were generated based on the reference standard of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2 operating points selected from the development set, one selected for high specificity and another for high sensitivity. The EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0

  4. Hantavirus pulmonary syndrome clinical findings: evaluating a surveillance case definition.

    Science.gov (United States)

    Knust, Barbara; Macneil, Adam; Rollin, Pierre E

    2012-05-01

    Clinical cases of hantavirus pulmonary syndrome (HPS) can be challenging to differentiate from other acute respiratory diseases, which can lead to delays in diagnosis, treatment, and disease reporting. Rapid onset of severe disease occurs, at times before diagnostic test results are available. This study's objective was to examine the clinical characteristics of patients that would indicate HPS to aid in detection and reporting. Test results of blood samples from U.S. patients suspected of having HPS submitted to the Centers for Disease Control and Prevention from 1998-2010 were reviewed. Patient information collected by case report forms was compared between HPS-confirmed and test-negative patients. Diagnostic sensitivity, specificity, predictive values, and likelihood ratios were calculated for individual clinical findings and combinations of variables. Of 567 patients included, 36% were HPS-confirmed. Thrombocytopenia, chest x-rays with suggestive signs, and receiving supplemental oxygenation were highly sensitive (>95%), while elevated hematocrit was highly specific (83%) in detecting HPS. Combinations that maximized sensitivity required the presence of thrombocytopenia. Using a national sample of suspect patients, we found that thrombocytopenia was a highly sensitive indicator of HPS and should be included in surveillance definitions for suspected HPS. Using a sensitive suspect case definition to identify potential HPS patients that are confirmed by highly specific diagnostic testing will ensure accurate reporting of this disease.
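
    The diagnostic quantities reported in this record (sensitivity, specificity, predictive values and likelihood ratios) all derive from a 2x2 table of a clinical finding against confirmed case status. A minimal Python sketch, with hypothetical counts rather than the study's data:

```python
# Minimal sketch of 2x2 diagnostic metrics for one candidate clinical
# finding versus laboratory-confirmed case status. Counts are hypothetical.

def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    lr_pos = sens / (1 - spec) if spec < 1 else float("inf")
    lr_neg = (1 - sens) / spec if spec > 0 else float("inf")
    return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv,
                lr_positive=lr_pos, lr_negative=lr_neg)

# Hypothetical counts for "thrombocytopenia present" vs confirmed HPS:
print(diagnostic_metrics(tp=195, fp=210, fn=9, tn=153))
```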

  5. Governance by algorithms

    Directory of Open Access Journals (Sweden)

    Francesca Musiani

    2013-08-01

    Full Text Available Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawing from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms, and algorithms’ regulation of our society.

  6. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    Science.gov (United States)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  7. Retrieval Accuracy Assessment with Gap Detection for Case 2 Waters Chla Algorithms

    Science.gov (United States)

    Salem, S. I.; Higa, H.; Kim, H.; Oki, K.; Oki, T.

    2016-12-01

    Inland lakes and coastal regions, both types of Case 2 waters, should be continuously and accurately monitored, as the former contain 90% of the global liquid freshwater storage, while the latter provide most of the dissolved organic carbon (DOC), an important link in the global carbon cycle. The optical properties of Case 2 waters are dominated by three optically active components: phytoplankton, non-algal particles (NAP) and colored dissolved organic matter (CDOM). During the last three decades, researchers have proposed several algorithms to retrieve Chla concentration from the remote sensing reflectance. In this study, seven algorithms are assessed with various band combinations from multi- and hyper-spectral data with linear, polynomial and power regression approaches. To evaluate the performance of the 43 algorithm combination sets, 500,000 remote sensing reflectance spectra are simulated with a wide range of concentrations for Chla, NAP and CDOM. The concentrations of Chla and NAP vary from 1-200 (mg m-3) and 1-200 (g m-3), respectively, and the absorption of CDOM at 440 nm has the range 0.1-10 (m-1). It is found that the three-band algorithm (665, 709 and 754 nm) with the quadratic polynomial (3b_665_QP) shows the best overall performance. 3b_665_QP has the smallest error, with a root mean square error (RMSE) of 0.2 (mg m-3) and a mean absolute relative error (MARE) of 0.7%. The least accurate retrieval of Chla was obtained by the synthetic chlorophyll index algorithm, with RMSE and MARE of 35.8 mg m-3 and 160.4%, respectively. In general, Chla algorithms which incorporate the 665 nm band or a band-tuning technique perform better than those with 680 nm. In addition, the retrieval accuracy of Chla algorithms with quadratic polynomial and power regression approaches is consistently better than with linear ones. Analyzing Chla versus NAP concentrations, the 3b_665_QP outperforms the other algorithms for all Chla concentrations and NAP concentrations above 40
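
    The best-performing retrieval in this record combines a three-band reflectance index with a quadratic polynomial. The sketch below shows that general form (index, quadratic fit, and the RMSE/MARE error metrics); the fitted coefficients would come from a calibration data set, which is not reproduced here.

```python
import numpy as np

def three_band_index(R665, R709, R754):
    """Three-band reflectance index commonly used for Chla in turbid waters:
    X = (1/R665 - 1/R709) * R754."""
    return (1.0 / R665 - 1.0 / R709) * R754

def fit_quadratic_chla(X, chla):
    """Fit Chla = c2*X**2 + c1*X + c0 (the 'quadratic polynomial' form)."""
    return np.polyfit(X, chla, deg=2)

def rmse_mare(pred, obs):
    """Root mean square error and mean absolute relative error (percent)."""
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    mare = np.mean(np.abs(pred - obs) / obs) * 100.0
    return rmse, mare

# Hypothetical usage with simulated reflectances and a known Chla vector:
# coeffs = fit_quadratic_chla(three_band_index(R665, R709, R754), chla_true)
# chla_hat = np.polyval(coeffs, three_band_index(R665, R709, R754))
# print(rmse_mare(chla_hat, chla_true))
```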

  8. STAR Algorithm Integration Team - Facilitating operational algorithm development

    Science.gov (United States)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  9. Orbiting Carbon Observatory-2 (OCO-2) cloud screening algorithms: validation against collocated MODIS and CALIOP data

    Science.gov (United States)

    Taylor, Thomas E.; O'Dell, Christopher W.; Frankenberg, Christian; Partain, Philip T.; Cronk, Heather Q.; Savtchenko, Andrey; Nelson, Robert R.; Rosenthal, Emily J.; Chang, Albert Y.; Fisher, Brenden; Osterman, Gregory B.; Pollock, Randy H.; Crisp, David; Eldering, Annmarie; Gunson, Michael R.

    2016-03-01

    The objective of the National Aeronautics and Space Administration's (NASA) Orbiting Carbon Observatory-2 (OCO-2) mission is to retrieve the column-averaged carbon dioxide (CO2) dry air mole fraction (XCO2) from satellite measurements of reflected sunlight in the near-infrared. These estimates can be biased by cloud and aerosol contamination within the instrument's field of view. Screening of the most contaminated soundings minimizes unnecessary calls to the computationally expensive Level 2 (L2) XCO2 retrieval algorithm. Hence, robust cloud screening methods have been an important focus of the OCO-2 algorithm development team. Two distinct, computationally inexpensive cloud screening algorithms have been developed for this application. The A-Band Preprocessor (ABP) retrieves the surface pressure using measurements in the 0.76 µm O2 A band, neglecting scattering by clouds and aerosols, which introduce photon path-length differences that can cause large deviations between the expected and retrieved surface pressure. The Iterative Maximum A Posteriori (IMAP) Differential Optical Absorption Spectroscopy (DOAS) Preprocessor (IDP) retrieves independent estimates of the CO2 and H2O column abundances using observations taken at 1.61 µm (weak CO2 band) and 2.06 µm (strong CO2 band), while neglecting atmospheric scattering. The CO2 and H2O column abundances retrieved in these two spectral regions differ significantly in the presence of cloud and scattering aerosols. The combination of these two algorithms, which are sensitive to different features in the spectra, provides the basis for cloud screening of the OCO-2 data set. To validate the OCO-2 cloud screening approach, collocated measurements from NASA's Moderate Resolution Imaging Spectrometer (MODIS), aboard the Aqua platform, were compared to results from the two OCO-2 cloud screening algorithms. With tuning of algorithmic threshold parameters that allows for processing of ≃ 20-25 % of all OCO-2 soundings
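
    The screening logic described here combines two cheap, independent tests: an ABP-style surface-pressure consistency check and an IDP-style weak/strong CO2 band comparison. The following Python sketch shows how such tests might be combined into a single pass/fail flag; the threshold values are placeholders, not the operational OCO-2 settings.

```python
# Minimal sketch of combining two independent preprocessor outputs into a
# single "cloud-free" flag, in the spirit of the ABP/IDP screening described
# above. Thresholds are placeholders, not operational values.

def abp_clear(p_retrieved_hpa, p_prior_hpa, dp_threshold_hpa=25.0):
    """ABP-style test: a large surface-pressure deviation implies photon
    path-length modification by cloud or aerosol scattering."""
    return abs(p_retrieved_hpa - p_prior_hpa) < dp_threshold_hpa

def idp_clear(co2_weak, co2_strong, ratio_bounds=(0.95, 1.05)):
    """IDP-style test: CO2 columns retrieved in the weak and strong bands
    should agree in clear-sky scenes."""
    ratio = co2_weak / co2_strong
    return ratio_bounds[0] < ratio < ratio_bounds[1]

def sounding_passes_cloud_screen(p_ret, p_prior, co2_weak, co2_strong):
    # A sounding is forwarded to the expensive L2 XCO2 retrieval only if
    # both computationally cheap screens flag it as clear.
    return abp_clear(p_ret, p_prior) and idp_clear(co2_weak, co2_strong)

print(sounding_passes_cloud_screen(1008.0, 1013.0, 398.5, 400.1))
```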

  10. Measuring elimination of podoconiosis, endemicity classifications, case definition and targets: an international Delphi exercise.

    Science.gov (United States)

    Deribe, Kebede; Wanji, Samuel; Shafi, Oumer; Muheki Tukahebwa, Edridah; Umulisa, Irenee; Davey, Gail

    2015-09-01

    Podoconiosis is one of the major causes of lymphoedema in the tropics. Nonetheless, there are currently no endemicity classifications or elimination targets to monitor the effects of interventions. This study aimed at establishing case definitions and indicators that can be used to assess endemicity, elimination and clinical outcomes of podoconiosis. This paper describes the results of a Delphi technique used among 28 experts. A questionnaire outlining possible case definitions, endemicity classifications, elimination targets and clinical outcomes was developed. The questionnaire was distributed to experts working on podoconiosis and other neglected tropical diseases in two rounds. The experts rated the importance of case definitions, endemicity classifications, elimination targets and the clinical outcome measures. The median and mode were used to describe the central tendency of expert responses. The coefficient of variation was used to describe the dispersion of expert responses. Consensus on definitions and indicators for assessing endemicity, elimination and clinical outcomes of podoconiosis, directed at policy makers and health workers, was achieved following the two Delphi rounds among the experts. Based on the two Delphi rounds we discuss potential indicators and endemicity classification of this disabling disease, and the ongoing challenges to its elimination in countries with the highest prevalence. Consensus will help to increase the effectiveness of podoconiosis elimination efforts and ensure comparability of outcome data. © The Author 2015. Published by Oxford University Press on behalf of Royal Society of Tropical Medicine and Hygiene.

  11. Orbiting Carbon Observatory-2 (OCO-2) cloud screening algorithms; validation against collocated MODIS and CALIOP data

    Science.gov (United States)

    Taylor, T. E.; O'Dell, C. W.; Frankenberg, C.; Partain, P.; Cronk, H. Q.; Savtchenko, A.; Nelson, R. R.; Rosenthal, E. J.; Chang, A. Y.; Fisher, B.; Osterman, G.; Pollock, R. H.; Crisp, D.; Eldering, A.; Gunson, M. R.

    2015-12-01

    The objective of the National Aeronautics and Space Administration's (NASA) Orbiting Carbon Observatory-2 (OCO-2) mission is to retrieve the column-averaged carbon dioxide (CO2) dry air mole fraction (XCO2) from satellite measurements of reflected sunlight in the near-infrared. These estimates can be biased by clouds and aerosols within the instrument's field of view (FOV). Screening of the most contaminated soundings minimizes unnecessary calls to the computationally expensive Level 2 (L2) XCO2 retrieval algorithm. Hence, robust cloud screening methods have been an important focus of the OCO-2 algorithm development team. Two distinct, computationally inexpensive cloud screening algorithms have been developed for this application. The A-Band Preprocessor (ABP) retrieves the surface pressure using measurements in the 0.76 μm O2 A-band, neglecting scattering by clouds and aerosols, which introduce photon path-length (PPL) differences that can cause large deviations between the expected and retrieved surface pressure. The Iterative Maximum A-Posteriori (IMAP) Differential Optical Absorption Spectroscopy (DOAS) Preprocessor (IDP) retrieves independent estimates of the CO2 and H2O column abundances using observations taken at 1.61 μm (weak CO2 band) and 2.06 μm (strong CO2 band), while neglecting atmospheric scattering. The CO2 and H2O column abundances retrieved in these two spectral regions differ significantly in the presence of cloud and scattering aerosols. The combination of these two algorithms, which key off of different features in the spectra, provides the basis for cloud screening of the OCO-2 data set. To validate the OCO-2 cloud screening approach, collocated measurements from NASA's Moderate Resolution Imaging Spectrometer (MODIS), aboard the Aqua platform, were compared to results from the two OCO-2 cloud screening algorithms. With tuning to allow throughputs of ≃ 30 %, agreement between the OCO-2 and MODIS cloud screening methods is found to be

  12. Expert system validation in prolog

    Science.gov (United States)

    Stock, Todd; Stachowitz, Rolf; Chang, Chin-Liang; Combs, Jacqueline

    1988-01-01

    An overview is given of the Expert System Validation Assistant (EVA), which is being implemented in Prolog at the Lockheed AI Center. Prolog was chosen to facilitate rapid prototyping of the structure and logic checkers, and since February 1987 we have implemented code to check for irrelevance, subsumption, duplication, dead ends, unreachability, and cycles. The architecture chosen is extremely flexible and expansible, yet concise and complementary with the normal interactive style of Prolog. The foundation of the system is the connection graph representation. Rules and facts are modeled as nodes in the graph, and arcs indicate common patterns between rules. The basic activity of the validation system is then a traversal of the connection graph, searching for various patterns the system recognizes as erroneous. To aid in specifying these patterns, a metalanguage is developed, providing the user with the basic facilities required to reason about the expert system. Using the metalanguage, the user can, for example, give the Prolog inference engine the goal of finding inconsistent conclusions among the rules, and Prolog will search the graph for instantiations which match the definition of inconsistency. Examples of code for some of the checkers are provided and the algorithms explained. Technical highlights include automatic construction of a connection graph, demonstration of the use of the metalanguage, the A* algorithm modified to detect all unique cycles, general-purpose stacks in Prolog, and a general-purpose database browser with pattern completion.

  13. Online cross-validation-based ensemble learning.

    Science.gov (United States)

    Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark

    2018-01-30

    Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach, including online estimation of the optimal ensemble of candidate online estimators. We illustrate the excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. Copyright © 2017 John Wiley & Sons, Ltd.
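
    The core idea of online cross-validation can be illustrated in a few lines: every incoming batch is first used to score each candidate online learner (acting as held-out data, since no candidate has trained on it yet), and only then used to update them; the candidate with the lowest cumulative loss is the selected algorithm. The sketch below is a simplified illustration with two SGD learners, not the authors' estimator.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

# Library of candidate online learners (here, two step-size choices).
candidates = {
    "sgd_small_step": SGDRegressor(learning_rate="constant", eta0=0.001),
    "sgd_large_step": SGDRegressor(learning_rate="constant", eta0=0.05),
}
cum_loss = {name: 0.0 for name in candidates}
rng = np.random.default_rng(0)
initialized = False

for t in range(200):                       # stream of incoming batches
    X = rng.normal(size=(32, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=32)
    if initialized:
        # Online cross-validation step: score each candidate on the new
        # batch before it has been trained on it.
        for name, model in candidates.items():
            cum_loss[name] += np.mean((model.predict(X) - y) ** 2)
    for model in candidates.values():       # then update every candidate
        model.partial_fit(X, y)
    initialized = True

best = min(cum_loss, key=cum_loss.get)      # cross-validation-selected learner
print(best, {k: round(v, 3) for k, v in cum_loss.items()})
```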

  14. Optimization of C4.5 algorithm-based particle swarm optimization for breast cancer diagnosis

    Science.gov (United States)

    Muslim, M. A.; Rukmana, S. H.; Sugiharti, E.; Prasetiyo, B.; Alimah, S.

    2018-03-01

    Data mining has become a basic methodology for computational applications in the medical domain. Data mining can be applied in the health field, for example to the diagnosis of breast cancer, heart disease, diabetes and other conditions. Breast cancer is the most common cancer in women, with more than one million cases and nearly 600,000 deaths occurring worldwide each year. The most effective way to reduce breast cancer deaths is early diagnosis. This study aims to determine the accuracy of breast cancer diagnosis. The data used are the Wisconsin Breast Cancer (WBC) dataset from the UCI machine learning repository. The method used in this research is the C4.5 algorithm with Particle Swarm Optimization (PSO) for feature selection and for optimizing the C4.5 algorithm. Ten-fold cross-validation and a confusion matrix are used as the validation method. The result of this research is that the accuracy of the C4.5 algorithm with particle swarm optimization increased by 0.88%.
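
    A minimal sketch of the evaluation pipeline follows: feature selection wrapped around a decision tree, assessed with ten-fold cross-validation and a confusion matrix. Note the substitutions relative to the paper: scikit-learn's tree is CART rather than C4.5, a greedy forward search stands in for particle swarm optimization, and scikit-learn's bundled breast cancer data stands in for the original WBC file.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.metrics import confusion_matrix

X, y = load_breast_cancer(return_X_y=True)
clf = DecisionTreeClassifier(random_state=0)

# Greedy forward feature selection (a stand-in for PSO), scored by
# 10-fold cross-validated accuracy, up to 5 features.
selected, remaining, best_score = [], list(range(X.shape[1])), 0.0
for _ in range(5):
    scores = {f: cross_val_score(clf, X[:, selected + [f]], y, cv=10).mean()
              for f in remaining}
    f_best = max(scores, key=scores.get)
    if scores[f_best] <= best_score:
        break
    best_score = scores[f_best]
    selected.append(f_best)
    remaining.remove(f_best)

# Confusion matrix from out-of-fold predictions on the selected features.
y_pred = cross_val_predict(clf, X[:, selected], y, cv=10)
print("selected features:", selected, "cv accuracy: %.4f" % best_score)
print(confusion_matrix(y, y_pred))
```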

  15. Optimization of the GSFC TROPOZ DIAL retrieval using synthetic lidar returns and ozonesondes - Part 1: Algorithm validation

    Science.gov (United States)

    Sullivan, J. T.; McGee, T. J.; Leblanc, T.; Sumnicht, G. K.; Twigg, L. W.

    2015-10-01

    The main purpose of the NASA Goddard Space Flight Center TROPospheric OZone DIfferential Absorption Lidar (GSFC TROPOZ DIAL) is to measure the vertical distribution of tropospheric ozone for science investigations. Because of the important health and climate impacts of tropospheric ozone, it is imperative to quantify background photochemical ozone concentrations and ozone layers aloft, especially during air quality episodes. For these reasons, this paper addresses the procedures necessary to validate the TROPOZ retrieval algorithm and confirm that it is properly representing ozone concentrations. This paper is focused on ensuring that the TROPOZ algorithm is properly quantifying ozone concentrations; a following paper will focus on a systematic uncertainty analysis. The methodology begins by simulating synthetic lidar returns from actual TROPOZ lidar return signals in combination with a known ozone profile. From these synthetic signals, it is possible to explicitly determine retrieval algorithm biases with respect to the known profile. This was performed systematically to identify any areas that need refinement for a new operational version of the TROPOZ retrieval algorithm. One immediate outcome of this exercise was that a bin registration error in the correction for detector saturation within the original retrieval was discovered and subsequently corrected. Another noticeable outcome was that the vertical smoothing in the retrieval algorithm was upgraded from a constant vertical resolution to a variable vertical resolution in order to control the statistical uncertainty. Overall, the exercise was quite successful.
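
    The validation strategy described here can be illustrated end-to-end: synthesize on-line and off-line lidar returns from a known ozone profile, run a basic DIAL retrieval (the range derivative of the log signal ratio divided by twice the differential absorption cross section), and measure the retrieval bias against the known profile. All profiles, cross sections and grid choices below are illustrative, not TROPOZ values.

```python
import numpy as np

z = np.arange(100.0, 10000.0, 50.0)            # range gates [m]
n_true = 1e18 * np.exp(-z / 6000.0)            # known ozone number density [m^-3]
sig_on, sig_off = 5.0e-23, 1.0e-23             # absorption cross sections [m^2]

def synthetic_return(sigma):
    """Beer-Lambert lidar return (1/z^2 geometry, two-way ozone absorption)."""
    tau = np.cumsum(sigma * n_true) * 50.0      # one-way ozone optical depth
    return (1.0 / z**2) * np.exp(-2.0 * tau)

P_on, P_off = synthetic_return(sig_on), synthetic_return(sig_off)

# Basic DIAL retrieval: n(z) = d/dz ln(P_off/P_on) / (2 * delta_sigma).
ratio = np.log(P_off / P_on)
n_retrieved = np.gradient(ratio, z) / (2.0 * (sig_on - sig_off))

# The retrieval bias with respect to the known profile quantifies algorithm error.
print("max relative bias: %.3e" % np.max(np.abs(n_retrieved - n_true) / n_true))
```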

  16. An algorithm of local earthquake detection from digital records

    Directory of Open Access Journals (Sweden)

    A. PROZOROV

    1978-06-01

    Full Text Available The problem of automatic detection of earthquake signals in seismograms and of the definition of the first arrivals of P and S waves is considered. The algorithm is based on the analysis of the t(A) function, which represents the time of first appearance of a number of successive swings with amplitudes greater than A in the seismic signal. This allows the algorithm to exploit common features of earthquake seismograms: a sudden first P-arrival with amplitude greater than the general noise level, followed after a definite time interval by an S-arrival whose amplitude exceeds that of the P-arrival. The method was applied to 3-channel records of Friuli aftershocks. S-arrivals were defined correctly in all cases; P-arrivals were defined in most cases using strict detection criteria, and no false signals were detected. All P-arrivals were defined using soft detection criteria, but with lower reliability, and two false events were obtained.
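
    A minimal sketch of a t(A)-style detector follows, assuming that a "swing" is a successive peak-to-trough excursion of the trace and that detection requires k consecutive swings exceeding the threshold A; the run length k and all signal parameters are illustrative, not taken from the original paper.

```python
import numpy as np

def swing_amplitudes(x):
    """Peak-to-trough amplitudes and indices of local extrema of a trace."""
    ext = np.where(np.diff(np.sign(np.diff(x))) != 0)[0] + 1
    return np.abs(np.diff(x[ext])), ext[1:]

def t_of_A(x, dt, A, k=3):
    """Time of first appearance of k consecutive swings with amplitude > A,
    or None if no such run exists."""
    amp, idx = swing_amplitudes(x)
    run = 0
    for a, i in zip(amp, idx):
        run = run + 1 if a > A else 0
        if run >= k:
            return i * dt
    return None

# Synthetic trace: noise followed by an "arrival" with larger swings.
rng = np.random.default_rng(1)
dt = 0.01
trace = 0.1 * rng.normal(size=2000)
trace[800:] += np.sin(2 * np.pi * 5 * np.arange(1200) * dt)
print(t_of_A(trace, dt, A=0.8, k=3))       # detection time near 8 s
```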

  17. Algorithm improvement program nuclide identification algorithm scoring criteria and scoring application.

    Energy Technology Data Exchange (ETDEWEB)

    Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  18. The CARPEDIEM Algorithm: A Rule-Based System for Identifying Heart Failure Phenotype with a Precision Public Health Approach

    Directory of Open Access Journals (Sweden)

    Michela Franchini

    2018-01-01

    Full Text Available Modern medicine remains dependent on the accurate evaluation of a patient’s health state, recognizing that disease is a process that evolves over time and interacts with many factors unique to that patient. The CARPEDIEM project represents a concrete attempt to address these issues by developing reproducible algorithms to support accuracy in the detection of complex diseases. This study aims to establish and validate the CARPEDIEM approach and algorithm for identifying those patients presenting with, or at risk of, heart failure (HF) by studying 153,393 subjects in Italy; the approach is based on administrative patient information flow databases and does not rely on the electronic health record to accomplish its goals. The resulting algorithm has been validated in a two-stage process, comparing predicted results with (1) HF diagnosis as identified by general practitioners (GPs) among the reference cohort and (2) HF diagnosis as identified by cardiologists within a randomly sampled subpopulation of 389 patients. The sources of data used to detect HF cases are numerous and were standardized for this study. The accuracy and the predictive values of the algorithm with respect to the GP and clinical standards are highly consistent with those from previous studies. In particular, the algorithm is more efficient in detecting the more severe cases of HF according to the GPs’ validation (specificity increases with the number of comorbidities) and the external validation (NYHA: II–IV; HF severity index: 2, 3). Positive and negative predictive values reveal that the CARPEDIEM algorithm is most consistent with clinical evaluation performed in the specialist setting, while it presents a greater ability to rule out false-negative HF cases within the GP practice, probably as a consequence of the different HF prevalence in the two care settings. Further development includes analyzing the clinical features of false-positive and -negative predictions, to explore the natural

  19. A rule-based electronic phenotyping algorithm for detecting clinically relevant cardiovascular disease cases.

    Science.gov (United States)

    Esteban, Santiago; Rodríguez Tablado, Manuel; Ricci, Ricardo Ignacio; Terrasa, Sergio; Kopitowski, Karin

    2017-07-14

    VD but without CeVD; (3) Patients with CeVD but without CaVD; (4) Patients with both diseases. To facilitate the validation process, a stratified sample was taken so that each of the groups represented approximately 25% of the sample. Manual chart review was used as the gold standard for assessing the algorithm's performance. One-third of the patients were assigned randomly to each reviewer (Cohen's kappa 0.91). Both coded and un-coded (free-text) sections of the EMR were reviewed. This was done from the first clinical note present in the patient's chart to the last one registered prior to 1/1/2005. The performance of the algorithm was compared against manual chart review. It yielded high sensitivity (0.99, 95% CI 0.938-0.9971) and acceptable specificity (0.86, 95% CI 0.818-0.895) for detecting cases of CaVD and CeVD combined. A qualitative analysis of the false positives and false negatives was performed. We developed a simple algorithm, using only standardized and non-standardized coded terms within an EMR, that can properly detect clinically relevant events and symptoms of CaVD and CeVD. We believe that combining it with an analysis of the free text using an NLP approach would yield even better results.

  20. Improved algorithm for solving nonlinear parabolized stability equations

    International Nuclear Information System (INIS)

    Zhao Lei; Zhang Cun-bo; Liu Jian-xin; Luo Ji-sheng

    2016-01-01

    Due to its high computational efficiency and its ability to account for nonparallel and nonlinear effects, the nonlinear parabolized stability equations (NPSE) approach has been widely used to study stability and transition mechanisms. However, it often diverges in hypersonic boundary layers when the amplitude of the disturbance reaches a certain level. In this study, an improved algorithm for solving the NPSE is developed. In this algorithm, the mean flow distortion is included in the linear operator instead of in the nonlinear forcing terms of the NPSE. An under-relaxation factor for computing the nonlinear terms is introduced during the iteration process to guarantee the robustness of the algorithm. Two case studies, the nonlinear development of stationary crossflow vortices and the fundamental resonance of the second-mode disturbance in hypersonic boundary layers, are presented to validate the proposed algorithm for the NPSE. Results from direct numerical simulation (DNS) are regarded as the baseline for comparison. Good agreement is found between the proposed algorithm and DNS, which indicates the great potential of the proposed method for studying crossflow and streamwise instability in hypersonic boundary layers. (paper)

  1. Factors Associated with Marburg Hemorrhagic Fever: Analysis of Patient Data from Uige, Angola

    Science.gov (United States)

    Roddy, Paul; Thomas, Sara L.; Jeffs, Benjamin; Folo, Pascoal Nascimento; Palma, Pedro Pablo; Henrique, Bengi Moco; Villa, Luis; Machado, Fernando Paixao Damiao; Bernal, Oscar; Jones, Steven M.; Strong, James E.; Feldmann, Heinz; Borchert, Matthias

    2012-01-01

    Background Reliable on-site polymerase chain reaction (PCR) testing for Marburg hemorrhagic fever (MHF) is not always available. Therefore, clinicians triage patients on the basis of presenting symptoms and contact history. Using patient data collected in Uige, Angola, in 2005, we assessed the sensitivity and specificity of these factors to evaluate the validity of World Health Organization (WHO)–recommended case definitions for MHF. Methods Multivariable logistic regression was used to identify independent predictors of PCR confirmation of MHF. A data-derived algorithm was developed to obtain new MHF case definitions with improved sensitivity and specificity. Results A MHF case definition comprising (1) an epidemiological link or (2) the combination of myalgia or arthralgia and any hemorrhage could potentially serve as an alternative to current case definitions. Our data-derived case definitions maintained the sensitivity and improved the specificity of current WHO-recommended case definitions. Conclusions Continued efforts to improve clinical documentation during filovirus outbreaks would aid in the refinement of case definitions and facilitate outbreak control. PMID:20441515
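
    Evaluating a candidate case definition of this kind against PCR confirmation amounts to applying a Boolean rule to each record and tabulating it against the laboratory result. A minimal Python sketch with hypothetical patient records:

```python
import pandas as pd

# Hypothetical patient records; the rule mirrors the form of the definition
# discussed above (epidemiological link, OR myalgia/arthralgia plus any
# hemorrhage), not the study's actual data.
patients = pd.DataFrame({
    "epi_link":     [1, 0, 0, 1, 0, 0],
    "myalgia":      [0, 1, 1, 0, 0, 1],
    "arthralgia":   [0, 0, 1, 0, 1, 0],
    "hemorrhage":   [0, 1, 0, 1, 0, 0],
    "pcr_positive": [1, 1, 0, 1, 0, 0],
})

meets_definition = (patients["epi_link"].eq(1)
                    | ((patients["myalgia"].eq(1) | patients["arthralgia"].eq(1))
                       & patients["hemorrhage"].eq(1)))

tp = int((meets_definition & patients["pcr_positive"].eq(1)).sum())
fp = int((meets_definition & patients["pcr_positive"].eq(0)).sum())
fn = int((~meets_definition & patients["pcr_positive"].eq(1)).sum())
tn = int((~meets_definition & patients["pcr_positive"].eq(0)).sum())
print("sensitivity %.2f, specificity %.2f" % (tp / (tp + fn), tn / (tn + fp)))
```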

  2. An overset algorithm for 3D unstructured grids

    International Nuclear Information System (INIS)

    Pishevar, A.R.; Shateri, A.R.

    2004-01-01

    In this paper a new methodology is introduced to simulate flows around complex geometries by using overset unstructured grids. The proposed algorithm can also be used for unsteady flows about objects in relative motion. In such cases, since the elements are not deformed during the computation, the costly re-meshing step of conventional methods is avoided. The method relies on the inter-grid boundary definition to establish communication among the independent grids in the overset system. Finally, the Euler equations are integrated on several overset systems to examine the capabilities of this methodology. (author)

  3. Validity of the definite and semidefinite questionnaire version of the Hamilton Depression Scale, the Hamilton subscale and the Melancholia Scale. Part I

    DEFF Research Database (Denmark)

    Hansen, Jesper Bent; Bech, Per

    2011-01-01

    , and their corresponding definite versions of the self-rating questionnaires DMQ and DHAM6 were accepted by the Rasch analysis, and only these four valid scales discriminated significantly between the effect of citalopram and placebo treatment. Our results are limited to patients with moderate depression. Two new self......-report scales with unparalleled construct validity, reliability, sensitivity, and convergent validity have been identified (DMQ and DHAM6). We have also identified a crucial importance of format for the means and variances of self-rating scales. These findings are of high practical and scientific value....

  4. Validation philosophy

    International Nuclear Information System (INIS)

    Vornehm, D.

    1994-01-01

    To determine when a set of calculations falls within the umbrella of an existing validation documentation, it is necessary to generate a quantitative definition of the range of applicability (our definition is only qualitative) for two reasons: (1) the current trend in our regulatory environment will soon make it impossible to support the legitimacy of a validation without quantitative guidelines; and (2) in my opinion, the lack of support by DOE for further critical experiment work is directly tied to our inability to draw a quantitative "line in the sand" beyond which we will not use computer-generated values

  5. A novel algorithm for a precise analysis of subchondral bone alterations

    Science.gov (United States)

    Gao, Liang; Orth, Patrick; Goebel, Lars K. H.; Cucchiarini, Magali; Madry, Henning

    2016-01-01

    Subchondral bone alterations are emerging as considerable clinical problems associated with articular cartilage repair. Their analysis exposes a pattern of variable changes, including intra-lesional osteophytes, residual microfracture holes, peri-hole bone resorption, and subchondral bone cysts. A precise distinction between them is becoming increasingly important. Here, we present a tailored algorithm based on continuous data to analyse subchondral bone changes using micro-CT images, allowing for a clear definition of each entity. We evaluated this algorithm using data sets originating from two large animal models of osteochondral repair. Intra-lesional osteophytes were detected in 3 of 10 defects in the minipig and in 4 of 5 defects in the sheep model. Peri-hole bone resorption was found in 22 of 30 microfracture holes in the minipig and in 17 of 30 microfracture holes in the sheep model. Subchondral bone cysts appeared in 1 microfracture hole in the minipig and in 5 microfracture holes in the sheep model (n = 30 holes each). Calculation of inter-rater agreement (90% agreement) and Cohen’s kappa (kappa = 0.874) revealed that the novel algorithm is highly reliable, reproducible, and valid. Comparison analysis with the best existing semi-quantitative evaluation method was also performed, supporting the enhanced precision of this algorithm. PMID:27596562
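
    The agreement statistics quoted above (inter-rater percent agreement and Cohen's kappa) can be computed directly from two raters' labels. A minimal sketch with hypothetical labels:

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Fraction of observations on which the two raters agree."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: agreement corrected for chance agreement."""
    n = len(r1)
    p_o = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels for two raters classifying the same observations:
rater1 = ["osteophyte", "resorption", "cyst", "resorption", "osteophyte"]
rater2 = ["osteophyte", "resorption", "cyst", "osteophyte", "osteophyte"]
print(percent_agreement(rater1, rater2), round(cohens_kappa(rater1, rater2), 3))
```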

  6. Predicting the onset of hazardous alcohol drinking in primary care: development and validation of a simple risk algorithm.

    Science.gov (United States)

    Bellón, Juan Ángel; de Dios Luna, Juan; King, Michael; Nazareth, Irwin; Motrico, Emma; GildeGómez-Barragán, María Josefa; Torres-González, Francisco; Montón-Franco, Carmen; Sánchez-Celaya, Marta; Díaz-Barreiros, Miguel Ángel; Vicens, Catalina; Moreno-Peral, Patricia

    2017-04-01

    Little is known about the risk of progressing to hazardous alcohol use in abstinent or low-risk drinkers. To develop and validate a simple brief risk algorithm for the onset of hazardous alcohol drinking (HAD) over 12 months for use in primary care. Prospective cohort study in 32 health centres from six Spanish provinces, with evaluations at baseline, 6 months, and 12 months. Forty-one risk factors were measured and multilevel logistic regression and inverse probability weighting were used to build the risk algorithm. The outcome was new occurrence of HAD during the study, as measured by the AUDIT. From the lists of 174 GPs, 3954 adult abstinent or low-risk drinkers were recruited. The 'predictAL-10' risk algorithm included just nine variables (10 questions): province, sex, age, cigarette consumption, perception of financial strain, having ever received treatment for an alcohol problem, childhood sexual abuse, AUDIT-C, and interaction AUDIT-C*Age. The c-index was 0.886 (95% CI = 0.854 to 0.918). The optimal cutoff had a sensitivity of 0.83 and specificity of 0.80. Excluding childhood sexual abuse from the model (the 'predictAL-9'), the c-index was 0.880 (95% CI = 0.847 to 0.913), sensitivity 0.79, and specificity 0.81. There was no statistically significant difference between the c-indexes of predictAL-10 and predictAL-9. The predictAL-10/9 is a simple and internally valid risk algorithm to predict the onset of hazardous alcohol drinking over 12 months in primary care attendees; it is a brief tool that is potentially useful for primary prevention of hazardous alcohol drinking. © British Journal of General Practice 2017.

  7. A Hybrid Forecasting Model Based on Empirical Mode Decomposition and the Cuckoo Search Algorithm: A Case Study for Power Load

    Directory of Open Access Journals (Sweden)

    Jiani Heng

    2016-01-01

    Full Text Available Power load forecasting always plays a considerable role in the management of a power system, as accurate forecasting provides a guarantee for the daily operation of the power grid. It has been widely demonstrated in forecasting that hybrid forecasts can improve forecast performance compared with individual forecasts. In this paper, a hybrid forecasting approach, comprising Empirical Mode Decomposition, CSA (Cuckoo Search Algorithm), and WNN (Wavelet Neural Network), is proposed. This approach constructs a more valid forecasting structure and more stable results than traditional ANN (Artificial Neural Network) models such as BPNN (Back Propagation Neural Network), GABPNN (Back Propagation Neural Network Optimized by Genetic Algorithm), and WNN. To evaluate the forecasting performance of the proposed model, a half-hourly power load in New South Wales of Australia is used as a case study in this paper. The experimental results demonstrate that the proposed hybrid model is not only simple but also able to satisfactorily approximate the actual power load and can be an effective tool in planning and dispatch for smart grids.

  8. Multithreshold Segmentation by Using an Algorithm Based on the Behavior of Locust Swarms

    Directory of Open Access Journals (Sweden)

    Erik Cuevas

    2015-01-01

    Full Text Available As an alternative to classical techniques, the problem of image segmentation has also been handled through evolutionary methods. Recently, several algorithms based on evolutionary principles have been successfully applied to image segmentation with interesting performance. However, most of them maintain two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance. This paper presents an algorithm for the automatic selection of pixel classes for image segmentation. The proposed method combines a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. The new evolutionary algorithm, called Locust Search (LS), is based on the behavior of swarms of locusts. Unlike most existing evolutionary algorithms, it explicitly avoids the concentration of individuals at the best positions, thereby avoiding critical flaws such as premature convergence to suboptimal solutions and a limited exploration-exploitation balance. Experimental tests over several benchmark functions and images validate the efficiency of the proposed technique with regard to accuracy and robustness.
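
    The optimization problem underlying evolutionary multithreshold segmentation is the search for thresholds that maximize a histogram-based criterion such as Otsu's between-class variance. In the sketch below, a general-purpose optimizer (SciPy's differential evolution) stands in for the Locust Search algorithm, purely to illustrate the objective being optimized.

```python
import numpy as np
from scipy.optimize import differential_evolution

def between_class_variance(thresholds, hist, bins):
    """Otsu-style criterion for k thresholds: sum over classes of w_i * mu_i^2
    (equivalent, up to a constant, to the between-class variance)."""
    t = np.sort(np.asarray(thresholds))
    edges = np.concatenate(([bins[0]], t, [bins[-1] + 1]))
    p = hist / hist.sum()
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (bins >= lo) & (bins < hi)
        w = p[mask].sum()
        if w > 0:
            mu = (p[mask] * bins[mask]).sum() / w
            total += w * mu ** 2
    return total

# Synthetic 3-mode grayscale "image" and its histogram.
rng = np.random.default_rng(0)
image = np.concatenate([rng.normal(60, 10, 4000),
                        rng.normal(130, 12, 4000),
                        rng.normal(200, 8, 2000)]).clip(0, 255)
hist, _ = np.histogram(image, bins=256, range=(0, 256))
bins = np.arange(256)

k = 2                                      # number of thresholds (classes - 1)
result = differential_evolution(
    lambda t: -between_class_variance(t, hist, bins),
    bounds=[(1, 254)] * k, seed=0)
print("thresholds:", np.sort(result.x).round(1))
```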

  9. Optimising case detection within UK electronic health records : use of multiple linked databases for detecting liver injury

    NARCIS (Netherlands)

    Wing, Kevin; Bhaskaran, Krishnan; Smeeth, Liam; van Staa, Tjeerd P|info:eu-repo/dai/nl/304827762; Klungel, Olaf H|info:eu-repo/dai/nl/181447649; Reynolds, Robert F; Douglas, Ian

    2016-01-01

    OBJECTIVES: We aimed to create a 'multidatabase' algorithm for identification of cholestatic liver injury using multiple linked UK databases, before (1) assessing the improvement in case ascertainment compared to using a single database and (2) developing a new single-database case-definition

  10. The validity of upper-limb neurodynamic tests for detecting peripheral neuropathic pain.

    Science.gov (United States)

    Nee, Robert J; Jull, Gwendolen A; Vicenzino, Bill; Coppieters, Michel W

    2012-05-01

    The validity of upper-limb neurodynamic tests (ULNTs) for detecting peripheral neuropathic pain (PNP) was assessed by reviewing the evidence on plausibility, the definition of a positive test, reliability, and concurrent validity. Evidence was identified by a structured search for peer-reviewed articles published in English before May 2011. The quality of concurrent validity studies was assessed with the Quality Assessment of Diagnostic Accuracy Studies tool, where appropriate. Biomechanical and experimental pain data support the plausibility of ULNTs. Evidence suggests that a positive ULNT should at least partially reproduce the patient's symptoms and that structural differentiation should change these symptoms. Data indicate that this definition of a positive ULNT is reliable when used clinically. Limited evidence suggests that the median nerve test, but not the radial nerve test, helps determine whether a patient has cervical radiculopathy. The median nerve test does not help diagnose carpal tunnel syndrome. These findings should be interpreted cautiously, because diagnostic accuracy might have been distorted by the investigators' definitions of a positive ULNT. Furthermore, patients with PNP who presented with increased nerve mechanosensitivity rather than conduction loss might have been incorrectly classified by electrophysiological reference standards as not having PNP. The only evidence for concurrent validity of the ulnar nerve test was a case study on cubital tunnel syndrome. We recommend that researchers develop more comprehensive reference standards for PNP to accurately assess the concurrent validity of ULNTs and continue investigating the predictive validity of ULNTs for prognosis or treatment response.

  11. Determination of the optimal case definition for the diagnosis of end-stage renal disease from administrative claims data in Manitoba, Canada.

    Science.gov (United States)

    Komenda, Paul; Yu, Nancy; Leung, Stella; Bernstein, Keevin; Blanchard, James; Sood, Manish; Rigatto, Claudio; Tangri, Navdeep

    2015-01-01

    End-stage renal disease (ESRD) is a major public health problem with increasing prevalence and costs. An understanding of the long-term trends in dialysis rates and outcomes can help inform health policy. We determined the optimal case definition for the diagnosis of ESRD using administrative claims data in the province of Manitoba over a 7-year period. We determined the sensitivity, specificity, predictive value and overall accuracy of 4 administrative case definitions for the diagnosis of ESRD requiring chronic dialysis over different time horizons from Jan. 1, 2004, to Mar. 31, 2011. The Manitoba Renal Program Database served as the gold standard for confirming dialysis status. During the study period, 2562 patients were registered as recipients of chronic dialysis in the Manitoba Renal Program Database. Over a 1-year period (2010), the optimal case definition was any 2 claims for outpatient dialysis, and it was 74.6% sensitive (95% confidence interval [CI] 72.3%-76.9%) and 94.4% specific (95% CI 93.6%-95.2%) for the diagnosis of ESRD. In contrast, a case definition of at least 2 claims for dialysis treatment more than 90 days apart was 64.8% sensitive (95% CI 62.2%-67.3%) and 97.1% specific (95% CI 96.5%-97.7%). Extending the period to 5 years greatly improved sensitivity for all case definitions, with minimal change to specificity; for example, for the optimal case definition of any 2 claims for dialysis treatment, sensitivity increased to 86.0% (95% CI 84.7%-87.4%) at 5 years. Accurate case definitions for the diagnosis of ESRD requiring dialysis can be derived from administrative claims data. The optimal definition required any 2 claims for outpatient dialysis. Extending the claims period to 5 years greatly improved sensitivity with minimal effects on specificity for all case definitions.
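
    Claims-based case definitions of this kind are straightforward to express as rules over a claims table. The sketch below applies two of the definitions discussed above ("any 2 claims for outpatient dialysis" and "2 claims more than 90 days apart") to a toy data set; the field names and records are hypothetical.

```python
import pandas as pd

claims = pd.DataFrame({
    "patient_id":  [1, 1, 2, 3, 3, 3],
    "service":     ["outpatient_dialysis"] * 6,
    "service_date": pd.to_datetime(
        ["2010-01-05", "2010-01-12", "2010-06-01",
         "2010-02-01", "2010-07-15", "2010-08-01"]),
})

def any_two_claims(g):
    """Case definition: any 2 claims for outpatient dialysis."""
    return len(g) >= 2

def two_claims_over_90_days(g):
    """Case definition: at least 2 claims more than 90 days apart."""
    d = g["service_date"].sort_values()
    return len(d) >= 2 and (d.iloc[-1] - d.iloc[0]).days > 90

flags = claims.groupby("patient_id").apply(
    lambda g: pd.Series({"def_any2": any_two_claims(g),
                         "def_90d": two_claims_over_90_days(g)}))
# The resulting flags could then be tabulated against a registry gold
# standard to obtain sensitivity and specificity for each definition.
print(flags)
```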

  12. Validating a UAV artificial intelligence control system using an autonomous test case generator

    Science.gov (United States)

    Straub, Jeremy; Huber, Justin

    2013-05-01

    The validation of safety-critical applications, such as autonomous UAV operations in an environment which may include human actors, is an ill-posed problem. To gain confidence in the autonomous control technology, numerous scenarios must be considered. This paper expands upon previous work, related to autonomous testing of robotic control algorithms in a two-dimensional plane, to evaluate the suitability of similar techniques for validating artificial intelligence control in three dimensions, where a minimum level of airspeed must be maintained. The results of human-conducted testing are compared to this automated testing in terms of error detection, speed and testing cost.

  13. Worst-case study for cleaning validation of equipment in the radiopharmaceutical production of lyophilized reagents: Methodology validation of total organic carbon

    International Nuclear Information System (INIS)

    Porto, Luciana Valeria Ferrari Machado

    2015-01-01

    Radiopharmaceuticals are defined as pharmaceutical preparations containing a radionuclide in their composition, mostly administered intravenously, and therefore compliance with the principles of Good Manufacturing Practices (GMP) is essential and indispensable. Cleaning validation is a requirement of current GMP and consists of documented evidence demonstrating that the cleaning procedures are able to remove residues to pre-determined acceptance levels, ensuring that no cross-contamination occurs. A simplification of cleaning process validation is accepted, and consists in choosing a product, called the 'worst case', to represent the cleaning processes of all equipment of the same production area. One of the steps of cleaning validation is the establishment and validation of the analytical method used to quantify the residue. The aim of this study was to establish the worst case for cleaning validation of equipment in the radiopharmaceutical production of lyophilized reagents (LR) for labeling with 99mTc, to evaluate the use of Total Organic Carbon (TOC) content as an indicator of the cleanliness of equipment used in LR manufacture, to validate the method of Non-Purgeable Organic Carbon (NPOC) determination, and to perform recovery tests with the product chosen as the worst case. The choice of the worst-case product was based on the calculation of an index called the 'Worst Case Index' (WCI), using information about drug solubility, difficulty of cleaning the equipment and the occupancy rate of the products in the production line. The product indicated as the 'worst case' was the LR MIBI-TEC. The method validation assays were performed using a carbon analyser model TOC-Vwp coupled to an autosampler model ASI-V, both from Shimadzu®, controlled by the TOC Control-V software. The direct method was used for NPOC quantification. The parameters evaluated in the method validation were: system suitability, robustness, linearity, detection limit (DL) and quantification limit (QL), precision

  14. On Federated and Proof Of Validation Based Consensus Algorithms In Blockchain

    Science.gov (United States)

    Ambili, K. N.; Sindhu, M.; Sethumadhavan, M.

    2017-08-01

    Almost all real-world activities have been digitized, and there are various client-server architecture based systems in place to handle them. These are all based on trust in third parties. There is an active attempt to successfully implement blockchain based systems, which ensure that IT systems are immutable, double spending is avoided and cryptographic strength is provided to them. A successful implementation of blockchain as the backbone of existing information technology systems is bound to eliminate various types of fraud and ensure quicker delivery of the item on trade. To adapt IT systems to a blockchain architecture, an efficient consensus algorithm needs to be designed. Blockchain based on proof of work first came up as the backbone of cryptocurrency. After this, several other methods with a variety of interesting features have come up. In this paper, we conduct a survey on existing attempts to achieve consensus in blockchain. A federated consensus method and a proof of validation method are compared.
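
    As a point of reference for the proof-of-work consensus mentioned above, a toy hash-puzzle sketch (not a production implementation, and unrelated to the federated or proof-of-validation schemes surveyed in the paper) is:
```python
# Toy proof-of-work sketch: find a nonce so that the block hash starts with a
# given number of zero hex digits. Illustrative only; real blockchain consensus
# involves far more than this puzzle.
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

if __name__ == "__main__":
    nonce, digest = mine("prev_hash|tx1,tx2,tx3")
    print(f"nonce={nonce} hash={digest}")
```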

  15. Validation of algorithm used for location of electrodes in CT images

    International Nuclear Information System (INIS)

    Bustos, J; Graffigna, J P; Isoardi, R; Gómez, M E; Romo, R

    2013-01-01

    A noninvasive technique has been implemented to detect and delineate the focus of electric discharge in patients with mono-focal epilepsy. For the detection of these sources, an electroencephalogram (EEG) with a 128-electrode cap is used. With the EEG data and the electrode positions, it is possible to locate this focus on MR volumes. The technique locates the electrodes on CT volumes using image processing algorithms to obtain descriptors of the electrodes, such as the centroid, which determines their position in space. Finally, these points are transformed into the coordinate space of MR through a registration for a better understanding by the physician. Due to the medical implications of this technique, it is of utmost importance to validate the results of the detection of the electrode coordinates. To that end, this paper presents a comparison between the actual values measured physically (measures including electrode size and spatial location) and the values obtained from the processing of the CT and MR images
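
    The centroid descriptor used to locate each electrode in the CT volume can be sketched with standard image-processing tools; the threshold and the synthetic volume below are placeholders, not the authors' pipeline:
```python
# Sketch: locate high-intensity electrode markers in a CT volume and report
# their centroids in voxel coordinates. Threshold and volume are placeholders.
import numpy as np
from scipy import ndimage

def electrode_centroids(ct_volume: np.ndarray, threshold: float):
    mask = ct_volume > threshold                 # segment bright electrode voxels
    labels, n = ndimage.label(mask)              # connected components, one per electrode
    return ndimage.center_of_mass(mask, labels, range(1, n + 1))

if __name__ == "__main__":
    volume = np.random.rand(64, 64, 64)          # stand-in for a CT volume
    volume[30:33, 40:43, 10:13] = 10.0           # synthetic "electrode"
    print(electrode_centroids(volume, threshold=5.0))
```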

  16. Case definition for clinical and subclinical bacterial kidney disease (BKD) in Atlantic Salmon (Salmo salar L.) in New Brunswick, Canada.

    Science.gov (United States)

    Boerlage, A S; Stryhn, H; Sanchez, J; Hammell, K L

    2017-03-01

    Bacterial kidney disease (BKD) is considered an important cause of loss in salmon aquaculture in Atlantic Canada. The causative agent of BKD is the Gram-positive bacterium Renibacterium salmoninarum. Infected salmon are often asymptomatic (subclinical infection), and the disease is considered chronic. One of the challenges in quantifying information from farm production and health records is the application of a standardized case definition. Case definitions for farm-level and cage-level clinical and subclinical BKD were developed using retrospective longitudinal data from aquaculture practices in New Brunswick, Canada, combining (i) industry records of weekly production data including mortalities, (ii) field observations for BKD using reports of veterinarians and/or fish health technicians, (iii) diagnostic submissions and test results and (iv) treatments used to control BKD. Case definitions were evaluated using veterinarians' expert judgements as the reference standard. Eighty-nine per cent of sites and 66% of fish groups were associated with BKD at least once. For BKD present (subclinical or clinical), the sensitivity and specificity of the case definition were 75-100%, varying between event, fish group, site cycle and level (site or pen). For clinical BKD, sensitivities were 29-64% and specificities 91-100%. Industry data can be used to develop sensitive case definitions. © 2016 John Wiley & Sons Ltd.

  17. Natural history of benign prostatic hyperplasia: Appropriate case definition and estimation of its prevalence in the community

    OpenAIRE

    Bosch, Ruud; Hop, Wim; Kirkels, Wim; Schröder, Fritz

    1995-01-01

    textabstractThere is no consensus about a case definition of benign prostatic hyperplasia (BPH). In the present study, BPH prevalence rates were determined using various case definitions based on a combination of clinical parameters used to describe the properties of BPH: symptoms of prostatism, prostate volume increase, and bladder outflow obstruction. The aim of this study—in a community-based population of 502 men (55–74 years of age) without prostate cancer—was to determine the relative i...

  18. Pinning impulsive control algorithms for complex network

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Wen [School of Information and Mathematics, Yangtze University, Jingzhou 434023 (China); Lü, Jinhu [Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190 (China); Chen, Shihua [College of Mathematics and Statistics, Wuhan University, Wuhan 430072 (China); Yu, Xinghuo [School of Electrical and Computer Engineering, RMIT University, Melbourne VIC 3001 (Australia)

    2014-03-15

    In this paper, we further investigate the synchronization of complex dynamical networks via pinning control in which a selection of nodes are controlled at discrete times. Different from most existing work, the pinning control algorithms utilize only impulsive signals at discrete time instants, which may greatly improve communication channel efficiency and reduce control cost. Two classes of algorithms are designed, one for strongly connected complex networks and another for non-strongly connected complex networks. It is suggested that in a strongly connected network with suitable coupling strength, a single controller at any one of the network's nodes can always pin the network to its homogeneous solution. In the non-strongly connected case, the location and minimum number of nodes needed to pin the network are determined by the Frobenius normal form of the coupling matrix. In addition, the coupling matrix is not necessarily symmetric or irreducible. Illustrative examples are then given to validate the proposed pinning impulsive control algorithms.

  19. Pinning impulsive control algorithms for complex network

    International Nuclear Information System (INIS)

    Sun, Wen; Lü, Jinhu; Chen, Shihua; Yu, Xinghuo

    2014-01-01

    In this paper, we further investigate the synchronization of complex dynamical networks via pinning control in which a selection of nodes are controlled at discrete times. Different from most existing work, the pinning control algorithms utilize only impulsive signals at discrete time instants, which may greatly improve communication channel efficiency and reduce control cost. Two classes of algorithms are designed, one for strongly connected complex networks and another for non-strongly connected complex networks. It is suggested that in a strongly connected network with suitable coupling strength, a single controller at any one of the network's nodes can always pin the network to its homogeneous solution. In the non-strongly connected case, the location and minimum number of nodes needed to pin the network are determined by the Frobenius normal form of the coupling matrix. In addition, the coupling matrix is not necessarily symmetric or irreducible. Illustrative examples are then given to validate the proposed pinning impulsive control algorithms

  20. Mining Product Data Models: A Case Study

    Directory of Open Access Journals (Sweden)

    Cristina-Claudia DOLEAN

    2014-01-01

    Full Text Available This paper presents two case studies used to prove the validity of some data-flow mining algorithms. We proposed the data-flow mining algorithms because most mining algorithms focus on the control-flow perspective. The first case study uses event logs generated by an ERP system (Navision) after we set several trackers on the data elements needed in the process analyzed, while the second case study uses the event logs generated by the YAWL system. We offer a general solution for data-flow model extraction from different data sources. In order to apply the data-flow mining algorithms, the event logs must comply with a certain format (using the InputOutput extension). To respect this format, a set of conversion tools is needed. We describe the conversion tools used and how we obtained the data-flow models. Moreover, the data-flow model is compared to the control-flow model.

  1. FPGA implementation of image dehazing algorithm for real time applications

    Science.gov (United States)

    Kumar, Rahul; Kaushik, Brajesh Kumar; Balasubramanian, R.

    2017-09-01

    Weather degradation such as haze, fog, mist, etc. severely reduces the effective range of visual surveillance. This degradation is a spatially varying phenomenon, which makes the problem non-trivial. Dehazing is an essential preprocessing stage in applications such as long range imaging, border security, intelligent transportation systems, etc. However, these applications require low latency from the preprocessing block. In this work, the single-image dark channel prior algorithm is modified and implemented for fast processing with comparable visual quality of the restored image/video. Although the conventional single-image dark channel prior algorithm is computationally expensive, it yields impressive results. Moreover, a two-stage image dehazing architecture is introduced, wherein the dark channel and airlight are estimated in the first stage, while the transmission map and intensity restoration are computed in the subsequent stages. The algorithm is implemented using Xilinx Vivado software and validated using a Xilinx zc702 development board, which contains an Artix7-equivalent Field Programmable Gate Array (FPGA) and an ARM Cortex A9 dual core processor. Additionally, a high definition multimedia interface (HDMI) has been incorporated for video feed and display purposes. The results show that the dehazing algorithm attains 29 frames per second for an image resolution of 1920x1080, which is suitable for real-time applications. The design utilizes 9 18K_BRAM, 97 DSP_48, 6508 FFs and 8159 LUTs.
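
    The processing steps named above (dark channel, airlight, transmission map, restoration) follow the well-known dark channel prior formulation; a compact CPU-side sketch with the usual literature defaults (not the FPGA implementation) is:
```python
# Sketch of the single-image dark channel prior dehazing steps referenced above
# (dark channel -> airlight -> transmission -> restoration). Patch size, omega
# and t0 are common literature defaults, not values from this FPGA design.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    # per-pixel minimum over RGB, then minimum filter over a local patch
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_airlight(img: np.ndarray, dark: np.ndarray) -> np.ndarray:
    # average of the input pixels behind the brightest 0.1% of the dark channel
    n = max(1, int(0.001 * dark.size))
    idx = np.argsort(dark.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def dehaze(img: np.ndarray, omega: float = 0.95, t0: float = 0.1) -> np.ndarray:
    A = estimate_airlight(img, dark_channel(img))
    transmission = 1.0 - omega * dark_channel(img / A)
    t = np.clip(transmission, t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)

if __name__ == "__main__":
    hazy = np.random.rand(120, 160, 3)     # stand-in for a normalized hazy frame
    print(dehaze(hazy).shape)
```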

  2. Improving Coastal Ocean Color Validation Capabilities through Application of Inherent Optical Properties (IOPs)

    Science.gov (United States)

    Mannino, Antonio

    2008-01-01

    Understanding how the different components of seawater alter the path of incident sunlight through scattering and absorption is essential to using remotely sensed ocean color observations effectively. This is particularly apropos in coastal waters, where the different optically significant components (phytoplankton, detrital material, inorganic minerals, etc.) vary widely in concentration, often independently from one another. Inherent Optical Properties (IOPs) form the link between these biogeochemical constituents and the Apparent Optical Properties (AOPs); understanding this interrelationship is at the heart of successfully carrying out inversions of satellite-measured radiance to biogeochemical properties. While sufficient covariation of seawater constituents in case I waters typically allows empirical algorithms connecting AOPs and biogeochemical parameters to behave well, these empirical algorithms normally do not hold for case II regimes (Carder et al. 2003). Validation in the context of ocean color remote sensing refers to in-situ measurements used to verify or characterize algorithm products or any assumption used as input to an algorithm. In this project, validation capabilities are considered those measurement capabilities, techniques, methods, models, etc. that allow effective validation. Enhancing current validation capabilities by incorporating state-of-the-art IOP measurements and optical models is the purpose of this work. Involved in this pursuit is improving core IOP measurement capabilities (spectral, angular, spatio-temporal resolutions), improving our understanding of the behavior of analytical AOP-IOP approximations in complex coastal waters, and improving the spatial and temporal resolution of biogeochemical data for validation by applying biogeochemical-IOP inversion models so that these parameters can be computed from real-time IOP sensors with high sampling rates. Research cruises supported by this project provide for collection and

  3. Partial Remission Definition

    DEFF Research Database (Denmark)

    Andersen, Marie Louise Max; Hougaard, Philip; Pörksen, Sven

    2014-01-01

    OBJECTIVE: To validate the partial remission (PR) definition based on insulin dose-adjusted HbA1c (IDAA1c). SUBJECTS AND METHODS: The IDAA1c was developed using data in 251 children from the European Hvidoere cohort. For validation, 129 children from a Danish cohort were followed from the onset...

  4. Inversion algorithms for the spherical Radon and cosine transform

    International Nuclear Information System (INIS)

    Louis, A K; Riplinger, M; Spiess, M; Spodarev, E

    2011-01-01

    We consider two integral transforms which are frequently used in integral geometry and related fields, namely the spherical Radon and cosine transform. Fast algorithms are developed which invert the respective transforms in a numerically stable way. So far, only theoretical inversion formulae or algorithms for atomic measures have been derived, which are not so important for applications. We focus on two- and three-dimensional cases, where we also show that our method leads to a regularization. Numerical results are presented and show the validity of the resulting algorithms. First, we use synthetic data for the inversion of the Radon transform. Then we apply the algorithm for the inversion of the cosine transform to reconstruct the directional distribution of line processes from finitely many intersections of their lines with test lines (2D) or planes (3D), respectively. Finally we apply our method to analyse a series of microscopic two- and three-dimensional images of a fibre system

  5. Terminology, Emphasis, and Utility in Validation

    Science.gov (United States)

    Kane, Michael T.

    2008-01-01

    Lissitz and Samuelsen (2007) have proposed an operational definition of "validity" that shifts many of the questions traditionally considered under validity to a separate category associated with the utility of test use. Operational definitions support inferences about how well people perform some kind of task or how they respond to some kind of…

  6. [Using cancer case identification algorithms in medico-administrative databases: Literature review and first results from the REDSIAM Tumors group based on breast, colon, and lung cancer].

    Science.gov (United States)

    Bousquet, P-J; Caillet, P; Coeuret-Pellicer, M; Goulard, H; Kudjawu, Y C; Le Bihan, C; Lecuyer, A I; Séguret, F

    2017-10-01

    The development and use of healthcare databases accentuates the need for dedicated tools, including validated selection algorithms of cancer diseased patients. As part of the development of the French National Health Insurance System data network REDSIAM, the tumor taskforce established an inventory of national and internal published algorithms in the field of cancer. This work aims to facilitate the choice of a best-suited algorithm. A non-systematic literature search was conducted for various cancers. Results are presented for lung, breast, colon, and rectum. Medline, Scopus, the French Database in Public Health, Google Scholar, and the summaries of the main French journals in oncology and public health were searched for publications until August 2016. An extraction grid adapted to oncology was constructed and used for the extraction process. A total of 18 publications were selected for lung cancer, 18 for breast cancer, and 12 for colorectal cancer. Validation studies of algorithms are scarce. When information is available, the performance and choice of an algorithm are dependent on the context, purpose, and location of the planned study. Accounting for cancer disease specificity, the proposed extraction chart is more detailed than the generic chart developed for other REDSIAM taskforces, but remains easily usable in practice. This study illustrates the complexity of cancer detection through sole reliance on healthcare databases and the lack of validated algorithms specifically designed for this purpose. Studies that standardize and facilitate validation of these algorithms should be developed and promoted. Copyright © 2017. Published by Elsevier Masson SAS.

  7. SU-G-JeP1-07: Development of a Programmable Motion Testbed for the Validation of Ultrasound Tracking Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Shepard, A; Matrosic, C; Zagzebski, J; Bednarz, B [University of Wisconsin, Madison, WI (United States)

    2016-06-15

    Purpose: To develop an advanced testbed that combines a 3D motion stage and ultrasound phantom to optimize and validate 2D and 3D tracking algorithms for real-time motion management during radiation therapy. Methods: A Siemens S2000 Ultrasound scanner utilizing a 9L4 transducer was coupled with the Washington University 4D Phantom to simulate patient motion. The transducer was securely fastened to the 3D stage and positioned to image three cylinders of varying contrast in a Gammex 404GS LE phantom. The transducer was placed within a water bath above the phantom in order to maintain sufficient coupling for the entire range of simulated motion. A programmed motion sequence was used to move the transducer during image acquisition and a cine video was acquired for one minute to allow for long sequence tracking. Images were analyzed using a normalized cross-correlation block matching tracking algorithm and compared to the known motion of the transducer relative to the phantom. Results: The setup produced stable ultrasound motion traces consistent with those programmed into the 3D motion stage. The acquired ultrasound images showed minimal artifacts and an image quality that was more than suitable for tracking algorithm verification. Comparisons of a block matching tracking algorithm with the known motion trace for the three features resulted in an average tracking error of 0.59 mm. Conclusion: The high accuracy and programmability of the 4D phantom allows for the acquisition of ultrasound motion sequences that are highly customizable; allowing for focused analysis of some common pitfalls of tracking algorithms such as partial feature occlusion or feature disappearance, among others. The design can easily be modified to adapt to any probe such that the process can be extended to 3D acquisition. Further development of an anatomy specific phantom better resembling true anatomical landmarks could lead to an even more robust validation. This work is partially funded by NIH

  8. SU-G-JeP1-07: Development of a Programmable Motion Testbed for the Validation of Ultrasound Tracking Algorithms

    International Nuclear Information System (INIS)

    Shepard, A; Matrosic, C; Zagzebski, J; Bednarz, B

    2016-01-01

    Purpose: To develop an advanced testbed that combines a 3D motion stage and ultrasound phantom to optimize and validate 2D and 3D tracking algorithms for real-time motion management during radiation therapy. Methods: A Siemens S2000 Ultrasound scanner utilizing a 9L4 transducer was coupled with the Washington University 4D Phantom to simulate patient motion. The transducer was securely fastened to the 3D stage and positioned to image three cylinders of varying contrast in a Gammex 404GS LE phantom. The transducer was placed within a water bath above the phantom in order to maintain sufficient coupling for the entire range of simulated motion. A programmed motion sequence was used to move the transducer during image acquisition and a cine video was acquired for one minute to allow for long sequence tracking. Images were analyzed using a normalized cross-correlation block matching tracking algorithm and compared to the known motion of the transducer relative to the phantom. Results: The setup produced stable ultrasound motion traces consistent with those programmed into the 3D motion stage. The acquired ultrasound images showed minimal artifacts and an image quality that was more than suitable for tracking algorithm verification. Comparisons of a block matching tracking algorithm with the known motion trace for the three features resulted in an average tracking error of 0.59 mm. Conclusion: The high accuracy and programmability of the 4D phantom allows for the acquisition of ultrasound motion sequences that are highly customizable; allowing for focused analysis of some common pitfalls of tracking algorithms such as partial feature occlusion or feature disappearance, among others. The design can easily be modified to adapt to any probe such that the process can be extended to 3D acquisition. Further development of an anatomy specific phantom better resembling true anatomical landmarks could lead to an even more robust validation. This work is partially funded by NIH
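
    The normalized cross-correlation block matching used to track the phantom features can be sketched in 2D as follows; the frames, template and simulated motion are placeholders:
```python
# Sketch: track a feature between two ultrasound frames with normalized
# cross-correlation block matching. Frames and template location are placeholders.
import numpy as np

def ncc(patch: np.ndarray, template: np.ndarray) -> float:
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def track(template: np.ndarray, frame: np.ndarray) -> tuple:
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for y in range(frame.shape[0] - th + 1):        # exhaustive search window
        for x in range(frame.shape[1] - tw + 1):
            score = ncc(frame[y:y + th, x:x + tw], template)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos                                  # top-left corner of best match

if __name__ == "__main__":
    frame0 = np.random.rand(80, 80)
    template = frame0[30:40, 30:40]
    frame1 = np.roll(frame0, shift=(2, 3), axis=(0, 1))   # simulated motion
    print(track(template, frame1))                   # expect roughly (32, 33)
```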

  9. Derivation and Validation of a Biomarker-Based Clinical Algorithm to Rule Out Sepsis From Noninfectious Systemic Inflammatory Response Syndrome at Emergency Department Admission: A Multicenter Prospective Study.

    Science.gov (United States)

    Mearelli, Filippo; Fiotti, Nicola; Giansante, Carlo; Casarsa, Chiara; Orso, Daniele; De Helmersen, Marco; Altamura, Nicola; Ruscio, Maurizio; Castello, Luigi Mario; Colonetti, Efrem; Marino, Rossella; Barbati, Giulia; Bregnocchi, Andrea; Ronco, Claudio; Lupia, Enrico; Montrucchio, Giuseppe; Muiesan, Maria Lorenza; Di Somma, Salvatore; Avanzi, Gian Carlo; Biolo, Gianni

    2018-05-07

    To derive and validate a predictive algorithm integrating a nomogram-based prediction of the pretest probability of infection with a panel of serum biomarkers, which could robustly differentiate sepsis/septic shock from noninfectious systemic inflammatory response syndrome. Multicenter prospective study. At emergency department admission in five university hospitals. Nine-hundred forty-seven adults in the inception cohort and 185 adults in the validation cohort. None. A nomogram, including age, Sequential Organ Failure Assessment score, recent antimicrobial therapy, hyperthermia, leukocytosis, and high C-reactive protein values, was built using data from 716 infected patients and 120 patients with noninfectious systemic inflammatory response syndrome to predict the pretest probability of infection. Then, the best combination of procalcitonin, soluble phospholipase A2 group IIA, presepsin, soluble interleukin-2 receptor α, and soluble triggering receptor expressed on myeloid cells-1 was applied in order to categorize patients as "likely" or "unlikely" to be infected. The predictive algorithm required only procalcitonin, backed up with soluble phospholipase A2 group IIA determined in 29% of the patients, to rule out sepsis/septic shock with a negative predictive value of 93%. In a validation cohort of 158 patients, the predictive algorithm reached a negative predictive value of 100% while requiring biomarker measurements in 18% of the population. We have developed and validated a high-performing, reproducible, and parsimonious algorithm to assist emergency department physicians in distinguishing sepsis/septic shock from noninfectious systemic inflammatory response syndrome.
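
    The two-stage structure described (a nomogram-derived pretest probability, then a biomarker backed up by a second marker only when needed) can be outlined in a hedged sketch; every coefficient and cut-off below is an invented placeholder, not the published algorithm:
```python
# Hedged sketch of a two-stage rule-out: a logistic pretest-probability model
# followed by sequential biomarker checks. All coefficients and thresholds are
# placeholders for illustration; the published nomogram and cut-offs differ.
import math

def pretest_probability(age, sofa, recent_antibiotics, hyperthermia, leukocytosis, high_crp):
    z = (-4.0 + 0.02 * age + 0.30 * sofa + 0.8 * recent_antibiotics
         + 0.9 * hyperthermia + 0.7 * leukocytosis + 1.1 * high_crp)
    return 1.0 / (1.0 + math.exp(-z))

def rule_out_sepsis(patient):
    p = pretest_probability(**patient["clinical"])
    if p >= 0.5:                                    # high pretest probability: do not rule out
        return False
    if patient["pct"] < 0.25:                       # placeholder procalcitonin cut-off
        return True
    # borderline cases fall back on a second biomarker (placeholder cut-off)
    return patient.get("spla2_iia", float("inf")) < 1.0

example = {"clinical": dict(age=62, sofa=1, recent_antibiotics=0, hyperthermia=0,
                            leukocytosis=1, high_crp=0),
           "pct": 0.12}
print("sepsis ruled out:", rule_out_sepsis(example))
```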

  10. A Novel Geo-Broadcast Algorithm for V2V Communications over WSN

    Directory of Open Access Journals (Sweden)

    José J. Anaya

    2014-08-01

    Full Text Available The key to enabling the next generation of advanced driver assistance systems (ADAS), the cooperative systems, is the availability of vehicular communication technologies, whose mandatory installation in cars is foreseen in the next few years. The definition of the communications is in the final step of development, with great efforts on standardization and some field operational tests of network devices and applications. However, some inter-vehicular communication issues are not sufficiently developed and are the target of research. One of these challenges is the construction of stable networks based on the position of the nodes of the vehicular network, as well as the broadcast of information destined for nodes concentrated in a specific geographic area without collapsing the network. In this paper, a novel algorithm for geo-broadcast communications is presented, based on the evolution of previous results in vehicular mesh networks using wireless sensor networks with IEEE 802.15.4 technology. This algorithm has been designed and compared with the IEEE 802.11p algorithms, implemented and validated in controlled conditions and tested on real vehicles. The results suggest that the designed broadcast algorithm can complement any vehicular communications architecture with a geo-networking functionality that supports a variety of ADAS.

  11. Competing definitions of schizophrenia: what can be learned from polydiagnostic studies?

    DEFF Research Database (Denmark)

    Jansson, Lennart B; Parnas, Josef

    2007-01-01

    of explicit conceptual analyses and empirical studies but defined through consensus with the purpose of improving reliability. The validity status of current definitions and of their predecessors remains unclear. The so-called "polydiagnostic approach" applies different definitions of a disorder to the same...... patient sample in order to compare these definitions on potential validity indicators. We reviewed 92 polydiagnostic sz studies published since the early 1970s. Different sz definitions show a considerable variation concerning frequency, concordance, reliability, outcome, and other validity measures....... The DSM-IV and the ICD-10 show moderate reliability but both definitions appear weak in terms of concurrent validity, eg, with respect to an aggregation of a priori important features. The first-rank symptoms of Schneider are not associated with family history of sz or with prediction of poor outcome...

  12. Digital image processing an algorithmic approach with Matlab

    CERN Document Server

    Qidwai, Uvais

    2009-01-01

    Contents: Introduction to Image Processing and the MATLAB Environment (Introduction; Digital Image Definitions: Theoretical Account; Image Properties; MATLAB; Algorithmic Account; MATLAB Code). Image Acquisition, Types, and File I/O (Image Acquisition; Image Types and File I/O; Basics of Color Images; Other Color Spaces; Algorithmic Account; MATLAB Code). Image Arithmetic (Introduction; Operator Basics; Theoretical Treatment; Algorithmic Treatment; Coding Examples). Affine and Logical Operations, Distortions, and Noise in Images (Introduction; Affine Operations; Logical Operators; Noise in Images; Distortions in Images; Algorithmic Account)

  13. Course and Outcome of Bacteremia Due to Staphylococcus Aureus: Evaluation of Different Clinical Case Definitions

    NARCIS (Netherlands)

    S. Lautenschlager (Stephan); C. Herzog (Christian); W. Zimmerli (Werner)

    1993-01-01

    textabstractIn a retrospective survey of patients hospitalized in the University Hospital of Basel, Switzerland, the course and outcome of 281 cases of true bacteremia due to Staphylococcus aureus over a 7-year period were analyzed. The main purpose was to evaluate different case definitions. In 78%

  14. Validation of Biomarkers for Prostate Cancer Prognosis

    Science.gov (United States)

    2016-11-01

    …challenge, we formed the multi-institutional Canary Tissue Microarray Project. We have used rigorous clinical trial case/cohort design, taking care to… concluded that the TACOMA algorithm, as it currently stands, is inadequate for automatic image reading. The main reason is that it still requires

  15. Recursive parameter estimation for Hammerstein-Wiener systems using modified EKF algorithm.

    Science.gov (United States)

    Yu, Feng; Mao, Zhizhong; Yuan, Ping; He, Dakuo; Jia, Mingxing

    2017-09-01

    This paper focuses on the recursive parameter estimation for the single input single output Hammerstein-Wiener system model, and the study is then extended to a rarely mentioned multiple input single output Hammerstein-Wiener system. Inspired by the extended Kalman filter algorithm, two basic recursive algorithms are derived from the first and the second order Taylor approximation. Based on the form of the first order approximation algorithm, a modified algorithm with larger parameter convergence domain is proposed to cope with the problem of small parameter convergence domain of the first order one and the application limit of the second order one. The validity of the modification on the expansion of convergence domain is shown from the convergence analysis and is demonstrated with two simulation cases. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Agency and Algorithms

    Directory of Open Access Journals (Sweden)

    Hanns Holger Rutz

    2016-11-01

    Full Text Available Although the concept of algorithms has been established a long time ago, their current topicality indicates a shift in the discourse. Classical definitions based on logic seem to be inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity or the space of algorithmic agency. This is the space or the medium – following Luhmann’s form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled “extimate” writing process, human initiative and algorithmic speculation cannot be clearly divided out any longer. An observation is attempted of defining aspects of such a medium by drawing a trajectory across a number of sound pieces. The operation of exchange between form and medium I call reconfiguration and it is indicated by this trajectory. 

  17. An Extensible Component-Based Multi-Objective Evolutionary Algorithm Framework

    DEFF Research Database (Denmark)

    Sørensen, Jan Corfixen; Jørgensen, Bo Nørregaard

    2017-01-01

    The ability to easily modify the problem definition is currently missing in Multi-Objective Evolutionary Algorithms (MOEA). Existing MOEA frameworks do not support dynamic addition and extension of the problem formulation. The existing frameworks require a re-specification of the problem definition...

  18. Generic project definitions for improvement of health care delivery: A case-base approach

    NARCIS (Netherlands)

    Niemeijer, G.C.; Does, R.J.M.M.; de Mast, J.; Trip, A.; van den Heuvel, J.

    2011-01-01

    Background: The purpose of this article is to create actionable knowledge, making the definition of process improvement projects in health care delivery more effective. Methods: This study is a retrospective analysis of process improvement projects in hospitals, facilitating a case-based reasoning

  19. Improved algorithm for solving nonlinear parabolized stability equations

    Science.gov (United States)

    Zhao, Lei; Zhang, Cun-bo; Liu, Jian-xin; Luo, Ji-sheng

    2016-08-01

    Due to its high computational efficiency and its ability to consider nonparallel and nonlinear effects, the nonlinear parabolized stability equations (NPSE) approach has been widely used to study stability and transition mechanisms. However, it often diverges in hypersonic boundary layers when the amplitude of the disturbance reaches a certain level. In this study, an improved algorithm for solving the NPSE is developed. In this algorithm, the mean flow distortion is included in the linear operator instead of in the nonlinear forcing terms of the NPSE. An under-relaxation factor for computing the nonlinear terms is introduced during the iteration process to guarantee the robustness of the algorithm. Two case studies, the nonlinear development of stationary crossflow vortices and the fundamental resonance of the second-mode disturbance in hypersonic boundary layers, are presented to validate the proposed algorithm for the NPSE. Results from direct numerical simulation (DNS) are regarded as the baseline for comparison. Good agreement can be found between the proposed algorithm and DNS, which indicates the great potential of the proposed method for studying crossflow and streamwise instability in hypersonic boundary layers. Project supported by the National Natural Science Foundation of China (Grant Nos. 11332007 and 11402167).
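
    The under-relaxation step used to stabilize the NPSE iteration is the standard damped fixed-point update; a generic sketch (unrelated to the actual NPSE operators) is:
```python
# Generic under-relaxed fixed-point iteration, illustrating the stabilization
# idea mentioned above; g() and the relaxation factor are placeholders.
import numpy as np

def under_relaxed_iteration(g, x0, alpha=0.5, tol=1e-10, max_iter=200):
    """Solve x = g(x) with damped updates x <- (1-alpha)*x + alpha*g(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = (1.0 - alpha) * x + alpha * g(x)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

if __name__ == "__main__":
    g = lambda x: 3.0 * np.cos(x)          # plain iteration diverges (|g'| > 1 at the root)
    print(under_relaxed_iteration(g, x0=1.0, alpha=0.3))   # converges near 1.17
```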

  20. Functional segmentation of dynamic PET studies: Open source implementation and validation of a leader-follower-based algorithm.

    Science.gov (United States)

    Mateos-Pérez, José María; Soto-Montenegro, María Luisa; Peña-Zalbidea, Santiago; Desco, Manuel; Vaquero, Juan José

    2016-02-01

    We present a novel segmentation algorithm for dynamic PET studies that groups pixels according to the similarity of their time-activity curves. Sixteen mice bearing a human tumor cell line xenograft (CH-157MN) were imaged with three different (68)Ga-DOTA-peptides (DOTANOC, DOTATATE, DOTATOC) using a small animal PET-CT scanner. Regional activities (input function and tumor) were obtained after manual delineation of regions of interest over the image. The algorithm was implemented under the jClustering framework and used to extract the same regional activities as in the manual approach. The volume of distribution in the tumor was computed using the Logan linear method. A Kruskal-Wallis test was used to investigate significant differences between the manually and automatically obtained volumes of distribution. The algorithm successfully segmented all the studies. No significant differences were found for the same tracer across different segmentation methods. Manual delineation revealed significant differences between DOTANOC and the other two tracers (DOTANOC - DOTATATE, p=0.020; DOTANOC - DOTATOC, p=0.033). Similar differences were found using the leader-follower algorithm. An open implementation of a novel segmentation method for dynamic PET studies is presented and validated in rodent studies. It successfully replicated the manual results obtained in small-animal studies, thus making it a reliable substitute for this task and, potentially, for other dynamic segmentation procedures. Copyright © 2016 Elsevier Ltd. All rights reserved.
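
    The leader-follower grouping of time-activity curves can be sketched as a single-pass similarity clustering; the similarity measure, threshold and toy curves are assumptions rather than the jClustering implementation:
```python
# Sketch of leader-follower clustering of time-activity curves (TACs): each
# curve joins the most similar existing leader if similarity exceeds a
# threshold, otherwise it starts a new cluster; leaders are running means.
import numpy as np

def correlation(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.corrcoef(a, b)[0, 1])

def leader_follower(tacs: np.ndarray, threshold: float = 0.95):
    leaders, counts, labels = [], [], []
    for tac in tacs:                                   # tacs: (n_pixels, n_frames)
        sims = [correlation(tac, ld) for ld in leaders]
        if sims and max(sims) >= threshold:
            k = int(np.argmax(sims))
            leaders[k] = (leaders[k] * counts[k] + tac) / (counts[k] + 1)
            counts[k] += 1
        else:
            leaders.append(tac.astype(float))
            counts.append(1)
            k = len(leaders) - 1
        labels.append(k)
    return np.array(labels), np.array(leaders)

if __name__ == "__main__":
    t = np.linspace(0, 60, 30)
    blood = np.exp(-t / 10.0)                          # toy input-function-like TAC
    tumor = 1.0 - np.exp(-t / 20.0)                    # toy accumulating TAC
    tacs = np.vstack([blood + 0.01 * np.random.randn(50, 30),
                      tumor + 0.01 * np.random.randn(50, 30)])
    labels, _ = leader_follower(tacs)
    print("clusters found:", len(set(labels.tolist())))
```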

  1. Zero-G experimental validation of a robotics-based inertia identification algorithm

    Science.gov (United States)

    Bruggemann, Jeremy J.; Ferrel, Ivann; Martinez, Gerardo; Xie, Pu; Ma, Ou

    2010-04-01

    The need to efficiently identify the changing inertial properties of on-orbit spacecraft is becoming more critical as satellite on-orbit services, such as refueling and repairing, become increasingly aggressive and complex. This need stems from the fact that a spacecraft's control system relies on knowledge of the spacecraft's inertia parameters. However, the inertia parameters may change during flight for reasons such as fuel usage, payload deployment or retrieval, and docking/capturing operations. New Mexico State University's Dynamics, Controls, and Robotics Research Group has proposed a robotics-based method of identifying unknown spacecraft inertia properties [1]. Previous methods require firing known thrusts and then measuring the thrust and the velocity and acceleration changes. The new method utilizes the concept of momentum conservation, while employing a robotic device powered by renewable energy to excite the state of the satellite. Thus, it requires no fuel usage or force and acceleration measurements. The method has been well studied in theory and demonstrated by simulation. However, its experimental validation is challenging because a 6-degree-of-freedom motion in a zero-gravity condition is required. This paper presents an on-going effort to test the inertia identification method onboard the NASA zero-G aircraft. The design and capability of the test unit are discussed in addition to the flight data. This paper also introduces the design and development of an air-bearing-based test used to partially validate the method, in addition to the approach used to obtain reference values for the test system's inertia parameters that can be used for comparison with the algorithm results.

  2. Self-management interventions: Proposal and validation of a new operational definition.

    Science.gov (United States)

    Jonkman, Nini H; Schuurmans, Marieke J; Jaarsma, Tiny; Shortridge-Baggett, Lillie M; Hoes, Arno W; Trappenburg, Jaap C A

    2016-12-01

    Systematic reviews on complex interventions like self-management interventions often do not explicitly state an operational definition of the intervention studied, which may impact the review's conclusions. This study aimed to propose an operational definition of self-management interventions and determine its discriminative performance compared with other operational definitions. Systematic review of definitions of self-management interventions and consensus meetings with self-management research experts and practitioners. Self-management interventions were defined as interventions that aim to equip patients with skills to actively participate and take responsibility in the management of their chronic condition in order to function optimally through at least knowledge acquisition and a combination of at least two of the following: stimulation of independent sign/symptom monitoring, medication management, enhancing problem-solving and decision-making skills for medical treatment management, and changing their physical activity, dietary, and/or smoking behavior. This definition substantially reduced the number of selected studies (255 of 750). In two preliminary expert meetings (n = 6), the proposed definition was identifiable for self-management research experts and practitioners (80% and 60% agreement, respectively). Future systematic reviews must carefully consider the operational definition of the intervention studied because the definition influences the selection of studies on which conclusions and recommendations for clinical practice are based. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Competing Definitions of Schizophrenia: What Can Be Learned From Polydiagnostic Studies?

    DEFF Research Database (Denmark)

    Jansson, Lennart Bertil; Parnas, Josef

    2006-01-01

    not a product of explicit conceptual analyses and empirical studies but defined through consensus with the purpose of improving reliability. The validity status of current definitions and of their predecessors remains unclear. The so-called "polydiagnostic approach" applies different definitions of a disorder...... to the same patient sample in order to compare these definitions on potential validity indicators. We reviewed 92 polydiagnostic sz studies published since the early 1970s. Different sz definitions show a considerable variation concerning frequency, concordance, reliability, outcome, and other validity...... measures. The DSM-IV and the ICD-10 show moderate reliability but both definitions appear weak in terms of concurrent validity, eg, with respect to an aggregation of a priori important features. The first-rank symptoms of Schneider are not associated with family history of sz or with prediction of poor...

  4. An evaluation of modified case definitions for the detection of dengue hemorrhagic fever. Puerto Rico Association of Epidemiologists.

    Science.gov (United States)

    Rigau-Pérez, J G; Bonilla, G L

    1999-12-01

    The case definition for dengue hemorrhagic fever (DHF) requires fever, platelets < or = 100,000/mm3, and plasma leakage evidenced by hemoconcentration > or = 20%, pleural or abdominal effusions, hypoproteinemia or hypoalbuminemia. We evaluated the specificity and yield of modified DHF case definitions and the recently proposed World Health Organization criteria for a provisional diagnosis of DHF, using a database of laboratory-positive and laboratory-negative reports of hospitalizations for suspected dengue in Puerto Rico, 1994 to 1996. By design, all modifications had 100% sensitivity. More liberal criteria for plasma leakage were examined: 1) adding as evidence a single hematocrit > or = 50% (specificity 97.4%); 2) accepting hemoconcentration > or = 10% (specificity 90.1%); and 3) accepting either hematocrit > or = 50% or hemoconcentration > or = 10% (specificity 88.8%). The new DHF cases identified by these definitions (and percent laboratory positive) were 25 (100.0%), 95 (90.5%), and 107 (91.6%), respectively. In contrast, the provisional diagnosis of DHF (fever and hemorrhage, and one or more of platelets < or = 100,000/mm3, hemoconcentration > or = 20%, or at least a rising hematocrit [redefined quantitatively as a 5% or greater relative change]) showed a specificity of 66.8%, and identified 318 new DHF cases, of which 282 (88.7%) were laboratory-positive. Very small changes in the criteria may result in a large number of new cases. The modification that accepted either hematocrit > or = 50% or hemoconcentration > or = 10% had acceptable specificity, while doubling the detection of DHF-compatible, laboratory-positive severe cases, but the "provisional diagnosis" showed even lower specificity and may produce inflated DHF incidence figures. Modified case definitions should be prospectively evaluated with patients in a health-care facility before they are recommended for widespread use.
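
    The modified plasma-leakage criterion evaluated above (a single hematocrit > or = 50% or hemoconcentration > or = 10%) translates directly into a screening rule; a sketch over hypothetical patient records (field names and values invented) is:
```python
# Sketch: flag plasma leakage under the modified criterion discussed above
# (either a single hematocrit >= 50% or hemoconcentration >= 10%).
# Record fields and example values are hypothetical.

def hemoconcentration(max_hct: float, min_hct: float) -> float:
    """Relative rise of hematocrit, in percent."""
    return 100.0 * (max_hct - min_hct) / min_hct

def plasma_leakage_modified(record: dict) -> bool:
    return (record["max_hematocrit"] >= 50.0
            or hemoconcentration(record["max_hematocrit"], record["min_hematocrit"]) >= 10.0)

patients = [
    {"id": 1, "max_hematocrit": 51.0, "min_hematocrit": 47.0},
    {"id": 2, "max_hematocrit": 44.0, "min_hematocrit": 39.0},   # ~12.8% hemoconcentration
    {"id": 3, "max_hematocrit": 42.0, "min_hematocrit": 41.0},
]
for p in patients:
    print(p["id"], "leakage criterion met:", plasma_leakage_modified(p))
```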

  5. Application of information retrieval approaches to case classification in the vaccine adverse event reporting system.

    Science.gov (United States)

    Botsis, Taxiarchis; Woo, Emily Jane; Ball, Robert

    2013-07-01

    Automating the classification of adverse event reports is an important step to improve the efficiency of vaccine safety surveillance. Previously we showed it was possible to classify reports using features extracted from the text of the reports. The aim of this study was to use the information encoded in the Medical Dictionary for Regulatory Activities (MedDRA(®)) in the US Vaccine Adverse Event Reporting System (VAERS) to support and evaluate two classification approaches: a multiple information retrieval strategy and a rule-based approach. To evaluate the performance of these approaches, we selected the conditions of anaphylaxis and Guillain-Barré syndrome (GBS). We used MedDRA(®) Preferred Terms stored in the VAERS, and two standardized medical terminologies, the Brighton Collaboration (BC) case definitions and Standardized MedDRA(®) Queries (SMQ), to classify two sets of reports for GBS and anaphylaxis. Two approaches were used: (i) the rule-based instruments provided by the two terminologies (the Automatic Brighton Classification [ABC] tool and the SMQ algorithms); and (ii) the vector space model. We found that the rule-based instruments, particularly the SMQ algorithms, achieved a high degree of specificity; however, there was a cost in terms of sensitivity in all but the narrow GBS SMQ algorithm, which outperformed the remaining approaches (sensitivity in the testing set was 99.06% for this algorithm vs. 93.40% for the vector space model). In the case of anaphylaxis, the vector space model achieved higher sensitivity compared with the best values of both the ABC tool and the SMQ algorithms in the testing set (86.44% vs. 64.11% and 52.54%, respectively). Our results showed the superiority of the vector space model over the existing rule-based approaches irrespective of the standardized medical knowledge represented by either the SMQ or the BC case definition. The vector space model might make automation of case definitions for
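
    The vector space model compared here can be sketched with a standard TF-IDF representation and cosine similarity; the toy MedDRA-style term lists and the decision threshold are invented for illustration:
```python
# Sketch of a vector space model for report classification: reports are bags of
# MedDRA-style Preferred Terms, compared by cosine similarity against a case
# definition "query". Term lists and the threshold are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

case_definition_query = "anaphylactic reaction urticaria bronchospasm hypotension angioedema"
reports = [
    "urticaria hypotension wheezing angioedema",        # likely anaphylaxis-compatible
    "injection site erythema pyrexia irritability",     # likely not
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([case_definition_query] + reports)
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

threshold = 0.2                                          # placeholder cut-off
for report, score in zip(reports, scores):
    print(f"{score:.2f}", "POSITIVE" if score >= threshold else "negative", "-", report)
```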

  6. A comparative study and validation of state estimation algorithms for Li-ion batteries in battery management systems

    International Nuclear Information System (INIS)

    Klee Barillas, Joaquín; Li, Jiahao; Günther, Clemens; Danzer, Michael A.

    2015-01-01

    Highlights: • Description of state observers for estimating the battery's SOC. • Implementation of four estimation algorithms in a BMS. • Reliability and performance study of the BMS regarding the estimation algorithms. • Analysis of the robustness and code properties of the estimation approaches. • Guide to evaluating estimation algorithms to improve BMS performance. - Abstract: To increase lifetime, safety, and energy usage, battery management systems (BMS) for Li-ion batteries have to be capable of estimating the state of charge (SOC) of the battery cells with a very low estimation error. Accurate SOC estimation and real-time reliability are critical issues for a BMS. In general, an increasing complexity of the estimation methods leads to higher accuracy. On the other hand, it also leads to a higher computational load and may exceed the BMS limitations or increase its costs. An approach to evaluate and verify estimation algorithms is presented as a requisite prior to the release of the battery system. The approach consists of an analysis concerning the SOC estimation accuracy, the code properties, complexity, the computation time, and the memory usage. Furthermore, a study of estimation methods is proposed for their evaluation and validation with respect to convergence behavior, parameter sensitivity, initialization error, and performance. In this work, the introduced analysis is demonstrated with four of the most published model-based estimation algorithms, including the Luenberger observer, sliding-mode observer, Extended Kalman Filter and Sigma-point Kalman Filter. Experiments under dynamic current conditions are used to verify the real-time functionality of the BMS. The results show that a simple estimation method like the sliding-mode observer can compete with the Kalman-based methods while presenting less computational time and memory usage. Depending on the battery system's application, the estimation algorithm has to be selected to fulfill the
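
    Of the four estimators compared, the Luenberger-type observer is the simplest to sketch: coulomb counting corrected by a feedback term proportional to the terminal-voltage error. The battery model, parameters and gain below are placeholders, not the paper's implementation:
```python
# Hedged sketch of a Luenberger-type SOC observer: coulomb counting plus a
# feedback correction proportional to the terminal-voltage error. The OCV curve,
# internal resistance, capacity and observer gain are placeholder values.
import numpy as np

CAPACITY_AS = 2.5 * 3600      # cell capacity in ampere-seconds (2.5 Ah, assumed)
R_INT = 0.05                  # internal resistance in ohms (assumed)
GAIN = 0.002                  # observer gain (assumed)

def ocv(soc: float) -> float:
    """Toy open-circuit-voltage curve, monotone in SOC."""
    return 3.2 + 1.0 * soc

def observer_step(soc_est: float, current: float, v_meas: float, dt: float) -> float:
    v_model = ocv(soc_est) - R_INT * current
    soc_dot = -current / CAPACITY_AS + GAIN * (v_meas - v_model)
    return float(np.clip(soc_est + dt * soc_dot, 0.0, 1.0))

if __name__ == "__main__":
    true_soc, est = 0.80, 0.50                      # start with a deliberate estimation error
    for _ in range(3600):                           # one hour at 1 s steps, 1 A discharge
        current = 1.0
        true_soc -= current / CAPACITY_AS
        v_meas = ocv(true_soc) - R_INT * current
        est = observer_step(est, current, v_meas, dt=1.0)
    print(f"true SOC={true_soc:.3f}  estimated SOC={est:.3f}")
```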

  7. The Diagnosis of Urinary Tract infection in Young children (DUTY): a diagnostic prospective observational study to derive and validate a clinical algorithm for the diagnosis of urinary tract infection in children presenting to primary care with an acute illness.

    Science.gov (United States)

    Hay, Alastair D; Birnie, Kate; Busby, John; Delaney, Brendan; Downing, Harriet; Dudley, Jan; Durbaba, Stevo; Fletcher, Margaret; Harman, Kim; Hollingworth, William; Hood, Kerenza; Howe, Robin; Lawton, Michael; Lisles, Catherine; Little, Paul; MacGowan, Alasdair; O'Brien, Kathryn; Pickles, Timothy; Rumsby, Kate; Sterne, Jonathan Ac; Thomas-Jones, Emma; van der Voort, Judith; Waldron, Cherry-Ann; Whiting, Penny; Wootton, Mandy; Butler, Christopher C

    2016-07-01

    It is not clear which young children presenting acutely unwell to primary care should be investigated for urinary tract infection (UTI) and whether or not dipstick testing should be used to inform antibiotic treatment. To develop algorithms to accurately identify pre-school children in whom urine should be obtained; assess whether or not dipstick urinalysis provides additional diagnostic information; and model algorithm cost-effectiveness. Multicentre, prospective diagnostic cohort study. Children UTI likelihood ('clinical diagnosis') and urine sampling and treatment intentions ('clinical judgement') were recorded. All index tests were measured blind to the reference standard, defined as a pure or predominant uropathogen cultured at ≥ 10(5) colony-forming units (CFU)/ml in a single research laboratory. Urine was collected by clean catch (preferred) or nappy pad. Index tests were sequentially evaluated in two groups, stratified by urine collection method: parent-reported symptoms with clinician-reported signs, and urine dipstick results. Diagnostic accuracy was quantified using area under receiver operating characteristic curve (AUROC) with 95% confidence interval (CI) and bootstrap-validated AUROC, and compared with the 'clinician diagnosis' AUROC. Decision-analytic models were used to identify optimal urine sampling strategy compared with 'clinical judgement'. A total of 7163 children were recruited, of whom 50% were female and 49% were children provided clean-catch samples, 94% of whom were ≥ 2 years old, with 2.2% meeting the UTI definition. Among these, 'clinical diagnosis' correctly identified 46.6% of positive cultures, with 94.7% specificity and an AUROC of 0.77 (95% CI 0.71 to 0.83). Four symptoms, three signs and three dipstick results were independently associated with UTI with an AUROC (95% CI; bootstrap-validated AUROC) of 0.89 (0.85 to 0.95; validated 0.88) for symptoms and signs, increasing to 0.93 (0.90 to 0.97; validated 0.90) with dipstick
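
    The bootstrap-validated AUROC reported for the symptom/sign models can be illustrated generically; the sketch below uses synthetic data and an optimism-correction bootstrap, not the DUTY dataset or model:
```python
# Sketch: apparent and bootstrap-validated (optimism-corrected) AUROC for a
# simple risk score, on synthetic data standing in for the clinical predictors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))                            # stand-in predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 1.5).astype(int)

model = LogisticRegression().fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

optimism = []
for _ in range(200):                                   # bootstrap optimism correction
    idx = rng.integers(0, n, n)
    m = LogisticRegression().fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimism.append(auc_boot - auc_orig)
validated_auc = apparent_auc - float(np.mean(optimism))
print(f"apparent AUROC={apparent_auc:.3f}  bootstrap-validated AUROC={validated_auc:.3f}")
```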

  8. Quick fuzzy backpropagation algorithm.

    Science.gov (United States)

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm called QuickFBP algorithm is proposed, where the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP, resp. the FBP algorithm are defined and proved for: (1) single output neural networks in case of training patterns with different targets; and (2) multiple output neural networks in case of training patterns with equivalued target vector. They support the automation of the weights training process (quasi-unsupervised learning) establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP rather than the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability in areas of adaptive and adaptable interactive systems, data mining, etc. applications.

  9. GPM GROUND VALIDATION GCPEX SNOW MICROPHYSICS CASE STUDY V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation GCPEX Snow Microphysics Case Study characterizes the 3-D microphysical evolution and distribution of snow in context of the thermodynamic...

  10. Severe Chronic Upper Airway Disease (SCUAD) in children. Definition issues and requirements.

    Science.gov (United States)

    Karatzanis, A; Kalogjera, L; Scadding, G; Velegrakis, S; Kawauchi, H; Cingi, C; Prokopakis, E

    2015-07-01

    Upper airway diseases are extremely common, and a significant proportion of patients are not adequately controlled by contemporary treatment algorithms. The term SCUAD (Severe Chronic Upper Airway Disease) has been previously introduced to describe such cases. However, this term has not been adequately focused on children. This study aims to address the necessity of the term, as well as further details specifically for children. For this purpose, a review was performed of the current literature, with specific focus on issues regarding SCUAD in children. Paediatric SCUAD represents a heterogeneous group of patients and has significant clinical and socioeconomic implications. Relevant literature is generally lacking and questions regarding definition and pathogenesis remain unanswered. Accurate definition and acknowledgement of paediatric SCUAD cases may lead to better design of future clinical and molecular research protocols. This may provide improved understanding of the underlying disease processes, more accurate data regarding socioeconomic burden, and, above all, more successful treatment and prevention strategies. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  11. Validating archetypes for the Multiple Sclerosis Functional Composite.

    Science.gov (United States)

    Braun, Michael; Brandt, Alexander Ulrich; Schulz, Stefan; Boeker, Martin

    2014-08-03

    Numerous information models for electronic health records, such as openEHR archetypes, are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects have not yet been sufficiently addressed. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models amending the formal reference model definitions. A standard archetype development approach was applied to a case set of three clinical tests for multiple sclerosis assessment: After an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process. Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. Afterwards, these four archetypes were validated by domain experts in a team review. The review was a formalised process, organised in the Clinical Knowledge Manager. Both the development and the review process turned out to be time-consuming tasks, mostly due to difficult selection processes between alternative modelling approaches. The archetype review was a straightforward team process with the goal of validating archetypes pragmatically. The quality of medical information models is crucial to guarantee standardised semantic representation in order to improve interoperability. The validation process is a practical way to better harmonise models that diverge due to necessary flexibility left open by the underlying formal reference model definitions. This case study provides evidence that both community- and tool-enabled review processes

  12. Upwind algorithm for the parabolized Navier-Stokes equations

    Science.gov (United States)

    Lawrence, Scott L.; Tannehill, John C.; Chausee, Denny S.

    1989-01-01

    A new upwind algorithm based on Roe's scheme has been developed to solve the two-dimensional parabolized Navier-Stokes equations. This method does not require the addition of user-specified smoothing terms for the capture of discontinuities such as shock waves. Thus, the method is easy to use and can be applied without modification to a wide variety of supersonic flowfields. The advantages and disadvantages of this adaptation are discussed in relation to those of the conventional Beam-Warming (1978) scheme in terms of accuracy, stability, computer time and storage requirements, and programming effort. The new algorithm has been validated by applying it to three laminar test cases, including flat-plate boundary-layer flow, hypersonic flow past a 15-deg compression corner, and hypersonic flow into a converging inlet. The computed results compare well with experiment and show a dramatic improvement in the resolution of flowfield details when compared with results obtained using the conventional Beam-Warming algorithm.

  13. Development of a case definition for clinical feline herpesvirus infection in cheetahs (Acinonyx jubatus) housed in zoos.

    Science.gov (United States)

    Witte, Carmel L; Lamberski, Nadine; Rideout, Bruce A; Fields, Victoria; Teare, Cyd Shields; Barrie, Michael; Haefele, Holly; Junge, Randall; Murray, Suzan; Hungerford, Laura L

    2013-09-01

    The identification of feline herpesvirus (FHV) infected cheetahs (Acinonyx jubatus) and characterization of shedding episodes is difficult due to nonspecific clinical signs and limitations of diagnostic tests. The goals of this study were to develop a case definition for clinical FHV and describe the distribution of signs. Medical records from six different zoologic institutions were reviewed to identify cheetahs with diagnostic test results confirming FHV. Published literature, expert opinion, and results of a multiple correspondence analysis (MCA) were used to develop a clinical case definition based on 69 episodes in FHV laboratory confirmed (LC) cheetahs. Four groups of signs were identified in the MCA: general ocular signs, serious ocular lesions, respiratory disease, and cutaneous lesions. Ocular disease occurred with respiratory signs alone, with skin lesions alone, and with both respiratory signs and skin lesions. Groups that did not occur together were respiratory signs and skin lesions. The resulting case definition included 1) LC cheetahs; and 2) clinically compatible (CC) cheetahs that exhibited a minimum of 7 day's duration of the clinical sign groupings identified in the MCA or the presence of corneal ulcers or keratitis that occurred alone or in concert with other ocular signs and skin lesions. Exclusion criteria were applied. Application of the case definition to the study population identified an additional 78 clinical episodes, which represented 58 CC cheetahs. In total, 28.8% (93/322) of the population was identified as LC or CC. The distribution of identified clinical signs was similar across LC and CC cheetahs. Corneal ulcers and/or keratitis, and skin lesions were more frequently reported in severe episodes; in mild episodes, there were significantly more cheetahs with ocular-only or respiratory-only disease. Our results provide a better understanding of the clinical presentation of FHV, while presenting a standardized case definition that can

  14. Algorithm for computing significance levels using the Kolmogorov-Smirnov statistic and valid for both large and small samples

    Energy Technology Data Exchange (ETDEWEB)

    Kurtz, S.E.; Fields, D.E.

    1983-10-01

    The KSTEST code presented here is designed to perform the Kolmogorov-Smirnov one-sample test. The code may be used as a stand-alone program or the principal subroutines may be excerpted and used to service other programs. The Kolmogorov-Smirnov one-sample test is a nonparametric goodness-of-fit test. A number of codes to perform this test are in existence, but they suffer from the inability to provide meaningful results in the case of small sample sizes (number of values less than or equal to 80). The KSTEST code overcomes this inadequacy by using two distinct algorithms. If the sample size is greater than 80, an asymptotic series developed by Smirnov is evaluated. If the sample size is 80 or less, a table of values generated by Birnbaum is referenced. Valid results can be obtained from KSTEST when the sample contains from 3 to 300 data points. The program was developed on a Digital Equipment Corporation PDP-10 computer using the FORTRAN-10 language. The code size is approximately 450 card images and the typical CPU execution time is 0.19 s.
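
    A minimal sketch of the two-regime structure described above, in Python rather than FORTRAN: the large-sample branch evaluates a Smirnov-type asymptotic series (with a commonly used finite-n correction), while the small-sample branch uses SciPy's exact distribution of the statistic in place of Birnbaum's tables. The threshold of 80 follows the abstract.

    ```python
    import numpy as np
    from scipy import stats

    def ks_one_sample(x, cdf):
        """One-sample Kolmogorov-Smirnov test against a fully specified CDF."""
        x = np.sort(np.asarray(x, dtype=float))
        n = x.size
        ecdf_hi = np.arange(1, n + 1) / n
        ecdf_lo = np.arange(0, n) / n
        d = max(np.max(ecdf_hi - cdf(x)), np.max(cdf(x) - ecdf_lo))   # KS statistic D_n
        if n > 80:
            # Smirnov's asymptotic tail series, with a common finite-n correction factor
            t = (np.sqrt(n) + 0.12 + 0.11 / np.sqrt(n)) * d
            j = np.arange(1, 101)
            p = 2.0 * np.sum((-1.0) ** (j - 1) * np.exp(-2.0 * (j * t) ** 2))
        else:
            # small samples: exact distribution of D_n instead of tabulated values
            p = stats.kstwo.sf(d, n)
        return d, float(np.clip(p, 0.0, 1.0))

    # example: test 50 standard-normal draws against the normal CDF
    d, p = ks_one_sample(np.random.default_rng(0).normal(size=50), stats.norm.cdf)
    ```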

  15. Sputum, sex and scanty smears: new case definition may reduce sex disparities in smear-positive tuberculosis.

    Science.gov (United States)

    Ramsay, A; Bonnet, M; Gagnidze, L; Githui, W; Varaine, F; Guérin, P J

    2009-05-01

    Urban clinic, Nairobi. To evaluate the impact of specimen quality and different smear-positive tuberculosis (TB) case (SPC) definitions on SPC detection by sex. Prospective study among TB suspects. A total of 695 patients were recruited: 644 produced ≥1 specimen for microscopy. The male/female sex ratio was 0.8. There were no significant differences in the numbers of men and women submitting three specimens (274/314 vs. 339/380, P = 0.43). Significantly more men than women produced a set of three 'good' quality specimens (175/274 vs. 182/339, P = 0.01). Lowering thresholds for definitions to include scanty smears resulted in increases in SPC detection in both sexes; the increase was significantly higher for women. The revised World Health Organization (WHO) case definition was associated with the highest detection rates in women. When analysis was restricted only to patients submitting 'good' quality specimen sets, the difference in detection between sexes was on the threshold for significance (P = 0.05). Higher SPC notification rates in men are commonly reported by TB control programmes. The revised WHO SPC definition may reduce sex disparities in notification. This should be considered when evaluating other interventions aimed at reducing these disparities. Further study is required on the effects of the human immunodeficiency virus and of instructed specimen collection on the sex-specific impact of the new SPC definition.

  16. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  17. Merits and limitations of the mode switching rate stabilization pacing algorithms in the implantable cardioverter defibrillator.

    Science.gov (United States)

    Dijkman, B; Wellens, H J

    2001-09-01

    The Medtronic Jewel AF model 7250 ICD is the first implantable device in which both therapies for atrial arrhythmias and pacing algorithms for atrial arrhythmia prevention are available. Feasibility of this extensive atrial arrhythmia management requires correct and synergistic functioning of the different algorithms used to control arrhythmias. The ability of the new pacing algorithms to stabilize the atrial rate following termination of treated atrial arrhythmias was evaluated in the marker channel registration of 600 spontaneously occurring episodes in 15 patients with the Jewel AF. All patients (55+/-15 years) had structural heart disease and documented atrial and ventricular arrhythmias. Dual chamber rate stabilization pacing was present in 245 (41%) of episodes following arrhythmia termination and was a part of the mode switching operation during which pacing was provided in the dynamic DDI mode. This algorithm could function as atrial rate stabilization pacing only when there was a slow spontaneous atrial rhythm or in the presence of atrial premature beats conducted to the ventricles with a normal AV time. In the case of atrial premature beats with delayed or absent conduction to the ventricles, and in the case of ventricular premature beats, the algorithm stabilized the ventricular rate. The rate stabilization pacing in DDI mode during sinus rhythm following atrial arrhythmia termination was often extended in time due to the device-based definition of arrhythmia termination. This was also the case in patients in whom the DDD mode with the true atrial rate stabilization algorithm was programmed. The rate stabilization algorithms in the Jewel AF applied after atrial arrhythmia termination provide pacing that is not based on the timing of atrial events. Only under certain circumstances can the algorithm function as atrial rate stabilization pacing. Adjustments in availability and functioning of the rate stabilization algorithms might be of benefit for the clinical performance of

  18. Improved numerical algorithm and experimental validation of a system thermal-hydraulic/CFD coupling method for multi-scale transient simulations of pool-type reactors

    International Nuclear Information System (INIS)

    Toti, A.; Vierendeels, J.; Belloni, F.

    2017-01-01

    Highlights: • A system thermal-hydraulic/CFD coupling methodology is proposed for high-fidelity transient flow analyses. • The method is based on domain decomposition and an implicit numerical scheme. • A novel interface Quasi-Newton algorithm is implemented to improve stability and convergence rate. • Preliminary validation analyses on the TALL-3D experiment. - Abstract: The paper describes the development and validation of a coupling methodology between the best-estimate system thermal-hydraulic code RELAP5-3D and the CFD code FLUENT, conceived for high-fidelity plant-scale safety analyses of pool-type reactors. The computational tool is developed to assess the impact of three-dimensional phenomena occurring in accidental transients such as loss of flow (LOF) in the research reactor MYRRHA, currently in the design phase at the Belgian Nuclear Research Centre, SCK•CEN. A partitioned, implicit domain decomposition coupling algorithm is implemented, in which the coupled domains exchange thermal-hydraulic variables at the coupling boundary interfaces. Numerical stability and interface convergence rates are improved by a novel interface Quasi-Newton algorithm, which is compared in this paper with previously tested numerical schemes. The developed computational method has been assessed for validation purposes against the experiment performed at the test facility TALL-3D, operated by the Royal Institute of Technology (KTH) in Sweden. This paper details the results of the simulation of a loss of forced convection test, showing the capability of the developed methodology to predict transients influenced by local three-dimensional phenomena.
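
    The paper's interface Quasi-Newton scheme accelerates a partitioned fixed-point iteration on the interface variables. As a simpler stand-in, the sketch below shows the same coupling structure with Aitken dynamic under-relaxation; `solver_a` and `solver_b` are hypothetical callables standing for the system-code and CFD sub-domains.

    ```python
    import numpy as np

    def coupled_solve(solver_a, solver_b, x0, tol=1e-8, max_iter=100):
        """Partitioned fixed-point iteration on interface data x, composite map x -> solver_b(solver_a(x))."""
        x = np.asarray(x0, dtype=float)
        omega = 0.5                          # initial under-relaxation factor
        r_prev = None
        for _ in range(max_iter):
            x_tilde = solver_b(solver_a(x))  # one pass through both sub-domains
            r = x_tilde - x                  # interface residual
            if np.linalg.norm(r) < tol:
                return x_tilde
            if r_prev is not None:
                dr = r - r_prev
                omega = -omega * float(r_prev @ dr) / float(dr @ dr)   # Aitken update of omega
            x = x + omega * r
            r_prev = r
        return x
    ```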

  19. Accuracy of SIAscopy for pigmented skin lesions encountered in primary care: development and validation of a new diagnostic algorithm.

    Science.gov (United States)

    Emery, Jon D; Hunter, Judith; Hall, Per N; Watson, Anthony J; Moncrieff, Marc; Walter, Fiona M

    2010-09-25

    Diagnosing pigmented skin lesions in general practice is challenging. SIAscopy has been shown to increase diagnostic accuracy for melanoma in referred populations. We aimed to develop and validate a scoring system for SIAscopic diagnosis of pigmented lesions in primary care. This study was conducted in two consecutive settings in the UK and Australia, and occurred in three stages: 1) Development of the primary care scoring algorithm (PCSA) on a sub-set of lesions from the UK sample; 2) Validation of the PCSA on a different sub-set of lesions from the same UK sample; 3) Validation of the PCSA on a new set of lesions from an Australian primary care population. Patients presenting with a pigmented lesion were recruited from 6 general practices in the UK and 2 primary care skin cancer clinics in Australia. The following data were obtained for each lesion: clinical history; SIAscan; digital photograph; and digital dermoscopy. SIAscans were interpreted by an expert and validated against histopathology where possible, or expert clinical review of all available data for each lesion. A total of 858 patients with 1,211 lesions were recruited. Most lesions were benign naevi (64.8%) or seborrhoeic keratoses (22.1%); 1.2% were melanoma. The original SIAscopic diagnostic algorithm did not perform well because of the higher prevalence of seborrhoeic keratoses and haemangiomas seen in primary care. A primary care scoring algorithm (PCSA) was developed to account for this. In the UK sample the PCSA had the following characteristics for the diagnosis of 'suspicious': sensitivity 0.50 (0.18-0.81); specificity 0.84 (0.78-0.88); PPV 0.09 (0.03-0.22); NPV 0.98 (0.95-0.99). In the Australian sample the PCSA had the following characteristics for the diagnosis of 'suspicious': sensitivity 0.44 (0.32-0.58); specificity 0.95 (0.93-0.97); PPV 0.52 (0.38-0.66); NPV 0.95 (0.92-0.96). In an analysis of lesions for which histological diagnosis was available (n = 111), the PCSA had a significantly
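
    The validity indices quoted above (sensitivity, specificity, PPV, NPV) come from a 2x2 cross-tabulation of the scoring algorithm's output against the reference diagnosis; a minimal helper, with purely illustrative counts:

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        """Point estimates of the usual validity indices from 2x2 counts."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # hypothetical counts, not taken from the study
    print(diagnostic_metrics(tp=28, fp=31, fn=28, tn=500))
    ```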

  20. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  1. Testing Pneumonia Vaccines in the Elderly: Determining a Case Definition for Pneumococcal Pneumonia in the Absence of a Gold Standard.

    Science.gov (United States)

    Jokinen, Jukka; Snellman, Marja; Palmu, Arto A; Saukkoriipi, Annika; Verlant, Vincent; Pascal, Thierry; Devaster, Jeanne-Marie; Hausdorff, William P; Kilpi, Terhi M

    2017-12-15

    Clinical assessments of vaccines to prevent pneumococcal (Pnc) community-acquired pneumonia (CAP) require sensitive and specific case definitions, but there is no gold standard diagnostic test. To develop a new case definition suitable for vaccine efficacy studies, we applied latent class analysis (LCA) to the results from seven diagnostic tests for Pnc etiology on clinical specimens from 323 elderly radiologically-confirmed pneumonia cases enrolled in The Finnish Community-Acquired Pneumonia Epidemiology study during 2005-2007. Compared to the conventional use of LCA, which is mainly to determine sensitivities and specificities of different tests, we instead used LCA as an appropriate instrument to predict the probability of Pnc etiology for each CAP case based on their test profiles, and utilized the predictions to minimize the sample size that would be needed for a vaccine efficacy trial. When compared to the conventional laboratory criteria of encapsulated Pnc in blood culture or in high-quality sputum culture or urine antigen positivity, our optimized case definition for PncCAP resulted in a trial sample size which was almost 20,000 subjects smaller. We believe that our novel application of LCA detailed here to determine a case definition for PncCAP could also be similarly applied to other diseases without a gold standard. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
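
    A minimal sketch of the underlying idea: a two-class latent class model for a case-by-test matrix of binary results, fitted by EM under conditional independence, returning each case's posterior probability of pneumococcal etiology. This is a bare-bones illustration, not the covariate-adjusted LCA used in the study.

    ```python
    import numpy as np

    def latent_class_em(results, n_iter=500):
        """Two-class LCA for binary test results (cases x tests); returns posterior P(class 1)."""
        y = np.asarray(results, dtype=float)          # shape (n_cases, n_tests), values 0/1
        n, t = y.shape
        prev = 0.5                                    # latent class prevalence
        sens = np.full(t, 0.8)                        # P(test+ | class 1)
        fpr = np.full(t, 0.2)                         # P(test+ | class 0)
        for _ in range(n_iter):
            # E-step: posterior class membership given current parameters
            l1 = prev * np.prod(sens**y * (1 - sens)**(1 - y), axis=1)
            l0 = (1 - prev) * np.prod(fpr**y * (1 - fpr)**(1 - y), axis=1)
            post = l1 / (l1 + l0)
            # M-step: update prevalence, per-test sensitivities and false-positive rates
            prev = post.mean()
            sens = np.clip((post[:, None] * y).sum(0) / post.sum(), 1e-6, 1 - 1e-6)
            fpr = np.clip(((1 - post)[:, None] * y).sum(0) / (1 - post).sum(), 1e-6, 1 - 1e-6)
        return post
    ```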

  2. Identifying Primary Spontaneous Pneumothorax from Administrative Databases: A Validation Study

    Directory of Open Access Journals (Sweden)

    Eric Frechette

    2016-01-01

    Introduction. Primary spontaneous pneumothorax (PSP) is a disorder commonly encountered in healthy young individuals. There is no differentiation between PSP and secondary pneumothorax (SP) in the current version of the International Classification of Diseases (ICD-10). This complicates the conduct of epidemiological studies on the subject. Objective. To validate the accuracy of an algorithm that identifies cases of PSP from administrative databases. Methods. The charts of 150 patients who consulted the emergency room (ER) with a recorded main diagnosis of pneumothorax were reviewed to define the type of pneumothorax that occurred. The corresponding hospital administrative data collected during previous hospitalizations and ER visits were processed through the proposed algorithm. The results were compared over two different age groups. Results. There were 144 cases of pneumothorax correctly coded (96%). The results obtained from the PSP algorithm demonstrated a significantly higher sensitivity (97% versus 81%, p=0.038) and positive predictive value (87% versus 46%, p<0.001) in patients under 40 years of age than in older patients. Conclusions. The proposed algorithm is adequate to identify cases of PSP from administrative databases in the age group classically associated with the disease. This makes possible its utilization in large population-based studies.

  3. Preliminary evaluation of the MLAA algorithm with the Philips Ingenuity PET/MR

    International Nuclear Information System (INIS)

    Lougovski, Alexandr; Schramm, Georg; Maus, Jens; Hofheinz, Frank; van den Hoff, Jörg

    2014-01-01

    Combined PET/MR is a promising tool for simultaneous investigation of soft tissue morphology and function. However, contrary to CT, MR images do not provide information on photon attenuation in tissue. In the currently available systems this issue is solved by synthesizing attenuation maps from MR images using segmentation algorithms. This approach has been shown to provide reasonable results in most cases. However, sporadically occurring segmentation errors can cause serious problems. Recently, algorithms for simultaneous estimation of attenuation and tracer distribution (MLAA) have been introduced. So far, the validity of MLAA has mainly been demonstrated on simulated data. We have integrated the MLAA algorithm [2] into the THOR reconstruction framework. An evaluation of MLAA was performed using both phantom and patient data acquired with the Ingenuity PET/MR.

  4. Molecular phylogenetic trees - On the validity of the Goodman-Moore augmentation algorithm

    Science.gov (United States)

    Holmquist, R.

    1979-01-01

    A response is made to the reply of Nei and Tateno (1979) to the letter of Holmquist (1978) supporting the validity of the augmentation algorithm of Moore (1977) in reconstructions of nucleotide substitutions by means of the maximum parsimony principle. It is argued that the overestimation of the augmented numbers of nucleotide substitutions (augmented distances) found by Tateno and Nei (1978) is due to an unrepresentative data sample and that it is only necessary that evolution be stochastically uniform in different regions of the phylogenetic network for the augmentation method to be useful. The importance of the average value of the true distance over all links is explained, and the relative variances of the true and augmented distances are calculated to be almost identical. The effects of topological changes in the phylogenetic tree on the augmented distance and the question of the correctness of ancestral sequences inferred by the method of parsimony are also clarified.

  5. Evaluation of Operational Albedo Algorithms For AVHRR, MODIS and VIIRS: Case Studies in Southern Africa

    Science.gov (United States)

    Privette, J. L.; Schaaf, C. B.; Saleous, N.; Liang, S.

    2004-12-01

    Shortwave broadband albedo is the fundamental surface variable that partitions solar irradiance into energy available to the land biophysical system and energy reflected back into the atmosphere. Albedo varies with land cover, vegetation phenological stage, surface wetness, solar angle, and atmospheric condition, among other variables. For these reasons, a consistent and normalized albedo time series is needed to accurately model weather, climate and ecological trends. Although an empirically-derived coarse-scale albedo from the 20-year NOAA AVHRR record (Sellers et al., 1996) is available, an operational moderate resolution global product first became available from NASA's MODIS sensor. The validated MODIS product now provides the benchmark upon which to compare albedo generated through 1) reprocessing of the historic AVHRR record and 2) operational processing of data from the future National Polar-Orbiting Environmental Satellite System's (NPOESS) Visible/Infrared Imager Radiometer Suite (VIIRS). Unfortunately, different instrument characteristics (e.g., spectral bands, spatial resolution), processing approaches (e.g., latency requirements, ancillary data availability) and even product definitions (black sky albedo, white sky albedo, actual or blue sky albedo) complicate the development of the desired multi-mission (AVHRR to MODIS to VIIRS) albedo time series -- a so-called Climate Data Record. This presentation will describe the different albedo algorithms used with AVHRR, MODIS and VIIRS, and compare their results against field measurements collected over two semi-arid sites in southern Africa. We also describe the MODIS-derived VIIRS proxy data we developed to predict NPOESS albedo characteristics. We conclude with a strategy to develop a seamless Climate Data Record from 1982- to 2020.

  6. Development of a Mobile Robot Test Platform and Methods for Validation of Prognostics-Enabled Decision Making Algorithms

    Directory of Open Access Journals (Sweden)

    Jose R. Celaya

    2013-01-01

    As fault diagnosis and prognosis systems in aerospace applications become more capable, the ability to utilize information supplied by them becomes increasingly important. While certain types of vehicle health data can be effectively processed and acted upon by crew or support personnel, others, due to their complexity or time constraints, require either automated or semi-automated reasoning. Prognostics-enabled Decision Making (PDM) is an emerging research area that aims to integrate prognostic health information and knowledge about the future operating conditions into the process of selecting subsequent actions for the system. The newly developed PDM algorithms require suitable software and hardware platforms for testing under realistic fault scenarios. The paper describes the development of such a platform, based on the K11 planetary rover prototype. A variety of injectable fault modes are being investigated for electrical, mechanical, and power subsystems of the testbed, along with methods for data collection and processing. In addition to the hardware platform, a software simulator with matching capabilities has been developed. The simulator allows for prototyping and initial validation of the algorithms prior to their deployment on the K11. The simulator is also available to the PDM algorithms to assist with the reasoning process. A reference set of diagnostic, prognostic, and decision making algorithms is also described, followed by an overview of the current test scenarios and the results of their execution on the simulator.

  7. Impact of revising the National Nosocomial Infection Surveillance System definition for catheter-related bloodstream infection in ICU: reproducibility of the National Healthcare Safety Network case definition in an Australian cohort of infection control professionals.

    Science.gov (United States)

    Worth, Leon J; Brett, Judy; Bull, Ann L; McBryde, Emma S; Russo, Philip L; Richards, Michael J

    2009-10-01

    Effective and comparable surveillance for central venous catheter-related bloodstream infections (CLABSIs) in the intensive care unit requires a reproducible case definition that can be readily applied by infection control professionals. Using a questionnaire containing clinical cases, reproducibility of the National Nosocomial Infection Surveillance System (NNIS) surveillance definition for CLABSI was assessed in an Australian cohort of infection control professionals participating in the Victorian Hospital Acquired Infection Surveillance System (VICNISS). The same questionnaire was then used to evaluate the reproducibility of the National Healthcare Safety Network (NHSN) surveillance definition for CLABSI. Target hospitals were defined as large metropolitan (1A) or other large hospitals (non-1A), according to the Victorian Department of Human Services. Questionnaire responses of Centers for Disease Control and Prevention NHSN surveillance experts were used as gold standard comparator. Eighteen of 21 eligible VICNISS centers participated in the survey. Overall concordance with the gold standard was 57.1%, and agreement was highest for 1A hospitals (60.6%). The proportion of congruently classified cases varied according to NNIS criteria: criterion 1 (recognized pathogen), 52.8%; criterion 2a (skin contaminant in 2 or more blood cultures), 83.3%; criterion 2b (skin contaminant in 1 blood culture and appropriate antimicrobial therapy instituted), 58.3%; non-CLABSI cases, 51.4%. When survey questions regarding identification of cases of CLABSI criterion 2b were removed (consistent with the current NHSN definition), overall percentage concordance increased to 62.5% (72.2% for 1A centers). Further educational interventions are required to improve the discrimination of primary and secondary causes of bloodstream infection in Victorian intensive care units. Although reproducibility of the CLABSI case definition is relatively poor, adoption of the revised NHSN definition

  8. A comparison of updating algorithms for large N reduced models

    Energy Technology Data Exchange (ETDEWEB)

    Pérez, Margarita García [Instituto de Física Teórica UAM-CSIC, Universidad Autónoma de Madrid,Nicolás Cabrera 13-15, E-28049-Madrid (Spain); González-Arroyo, Antonio [Instituto de Física Teórica UAM-CSIC, Universidad Autónoma de Madrid,Nicolás Cabrera 13-15, E-28049-Madrid (Spain); Departamento de Física Teórica, C-XI Universidad Autónoma de Madrid,E-28049 Madrid (Spain); Keegan, Liam [PH-TH, CERN,CH-1211 Geneva 23 (Switzerland); Okawa, Masanori [Graduate School of Science, Hiroshima University,Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Core of Research for the Energetic Universe, Hiroshima University,Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Ramos, Alberto [PH-TH, CERN,CH-1211 Geneva 23 (Switzerland)

    2015-06-29

    We investigate Monte Carlo updating algorithms for simulating SU(N) Yang-Mills fields on a single-site lattice, such as for the Twisted Eguchi-Kawai model (TEK). We show that performing only over-relaxation (OR) updates of the gauge links is a valid simulation algorithm for the Fabricius and Haan formulation of this model, and that this decorrelates observables faster than using heat-bath updates. We consider two different methods of implementing the OR update: either updating the whole SU(N) matrix at once, or iterating through SU(2) subgroups of the SU(N) matrix, we find the same critical exponent in both cases, and only a slight difference between the two.

  9. A comparison of updating algorithms for large $N$ reduced models

    CERN Document Server

    Pérez, Margarita García; Keegan, Liam; Okawa, Masanori; Ramos, Alberto

    2015-01-01

    We investigate Monte Carlo updating algorithms for simulating $SU(N)$ Yang-Mills fields on a single-site lattice, such as for the Twisted Eguchi-Kawai model (TEK). We show that performing only over-relaxation (OR) updates of the gauge links is a valid simulation algorithm for the Fabricius and Haan formulation of this model, and that this decorrelates observables faster than using heat-bath updates. We consider two different methods of implementing the OR update: either updating the whole $SU(N)$ matrix at once, or iterating through $SU(2)$ subgroups of the $SU(N)$ matrix, we find the same critical exponent in both cases, and only a slight difference between the two.

  10. Evidence-Based Diagnostic Algorithm for Glioma: Analysis of the Results of Pathology Panel Review and Molecular Parameters of EORTC 26951 and 26882 Trials.

    Science.gov (United States)

    Kros, Johan M; Huizer, Karin; Hernández-Laín, Aurelio; Marucci, Gianluca; Michotte, Alex; Pollo, Bianca; Rushing, Elisabeth J; Ribalta, Teresa; French, Pim; Jaminé, David; Bekka, Nawal; Lacombe, Denis; van den Bent, Martin J; Gorlia, Thierry

    2015-06-10

    With the rapid discovery of prognostic and predictive molecular parameters for glioma, the status of histopathology in the diagnostic process should be scrutinized. Our project aimed to construct a diagnostic algorithm for gliomas based on molecular and histologic parameters with independent prognostic values. The pathology slides of 636 patients with gliomas who had been included in EORTC 26951 and 26882 trials were reviewed using virtual microscopy by a panel of six neuropathologists who independently scored 18 histologic features and provided an overall diagnosis. The molecular data for IDH1, 1p/19q loss, EGFR amplification, loss of chromosome 10 and chromosome arm 10q, gain of chromosome 7, and hypermethylation of the promoter of MGMT were available for some of the cases. The slides were divided into discovery (n = 426) and validation (n = 210) sets. The diagnostic algorithm resulting from analysis of the discovery set was validated in the latter. In 66% of cases, consensus of overall diagnosis was present. A diagnostic algorithm consisting of two molecular markers and one consensus histologic feature was created by conditional inference tree analysis. The order of prognostic significance was: 1p/19q loss, EGFR amplification, and astrocytic morphology, which resulted in the identification of four diagnostic nodes. Validation of the nodes in the validation set confirmed their prognostic value (P < .001), yielding a diagnostic algorithm for anaplastic glioma based on multivariable analysis of consensus histopathology and molecular parameters. © 2015 by American Society of Clinical Oncology.
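
    The resulting decision structure can be read as a short cascade over the three markers in their reported order of prognostic significance; the sketch below only illustrates that ordering, with hypothetical node labels, and is not the trial's validated tree.

    ```python
    def diagnostic_node(loss_1p19q: bool, egfr_amplified: bool, astrocytic_morphology: bool) -> str:
        """Assign one of four illustrative diagnostic nodes following the reported marker order."""
        if loss_1p19q:
            return "node 1: 1p/19q co-deleted"
        if egfr_amplified:
            return "node 2: EGFR amplified"
        if astrocytic_morphology:
            return "node 3: astrocytic morphology"
        return "node 4: none of the above"
    ```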

  11. Improving Accuracy and Simplifying Training in Fingerprinting-Based Indoor Location Algorithms at Room Level

    Directory of Open Access Journals (Sweden)

    Mario Muñoz-Organero

    2016-01-01

    Fingerprinting-based algorithms are popular in indoor location systems based on mobile devices. By comparing the RSSI (Received Signal Strength Indicator) from different radio wave transmitters, such as Wi-Fi access points, with prerecorded fingerprints from located points (using different artificial intelligence algorithms), fingerprinting-based systems can locate unknown points with a few meters resolution. However, training the system with already located fingerprints tends to be an expensive task both in time and in resources, especially if large areas are to be considered. Moreover, the decision algorithms tend to be highly memory- and CPU-consuming in such cases, as is the time required to obtain the estimated location for a new fingerprint. In this paper, we study, propose, and validate a way to select the locations for the training fingerprints which reduces the number of required points while improving the accuracy of the algorithms when locating points at room-level resolution. We present a comparison of different artificial intelligence decision algorithms and select those with better results. We compare our approach with other systems in the literature and draw conclusions about the improvements obtained by our proposal. Moreover, some techniques such as filtering nonstable access points for improving accuracy are introduced, studied, and validated.
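
    At its core, room-level fingerprinting reduces to a nearest-neighbour vote over stored RSSI vectors. A minimal k-NN sketch, assuming fingerprints are aligned to a fixed list of access points with missing readings pre-filled (e.g. with -100 dBm):

    ```python
    import numpy as np
    from collections import Counter

    def locate_room(fingerprint, train_rssi, train_rooms, k=3):
        """Return the majority room label among the k nearest training fingerprints."""
        d = np.linalg.norm(np.asarray(train_rssi, float) - np.asarray(fingerprint, float), axis=1)
        nearest = np.argsort(d)[:k]
        return Counter(np.asarray(train_rooms)[nearest]).most_common(1)[0][0]
    ```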

  12. Validation of Varian's AAA algorithm with focus on lung treatments

    International Nuclear Information System (INIS)

    Roende, Heidi S.; Hoffmann, Lone

    2009-01-01

    The objective of this study was to examine the accuracy of the Anisotropic Analytical Algorithm (AAA). A variety of different field configurations in homogeneous and in inhomogeneous media (lung geometry) was tested for the AAA algorithm. It was also tested against the present Pencil Beam Convolution (PBC) algorithm. Materials and methods. Two dimensional (2D) dose distributions were measured for a variety of different field configurations in solid water with a 2D array of ion chambers. The dose distributions of patient specific treatment plans in selected transversal slices were measured in a Thorax lung phantom with Gafchromic dosimetry films. A Farmer ion chamber was used to check point doses in the Thorax phantom. The 2D dose distributions were evaluated with a gamma criterion of 3% in dose and 3 mm distance to agreement (DTA) for the 2D array measurements and for the film measurements. Results. For AAA, all fields tested in homogeneous media fulfilled the criterion, except asymmetric fields with wedges and intensity modulated plans where deviations of 5 and 4%, respectively, were seen. Overall, the measured and calculated 2D dose distributions for AAA in the Thorax phantom showed good agreement - both for 6 and 15 MV photons. More than 80% of the points in the high dose regions met the gamma criterion, though it failed at low doses and at gradients. For the PBC algorithm only 30-70% of the points met the gamma criterion. Conclusion. The AAA algorithm has been shown to be superior to the PBC algorithm in heterogeneous media, especially for 15 MV. For most treatment plans the deviations in the lung and the mediastinum regions are below 3%. However, the algorithm may underestimate the dose to the spinal cord by up to 7%
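
    The 3%/3 mm gamma criterion combines a dose-difference test and a distance-to-agreement test per point. A simplified global 2-D gamma on co-registered grids is sketched below (no sub-pixel interpolation; periodic shifts are used for brevity, so values near the edges are approximate):

    ```python
    import numpy as np

    def gamma_index(dose_ref, dose_eval, spacing_mm, dose_crit=0.03, dist_crit_mm=3.0):
        """Simplified global gamma map; pass rate is np.mean(gamma <= 1)."""
        ref = np.asarray(dose_ref, float)
        ev = np.asarray(dose_eval, float)
        dmax = ref.max()                                  # global normalisation dose
        w = int(np.ceil(dist_crit_mm / spacing_mm))       # half-width of search window in pixels
        gamma = np.full(ref.shape, np.inf)
        for di in range(-w, w + 1):
            for dj in range(-w, w + 1):
                dist2 = ((di * spacing_mm) ** 2 + (dj * spacing_mm) ** 2) / dist_crit_mm ** 2
                shifted = np.roll(np.roll(ev, di, axis=0), dj, axis=1)
                dd2 = ((shifted - ref) / (dose_crit * dmax)) ** 2
                gamma = np.minimum(gamma, np.sqrt(dd2 + dist2))
        return gamma
    ```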

  13. Calibration and Validation Parameter of Hydrologic Model HEC-HMS using Particle Swarm Optimization Algorithms – Single Objective

    Directory of Open Access Journals (Sweden)

    R. Garmeh

    2016-02-01

    model that simulates both wet and dry weather behavior. Programming of HEC-HMS calibration was done in MATLAB, and techniques such as elitism with mutation and turbulence were used to strengthen the algorithm and improve the results. The event-based HEC-HMS model simulates the precipitation-runoff process for each set of parameter values generated by PSO. Turbulence and elitism with mutation are also employed to deal with PSO premature convergence. The integrated PSO-HMS model is tested on the Kardeh dam basin located in the Khorasan Razavi province. Results and Discussion: Input parameters of hydrologic models are seldom known with certainty. Therefore, they are not capable of describing the exact hydrologic processes. Input data and structural uncertainties related to scale and approximations in system processes are different sources of uncertainty that make it difficult to model exact hydrologic phenomena. In automatic calibration, the parameter values depend on the objective function of the search or optimization algorithm. In characterizing a runoff hydrograph, the three characteristics of time-to-peak, peak discharge, and total runoff volume are of the most importance. It is therefore important that simulated and observed hydrographs match as much as possible in terms of these characteristics. Calibration was carried out in single-objective cases. Model calibration in the single-objective approach was conducted separately for the NASH and RMSE objective functions. The results indicated that the model was calibrated to an acceptable level for the events. The calibration results were then evaluated by four different criteria. Finally, validation of the model parameters obtained from the calibration gave poor results, although one possible parameter set remained based on the calibration and verification of individual events. Conclusion: All events were evaluated by validation and the
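
    The single-objective calibration loop amounts to maximizing a goodness-of-fit measure such as the Nash-Sutcliffe efficiency over the parameter space with PSO. A bare-bones sketch, where `simulate` is a hypothetical stand-in for an HEC-HMS run (no elitism, mutation, or turbulence operators):

    ```python
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency (1 = perfect fit)."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def pso_calibrate(simulate, obs, bounds, n_particles=20, n_iter=100, seed=0):
        """Single-objective PSO maximising NSE; bounds is a list of (low, high) per parameter."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, float).T
        x = rng.uniform(lo, hi, size=(n_particles, lo.size))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([nse(obs, simulate(p)) for p in x])
        gbest = pbest[np.argmax(pbest_f)]
        for _ in range(n_iter):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)   # inertia + cognitive + social
            x = np.clip(x + v, lo, hi)
            f = np.array([nse(obs, simulate(p)) for p in x])
            better = f > pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[np.argmax(pbest_f)]
        return gbest, pbest_f.max()
    ```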

  14. Physics and Algorithm Enhancements for a Validated MCNP/X Monte Carlo Simulation Tool, Phase VII

    International Nuclear Information System (INIS)

    McKinney, Gregg W.

    2012-01-01

    Currently the US lacks an end-to-end (i.e., source-to-detector) radiation transport simulation code with predictive capability for the broad range of DHS nuclear material detection applications. For example, gaps in the physics, along with inadequate analysis algorithms, make it difficult for Monte Carlo simulations to provide a comprehensive evaluation, design, and optimization of proposed interrogation systems. With the development and implementation of several key physics and algorithm enhancements, along with needed improvements in evaluated data and benchmark measurements, the MCNP/X Monte Carlo codes will provide designers, operators, and systems analysts with a validated tool for developing state-of-the-art active and passive detection systems. This project is currently in its seventh year (Phase VII). This presentation will review thirty enhancements that have been implemented in MCNPX over the last 3 years and were included in the 2011 release of version 2.7.0. These improvements include 12 physics enhancements, 4 source enhancements, 8 tally enhancements, and 6 other enhancements. Examples and results will be provided for each of these features. The presentation will also discuss the eight enhancements that will be migrated into MCNP6 over the upcoming year.

  15. Calculating Graph Algorithms for Dominance and Shortest Path

    DEFF Research Database (Denmark)

    Sergey, Ilya; Midtgaard, Jan; Clarke, Dave

    2012-01-01

    We calculate two iterative, polynomial-time graph algorithms from the literature: a dominance algorithm and an algorithm for the single-source shortest path problem. Both algorithms are calculated directly from the definition of the properties by fixed-point fusion of (1) a least fixed point expressing all finite paths through a directed graph and (2) Galois connections that capture dominance and path length. The approach illustrates that reasoning in the style of fixed-point calculus extends gracefully to the domain of graph algorithms. We thereby bridge common practice from the school of program calculation with common practice from the school of static program analysis, and build a novel view on iterative graph algorithms as instances of abstract interpretation.
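
    The calculational derivation lands on the familiar iterative dataflow form of the dominance computation, dom(n) = {n} ∪ ⋂ dom(p) over the predecessors p of n; a direct fixed-point sketch (not the paper's calculus):

    ```python
    def dominators(succ, entry):
        """Iterate dom(n) = {n} | intersection of dom(p) over predecessors p until a fixed point."""
        nodes = set(succ)
        preds = {n: set() for n in nodes}
        for n, ss in succ.items():
            for s in ss:
                preds[s].add(n)
        dom = {n: set(nodes) for n in nodes}
        dom[entry] = {entry}
        changed = True
        while changed:
            changed = False
            for n in nodes - {entry}:
                new = {n} | (set.intersection(*(dom[p] for p in preds[n])) if preds[n] else set())
                if new != dom[n]:
                    dom[n], changed = new, True
        return dom

    # e.g. dominators({'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}, 'a')
    ```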

  16. Identifying Psoriasis and Psoriatic Arthritis Patients in Retrospective Databases When Diagnosis Codes Are Not Available: A Validation Study Comparing Medication/Prescriber Visit-Based Algorithms with Diagnosis Codes.

    Science.gov (United States)

    Dobson-Belaire, Wendy; Goodfield, Jason; Borrelli, Richard; Liu, Fei Fei; Khan, Zeba M

    2018-01-01

    Using diagnosis code-based algorithms is the primary method of identifying patient cohorts for retrospective studies; nevertheless, many databases lack reliable diagnosis code information. To develop precise algorithms based on medication claims/prescriber visits (MCs/PVs) to identify psoriasis (PsO) patients and psoriatic patients with arthritic conditions (PsO-AC), a proxy for psoriatic arthritis, in Canadian databases lacking diagnosis codes. Algorithms were developed using medications with narrow indication profiles in combination with prescriber specialty to define PsO and PsO-AC. For a 3-year study period from July 1, 2009, algorithms were validated using the PharMetrics Plus database, which contains both adjudicated medication claims and diagnosis codes. Positive predictive value (PPV), negative predictive value (NPV), sensitivity, and specificity of the developed algorithms were assessed using diagnosis code as the reference standard. Chosen algorithms were then applied to Canadian drug databases to profile the algorithm-identified PsO and PsO-AC cohorts. In the selected database, 183,328 patients were identified for validation. The highest PPVs for PsO (85%) and PsO-AC (65%) occurred when a predictive algorithm of two or more MCs/PVs was compared with the reference standard of one or more diagnosis codes. NPV and specificity were high (99%-100%), whereas sensitivity was low (≤30%). Reducing the number of MCs/PVs or increasing diagnosis claims decreased the algorithms' PPVs. We have developed an MC/PV-based algorithm to identify PsO patients with a high degree of accuracy, but accuracy for PsO-AC requires further investigation. Such methods allow researchers to conduct retrospective studies in databases in which diagnosis codes are absent. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  17. GOCI Yonsei aerosol retrieval version 2 products: an improved algorithm and error analysis with uncertainty estimation from 5-year validation over East Asia

    Science.gov (United States)

    Choi, Myungje; Kim, Jhoon; Lee, Jaehwa; Kim, Mijin; Park, Young-Je; Holben, Brent; Eck, Thomas F.; Li, Zhengqiang; Song, Chul H.

    2018-01-01

    The Geostationary Ocean Color Imager (GOCI) Yonsei aerosol retrieval (YAER) version 1 algorithm was developed to retrieve hourly aerosol optical depth at 550 nm (AOD) and other subsidiary aerosol optical properties over East Asia. The GOCI YAER AOD had accuracy comparable to ground-based and other satellite-based observations but still had errors because of uncertainties in surface reflectance and simple cloud masking. In addition, near-real-time (NRT) processing was not possible because a monthly database for each year encompassing the day of retrieval was required for the determination of surface reflectance. This study describes the improved GOCI YAER algorithm version 2 (V2) for NRT processing with improved accuracy based on updates to the cloud-masking and surface-reflectance calculations using a multi-year Rayleigh-corrected reflectance and wind speed database, and inversion channels for surface conditions. The improved GOCI AOD τG is closer to that of the Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS) AOD than was the case for AOD from the YAER V1 algorithm. The V2 τG has a lower median bias and higher ratio within the MODIS expected error range (0.60 for land and 0.71 for ocean) compared with V1 (0.49 for land and 0.62 for ocean) in a validation test against Aerosol Robotic Network (AERONET) AOD τA from 2011 to 2016. A validation using the Sun-Sky Radiometer Observation Network (SONET) over China shows similar results. The bias of error (τG - τA) is within -0.1 and 0.1, and it is a function of AERONET AOD and Ångström exponent (AE), scattering angle, normalized difference vegetation index (NDVI), cloud fraction and homogeneity of retrieved AOD, and observation time, month, and year. In addition, the diagnostic and prognostic expected error (PEE) of τG are estimated. The estimated PEE of GOCI V2 AOD is well correlated with the actual error over East Asia, and the GOCI V2 AOD over South

  18. An Algorithm for Glaucoma Screening in Clinical Settings and Its Preliminary Performance Profile

    Directory of Open Access Journals (Sweden)

    S-Farzad Mohammadi

    2013-01-01

    Purpose: To devise and evaluate a screening algorithm for glaucoma in clinical settings. Methods: Screening included examination of the optic disc for vertical cupping (≥0.4) and asymmetry (≥0.15), Goldmann applanation tonometry (≥21 mmHg, adjusted or unadjusted for central corneal thickness), and automated perimetry. In the diagnostic step, retinal nerve fiber layer imaging was performed using scanning laser polarimetry. Performance of the screening protocol was assessed in an eye hospital-based program in which 124 non-physician personnel aged 40 years or above were examined. A single ophthalmologist carried out the examinations, and in equivocal cases a glaucoma subspecialist's opinion was sought. Results: Glaucoma was diagnosed in six cases (prevalence 4.8%; 95% confidence interval, 0.01-0.09), of whom five were new. The likelihood of making a definite diagnosis of glaucoma for those who screened positive was 8.5 times higher than the estimated baseline risk for the reference population; the positive predictive value of the screening protocol was 30%. Screening excluded 80% of the initial population. Conclusion: Application of a formal screening protocol (such as our algorithm or its equivalent) in clinical settings can be helpful in detecting new cases of glaucoma. Preliminary performance assessment of the algorithm showed its applicability and effectiveness in detecting glaucoma among subjects without any visual complaint.
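
    The screening step is essentially a disjunction of the quoted thresholds; a sketch of that rule, with per-eye values passed as pairs and the perimetry result reduced to a hypothetical boolean flag:

    ```python
    def screen_positive(cup_disc_ratio, cup_asymmetry, iop_mmhg, visual_field_defect=False):
        """Flag a subject for the diagnostic work-up if any screening criterion is met."""
        return (max(cup_disc_ratio) >= 0.4          # vertical cupping in either eye
                or cup_asymmetry >= 0.15            # cup asymmetry between eyes
                or max(iop_mmhg) >= 21              # applanation tonometry in either eye
                or visual_field_defect)

    # screen_positive(cup_disc_ratio=(0.3, 0.45), cup_asymmetry=0.15, iop_mmhg=(18, 19))  -> True
    ```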

  19. Semi-definite Programming: methods and algorithms for energy management

    International Nuclear Information System (INIS)

    Gorge, Agnes

    2013-01-01

    The present thesis explores the potential of a powerful optimization technique, Semi-definite Programming (SDP), for addressing some difficult problems of energy management. We pursue two main objectives. The first consists of using SDP to provide tight relaxations of combinatorial and quadratic problems. A first relaxation, called 'standard', can be derived in a generic way, but it is generally desirable to reinforce it by means of tailor-made tools or in a systematic fashion. These two approaches are implemented on different models of the Nuclear Outages Scheduling Problem, a well-known combinatorial problem. We conclude this topic by experimenting with Lasserre's hierarchy on this problem, leading to a sequence of semi-definite relaxations whose optimal values tend to the optimal value of the initial problem. The second objective deals with the use of SDP for the treatment of uncertainty. We investigate an original approach called 'distributionally robust optimization', which can be seen as a compromise between stochastic and robust optimization and admits approximations in the form of an SDP. We compare the benefits of this method with those of classical approaches on a demand/supply equilibrium problem. Finally, we propose a scheme for deriving SDP relaxations of MISOCPs and report promising computational results indicating that the semi-definite relaxation improves significantly on the continuous relaxation, while requiring a reasonable computational effort. SDP therefore proves to be a promising optimization method that offers great opportunities for innovation in energy management. (author)
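
    As a concrete illustration of a 'standard' SDP relaxation (on max-cut, not the nuclear outage problem treated in the thesis), the sketch below uses cvxpy and assumes an SDP-capable solver such as SCS is installed:

    ```python
    import cvxpy as cp
    import numpy as np

    # Standard SDP relaxation of max-cut on a small weighted graph:
    # maximise (1/4) * sum_ij W_ij * (1 - X_ij)  subject to  X PSD, diag(X) = 1.
    W = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [0, 1, 1, 0]], dtype=float)
    n = W.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0, cp.diag(X) == 1]
    objective = cp.Maximize(0.25 * cp.sum(cp.multiply(W, 1 - X)))
    bound = cp.Problem(objective, constraints).solve()   # upper bound on the max-cut value
    ```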

  20. How do cognitively impaired elderly patients define "testament": reliability and validity of the testament definition scale.

    Science.gov (United States)

    Heinik, J; Werner, P; Lin, R

    1999-01-01

    The testament definition scale (TDS) is a specifically designed six-item scale aimed at measuring the respondent's capacity to define "testament." We assessed the reliability and validity of this new short scale in 31 community-dwelling cognitively impaired elderly patients. Interrater reliability for the six items ranged from .87 to .97. The interrater reliability for the total score was .77. Significant correlations were found between the TDS score and the Mini-Mental State Examination (MMSE) and the Cambridge Cognitive Examination scores (r = .71 and .72 respectively, p = .001). Criterion validity yielded significantly different means for subjects with MMSE scores of 24-30 and 0-23: mean 3.9 and 1.6 respectively (t(20) = 4.7, p = .001). Using a cutoff point of 0-2 vs. 3+, 79% of the subjects were correctly classified as severely cognitively impaired, with only 8.3% false positives, and a positive predictive value of 94%. Thus, TDS was found both reliable and valid. This scale, however, is not synonymous with testamentary capacity. The discussion deals with the methodological limitations of this study, and highlights the practical as well as the theoretical relevance of TDS. Future studies are warranted to elucidate the relationships between TDS and existing legal requirements of testamentary capacity.

  1. Cartesian product of hypergraphs: properties and algorithms

    Directory of Open Access Journals (Sweden)

    Alain Bretto

    2009-09-01

    Cartesian products of graphs have been studied extensively since the 1960s. They make it possible to decrease the algorithmic complexity of problems by using the factorization of the product. Hypergraphs were introduced as a generalization of graphs and the definition of Cartesian products extends naturally to them. In this paper, we give new properties and algorithms concerning coloring aspects of Cartesian products of hypergraphs. We also extend a classical prime factorization algorithm initially designed for graphs to connected conformal hypergraphs using 2-sections of hypergraphs.

  2. Acute respiratory distress syndrome: the Berlin Definition.

    Science.gov (United States)

    Ranieri, V Marco; Rubenfeld, Gordon D; Thompson, B Taylor; Ferguson, Niall D; Caldwell, Ellen; Fan, Eddy; Camporota, Luigi; Slutsky, Arthur S

    2012-06-20

    The acute respiratory distress syndrome (ARDS) was defined in 1994 by the American-European Consensus Conference (AECC); since then, issues regarding the reliability and validity of this definition have emerged. Using a consensus process, a panel of experts convened in 2011 (an initiative of the European Society of Intensive Care Medicine endorsed by the American Thoracic Society and the Society of Critical Care Medicine) developed the Berlin Definition, focusing on feasibility, reliability, validity, and objective evaluation of its performance. A draft definition proposed 3 mutually exclusive categories of ARDS based on degree of hypoxemia: mild (200 mm Hg < PaO2/FIO2 ≤ 300 mm Hg), moderate (100 mm Hg < PaO2/FIO2 ≤ 200 mm Hg), and severe (PaO2/FIO2 ≤ 100 mm Hg) and 4 ancillary variables for severe ARDS: radiographic severity, respiratory system compliance (≤40 mL/cm H2O), positive end-expiratory pressure (≥10 cm H2O), and corrected expired volume per minute (≥10 L/min). The draft Berlin Definition was empirically evaluated using patient-level meta-analysis of 4188 patients with ARDS from 4 multicenter clinical data sets and 269 patients with ARDS from 3 single-center data sets containing physiologic information. The 4 ancillary variables did not contribute to the predictive validity of severe ARDS for mortality and were removed from the definition. Using the Berlin Definition, stages of mild, moderate, and severe ARDS were associated with increased mortality (27%; 95% CI, 24%-30%; 32%; 95% CI, 29%-34%; and 45%; 95% CI, 42%-48%, respectively; P < .001) and increased median duration of mechanical ventilation in survivors (5 days; interquartile [IQR], 2-11; 7 days; IQR, 4-14; and 9 days; IQR, 5-17, respectively; P < .001). Compared with the AECC definition, the final Berlin Definition had better predictive validity for mortality, with an area under the receiver operating curve of 0.577 (95% CI, 0.561-0.593) vs 0.536 (95% CI, 0.520-0.553; P
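
    The oxygenation strata of the Berlin Definition translate directly into a threshold rule on the PaO2/FIO2 ratio; a sketch that assumes the remaining criteria (timing, imaging, origin of edema, minimum PEEP) are already satisfied:

    ```python
    def berlin_severity(pao2_mmhg, fio2_fraction):
        """Berlin Definition severity stratum from the PaO2/FIO2 ratio alone."""
        ratio = pao2_mmhg / fio2_fraction
        if ratio <= 100:
            return "severe"
        if ratio <= 200:
            return "moderate"
        if ratio <= 300:
            return "mild"
        return "not ARDS by oxygenation criterion"

    # berlin_severity(pao2_mmhg=75, fio2_fraction=0.5)  ->  'moderate' (ratio 150)
    ```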

  3. Comparison study of reconstruction algorithms for prototype digital breast tomosynthesis using various breast phantoms.

    Science.gov (United States)

    Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung

    2016-02-01

    Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low
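
    The in-plane comparison relies on a contrast-to-noise ratio; one common formulation (signal-minus-background contrast divided by background noise), which may differ in detail from the study's exact definition:

    ```python
    import numpy as np

    def cnr(signal_roi, background_roi):
        """Contrast-to-noise ratio from two regions of interest of a reconstructed slice."""
        s = np.asarray(signal_roi, dtype=float)
        b = np.asarray(background_roi, dtype=float)
        return abs(s.mean() - b.mean()) / b.std(ddof=1)
    ```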

  4. Validation of clinical testing for warfarin sensitivity: comparison of CYP2C9-VKORC1 genotyping assays and warfarin-dosing algorithms.

    Science.gov (United States)

    Langley, Michael R; Booker, Jessica K; Evans, James P; McLeod, Howard L; Weck, Karen E

    2009-05-01

    Responses to warfarin (Coumadin) anticoagulation therapy are affected by genetic variability in both the CYP2C9 and VKORC1 genes. Validation of pharmacogenetic testing for warfarin responses includes demonstration of analytical validity of testing platforms and of the clinical validity of testing. We compared four platforms for determining the relevant single nucleotide polymorphisms (SNPs) in both CYP2C9 and VKORC1 that are associated with warfarin sensitivity (Third Wave Invader Plus, ParagonDx/Cepheid Smart Cycler, Idaho Technology LightCycler, and AutoGenomics Infiniti). Each method was examined for accuracy, cost, and turnaround time. All genotyping methods demonstrated greater than 95% accuracy for identifying the relevant SNPs (CYP2C9 *2 and *3; VKORC1 -1639 or 1173). The ParagonDx and Idaho Technology assays had the shortest turnaround and hands-on times. The Third Wave assay was readily scalable to higher test volumes but had the longest hands-on time. The AutoGenomics assay interrogated the largest number of SNPs but had the longest turnaround time. Four published warfarin-dosing algorithms (Washington University, UCSF, Louisville, and Newcastle) were compared for accuracy for predicting warfarin dose in a retrospective analysis of a local patient population on long-term, stable warfarin therapy. The predicted doses from both the Washington University and UCSF algorithms demonstrated the best correlation with actual warfarin doses.

  5. Muscular Dystrophy Surveillance Tracking and Research Network (MD STARnet): case definition in surveillance for childhood-onset Duchenne/Becker muscular dystrophy.

    Science.gov (United States)

    Mathews, Katherine D; Cunniff, Chris; Kantamneni, Jiji R; Ciafaloni, Emma; Miller, Timothy; Matthews, Dennis; Cwik, Valerie; Druschel, Charlotte; Miller, Lisa; Meaney, F John; Sladky, John; Romitti, Paul A

    2010-09-01

    The Muscular Dystrophy Surveillance Tracking and Research Network (MD STARnet) is a multisite collaboration to determine the prevalence of childhood-onset Duchenne/Becker muscular dystrophy and to characterize health care and health outcomes in this population. MD STARnet uses medical record abstraction to identify patients with Duchenne/Becker muscular dystrophy born January 1, 1982 or later who resided in 1 of the participating sites. Critical diagnostic elements of each abstracted record are reviewed independently by >4 clinicians and assigned to 1 of 6 case definition categories (definite, probable, possible, asymptomatic, female, not Duchenne/Becker muscular dystrophy) by consensus. As of November 2009, 815 potential cases were reviewed. Of the cases included in analysis, 674 (82%) were either "definite" or "probable" Duchenne/Becker muscular dystrophy. These data reflect a change in diagnostic testing, as case assignment based on genetic testing increased from 67% in the oldest cohort (born 1982-1987) to 94% in the cohort born 2004 to 2009.

  6. Consensus Guidelines for Delineation of Clinical Target Volume for Intensity-Modulated Pelvic Radiotherapy for the Definitive Treatment of Cervix Cancer

    International Nuclear Information System (INIS)

    Lim, Karen; Small, William; Portelance, Lorraine; Creutzberg, Carien; Juergenliemk-Schulz, Ina M.; Mundt, Arno; Mell, Loren K.; Mayr, Nina; Viswanathan, Akila; Jhingran, Anuja; Erickson, Beth; De Los Santos, Jennifer; Gaffney, David; Yashar, Catheryn; Beriwal, Sushil; Wolfson, Aaron

    2011-01-01

    Purpose: Accurate target definition is vitally important for definitive treatment of cervix cancer with intensity-modulated radiotherapy (IMRT), yet a definition of clinical target volume (CTV) remains variable within the literature. The aim of this study was to develop a consensus CTV definition in preparation for a Phase 2 clinical trial being planned by the Radiation Therapy Oncology Group. Methods and Materials: A guidelines consensus working group meeting was convened in June 2008 for the purposes of developing target definition guidelines for IMRT for the intact cervix. A draft document of recommendations for CTV definition was created and used to aid in contouring a clinical case. The clinical case was then analyzed for consistency and clarity of target delineation using an expectation maximization algorithm for simultaneous truth and performance level estimation (STAPLE), with kappa statistics as a measure of agreement between participants. Results: Nineteen experts in gynecological radiation oncology generated contours on axial magnetic resonance images of the pelvis. Substantial STAPLE agreement sensitivity and specificity values were seen for gross tumor volume (GTV) delineation (0.84 and 0.96, respectively) with a kappa statistic of 0.68 (p < 0.0001). Agreement for delineation of cervix, uterus, vagina, and parametria was moderate. Conclusions: This report provides guidelines for CTV definition in the definitive cervix cancer setting for the purposes of IMRT, building on previously published guidelines for IMRT in the postoperative setting.
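
    The kappa statistic reported for GTV agreement is a chance-corrected agreement measure; the basic two-rater form is sketched below (the study itself applies it within a multi-observer STAPLE analysis):

    ```python
    import numpy as np

    def cohens_kappa(labels_a, labels_b):
        """Chance-corrected agreement between two raters over the same set of items."""
        a, b = np.asarray(labels_a), np.asarray(labels_b)
        cats = np.union1d(a, b)
        po = np.mean(a == b)                                        # observed agreement
        pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)   # agreement expected by chance
        return (po - pe) / (1.0 - pe)
    ```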

  7. An upwind algorithm for the parabolized Navier-Stokes equations

    Science.gov (United States)

    Lawrence, S. L.; Tannehill, J. C.; Chaussee, D. S.

    1986-01-01

    A new upwind algorithm based on Roe's scheme has been developed to solve the two-dimensional parabolized Navier-Stokes (PNS) equations. This method does not require the addition of user specified smoothing terms for the capture of discontinuities such as shock waves. Thus, the method is easy to use and can be applied without modification to a wide variety of supersonic flowfields. The advantages and disadvantages of this adaptation are discussed in relation to those of the conventional Beam-Warming scheme in terms of accuracy, stability, computer time and storage, and programming effort. The new algorithm has been validated by applying it to three laminar test cases including flat plate boundary-layer flow, hypersonic flow past a 15 deg compression corner, and hypersonic flow into a converging inlet. The computed results compare well with experiment and show a dramatic improvement in the resolution of flowfield details when compared with the results obtained using the conventional Beam-Warming algorithm.

  8. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir

    2016-01-01

    Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the

  9. Optimal Sensor Placement for Latticed Shell Structure Based on an Improved Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Xun Zhang

    2014-01-01

    Full Text Available Optimal sensor placement is a key issue in the structural health monitoring of large-scale structures. However, some aspects of existing approaches require improvement, such as the empirical and unreliable selection of mode and sensor numbers and time-consuming computation. A novel improved particle swarm optimization (IPSO) algorithm is proposed to address these problems. The approach first employs the cumulative effective modal mass participation ratio to select the mode number. Three strategies are then adopted to improve the PSO algorithm. Finally, the IPSO algorithm is utilized to determine the optimal number and configuration of sensors. A case study of a latticed shell model is implemented to verify the feasibility of the proposed algorithm, and four different PSO algorithms are compared. The effective independence method is also taken as a contrast experiment. The comparison results show that the optimal placement schemes obtained by the PSO algorithms are valid, and the proposed IPSO algorithm offers better convergence speed and precision.
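    The record above builds on the standard particle swarm update; as a rough illustration of that core mechanism only (not the paper's specific IPSO strategies, which are not detailed here), a minimal continuous PSO sketch is given below, with the objective function, bounds and parameter values chosen purely for demonstration:

```python
import numpy as np

def pso_minimize(objective, dim, bounds, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization sketch (illustrative only)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))       # positions
    v = np.zeros_like(x)                                    # velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()                  # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# toy usage: minimize the sphere function in 5 dimensions
best_x, best_val = pso_minimize(lambda p: float(np.sum(p**2)), dim=5, bounds=(-5.0, 5.0))
print(best_x, best_val)
```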

  10. U-tube steam generator empirical model development and validation using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Chong, K.T.; Atiya, A.

    1992-01-01

    Empirical modeling techniques that use model structures motivated from neural networks research have proven effective in identifying complex process dynamics. A recurrent multilayer perceptron (RMLP) network was developed as a nonlinear state-space model structure, along with a static learning algorithm for estimating the parameters associated with it. The methods developed were demonstrated by identifying two submodels of a U-tube steam generator (UTSG), each valid around an operating power level. A significant drawback of this approach is the long off-line training times required for the development of even a simplified model of a UTSG. Subsequently, a dynamic gradient descent-based learning algorithm was developed as an accelerated alternative to train an RMLP network for use in empirical modeling of power plants. The two main advantages of this learning algorithm are its ability to consider past error gradient information for future use and the two forward passes associated with its implementation. The enhanced learning capabilities provided by the dynamic gradient descent-based learning algorithm were demonstrated via the case study of a simple steam boiler power plant. In this paper, the dynamic gradient descent-based learning algorithm is used for the development and validation of a complete UTSG empirical model.

  11. Cost-sensitive case-based reasoning using a genetic algorithm: application to medical diagnosis.

    Science.gov (United States)

    Park, Yoon-Joo; Chun, Se-Hak; Kim, Byung-Chun

    2011-02-01

    This paper studies a new learning technique called cost-sensitive case-based reasoning (CSCBR), which incorporates unequal misclassification costs into the CBR model. Conventional CBR is now considered a suitable technique for diagnosis, prognosis and prescription in medicine. However, it lacks the ability to reflect asymmetric misclassification and often assumes that the cost of misclassifying a positive diagnosis (an illness) as a negative one (no illness) is the same as that of the opposite error. Thus, the objective of this research is to overcome this limitation of conventional CBR and encourage applying CBR to real-world medical cases associated with asymmetric misclassification costs. The main idea involves adjusting the optimal cut-off classification point for classifying the absence or presence of disease and the cut-off distance point for selecting optimal neighbors within search spaces based on the similarity distribution. These steps are dynamically adapted to new target cases using a genetic algorithm. We apply the proposed method to five real medical datasets and compare the results with two other cost-sensitive learning methods, C5.0 and CART. Our findings show that the total misclassification cost of CSCBR is lower than that of the other cost-sensitive methods in many cases. Even though the genetic algorithm has limitations in terms of unstable results and over-fitting of training data, CSCBR results with the GA are better overall than those of the other methods. Paired t-test results also indicate that the total misclassification cost of CSCBR is significantly less than that of C5.0 and CART for several datasets. In summary, we have proposed a new CBR method, cost-sensitive case-based reasoning (CSCBR), that incorporates unequal misclassification costs into CBR and optimizes the number of neighbors dynamically using a genetic algorithm. It is meaningful not only for introducing the concept of cost-sensitive learning to CBR, but also for encouraging the use of CBR in the medical area.
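    The two tuned quantities in CSCBR (a classification cut-off and a neighbor-selection distance cut-off) can be illustrated with a simple cost-sensitive nearest-neighbor sketch. In this sketch the cut-offs are chosen by a grid search on a validation set rather than by the genetic algorithm the authors use, and the data, cost values and parameter ranges are all hypothetical:

```python
import numpy as np

def knn_disease_prob(x, cases, labels, dist_cutoff):
    """Fraction of 'ill' neighbors within the distance cut-off (CBR-style retrieval)."""
    d = np.linalg.norm(cases - x, axis=1)
    near = d <= dist_cutoff
    if not near.any():
        near = d <= d.min()          # fall back to the single closest case
    return labels[near].mean()

def total_cost(preds, truth, cost_fn=5.0, cost_fp=1.0):
    """Asymmetric misclassification cost: missing an illness is costlier (hypothetical costs)."""
    fn = np.sum((preds == 0) & (truth == 1))
    fp = np.sum((preds == 1) & (truth == 0))
    return cost_fn * fn + cost_fp * fp

def tune_cutoffs(cases, labels, val_x, val_y):
    """Grid-search the distance and classification cut-offs to minimize total cost."""
    best = (None, None, np.inf)
    for dist_cutoff in np.linspace(0.5, 3.0, 6):
        probs = np.array([knn_disease_prob(x, cases, labels, dist_cutoff) for x in val_x])
        for class_cutoff in np.linspace(0.1, 0.9, 9):
            cost = total_cost((probs >= class_cutoff).astype(int), val_y)
            if cost < best[2]:
                best = (dist_cutoff, class_cutoff, cost)
    return best

# hypothetical data: 2-feature cases, label 1 = illness
rng = np.random.default_rng(1)
cases, labels = rng.normal(size=(100, 2)), rng.integers(0, 2, 100)
val_x, val_y = rng.normal(size=(30, 2)), rng.integers(0, 2, 30)
print(tune_cutoffs(cases, labels, val_x, val_y))
```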

  12. Clinical and epidemiological features of typhoid fever in Pemba, Zanzibar: assessment of the performance of the WHO case definitions.

    Science.gov (United States)

    Thriemer, Kamala; Ley, Benedikt; Ley, Benedikt B; Ame, Shaali S; Deen, Jaqueline L; Pak, Gi Deok; Chang, Na Yoon; Hashim, Ramadhan; Schmied, Wolfgang Hellmut; Busch, Clara Jana-Lui; Nixon, Shanette; Morrissey, Anne; Puri, Mahesh K; Ochiai, R Leon; Wierzba, Thomas; Clemens, John D; Ali, Mohammad; Jiddawi, Mohammad S; von Seidlein, Lorenz; Ali, Said M

    2012-01-01

    The gold standard for diagnosis of typhoid fever is blood culture (BC). Because blood culture is often not available in impoverished settings, it would be helpful to have alternative diagnostic approaches. We therefore investigated the usefulness of clinical signs, the WHO case definition and the Widal test for the diagnosis of typhoid fever. Participants with a body temperature ≥37.5°C or a history of fever were enrolled over 17 to 22 months in three hospitals on Pemba Island, Tanzania. Clinical signs and symptoms of participants upon presentation, as well as blood and serum for BC and Widal testing, were collected. Clinical signs and symptoms of typhoid fever cases were compared to those of other cases of invasive bacterial disease and of BC-negative participants. The relationship of typhoid fever cases with rainfall, temperature and religious festivals was explored. The performance of the WHO case definitions for suspected and probable typhoid fever and of a local cut-off titre for the Widal test was assessed. 79 of 2209 participants had invasive bacterial disease; 46 isolates were identified as typhoid fever. Apart from a longer duration of fever prior to admission, clinical signs and symptoms were not significantly different in patients with typhoid fever than in other febrile patients. We did not detect any significant seasonal patterns or correlation with rainfall or festivals. The sensitivity and specificity of the WHO case definition were 82.6% and 41.3% for suspected typhoid fever and 36.3% and 99.7% for probable typhoid fever, respectively. The sensitivity and specificity of the Widal test were 47.8% and 99.4%, respectively, for both O-agglutinin and H-agglutinin at a cut-off titre of 1:80. Typhoid fever prevalence rates on Pemba are high and its clinical signs and symptoms are non-specific. The sensitivity of the Widal test is low, and the WHO case definition performed better than the Widal test.
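    The performance measures reported in entries like this one (and in the other validation studies in this list) all derive from a simple 2x2 table of case-definition result against the gold standard. The helper below shows the standard formulas; the counts used in the example are hypothetical, not the Pemba data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2-table accuracy measures for a case definition vs. a gold standard."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives among gold-standard cases
        "specificity": tn / (tn + fp),   # true negatives among gold-standard non-cases
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# hypothetical counts for illustration only
print(diagnostic_metrics(tp=38, fp=1200, fn=8, tn=900))
```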

  13. Defining asthma and assessing asthma outcomes using electronic health record data: a systematic scoping review.

    Science.gov (United States)

    Al Sallakh, Mohammad A; Vasileiou, Eleftheria; Rodgers, Sarah E; Lyons, Ronan A; Sheikh, Aziz; Davies, Gwyneth A

    2017-06-01

    There is currently no consensus on approaches to defining asthma or assessing asthma outcomes using electronic health record-derived data. We explored these approaches in the recent literature and examined the clarity of reporting. We systematically searched for asthma-related articles published between January 1, 2014 and December 31, 2015, extracted the algorithms used to identify asthma patients and assess severity, control and exacerbations, and examined how the validity of these outcomes was justified. From 113 eligible articles, we found significant heterogeneity in the algorithms used to define asthma (n=66 different algorithms), severity (n=18), control (n=9) and exacerbations (n=24). For the majority of algorithms (n=106), validity was not justified. In the remaining cases, approaches ranged from using algorithms validated in the same databases to using nonvalidated algorithms that were based on clinical judgement or clinical guidelines. The implementation of these algorithms was suboptimally described overall. Although electronic health record-derived data are now widely used to study asthma, the approaches being used are significantly varied and are often underdescribed, rendering it difficult to assess the validity of studies and compare their findings. Given the substantial growth in this body of literature, it is crucial that scientific consensus is reached on the underlying definitions and algorithms. Copyright ©ERS 2017.

  14. A Dynamic Fuzzy Cluster Algorithm for Time Series

    Directory of Open Access Journals (Sweden)

    Min Ji

    2013-01-01

    This paper proposes an algorithm for clustering time series by introducing the definition of key points and improving the FCM algorithm. The proposed algorithm works by determining those time series whose class labels are vague and further partitioning them into different clusters over time. The main advantage of this approach compared with other existing algorithms is that the property of some time series belonging to different clusters over time can be partially revealed. Results from simulation-based experiments on geographical data demonstrate the excellent performance of the approach, and the desired results have been obtained. The proposed algorithm can be applied to solve other clustering problems in data mining.
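    For readers unfamiliar with the fuzzy c-means (FCM) updates that this record builds on, a bare-bones FCM iteration is sketched below. This is the textbook algorithm only, without the key-point and dynamic extensions proposed in the paper, and the data are synthetic:

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Textbook fuzzy c-means: alternate membership and centroid updates."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(n_clusters), size=len(X))          # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = Um.T @ X / Um.sum(axis=0)[:, None]             # membership-weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        # u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1))
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return centers, U

# synthetic example: two well-separated 2D clusters
X = np.vstack([np.random.default_rng(1).normal(loc=c, size=(50, 2)) for c in (0.0, 5.0)])
centers, memberships = fuzzy_c_means(X, n_clusters=2)
print(centers)
```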

  15. Validation of Material Algorithms for Femur Remodelling Using Medical Image Data

    Directory of Open Access Journals (Sweden)

    Shitong Luo

    2017-01-01

    Full Text Available The aim of this study is to use human medical CT images to quantitatively evaluate two sorts of “error-driven” material algorithms, namely the isotropic and orthotropic algorithms, for bone remodelling. The bone remodelling simulations were implemented by a combination of the finite element (FE) method and the material algorithms, in which the bone material properties and element axes are determined by both loading amplitudes and daily cycles with different weight factors. The simulation results showed that both algorithms produced realistic distributions of bone amount when compared with the standard from CT data. Moreover, the simulated L-T ratios (the ratio of longitudinal modulus to transverse modulus) obtained with the orthotropic algorithm were close to reported results. This study suggests a role for “error-driven” algorithms in predicting bone material in abnormal mechanical environments and holds promise for optimizing implant design as well as developing countermeasures against bone loss due to weightlessness. Furthermore, the quantitative methods used in this study can enhance bone remodelling models by optimizing model parameters to bridge the discrepancy between simulation and real data.

  16. Sorting variables for each case: a new algorithm to calculate injury severity score (ISS) using SPSS-PC.

    Science.gov (United States)

    Linn, S

    One of the most often used measures of multiple injuries is the injury severity score (ISS). Determination of the ISS is based on the abbreviated injury scale (AIS). This paper suggests a new algorithm to sort the AIS values for each case and calculate the ISS. The program uses unsorted abbreviated injury scale (AIS) levels for each case and rearranges them in descending order. The first three sorted AIS values, representing the three most severe injuries of a person, are then used to calculate the injury severity score (ISS). This algorithm should be useful for analyses of clusters of injuries, especially when many patients have multiple injuries.
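    The SPSS syntax itself is not reproduced in this record; the sketch below shows the same per-case logic in Python, following the standard ISS convention (sum of squares of the highest AIS in the three most severely injured body regions, with ISS set to 75 if any AIS equals 6). The region handling and the AIS=6 rule come from that standard convention rather than from details stated in the abstract:

```python
def injury_severity_score(injuries):
    """injuries: list of (body_region, ais) tuples for one case."""
    if any(ais == 6 for _, ais in injuries):
        return 75                                    # conventional score for an unsurvivable injury
    worst_by_region = {}
    for region, ais in injuries:
        worst_by_region[region] = max(worst_by_region.get(region, 0), ais)
    top3 = sorted(worst_by_region.values(), reverse=True)[:3]   # three most severely injured regions
    return sum(a * a for a in top3)

# example case: head AIS 4, chest AIS 3, two limb injuries AIS 2 and 1
print(injury_severity_score([("head", 4), ("chest", 3), ("limb", 2), ("limb", 1)]))  # 16 + 9 + 4 = 29
```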

  17. 45 CFR 1609.2 - Definition.

    Science.gov (United States)

    2010-10-01

    45 CFR § 1609.2 (Public Welfare; Legal Services Corporation, Fee-Generating Cases): Definition. (a) Fee-generating case means any case or matter which, if undertaken on behalf of an eligible...

  18. Definitions and validation criteria for biomarkers and surrogate endpoints: development and testing of a quantitative hierarchical levels of evidence schema.

    Science.gov (United States)

    Lassere, Marissa N; Johnson, Kent R; Boers, Maarten; Tugwell, Peter; Brooks, Peter; Simon, Lee; Strand, Vibeke; Conaghan, Philip G; Ostergaard, Mikkel; Maksymowych, Walter P; Landewe, Robert; Bresnihan, Barry; Tak, Paul-Peter; Wakefield, Richard; Mease, Philip; Bingham, Clifton O; Hughes, Michael; Altman, Doug; Buyse, Marc; Galbraith, Sally; Wells, George

    2007-03-01

    There are clear advantages to using biomarkers and surrogate endpoints, but concerns about clinical and statistical validity and systematic methods to evaluate these aspects hinder their efficient application. Our objective was to review the literature on biomarkers and surrogates to develop a hierarchical schema that systematically evaluates and ranks the surrogacy status of biomarkers and surrogates; and to obtain feedback from stakeholders. After a systematic search of Medline and Embase on biomarkers, surrogate (outcomes, endpoints, markers, indicators), intermediate endpoints, and leading indicators, a quantitative surrogate validation schema was developed and subsequently evaluated at a stakeholder workshop. The search identified several classification schema and definitions. Components of these were incorporated into a new quantitative surrogate validation level of evidence schema that evaluates biomarkers along 4 domains: Target, Study Design, Statistical Strength, and Penalties. Scores derived from 3 domains are additive: the Target that the marker is being substituted for, the Design of the (best) evidence, and the Statistical strength. Penalties are then applied if there is serious counterevidence. A total score (0 to 15) determines the level of evidence, with Level 1 the strongest and Level 5 the weakest. It was proposed that the term "surrogate" be restricted to markers attaining Levels 1 or 2 only. Most stakeholders agreed that this operationalization of the National Institutes of Health definitions of biomarker, surrogate endpoint, and clinical endpoint was useful. Further development and application of this schema provides incentives and guidance for effective biomarker and surrogate endpoint research, and more efficient drug discovery, development, and approval.

  19. Investigating Darcy-scale assumptions by means of a multiphysics algorithm

    Science.gov (United States)

    Tomin, Pavel; Lunati, Ivan

    2016-09-01

    Multiphysics (or hybrid) algorithms, which couple Darcy and pore-scale descriptions of flow through porous media in a single numerical framework, are usually employed to decrease the computational cost of full pore-scale simulations or to increase the accuracy of pure Darcy-scale simulations when a simple macroscopic description breaks down. Despite the massive increase in available computational power, the application of these techniques remains limited to core-size problems and upscaling remains crucial for practical large-scale applications. In this context, the Hybrid Multiscale Finite Volume (HMsFV) method, which constructs the macroscopic (Darcy-scale) problem directly by numerical averaging of pore-scale flow, offers not only a flexible framework to efficiently deal with multiphysics problems, but also a tool to investigate the assumptions used to derive macroscopic models and to better understand the relationship between pore-scale quantities and the corresponding macroscale variables. Indeed, by direct comparison of the multiphysics solution with a reference pore-scale simulation, we can assess the validity of the closure assumptions inherent to the multiphysics algorithm and infer the consequences for macroscopic models at the Darcy scale. We show that the definition of the scale ratio based on the geometric properties of the porous medium is well justified only for single-phase flow, whereas in case of unstable multiphase flow the nonlinear interplay between different forces creates complex fluid patterns characterized by new spatial scales, which emerge dynamically and weaken the scale-separation assumption. In general, the multiphysics solution proves very robust even when the characteristic size of the fluid-distribution patterns is comparable with the observation length, provided that all relevant physical processes affecting the fluid distribution are considered. This suggests that macroscopic constitutive relationships (e.g., the relative

  20. Validation of a Step Detection Algorithm during Straight Walking and Turning in Patients with Parkinson’s Disease and Older Adults Using an Inertial Measurement Unit at the Lower Back

    Directory of Open Access Journals (Sweden)

    Minh H. Pham

    2017-09-01

    Full Text Available Introduction: Inertial measurement units (IMUs) positioned on various body locations allow detailed gait analysis even under unconstrained conditions. From a medical perspective, the assessment of vulnerable populations is of particular relevance, especially in the daily-life environment. Gait analysis algorithms need thorough validation, as many chronic diseases show specific and even unique gait patterns. The aim of this study was therefore to validate an acceleration-based step detection algorithm for patients with Parkinson’s disease (PD) and older adults in both a lab-based and home-like environment. Methods: In this prospective observational study, data were captured from a single 6-degrees-of-freedom IMU (APDM; 3DOF accelerometer and 3DOF gyroscope) worn on the lower back. Detection of heel strike (HS) and toe off (TO) on a treadmill was validated against an optoelectronic system (Vicon) (11 PD patients and 12 older adults). A second independent validation study in the home-like environment was performed against video observation (20 PD patients and 12 older adults) and included step counting during turning and non-turning, defined with a previously published algorithm. Results: A continuous wavelet transform (cwt)-based algorithm was developed for step detection with very high agreement with the optoelectronic system. HS detection in PD patients/older adults, respectively, reached 99/99% accuracy. Similar results were obtained for TO (99/100%). In HS detection, Bland–Altman plots showed a mean difference of 0.002 s [95% confidence interval (CI) −0.09 to 0.10] between the algorithm and the optoelectronic system. The Bland–Altman plot for TO detection showed a mean difference of 0.00 s (95% CI −0.12 to 0.12). In the home-like assessment, the algorithm for detection of occurrence of steps during turning reached 90% (PD patients)/90% (older adults) sensitivity, 83/88% specificity, and 88/89% accuracy. The detection of steps during non-turning phases
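    As a rough illustration of the kind of processing described in this record (wavelet-style smoothing of trunk acceleration followed by peak picking), a toy sketch is given below. It is not the authors' validated algorithm: the derivative-of-Gaussian kernel stands in for a single-scale CWT, and all thresholds, scales and the synthetic signal are illustrative assumptions:

```python
import numpy as np
from scipy.signal import find_peaks

def detect_heel_strikes(acc_ap, fs, scale=0.25, min_step_interval=0.4):
    """Toy step-event detector: smooth anterior-posterior acceleration with a
    derivative-of-Gaussian kernel (a crude stand-in for a single-scale CWT),
    then pick prominent peaks separated by a plausible step time."""
    t = np.arange(-3 * scale, 3 * scale, 1 / fs)
    kernel = -t * np.exp(-t**2 / (2 * scale**2))           # wavelet-like smoothing kernel
    smoothed = np.convolve(acc_ap - np.mean(acc_ap), kernel, mode="same")
    peaks, _ = find_peaks(smoothed,
                          distance=int(min_step_interval * fs),
                          prominence=np.std(smoothed))
    return peaks / fs                                       # candidate heel-strike times (s)

# synthetic walking-like signal: ~2 steps per second plus noise
fs = 100.0
t = np.arange(0, 10, 1 / fs)
acc = np.sin(2 * np.pi * 2.0 * t) + 0.2 * np.random.default_rng(0).normal(size=t.size)
print(detect_heel_strikes(acc, fs)[:5])
```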

  1. Fermion cluster algorithms

    International Nuclear Information System (INIS)

    Chandrasekharan, Shailesh

    2000-01-01

    Cluster algorithms have recently been used to eliminate sign problems that plague Monte-Carlo methods in a variety of systems. In particular, such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions, we discuss the ideas underlying the algorithm.

  2. Uncertainty analysis of an interfacial area reconstruction algorithm and its application to two group interfacial area transport equation validation

    International Nuclear Information System (INIS)

    Dave, A.J.; Manera, A.; Beyer, M.; Lucas, D.; Prasser, H.-M.

    2016-01-01

    Wire mesh sensors (WMS) are state of the art devices that allow high resolution (in space and time) measurement of 2D void fraction distribution over a wide range of two-phase flow regimes, from bubbly to annular. Data using WMS have been recorded at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) (Lucas et al., 2010; Beyer et al., 2008; Prasser et al., 2003) for a wide combination of superficial gas and liquid velocities, providing an excellent database for advances in two-phase flow modeling. In two-phase flow, the interfacial area plays an integral role in coupling the mass, momentum and energy transport equations of the liquid and gas phase. While current models used in best-estimate thermal-hydraulic codes (e.g. RELAP5, TRACE, TRACG, etc.) are still based on algebraic correlations for the estimation of the interfacial area in different flow regimes, interfacial area transport equations (IATE) have been proposed to predict the dynamic propagation in space and time of interfacial area (Ishii and Hibiki, 2010). IATE models are still under development and the HZDR WMS experimental data provide an excellent basis for the validation and further advance of these models. The current paper is focused on the uncertainty analysis of algorithms used to reconstruct interfacial area densities from the void-fraction voxel data measured using WMS and their application towards validation efforts of two-group IATE models. In previous research efforts, a surface triangularization algorithm has been developed in order to estimate the surface area of individual bubbles recorded with the WMS, and estimate the interfacial area in the given flow condition. In the present paper, synthetically generated bubbles are used to assess the algorithm’s accuracy. As the interfacial area of the synthetic bubbles are defined by user inputs, the error introduced by the algorithm can be quantitatively obtained. The accuracy of interfacial area measurements is characterized for different bubbles

  3. Uncertainty analysis of an interfacial area reconstruction algorithm and its application to two group interfacial area transport equation validation

    Energy Technology Data Exchange (ETDEWEB)

    Dave, A.J., E-mail: akshayjd@umich.edu [Department of Nuclear Engineering and Rad. Sciences, University of Michigan, Ann Arbor, MI 48105 (United States); Manera, A. [Department of Nuclear Engineering and Rad. Sciences, University of Michigan, Ann Arbor, MI 48105 (United States); Beyer, M.; Lucas, D. [Helmholtz-Zentrum Dresden-Rossendorf, Institute of Fluid Dynamics, 01314 Dresden (Germany); Prasser, H.-M. [Department of Mechanical and Process Engineering, ETH Zurich, 8092 Zurich (Switzerland)

    2016-12-15

    Wire mesh sensors (WMS) are state of the art devices that allow high resolution (in space and time) measurement of 2D void fraction distribution over a wide range of two-phase flow regimes, from bubbly to annular. Data using WMS have been recorded at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) (Lucas et al., 2010; Beyer et al., 2008; Prasser et al., 2003) for a wide combination of superficial gas and liquid velocities, providing an excellent database for advances in two-phase flow modeling. In two-phase flow, the interfacial area plays an integral role in coupling the mass, momentum and energy transport equations of the liquid and gas phase. While current models used in best-estimate thermal-hydraulic codes (e.g. RELAP5, TRACE, TRACG, etc.) are still based on algebraic correlations for the estimation of the interfacial area in different flow regimes, interfacial area transport equations (IATE) have been proposed to predict the dynamic propagation in space and time of interfacial area (Ishii and Hibiki, 2010). IATE models are still under development and the HZDR WMS experimental data provide an excellent basis for the validation and further advance of these models. The current paper is focused on the uncertainty analysis of algorithms used to reconstruct interfacial area densities from the void-fraction voxel data measured using WMS and their application towards validation efforts of two-group IATE models. In previous research efforts, a surface triangularization algorithm has been developed in order to estimate the surface area of individual bubbles recorded with the WMS, and estimate the interfacial area in the given flow condition. In the present paper, synthetically generated bubbles are used to assess the algorithm’s accuracy. As the interfacial area of the synthetic bubbles are defined by user inputs, the error introduced by the algorithm can be quantitatively obtained. The accuracy of interfacial area measurements is characterized for different bubbles
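    Both of the records above rely on estimating a bubble's interfacial area by triangulating its reconstructed surface and summing the triangle areas. That essential computation is sketched below on a synthetic mesh; the WMS reconstruction itself and the uncertainty analysis are beyond this sketch:

```python
import numpy as np

def mesh_surface_area(vertices, faces):
    """Sum of triangle areas for a triangulated surface.
    vertices: (N, 3) array of points; faces: (M, 3) array of vertex indices."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    cross = np.cross(v1 - v0, v2 - v0)           # per-triangle area = 0.5 * |cross product|
    return 0.5 * np.linalg.norm(cross, axis=1).sum()

# synthetic check: two triangles forming a 1 x 1 square have area 1.0
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
print(mesh_surface_area(verts, faces))  # -> 1.0
```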

  4. Automated System for Teaching Computational Complexity of Algorithms Course

    Directory of Open Access Journals (Sweden)

    Vadim S. Roublev

    2017-01-01

    Full Text Available This article describes the problems of designing an automated teaching system for the “Computational complexity of algorithms” course. This system should provide students with the means to familiarize themselves with the complex mathematical apparatus and improve their mathematical thinking in the respective area. The article introduces the technique of an algorithm symbol scroll table that allows estimating lower and upper bounds of computational complexity. Further, we introduce a set of theorems that facilitate the analysis in cases when the integer rounding of algorithm parameters is involved and when analyzing the complexity of a sum. At the end, the article introduces a normal system of symbol transformations that both allows one to perform any symbol transformations and simplifies the automated validation of such transformations. The article is published in the authors’ wording.

  5. Recursive definition of global cellular-automata mappings

    DEFF Research Database (Denmark)

    Feldberg, Rasmus; Knudsen, Carsten; Rasmussen, Steen

    1994-01-01

    A method for a recursive definition of global cellular-automata mappings is presented. The method is based on a graphical representation of global cellular-automata mappings. For a given cellular-automaton rule the recursive algorithm defines the change of the global cellular-automaton mapping... as the number of lattice sites is incremented. A proof of lattice size invariance of global cellular-automata mappings is derived from an approximation to the exact recursive definition. The recursive definitions are applied to calculate the fractal dimension of the set of reachable states and of the set...

  6. Single-centre validation of the EASL-CLIF consortium definition of acute-on-chronic liver failure and CLIF-SOFA for prediction of mortality in cirrhosis.

    Science.gov (United States)

    Silva, Pedro E Soares E; Fayad, Leonardo; Lazzarotto, César; Ronsoni, Marcelo F; Bazzo, Maria L; Colombo, Bruno S; Dantas-Correa, Esther B; Narciso-Schiavon, Janaína L; Schiavon, Leonardo L

    2015-05-01

    The concept of acute-on-chronic liver failure (ACLF) has emerged to identify those subjects with organ failure and high mortality rates. However, the absence of a precise definition has limited the clinical application of, and research related to, the ACLF concept. We sought to validate the ACLF definition and the CLIF-SOFA score recently proposed by the EASL-CLIF Consortium in a cohort of patients admitted for acute decompensation (AD) of cirrhosis. In this prospective cohort study, patients were followed during their hospital stay, and 30- and 90-day mortality was evaluated by phone call in case of hospital discharge. All subjects underwent laboratory evaluation at admission. Between December 2010 and November 2013, 192 cirrhotic patients were included. At enrollment, 46 patients (24%) met the criteria for ACLF (Grades 1, 2 and 3 in 18%, 4% and 2%, respectively). The 30-day mortality was 65% in the ACLF group and 12% in the remaining subjects, a statistically significant difference. Logistic regression analysis showed that 30-day mortality was independently associated with ascites and ACLF at admission. The Kaplan-Meier survival probability at 90 days was 92% in patients without ascites or ACLF and only 22% for patients with both ascites and ACLF. The AUROC of CLIF-SOFA in predicting 30-day mortality was 0.847 ± 0.034, with a sensitivity of 64%, specificity of 90% and positive likelihood ratio of 6.61 for values ≥9. In our single-centre experience, the CLIF-SOFA score and the EASL-CLIF Consortium definition of ACLF proved to be strong predictors of short-term mortality in cirrhotic patients admitted for AD. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  7. Self-management interventions : Proposal and validation of a new operational definition

    NARCIS (Netherlands)

    Jonkman, Nini H; Schuurmans, Marieke J; Jaarsma, Tiny; Shortridge-Baggett, Lillie M; Hoes, Arno W; Trappenburg, Jaap C A

    2016-01-01

    OBJECTIVES: Systematic reviews on complex interventions like self-management interventions often do not explicitly state an operational definition of the intervention studied, which may impact the review's conclusions. This study aimed to propose an operational definition of self-management

  8. Self-management interventions: Proposal and validation of a new operational definition

    NARCIS (Netherlands)

    Jonkman, N.H.; Schuurmans, Marieke J.; Jaarsma, Tiny; Shortbridge-Baggett, Lillie M.; Hoes, Arno W.; Trappenburg, Jaap C A

    2016-01-01

    Objectives: Systematic reviews on complex interventions like self-management interventions often do not explicitly state an operational definition of the intervention studied, which may impact the review's conclusions. This study aimed to propose an operational definition of self-management

  9. Volume definition system for treatment planning

    International Nuclear Information System (INIS)

    Alakuijala, Jyrki; Pekkarinen, Ari; Puurunen, Harri

    1997-01-01

    volume definition process dramatically. Its true 3D nature allows the algorithm to find the regions quickly with high accuracy. The competitive volume growing requires only a small amount of user input for initial seeding. The simultaneous growing of multiple segments mitigates volume leaking, which is a major problem in normal region growing. The automatic tool finds the body outline, air, and couch reliably in 30 s for a volume image of 30 slices. The other algorithms are almost interactive. CTV interpolation is an excellent feature for defining a CTV spanning several slices. Real-time verification in 2D and 3D visualization supports the operator and thus speeds up the contouring process. Conclusions: The contouring process requires a rich set of tools to comply with the multi-faceted requirements of volume definition. The body, specific anatomic volumes and the target are best defined using a set of tools specifically built for that purpose. The execution speed of both the algorithms and the visualization is very important for operator satisfaction.

  10. Gravity Search Algorithm hybridized Recursive Least Square method for power system harmonic estimation

    Directory of Open Access Journals (Sweden)

    Santosh Kumar Singh

    2017-06-01

    Full Text Available This paper presents a new hybrid method based on the Gravity Search Algorithm (GSA) and Recursive Least Squares (RLS), known as GSA-RLS, to solve harmonic estimation problems for time-varying power signals in the presence of different noises. GSA is based on Newton’s law of gravity and mass interactions. In the proposed method, the searcher agents are a collection of masses that interact with each other using Newton’s laws of gravity and motion. The basic GSA strategy is combined sequentially with the RLS algorithm in an adaptive way to update the unknown parameters (weights) of the harmonic signal. Simulation and practical validation are carried out using real-time data obtained from a heavy paper industry. The comparative performance of the proposed algorithm is evaluated against other recently reported algorithms, such as Differential Evolution (DE), Particle Swarm Optimization (PSO), Bacteria Foraging Optimization (BFO), Fuzzy-BFO (F-BFO) hybridized with Least Squares (LS), and BFO hybridized with RLS, which reveals that the proposed GSA-RLS algorithm is the best in terms of accuracy, convergence and computational time.
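    The RLS half of such hybrid estimators updates the harmonic amplitudes recursively from a sinusoidal regression model. A bare RLS-with-forgetting sketch for a known set of harmonic frequencies is shown below; the GSA search and the industrial data are not reproduced, and the signal, frequencies and parameter values are synthetic assumptions:

```python
import numpy as np

def rls_harmonic_amplitudes(y, t, freqs, lam=0.99, delta=100.0):
    """Recursive least squares with forgetting factor lam for the model
    y(t) = sum_k a_k*sin(2*pi*f_k*t) + b_k*cos(2*pi*f_k*t)."""
    n = 2 * len(freqs)
    theta = np.zeros(n)                  # parameter vector [a_1, b_1, a_2, b_2, ...]
    P = delta * np.eye(n)                # inverse correlation matrix
    for yk, tk in zip(y, t):
        phi = np.concatenate([[np.sin(2 * np.pi * f * tk), np.cos(2 * np.pi * f * tk)]
                              for f in freqs])
        k = P @ phi / (lam + phi @ P @ phi)          # gain vector
        theta = theta + k * (yk - phi @ theta)       # parameter update
        P = (P - np.outer(k, phi) @ P) / lam         # covariance update
    return theta

# synthetic distorted power signal: 50 Hz fundamental plus a 3rd harmonic
fs, freqs = 5000.0, [50.0, 150.0]
t = np.arange(0, 0.2, 1 / fs)
y = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.3 * np.cos(2 * np.pi * 150 * t)
print(np.round(rls_harmonic_amplitudes(y, t, freqs), 3))   # approximately [1.0, 0.0, 0.0, 0.3]
```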

  11. Development of coupled models and their validation against experiments -DECOVALEX project

    International Nuclear Information System (INIS)

    Stephansson, O.; Jing, L.; Kautsky, F.

    1995-01-01

    DECOVALEX is an international co-operative research project for theoretical and experimental studies of coupled thermal, hydrological and mechanical processes in hard rocks. Different mathematical models and computer codes have been developed by research teams from different countries. These models and codes are used to study the so-called Bench Mark Test and Test Case problems developed within this project. Bench Mark Tests are defined as hypothetical initial-boundary value problems of a generic nature, and Test Cases are experimental investigations of part or full aspects of coupled thermo-hydro-mechanical processes in hard rocks. Analytical and semi-analytical solutions related to coupled T-H-M processes are also developed for problems with simpler geometry and initial-boundary conditions. These solutions are developed to verify algorithms and their computer implementations. In this contribution, the motivation, organization, approach and current status of the project are presented, together with definitions of the Bench Mark Test and Test Case problems. The definition and part of the results for a BMT problem (BMT3) for a near-field repository model are described as an example. (authors). 3 refs., 11 figs., 3 tabs

  12. A benchmark study of the Signed-particle Monte Carlo algorithm for the Wigner equation

    Directory of Open Access Journals (Sweden)

    Muscato Orazio

    2017-12-01

    Full Text Available The Wigner equation represents a promising model for the simulation of electronic nanodevices, which allows the comprehension and prediction of quantum mechanical phenomena in terms of quasi-distribution functions. During these years, a Monte Carlo technique for the solution of this kinetic equation has been developed, based on the generation and annihilation of signed particles. This technique can be deeply understood in terms of the theory of pure jump processes with a general state space, producing a class of stochastic algorithms. One of these algorithms has been validated successfully by numerical experiments on a benchmark test case.

  13. Handbook of Memetic Algorithms

    CERN Document Server

    Cotta, Carlos; Moscato, Pablo

    2012-01-01

    Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems. The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes. “Handbook of Memetic Algorithms” organizes, in a structured way, all the most important results in the field of MAs since their earliest definition until now. A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, and uncertainties, is analysed separately and, for each problem, memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...

  14. Preliminary definition of improvement in juvenile arthritis.

    Science.gov (United States)

    Giannini, E H; Ruperto, N; Ravelli, A; Lovell, D J; Felson, D T; Martini, A

    1997-07-01

    To identify a core set of outcome variables for the assessment of children with juvenile arthritis (JA), to use the core set to develop a definition of improvement to determine whether individual patients demonstrate clinically important improvement, and to promote this definition as a single efficacy measure in JA clinical trials. A core set of outcome variables was established using a combination of statistical and consensus formation techniques. Variables in the core set consisted of 1) physician global assessment of disease activity; 2) parent/patient assessment of overall well-being; 3) functional ability; 4) number of joints with active arthritis; 5) number of joints with limited range of motion; and 6) erythrocyte sedimentation rate. To establish a definition of improvement using this core set, 21 pediatric rheumatologists from 14 countries met, and, using consensus formation techniques, scored each of 72 patient profiles as improved or not improved. Using the physicians' consensus as the gold standard, the chi-square, sensitivity, and specificity were calculated for each of 240 possible definitions of improvement. Definitions with low sensitivity or specificity were eliminated; the ability of the remaining definitions to discriminate between the effects of active agent and those of placebo, using actual trial data, was then observed. Each definition was also ranked for face validity, and the sum of the ranks was then multiplied by the kappa statistic. The definition of improvement with the highest final score was as follows: at least 30% improvement from baseline in 3 of any 6 variables in the core set, with no more than 1 of the remaining variables worsening by >30%. The second highest scoring definition was closely related to the first; the third highest was similar to the Paulus criteria used in adult rheumatoid arthritis trials, except with different variables. This indicates convergent validity of the process used. We propose a definition of improvement for JA. Use of a uniform definition will help

  15. Ultrasonic particle image velocimetry for improved flow gradient imaging: algorithms, methodology and validation

    International Nuclear Information System (INIS)

    Niu Lili; Qian Ming; Yu Wentao; Jin Qiaofeng; Ling Tao; Zheng Hairong; Wan Kun; Gao Shen

    2010-01-01

    This paper presents a new algorithm for ultrasonic particle image velocimetry (Echo PIV) to improve the accuracy and efficiency of flow velocity measurement in regions with high velocity gradients. The conventional Echo PIV algorithm has been modified by incorporating a multiple iterative algorithm, a sub-pixel method, a filtering and interpolation method, and a spurious vector elimination algorithm. The new algorithm's performance is assessed by analyzing simulated images with known displacements, and ultrasonic B-mode images of in vitro laminar pipe flow, rotational flow and in vivo rat carotid arterial flow. Results on the simulated images show that the new algorithm produces a much smaller bias from the known displacements. For laminar flow, the new algorithm results in 1.1% deviation from the analytically derived value, versus 8.8% for the conventional algorithm. The vector quality evaluation for the rotational flow imaging shows that the new algorithm produces better velocity vectors. For in vivo rat carotid arterial flow imaging, the results from the new algorithm deviate on average by 6.6% from the Doppler-measured peak velocities, compared with 15% for the conventional algorithm. The new Echo PIV algorithm is able to effectively improve the measurement accuracy when imaging flow fields with high velocity gradients.
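    At the heart of any PIV algorithm, ultrasound-based or optical, is cross-correlation of interrogation windows between consecutive frames, usually refined with a sub-pixel peak fit. The sketch below shows that core step on synthetic speckle images; it is not the authors' full iterative and filtered pipeline, and the window size and test shift are arbitrary:

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Estimate the shift of win_b relative to win_a via FFT cross-correlation
    with a parabolic sub-pixel peak fit (one interrogation window only)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.fftshift(np.real(np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b))))
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

    def subpixel(c, i, axis_len):
        # three-point parabolic fit around the integer peak
        if 0 < i < axis_len - 1:
            num = c[i - 1] - c[i + 1]
            den = 2 * (c[i - 1] - 2 * c[i] + c[i + 1])
            return num / den if den != 0 else 0.0
        return 0.0

    dy = iy - corr.shape[0] // 2 + subpixel(corr[:, ix], iy, corr.shape[0])
    dx = ix - corr.shape[1] // 2 + subpixel(corr[iy, :], ix, corr.shape[1])
    return dx, dy

# synthetic speckle pattern shifted by (3, 1) pixels between frames
rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, shift=(1, 3), axis=(0, 1))
print(window_displacement(frame_a, frame_b))   # approximately (3.0, 1.0)
```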

  16. Social validity in single-case research: A systematic literature review of prevalence and application.

    Science.gov (United States)

    Snodgrass, Melinda R; Chung, Moon Y; Meadan, Hedda; Halle, James W

    2018-03-01

    Single-case research (SCR) has been a valuable methodology in special education research. Montrose Wolf (1978), an early pioneer in single-case methodology, coined the term "social validity" to refer to the social importance of the goals selected, the acceptability of procedures employed, and the effectiveness of the outcomes produced in applied investigations. Since 1978, many contributors to SCR have included social validity as a feature of their articles and several authors have examined the prevalence and role of social validity in SCR. We systematically reviewed all SCR published in six highly-ranked special education journals from 2005 to 2016 to establish the prevalence of social validity assessments and to evaluate their scientific rigor. We found relatively low, but stable prevalence with only 28 publications addressing all three factors of the social validity construct (i.e., goals, procedures, outcomes). We conducted an in-depth analysis of the scientific rigor of these 28 publications. Social validity remains an understudied construct in SCR, and the scientific rigor of social validity assessments is often lacking. Implications and future directions are discussed. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Does a Claims Diagnosis of Autism Mean a True Case?

    Science.gov (United States)

    Burke, James P.; Jain, Anjali; Yang, Wenya; Kelly, Jonathan P.; Kaiser, Marygrace; Becker, Laura; Lawer, Lindsay; Newschaffer, Craig J.

    2014-01-01

    The purpose of this study was to validate autism spectrum disorder cases identified through claims-based case identification algorithms against a clinical review of medical charts. Charts were reviewed for 432 children who fell into one of the three following groups: (a) two or more claims with an autism spectrum disorder diagnosis…

  18. Validation of a numerical FSI simulation of an aortic BMHV by in vitro PIV experiments.

    Science.gov (United States)

    Annerel, S; Claessens, T; Degroote, J; Segers, P; Vierendeels, J

    2014-08-01

    In this paper, a validation of a recently developed fluid-structure interaction (FSI) coupling algorithm to simulate numerically the dynamics of an aortic bileaflet mechanical heart valve (BMHV) is performed. This validation is done by comparing the numerical simulation results with in vitro experiments. For the in vitro experiments, the leaflet kinematics and flow fields are obtained via the particle image velocimetry (PIV) technique. Subsequently, the same case is numerically simulated by the coupling algorithm and the resulting leaflet kinematics and flow fields are obtained. Finally, the results are compared, revealing great similarity in leaflet motion and flow fields between the numerical simulation and the experimental test. Therefore, it is concluded that the developed algorithm is able to capture very accurately all the major leaflet kinematics and dynamics and can be used to study and optimize the design of BMHVs. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.

  19. Clinical Criteria Versus a Possible Research Case Definition in Chronic Fatigue Syndrome/Myalgic Encephalomyelitis.

    Science.gov (United States)

    Jason, Leonard A; McManimen, Stephanie; Sunnquist, Madison; Newton, Julia L; Strand, Elin Bolle

    2017-01-01

    The Institute of Medicine (IOM) recently developed clinical criteria for what had been known as chronic fatigue syndrome (CFS). Given the broad nature of the clinical IOM criteria, there is a need for a research definition that would select a more homogeneous and impaired group of patients than the IOM clinical criteria. At the present time, it is unclear what will serve as the research definition. The current study focused on a research definition which selected homebound individuals who met the four IOM criteria, excluding medical and psychiatric co-morbidities. Participants meeting our research criteria were compared to those meeting the IOM criteria. Those not meeting either of these criteria sets were placed in a separate group defined by 6 or more months of fatigue. Data analyzed were from the DePaul Symptom Questionnaire and the SF-36. Due to unequal sample sizes and variances, Welch's F tests and Games-Howell post hoc tests were conducted. Using a large database of over 1,000 patients from several countries, we found that those meeting the more restrictive research definition were even more impaired and more symptomatic than those meeting criteria for the other two groups. Deciding on a particular research case definition would allow researchers to select more comparable patient samples across settings, and this would represent one of the most significant methodologic advances for this field of study.

  20. Novel density-based and hierarchical density-based clustering algorithms for uncertain data.

    Science.gov (United States)

    Zhang, Xianchao; Liu, Han; Zhang, Xiaotong

    2017-09-01

    Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues like losing uncertain information, high time complexity and nonadaptive threshold have not been addressed well in the previous density-based algorithm FDBSCAN and hierarchical density-based algorithm FOPTICS. In this paper, we firstly propose a novel density-based algorithm PDBSCAN, which improves the previous FDBSCAN from the following aspects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, direct reachability probability, thus reducing the complexity and solving the issue of nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify the algorithm PDBSCAN to an improved version (PDBSCANi), by using a better cluster assignment strategy to ensure that every object will be assigned to the most appropriate cluster, thus solving the issue of nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulties for clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm POPTICS by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of the datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. Experimental results demonstrate the superiority of our proposed algorithms over the existing
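    A central quantity in density-based clustering of uncertain data is the probability that two uncertain objects lie within a distance threshold of each other. When each object is represented by samples of its possible positions, that probability, and a simplified core-object check built on it, can be estimated as sketched below. This illustrates the general idea only, not the exact probability-neighborhood and core-object definitions of PDBSCAN/POPTICS, and the thresholds and data are hypothetical:

```python
import numpy as np

def prob_within_eps(samples_a, samples_b, eps):
    """Monte Carlo estimate of P(dist(A, B) <= eps) for two uncertain objects,
    each given as an (n_samples, dim) array of possible positions."""
    d = np.linalg.norm(samples_a[:, None, :] - samples_b[None, :, :], axis=2)
    return float((d <= eps).mean())

def is_core_object(obj_idx, objects, eps, min_pts, prob_threshold=0.5):
    """Simplified core-object check: count neighbors that are within eps
    with probability >= prob_threshold (the object itself counts as one)."""
    probs = [prob_within_eps(objects[obj_idx], o, eps)
             for j, o in enumerate(objects) if j != obj_idx]
    return sum(p >= prob_threshold for p in probs) + 1 >= min_pts

# hypothetical uncertain objects: Gaussian position samples around three sites
rng = np.random.default_rng(0)
objects = [rng.normal(loc=c, scale=0.2, size=(50, 2)) for c in ([0, 0], [0.3, 0], [5, 5])]
print(prob_within_eps(objects[0], objects[1], eps=0.5))
print(is_core_object(0, objects, eps=0.5, min_pts=2))
```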

  1. Symmetry and Algorithmic Complexity of Polyominoes and Polyhedral Graphs

    KAUST Repository

    Zenil, Hector

    2018-02-24

    We introduce a definition of algorithmic symmetry able to capture essential aspects of geometric symmetry. We review, study and apply a method for approximating the algorithmic complexity (also known as Kolmogorov-Chaitin complexity) of graphs and networks based on the concept of Algorithmic Probability (AP). AP is a concept (and method) capable of recursively enumerating all properties of a computable (causal) nature beyond statistical regularities. We explore the connections of algorithmic complexity---both theoretical and numerical---with geometric properties, mainly symmetry and topology, from an (algorithmic) information-theoretic perspective. We show that approximations to algorithmic complexity by lossless compression and an Algorithmic Probability-based method can characterize properties of polyominoes, polytopes, regular and quasi-regular polyhedra as well as polyhedral networks, thereby demonstrating its profiling capabilities.

  2. Symmetry and Algorithmic Complexity of Polyominoes and Polyhedral Graphs

    KAUST Repository

    Zenil, Hector; Kiani, Narsis A.; Tegner, Jesper

    2018-01-01

    We introduce a definition of algorithmic symmetry able to capture essential aspects of geometric symmetry. We review, study and apply a method for approximating the algorithmic complexity (also known as Kolmogorov-Chaitin complexity) of graphs and networks based on the concept of Algorithmic Probability (AP). AP is a concept (and method) capable of recursively enumerating all properties of a computable (causal) nature beyond statistical regularities. We explore the connections of algorithmic complexity---both theoretical and numerical---with geometric properties, mainly symmetry and topology, from an (algorithmic) information-theoretic perspective. We show that approximations to algorithmic complexity by lossless compression and an Algorithmic Probability-based method can characterize properties of polyominoes, polytopes, regular and quasi-regular polyhedra as well as polyhedral networks, thereby demonstrating its profiling capabilities.

  3. Sensitivity, Specificity, and Public-Health Utility of Clinical Case Definitions Based on the Signs and Symptoms of Cholera in Africa.

    Science.gov (United States)

    Nadri, Johara; Sauvageot, Delphine; Njanpop-Lafourcade, Berthe-Marie; Baltazar, Cynthia S; Banla Kere, Abiba; Bwire, Godfrey; Coulibaly, Daouda; Kacou N'Douba, Adele; Kagirita, Atek; Keita, Sakoba; Koivogui, Lamine; Landoh, Dadja E; Langa, Jose P; Miwanda, Berthe N; Mutombo Ndongala, Guy; Mwakapeje, Elibariki R; Mwambeta, Jacob L; Mengel, Martin A; Gessner, Bradford D

    2018-04-01

    During 2014, Africa reported more than half of the global suspected cholera cases. Based on data collected from seven countries in the African Cholera Surveillance Network (Africhol), we assessed the sensitivity, specificity, and positive and negative predictive values of clinical cholera case definitions, including that recommended by the World Health Organization (WHO), using culture confirmation as the gold standard. The study was designed to assess results in real-world field situations in settings with recent cholera outbreaks or endemicity. From June 2011 to July 2015, a total of 5,084 persons with suspected cholera were tested for Vibrio cholerae in seven different countries, of whom 35.7% had culture confirmation. For all countries combined, the WHO case definition had a sensitivity of 92.7%, specificity of 8.1%, positive predictive value of 36.1%, and negative predictive value of 66.6%. Adding dehydration, vomiting, or rice-water stools to the case definition could increase the specificity without a substantial decrease in sensitivity. Future studies could further refine our findings, primarily by using more sensitive methods for cholera confirmation.

  4. Highly Efficient Compression Algorithms for Multichannel EEG.

    Science.gov (United States)

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
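    The gains reported for predictive coding schemes like those above come from the fact that prediction residuals of EEG samples have far lower entropy than the raw samples. The self-contained sketch below illustrates that effect by comparing generic lossless compression of raw versus first-order-residual data; it uses zlib as a stand-in entropy coder on a synthetic signal and is not the MVAR or context-based pipeline of the paper:

```python
import numpy as np
import zlib

def compressed_size(int16_samples):
    """Byte size after generic lossless (DEFLATE) compression."""
    return len(zlib.compress(int16_samples.astype(np.int16).tobytes(), 9))

# synthetic EEG-like channel: smooth oscillations plus noise, quantized to int16
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 256)                          # 10 s at 256 Hz
x = 200 * np.sin(2 * np.pi * 10 * t) + 20 * rng.normal(size=t.size)
samples = np.round(x).astype(np.int16)

raw = compressed_size(samples)
residual = compressed_size(np.diff(samples, prepend=samples[:1]))  # first-order predictor residuals
print(f"raw: {raw} bytes, residual: {residual} bytes, "
      f"size reduction: {100 * (1 - residual / raw):.1f}%")
```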

  5. Variation between Hospitals with Regard to Diagnostic Practice, Coding Accuracy, and Case-Mix. A Retrospective Validation Study of Administrative Data versus Medical Records for Estimating 30-Day Mortality after Hip Fracture.

    Directory of Open Access Journals (Sweden)

    Jon Helgeland

    Full Text Available The purpose of this study was to assess the validity of patient administrative data (PAS) for calculating 30-day mortality after hip fracture as a quality indicator, by a retrospective study of medical records. We used PAS data from all Norwegian hospitals (2005-2009), merged with vital status from the National Registry, to calculate 30-day case-mix adjusted mortality for each hospital (n = 51). We used stratified sampling to establish a representative sample of both hospitals and cases. The hospitals were stratified according to high, low and medium mortality, of which 4, 3, and 5 hospitals were sampled, respectively. Within hospitals, cases were sampled stratified according to year of admission, age, length of stay, and vital 30-day status (alive/dead). The final study sample included 1043 cases from 11 hospitals. Clinical information was abstracted from the medical records. Diagnostic and clinical information from the medical records and PAS were used to define definite and probable hip fracture. We used logistic regression analysis in order to estimate systematic between-hospital variation in unmeasured confounding. Finally, to study the consequences of unmeasured confounding for identifying mortality outlier hospitals, a sensitivity analysis was performed. The estimated overall positive predictive value was 95.9% for definite and 99.7% for definite or probable hip fracture, with no statistically significant differences between hospitals. The standard deviation of the additional, systematic hospital bias in mortality estimates was 0.044 on the logistic scale. The effect of unmeasured confounding on outlier detection was small to moderate, noticeable only for large hospital volumes. This study showed that PAS data are adequate for identifying cases of hip fracture, and the effect of unmeasured case-mix variation was small. In conclusion, PAS data are adequate for calculating 30-day mortality after hip fracture as a quality indicator in Norway.

  6. Variation between Hospitals with Regard to Diagnostic Practice, Coding Accuracy, and Case-Mix. A Retrospective Validation Study of Administrative Data versus Medical Records for Estimating 30-Day Mortality after Hip Fracture.

    Science.gov (United States)

    Helgeland, Jon; Kristoffersen, Doris Tove; Skyrud, Katrine Damgaard; Lindman, Anja Schou

    2016-01-01

    The purpose of this study was to assess the validity of patient administrative data (PAS) for calculating 30-day mortality after hip fracture as a quality indicator, by a retrospective study of medical records. We used PAS data from all Norwegian hospitals (2005-2009), merged with vital status from the National Registry, to calculate 30-day case-mix adjusted mortality for each hospital (n = 51). We used stratified sampling to establish a representative sample of both hospitals and cases. The hospitals were stratified according to high, low and medium mortality of which 4, 3, and 5 hospitals were sampled, respectively. Within hospitals, cases were sampled stratified according to year of admission, age, length of stay, and vital 30-day status (alive/dead). The final study sample included 1043 cases from 11 hospitals. Clinical information was abstracted from the medical records. Diagnostic and clinical information from the medical records and PAS were used to define definite and probable hip fracture. We used logistic regression analysis in order to estimate systematic between-hospital variation in unmeasured confounding. Finally, to study the consequences of unmeasured confounding for identifying mortality outlier hospitals, a sensitivity analysis was performed. The estimated overall positive predictive value was 95.9% for definite and 99.7% for definite or probable hip fracture, with no statistically significant differences between hospitals. The standard deviation of the additional, systematic hospital bias in mortality estimates was 0.044 on the logistic scale. The effect of unmeasured confounding on outlier detection was small to moderate, noticeable only for large hospital volumes. This study showed that PAS data are adequate for identifying cases of hip fracture, and the effect of unmeasured case mix variation was small. In conclusion, PAS data are adequate for calculating 30-day mortality after hip-fracture as a quality indicator in Norway.

  7. Improving the Fine-Tuning of Metaheuristics: An Approach Combining Design of Experiments and Racing Algorithms

    Directory of Open Access Journals (Sweden)

    Eduardo Batista de Moraes Barbosa

    2017-01-01

    Full Text Available Usually, metaheuristic algorithms are adapted to a large set of problems by applying a few modifications to their parameters for each specific case. However, this flexibility demands a huge effort to tune such parameters correctly. The tuning of metaheuristics therefore arises as one of the most important challenges in research on these algorithms. Thus, this paper aims to present a methodology combining statistical and artificial intelligence methods in the fine-tuning of metaheuristics. The key idea is a heuristic method, called Heuristic Oriented Racing Algorithm (HORA), which explores a search space of parameters looking for candidate configurations close to a promising alternative. To confirm the validity of this approach, we present a case study for fine-tuning two distinct metaheuristics: Simulated Annealing (SA) and Genetic Algorithm (GA), in order to solve the classical traveling salesman problem. The results are compared against the same metaheuristics tuned through a racing method. Broadly, the proposed approach proved to be effective in terms of the overall time of the tuning process. Our results reveal that metaheuristics tuned by means of HORA achieve, with much less computational effort, results similar to those obtained with the other fine-tuning approach.
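
    The abstract describes HORA as a racing-style procedure that evaluates candidate parameter configurations and discards inferior ones. The sketch below shows only the generic racing idea under simple assumptions (minimization, a fixed elimination margin); it is not HORA itself, and evaluate, min_rounds and drop_margin are illustrative names.

```python
import statistics

def race(candidates, evaluate, instances, min_rounds=5, drop_margin=0.10):
    """Racing-style elimination: evaluate surviving candidate configurations on
    successive problem instances and drop those clearly worse than the best mean."""
    scores = {c: [] for c in candidates}   # candidates must be hashable (e.g. tuples)
    survivors = list(candidates)
    for round_no, instance in enumerate(instances, start=1):
        for c in survivors:
            scores[c].append(evaluate(c, instance))
        if round_no >= min_rounds:
            best = min(statistics.mean(scores[c]) for c in survivors)
            survivors = [c for c in survivors
                         if statistics.mean(scores[c]) <= best * (1 + drop_margin)]
    return min(survivors, key=lambda c: statistics.mean(scores[c]))

# Hypothetical use: tune a simulated-annealing cooling rate on random TSP instances,
# where evaluate(config, instance) runs the metaheuristic and returns the tour length.
```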

  8. Management of vascular anomalies: Review of institutional management algorithm

    Directory of Open Access Journals (Sweden)

    Lalit K Makhija

    2017-01-01

    Full Text Available Introduction: Vascular anomalies are congenital lesions broadly categorised into vascular tumours (haemangiomas) and vascular dysmorphogenesis (vascular malformations). The management of these difficult problems has lately been simplified by the biological classification and a multidisciplinary approach. To standardise the treatment protocol, an algorithm has been devised. The study aims to validate the algorithm in terms of its utility and presents our experience in managing vascular anomalies. Materials and Methods: The biological classification of Mulliken and Glowacki was followed. A detailed algorithm for the management of vascular anomalies was devised in the department and has been practised by us for the past two decades. Data regarding the types of lesions and the treatment modalities used were maintained. Results and Conclusion: This study was conducted from 2002 to 2012. A total of 784 cases of vascular anomalies were included in the study, of which 196 were haemangiomas and 588 were vascular malformations. The algorithmic approach has brought an element of much-needed objectivity to the management of vascular anomalies. It has helped us, to a certain extent, to define the management of a particular lesion considering its pathology, extent, and the aesthetic and functional consequences of ablation.

  9. Validating the Western Trauma Association algorithm for managing patients with anterior abdominal stab wounds: a Western Trauma Association multicenter trial.

    Science.gov (United States)

    Biffl, Walter L; Kaups, Krista L; Pham, Tam N; Rowell, Susan E; Jurkovich, Gregory J; Burlew, Clay Cothren; Elterman, J; Moore, Ernest E

    2011-12-01

    The optimal management of stable patients with anterior abdominal stab wounds (AASWs) remains a matter of debate. A recent Western Trauma Association (WTA) multicenter trial found that exclusion of peritoneal penetration by local wound exploration (LWE) allowed immediate discharge (D/C) of 41% of patients with AASWs. Performance of computed tomography (CT) scanning or diagnostic peritoneal lavage (DPL) did not improve the D/C rate; however, these tests led to nontherapeutic (NONTHER) laparotomy (LAP) in 24% and 31% of cases, respectively. An algorithm was proposed that included LWE, followed by either D/C or admission for serial clinical assessments, without further imaging or invasive testing. The purpose of this study was to evaluate the safety and efficacy of the algorithm in providing timely interventions for significant injuries. A multicenter, institutional review board-approved study enrolled patients with AASWs. Management was guided by the WTA AASW algorithm. Data on the presentation, evaluation, and clinical course were recorded prospectively. Two hundred twenty-two patients (94% men, age, 34.7 years ± 0.3 years) were enrolled. Sixty-two (28%) had immediate LAP, of which 87% were therapeutic (THER). Three (1%) died and the mean length of stay (LOS) was 6.9 days. One hundred sixty patients were stable and asymptomatic, and 81 of them (51%) were managed entirely per protocol. Twenty (25%) were D/C'ed from the emergency department after (-) LWE, and 11 (14%) were taken to the operating room (OR) for LAP when their clinical condition changed. Two (2%) of the protocol group underwent NONTHER LAP, and no patient experienced morbidity or mortality related to delay in treatment. Seventy-nine (49%) patients had deviations from protocol. There were 47 CT scans, 11 DPLs, and 9 laparoscopic explorations performed. In addition to the laparoscopic procedures, 38 (48%) patients were taken to the OR based on test results rather than a change in the patient's clinical

  10. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set-Effect of Pasteurization.

    Science.gov (United States)

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-02-26

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.
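
    The abstract does not state the functional form of the correction algorithm; a common choice for this kind of analyzer validation is a linear (slope/intercept) regression of analyzer readings against the chemical reference values. The sketch below assumes that form, with purely illustrative numbers.

```python
# Sketch of a simple linear correction for a milk analyzer, assuming the correction
# takes the common slope/intercept form; all values are illustrative, not study data.
import numpy as np

def fit_correction(analyzer_readings, reference_values):
    """Least-squares slope and intercept mapping analyzer output to the reference method."""
    slope, intercept = np.polyfit(analyzer_readings, reference_values, deg=1)
    return slope, intercept

def apply_correction(reading, slope, intercept):
    return slope * reading + intercept

fat_nir = np.array([3.1, 4.0, 2.6, 3.8])   # g/dL, hypothetical Near-IR readings
fat_ref = np.array([3.3, 4.3, 2.8, 4.0])   # g/dL, hypothetical chemical reference values
m, b = fit_correction(fat_nir, fat_ref)
print(f"corrected fat for a 3.5 g/dL reading: {apply_correction(3.5, m, b):.2f} g/dL")
```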

  11. Of Atkins and men: deviations from clinical definitions of mental retardation in death penalty cases.

    Science.gov (United States)

    Blume, John H; Johnson, Sheri Lynn; Seeds, Christopher

    2009-01-01

    Under Atkins v. Virginia, the Eighth Amendment exempts from execution individuals who meet the clinical definitions of mental retardation set forth by the American Association on Intellectual and Developmental Disabilities and the American Psychiatric Association. Both define mental retardation as significantly subaverage intellectual functioning accompanied by significant limitations in adaptive functioning, originating before the age of 18. Since Atkins, most jurisdictions have adopted definitions of mental retardation that conform to those definitions. But some states, looking often to stereotypes of persons with mental retardation, apply exclusion criteria that deviate from and are more restrictive than the accepted scientific and clinical definitions. These state deviations have the effect of excluding from Atkins's reach some individuals who plainly fall within the class it protects. This article focuses on the cases of Roger Cherry, Jeffrey Williams, Michael Stallings, and others, who represent an ever-growing number of individuals inappropriately excluded from Atkins. Left unaddressed, the state deviations discussed herein permit what Atkins does not: the death-sentencing and execution of some capital defendants who have mental retardation.

  12. Design for validation: An approach to systems validation

    Science.gov (United States)

    Carter, William C.; Dunham, Janet R.; Laprie, Jean-Claude; Williams, Thomas; Howden, William; Smith, Brian; Lewis, Carl M. (Editor)

    1989-01-01

    Every complex system built is validated in some manner. Computer validation begins with review of the system design. As systems became too complicated for one person to review, validation began to rely on the application of ad hoc methods by many individuals. As the cost of the changes mounted and the expense of failure increased, more organized procedures became essential. Attempts at devising and carrying out those procedures showed that validation is indeed a difficult technical problem. The successful transformation of the validation process into a systematic series of formally sound, integrated steps is necessary if the liability inherent in the future digital-system-based avionic and space systems is to be minimized. A suggested framework and timetable for the transformation are presented. Basic working definitions of two pivotal ideas (validation and system life-cycle) are provided and show how the two concepts interact. Many examples are given of past and present validation activities by NASA and others. A conceptual framework is presented for the validation process. Finally, important areas are listed for ongoing development of the validation process at NASA Langley Research Center.

  13. Evaluation of an electron Monte Carlo dose calculation algorithm for treatment planning.

    Science.gov (United States)

    Chamberland, Eve; Beaulieu, Luc; Lachance, Bernard

    2015-05-08

    showed a positive agreement with the measurements. The retrospective dosimetric comparison of a clinical case, which presented scatter perturbations by air cavities, showed a difference in dose of up to 20% between the pencil beam and eMC algorithms. Compared with the pencil beam algorithm, eMC calculations are clearly more accurate at predicting large dose perturbations due to inhomogeneities.

  14. An Emergency Department Validation of the SEP-3 Sepsis and Septic Shock Definitions and Comparison With 1992 Consensus Definitions.

    Science.gov (United States)

    Henning, Daniel J; Puskarich, Michael A; Self, Wesley H; Howell, Michael D; Donnino, Michael W; Yealy, Donald M; Jones, Alan E; Shapiro, Nathan I

    2017-10-01

    The Third International Consensus Definitions Task Force (SEP-3) proposed revised criteria defining sepsis and septic shock. We seek to evaluate the performance of the SEP-3 definitions for prediction of inhospital mortality in an emergency department (ED) population and compare the performance of the SEP-3 definitions to that of the previous definitions. This was a secondary analysis of 3 prospectively collected, observational cohorts of infected ED subjects aged 18 years or older. The primary outcome was all-cause inhospital mortality. In accordance with the SEP-3 definitions, we calculated test characteristics of sepsis (quick Sequential Organ Failure Assessment [qSOFA] score ≥2) and septic shock (vasopressor dependence plus lactate level >2.0 mmol/L) for mortality and compared them to the original 1992 consensus definitions. We identified 7,754 ED patients with suspected infection overall; 117 had no documented mental status evaluation, leaving 7,637 patients included in the analysis. The mortality rate for the overall population was 4.4% (95% confidence interval [CI] 3.9% to 4.9%). The mortality rate for patients with qSOFA score greater than or equal to 2 was 14.2% (95% CI 12.2% to 16.2%), with a sensitivity of 52% (95% CI 46% to 57%) and specificity of 86% (95% CI 85% to 87%) to predict mortality. The original systemic inflammatory response syndrome-based 1992 consensus sepsis definition had a 6.8% (95% CI 6.0% to 7.7%) mortality rate, sensitivity of 83% (95% CI 79% to 87%), and specificity of 50% (95% CI 49% to 51%). The SEP-3 septic shock mortality was 23% (95% CI 16% to 30%), with a sensitivity of 12% (95% CI 11% to 13%) and specificity of 98.4% (95% CI 98.1% to 98.7%). The original 1992 septic shock definition had a 22% (95% CI 17% to 27%) mortality rate, sensitivity of 23% (95% CI 18% to 28%), and specificity of 96.6% (95% CI 96.2% to 97.0%). Both the new SEP-3 and original sepsis definitions stratify ED patients at risk for mortality, albeit with
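
    As a minimal sketch of the SEP-3 screening criteria applied in this study (qSOFA >= 2 for sepsis; vasopressor dependence plus lactate > 2.0 mmol/L for septic shock), the code below classifies a single hypothetical patient; the field names are illustrative.

```python
# Sketch of the SEP-3 criteria as used in the study; patient fields are illustrative.

def qsofa(respiratory_rate, systolic_bp, altered_mentation):
    """qSOFA: one point each for RR >= 22/min, SBP <= 100 mmHg, and altered mentation."""
    return (respiratory_rate >= 22) + (systolic_bp <= 100) + bool(altered_mentation)

def sep3_sepsis(patient):
    return qsofa(patient["rr"], patient["sbp"], patient["altered_mentation"]) >= 2

def sep3_septic_shock(patient):
    return patient["on_vasopressors"] and patient["lactate_mmol_l"] > 2.0

patient = {"rr": 24, "sbp": 92, "altered_mentation": False,
           "on_vasopressors": True, "lactate_mmol_l": 3.1}
print("SEP-3 sepsis:", sep3_sepsis(patient), "| SEP-3 septic shock:", sep3_septic_shock(patient))
```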

  15. THE ALGORITHM OF THE CASE FORMATION DURING THE DEVELOPMENT OF CLINICAL DISCIPLINES IN MEDICAL SCHOOL

    Directory of Open Access Journals (Sweden)

    Andrey A. Garanin

    2016-01-01

    Full Text Available The aim of the study is to develop the algorithm of formation of the case on discipline «Clinical Medicine». Methods. The methods involve the effectiveness analysis of the self-diagnosed levels of professional and personal abilities of students in the process of self-study. Results. The article deals with the organization of independent work of students of case-method, which is one of the most important and complex active learning methods. When implementing the method of case analysis in the educational process the main job of the teacher focused on the development of individual cases. While developing the case study of medical character the teacher needs to pay special attention to questions of pathogenesis and pathological anatomy for students’ formation of the fundamental clinical thinking allowing to estimate the patient’s condition as a complete organism, taking into account all its features, to understand the relationships of cause and effect arising at development of a concrete disease, to master new and to improve the available techniques of statement of the differential diagnosis. Scientific novelty and practical significance. The structure of a medical case study to be followed in the development of the case on discipline «Clinical Medicine» is proposed. Unification algorithm formation cases is necessary for the full implementation of the introduction in the educational process in the higher medical school as one of the most effective active ways of learning – method of case analysis, in accordance with the requirements that apply to higher professional education modern reforms and, in particular, the introduction of new Federal State Educational Standards. 

  16. A new automatic algorithm for quantification of myocardial infarction imaged by late gadolinium enhancement cardiovascular magnetic resonance: experimental validation and comparison to expert delineations in multi-center, multi-vendor patient data.

    Science.gov (United States)

    Engblom, Henrik; Tufvesson, Jane; Jablonowski, Robert; Carlsson, Marcus; Aletras, Anthony H; Hoffmann, Pavel; Jacquier, Alexis; Kober, Frank; Metzler, Bernhard; Erlinge, David; Atar, Dan; Arheden, Håkan; Heiberg, Einar

    2016-05-04

    Late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) using magnitude inversion recovery (IR) or phase sensitive inversion recovery (PSIR) has become clinical standard for assessment of myocardial infarction (MI). However, there is no clinical standard for quantification of MI even though multiple methods have been proposed. Simple thresholds have yielded varying results and advanced algorithms have only been validated in single center studies. Therefore, the aim of this study was to develop an automatic algorithm for MI quantification in IR and PSIR LGE images and to validate the new algorithm experimentally and compare it to expert delineations in multi-center, multi-vendor patient data. The new automatic algorithm, EWA (Expectation Maximization, weighted intensity, a priori information), was implemented using an intensity threshold by Expectation Maximization (EM) and a weighted summation to account for partial volume effects. The EWA algorithm was validated in-vivo against triphenyltetrazolium-chloride (TTC) staining (n = 7 pigs with paired IR and PSIR images) and against ex-vivo high resolution T1-weighted images (n = 23 IR and n = 13 PSIR images). The EWA algorithm was also compared to expert delineation in 124 patients from multi-center, multi-vendor clinical trials 2-6 days following first time ST-elevation myocardial infarction (STEMI) treated with percutaneous coronary intervention (PCI) (n = 124 IR and n = 49 PSIR images). Infarct size by the EWA algorithm in vivo in pigs showed a bias to ex-vivo TTC of -1 ± 4%LVM (R = 0.84) in IR and -2 ± 3%LVM (R = 0.92) in PSIR images and a bias to ex-vivo T1-weighted images of 0 ± 4%LVM (R = 0.94) in IR and 0 ± 5%LVM (R = 0.79) in PSIR images. In multi-center patient studies, infarct size by the EWA algorithm showed a bias to expert delineation of -2 ± 6 %LVM (R = 0.81) in IR images (n = 124) and 0 ± 5%LVM (R = 0.89) in

  17. Streaming Algorithms for Line Simplification

    DEFF Research Database (Denmark)

    Abam, Mohammad; de Berg, Mark; Hachenberger, Peter

    2010-01-01

    this problem in a streaming setting, where we only have a limited amount of storage, so that we cannot store all the points. We analyze the competitive ratio of our algorithms, allowing resource augmentation: we let our algorithm maintain a simplification with 2k (internal) points and compare the error of our simplification to the error of the optimal simplification with k points. We obtain algorithms with O(1) competitive ratio for three cases: convex paths, where the error is measured using the Hausdorff distance (or Fréchet distance); xy-monotone paths, where the error is measured using the Hausdorff distance (or Fréchet distance); and general paths, where the error is measured using the Fréchet distance. In the first case the algorithm needs O(k) additional storage, and in the latter two cases the algorithm needs O(k²) additional storage.
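
    The sketch below conveys only the general flavour of streaming simplification under a point budget (keep at most 2k points and discard the interior point whose removal adds the least error); it is not the authors' algorithm, and it uses plain point-to-segment distance rather than the Hausdorff or Fréchet error analysed in the paper.

```python
import math

def _seg_dist(p, a, b):
    """Distance from point p to segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def stream_simplify(points, k):
    """Maintain a simplification with at most 2k points while streaming the input."""
    kept = []
    for p in points:
        kept.append(p)
        if len(kept) > 2 * k:
            # Drop the interior point whose removal introduces the smallest local error.
            i = min(range(1, len(kept) - 1),
                    key=lambda j: _seg_dist(kept[j], kept[j - 1], kept[j + 1]))
            kept.pop(i)
    return kept

path = [(x, math.sin(x / 3.0)) for x in range(100)]
print(len(stream_simplify(path, k=5)))   # at most 10 points retained
```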

  18. Emergency Department Management of Suspected Calf-Vein Deep Venous Thrombosis: A Diagnostic Algorithm

    Directory of Open Access Journals (Sweden)

    Levi Kitchen

    2016-06-01

    Full Text Available Introduction: Unilateral leg swelling with suspicion of deep venous thrombosis (DVT) is a common emergency department (ED) presentation. Proximal DVT (thrombus in the popliteal or femoral veins) can usually be diagnosed and treated at the initial ED encounter. When proximal DVT has been ruled out, isolated calf-vein deep venous thrombosis (IC-DVT) often remains a consideration. The current standard for the diagnosis of IC-DVT is whole-leg vascular duplex ultrasonography (WLUS), a test that is unavailable in many hospitals outside normal business hours. When WLUS is not available from the ED, recommendations for managing suspected IC-DVT vary. The objectives of the study are to use current evidence and recommendations to (1) propose a diagnostic algorithm for IC-DVT when definitive testing (WLUS) is unavailable; and (2) summarize the controversy surrounding IC-DVT treatment. Discussion: The Figure combines D-dimer testing with serial CUS or a single deferred FLUS for the diagnosis of IC-DVT. Such an algorithm has the potential to safely direct the management of suspected IC-DVT when definitive testing is unavailable. Whether or not to treat diagnosed IC-DVT remains widely debated and awaiting further evidence. Conclusion: When IC-DVT is not ruled out in the ED, the suggested algorithm, although not prospectively validated by a controlled study, offers an approach to diagnosis that is consistent with current data and recommendations. When IC-DVT is diagnosed, current references suggest that a decision between anticoagulation and continued follow-up outpatient testing can be based on shared decision-making. The risks of proximal progression and life-threatening embolization should be balanced against the generally more benign natural history of such thrombi, and an individual patient's risk factors for both thrombus propagation and complications of anticoagulation. [West J Emerg Med. 2016;17(4):384-390.]

  19. Emergency Department Management of Suspected Calf-Vein Deep Venous Thrombosis: A Diagnostic Algorithm.

    Science.gov (United States)

    Kitchen, Levi; Lawrence, Matthew; Speicher, Matthew; Frumkin, Kenneth

    2016-07-01

    Unilateral leg swelling with suspicion of deep venous thrombosis (DVT) is a common emergency department (ED) presentation. Proximal DVT (thrombus in the popliteal or femoral veins) can usually be diagnosed and treated at the initial ED encounter. When proximal DVT has been ruled out, isolated calf-vein deep venous thrombosis (IC-DVT) often remains a consideration. The current standard for the diagnosis of IC-DVT is whole-leg vascular duplex ultrasonography (WLUS), a test that is unavailable in many hospitals outside normal business hours. When WLUS is not available from the ED, recommendations for managing suspected IC-DVT vary. The objectives of the study are to use current evidence and recommendations to (1) propose a diagnostic algorithm for IC-DVT when definitive testing (WLUS) is unavailable; and (2) summarize the controversy surrounding IC-DVT treatment. The Figure combines D-dimer testing with serial CUS or a single deferred FLUS for the diagnosis of IC-DVT. Such an algorithm has the potential to safely direct the management of suspected IC-DVT when definitive testing is unavailable. Whether or not to treat diagnosed IC-DVT remains widely debated and awaiting further evidence. When IC-DVT is not ruled out in the ED, the suggested algorithm, although not prospectively validated by a controlled study, offers an approach to diagnosis that is consistent with current data and recommendations. When IC-DVT is diagnosed, current references suggest that a decision between anticoagulation and continued follow-up outpatient testing can be based on shared decision-making. The risks of proximal progression and life-threatening embolization should be balanced against the generally more benign natural history of such thrombi, and an individual patient's risk factors for both thrombus propagation and complications of anticoagulation.

  20. Quantification of construction waste prevented by BIM-based design validation: Case studies in South Korea.

    Science.gov (United States)

    Won, Jongsung; Cheng, Jack C P; Lee, Ghang

    2016-03-01

    Waste generated in construction and demolition processes comprised around 50% of the solid waste in South Korea in 2013. Many cases show that design validation based on building information modeling (BIM) is an effective means to reduce the amount of construction waste since construction waste is mainly generated due to improper design and unexpected changes in the design and construction phases. However, the amount of construction waste that could be avoided by adopting BIM-based design validation has been unknown. This paper aims to estimate the amount of construction waste prevented by a BIM-based design validation process based on the amount of construction waste that might be generated due to design errors. Two project cases in South Korea were studied in this paper, with 381 and 136 design errors detected, respectively during the BIM-based design validation. Each design error was categorized according to its cause and the likelihood of detection before construction. The case studies show that BIM-based design validation could prevent 4.3-15.2% of construction waste that might have been generated without using BIM. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. A particle swarm optimization algorithm for beam angle selection in intensity-modulated radiotherapy planning

    International Nuclear Information System (INIS)

    Li Yongjie; Yao Dezhong; Yao, Jonathan; Chen Wufan

    2005-01-01

    Automatic beam angle selection is an important but challenging problem for intensity-modulated radiation therapy (IMRT) planning. Despite many efforts, it remains unsatisfactory in clinical IMRT practice because of the extensive computation required to solve the inverse problem. In this paper, a new technique named BASPSO (Beam Angle Selection with a Particle Swarm Optimization algorithm) is presented to improve the efficiency of the beam angle optimization problem. Originally developed as a tool for simulating social behaviour, the particle swarm optimization (PSO) algorithm is a relatively new population-based evolutionary optimization technique first introduced by Kennedy and Eberhart in 1995. In the proposed BASPSO, the beam angles are optimized using PSO by treating each beam configuration as a particle (individual), and the beam intensity maps for each beam configuration are optimized using the conjugate gradient (CG) algorithm. These two optimization processes are implemented iteratively. The performance of each individual is evaluated by a fitness value calculated with a physical objective function. A population of these individuals is evolved by cooperation and competition among the individuals themselves through generations. The optimization results of a simulated case with known optimal beam angles and two clinical cases (a prostate case and a head-and-neck case) show that PSO is valid and efficient and can speed up the beam angle optimization process. Furthermore, the performance comparisons based on the preliminary results indicate that, as a whole, the PSO-based algorithm seems to outperform, or at least compete with, the GA-based algorithm in computation time and robustness. In conclusion, the reported work suggests that the introduced PSO algorithm could act as a new promising solution to the beam angle optimization problem, and potentially to other optimization problems in IMRT, though further studies are needed.
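
    The sketch below is a generic particle swarm optimization loop of the kind BASPSO builds on, not the published implementation: the objective, bounds and coefficients are illustrative, and the inner conjugate-gradient optimization of the intensity maps is not reproduced.

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Generic PSO: each particle tracks its personal best; the swarm tracks the global best."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical use: treat each particle as a set of 5 beam angles in [0, 360); in BASPSO the
# objective would run the intensity-map optimization and return the plan's objective value.
best_angles, best_score = pso(lambda x: sum((xi - 180.0) ** 2 for xi in x),
                              dim=5, bounds=(0.0, 360.0))
print(best_score)
```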

  2. Efficient RNA structure comparison algorithms.

    Science.gov (United States)

    Arslan, Abdullah N; Anandan, Jithendar; Fry, Eric; Monschke, Keith; Ganneboina, Nitin; Bowerman, Jason

    2017-12-01

    Recently proposed relative addressing-based ([Formula: see text]) RNA secondary structure representation has important features by which an RNA structure database can be stored into a suffix array. A fast substructure search algorithm has been proposed based on binary search on this suffix array. Using this substructure search algorithm, we present a fast algorithm that finds the largest common substructure of given multiple RNA structures in [Formula: see text] format. The multiple RNA structure comparison problem is NP-hard in its general formulation. We introduced a new problem for comparing multiple RNA structures. This problem has more strict similarity definition and objective, and we propose an algorithm that solves this problem efficiently. We also develop another comparison algorithm that iteratively calls this algorithm to locate nonoverlapping large common substructures in compared RNAs. With the new resulting tools, we improved the RNASSAC website (linked from http://faculty.tamuc.edu/aarslan ). This website now also includes two drawing tools: one specialized for preparing RNA substructures that can be used as input by the search tool, and another one for automatically drawing the entire RNA structure from a given structure sequence.

  3. Empirical validation of the S-Score algorithm in the analysis of gene expression data

    Directory of Open Access Journals (Sweden)

    Archer Kellie J

    2006-03-01

    Full Text Available Abstract Background Current methods of analyzing Affymetrix GeneChip® microarray data require the estimation of probe set expression summaries, followed by application of statistical tests to determine which genes are differentially expressed. The S-Score algorithm described by Zhang and colleagues is an alternative method that allows tests of hypotheses directly from probe level data. It is based on an error model in which the detected signal is proportional to the probe pair signal for highly expressed genes, but approaches a background level (rather than 0) for genes with low levels of expression. This model is used to calculate relative change in probe pair intensities that converts probe signals into multiple measurements with equalized errors, which are summed over a probe set to form the S-Score. Assuming no expression differences between chips, the S-Score follows a standard normal distribution, allowing direct tests of hypotheses to be made. Using spike-in and dilution datasets, we validated the S-Score method against comparisons of gene expression utilizing the more recently developed methods RMA, dChip, and MAS5. Results The S-score showed excellent sensitivity and specificity in detecting low-level gene expression changes. Rank ordering of S-Score values more accurately reflected known fold-change values compared to other algorithms. Conclusion The S-score method, utilizing probe level data directly, offers significant advantages over comparisons using only probe set expression summaries.
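
    A highly simplified sketch of the S-Score idea is shown below: probe-level differences are scaled by their error estimates and summed over the probe set so that, under the null hypothesis of no expression change, the score is approximately standard normal. The actual error model of Zhang and colleagues is more involved, and the inputs here are illustrative.

```python
import math

def s_score_like(probe_diffs, probe_errors):
    """Sum error-scaled probe-level differences and renormalize so that the result is
    approximately N(0, 1) when there is no expression difference between chips."""
    z = [d / e for d, e in zip(probe_diffs, probe_errors)]
    return sum(z) / math.sqrt(len(z))

# Illustrative probe-pair intensity differences between two chips and their error estimates
diffs = [120.0, 95.0, -20.0, 60.0, 150.0, 10.0]
errors = [80.0, 70.0, 65.0, 75.0, 90.0, 60.0]
score = s_score_like(diffs, errors)
print(score)   # scores well beyond about +/-2 would suggest differential expression
```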

  4. Recursive definition of global cellular-automata mappings

    International Nuclear Information System (INIS)

    Feldberg, R.; Knudsen, C.; Rasmussen, S.

    1994-01-01

    A method for a recursive definition of global cellular-automata mappings is presented. The method is based on a graphical representation of global cellular-automata mappings. For a given cellular-automaton rule the recursive algorithm defines the change of the global cellular-automaton mapping as the number of lattice sites is incremented. A proof of lattice size invariance of global cellular-automata mappings is derived from an approximation to the exact recursive definition. The recursive definitions are applied to calculate the fractal dimension of the set of reachable states and of the set of fixed points of cellular automata on an infinite lattice
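
    For contrast with the paper's recursive construction, the brute-force sketch below simply enumerates the global mapping of an elementary cellular automaton on a periodic lattice of N sites; it does not implement the recursive extension from N to N + 1 described in the abstract, and the rule and lattice size are illustrative.

```python
from itertools import product

def local_rule(rule_number):
    """Elementary CA rule: map each (left, centre, right) neighbourhood to a new cell state."""
    bits = [(rule_number >> i) & 1 for i in range(8)]
    return {(l, c, r): bits[(l << 2) | (c << 1) | r]
            for l, c, r in product((0, 1), repeat=3)}

def global_mapping(rule_number, n_sites):
    """Global mapping: every configuration of the periodic lattice to its successor."""
    f = local_rule(rule_number)
    mapping = {}
    for state in product((0, 1), repeat=n_sites):
        nxt = tuple(f[(state[i - 1], state[i], state[(i + 1) % n_sites])]
                    for i in range(n_sites))
        mapping[state] = nxt
    return mapping

# Example: rule 110 on a 4-site periodic lattice; the image of the map is the set of reachable states.
m = global_mapping(110, 4)
print(len(set(m.values())), "reachable states out of", len(m))
```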

  5. HWDA: A coherence recognition and resolution algorithm for hybrid web data aggregation

    Science.gov (United States)

    Guo, Shuhang; Wang, Jian; Wang, Tong

    2017-09-01

    To address the problem of recognizing and resolving object conflicts in hybrid distributed data stream aggregation, a distributed data stream object coherence technique is proposed. First, a framework for object coherence conflict recognition and resolution, named HWDA, is defined. Second, an object coherence recognition technique is proposed based on formal-language description logic and the hierarchical dependency relationships between logic rules. Third, a conflict traversal recognition algorithm is proposed based on the defined dependency graph. Next, a conflict resolution technique is proposed based on resolution pattern matching, including the definition of three conflict types, the conflict resolution matching patterns, and the arbitration resolution method. Finally, experiments on two kinds of web test data sets validate the effectiveness of the HWDA conflict recognition and resolution technology.

  6. Modelling and Quantitative Analysis of LTRACK–A Novel Mobility Management Algorithm

    Directory of Open Access Journals (Sweden)

    Benedek Kovács

    2006-01-01

    Full Text Available This paper discusses the improvements and parameter optimization issues of LTRACK, a recently proposed mobility management algorithm. Mathematical modelling of the algorithm and of the behavior of the Mobile Node (MN) is used to optimize the parameters of LTRACK. A numerical method is given to determine the optimal values of the parameters. Markov chains are used to model both the base algorithm and the so-called loop removal effect. An extended qualitative and quantitative analysis is carried out to compare LTRACK to existing handover mechanisms such as MIP, Hierarchical Mobile IP (HMIP), Dynamic Hierarchical Mobility Management Strategy (DHMIP), Telecommunication Enhanced Mobile IP (TeleMIP), Cellular IP (CIP) and HAWAII. LTRACK is sensitive to network topology and MN behavior, so MN movement modelling is also introduced and discussed for different topologies. The techniques presented here can be used to model not only the LTRACK algorithm but other algorithms as well. Extensive discussion and calculations support the adequacy of the mathematical model in many cases. The model is valid on various network levels, scales vertically across the ISO-OSI layers, and also scales well with the number of network elements.

  7. In situ measurements and satellite remote sensing of case 2 waters: first results from the Curonian Lagoon

    Directory of Open Access Journals (Sweden)

    Claudia Giardino

    2010-06-01

    Full Text Available In this study we present calibration/validation activities associated with satellite MERIS image processing and aimed at estimating chl a and CDOM in the Curonian Lagoon. Field data were used to validate the performances of two atmospheric correction algorithms, to build a band-ratio algorithm for chl a and to validate MERIS-derived maps. The neural network-based Case 2 Regional processor was found suitable for mapping CDOM; for chl a the band-ratio algorithm applied to image data corrected with the 6S code was found more appropriate. Maps were in agreement with in situ measurements. This study confirmed the importance of atmospheric correction to estimate water quality and demonstrated the usefulness of MERIS in investigating eutrophic aquatic ecosystems.

  8. Development and validation of a novel algorithm based on the ECG magnet response for rapid identification of any unknown pacemaker.

    Science.gov (United States)

    Squara, Fabien; Chik, William W; Benhayon, Daniel; Maeda, Shingo; Latcu, Decebal Gabriel; Lacaze-Gadonneix, Jonathan; Tibi, Thierry; Thomas, Olivier; Cooper, Joshua M; Duthoit, Guillaume

    2014-08-01

    Pacemaker (PM) interrogation requires correct manufacturer identification. However, an unidentified PM is a frequent occurrence, requiring time-consuming steps to identify the device. The purpose of this study was to develop and validate a novel algorithm for PM manufacturer identification, using the ECG response to magnet application. Data on the magnet responses of all recent PM models (≤15 years) from the 5 major manufacturers were collected. An algorithm based on the ECG response to magnet application to identify the PM manufacturer was subsequently developed. Patients undergoing ECG during magnet application in various clinical situations were prospectively recruited in 7 centers. The algorithm was applied in the analysis of every ECG by a cardiologist blinded to PM information. A second blinded cardiologist analyzed a sample of randomly selected ECGs in order to assess the reproducibility of the results. A total of 250 ECGs were analyzed during magnet application. The algorithm led to the correct single manufacturer choice in 242 ECGs (96.8%), whereas 7 (2.8%) could only be narrowed to either 1 of 2 manufacturer possibilities. Only 2 (0.4%) incorrect manufacturer identifications occurred. The algorithm identified Medtronic and Sorin Group PMs with 100% sensitivity and specificity, Biotronik PMs with 100% sensitivity and 99.5% specificity, and St. Jude and Boston Scientific PMs with 92% sensitivity and 100% specificity. The results were reproducible between the 2 blinded cardiologists with 92% concordant findings. Unknown PM manufacturers can be accurately identified by analyzing the ECG magnet response using this newly developed algorithm. Copyright © 2014 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.

  9. Development and clinical implementation of an enhanced display algorithm for use in networked electronic portal imaging

    International Nuclear Information System (INIS)

    Heuvel, Frank van den; Han, Ihn; Chungbin, Suzanne; Strowbridge, Amy; Tekyi-Mensah, Sam; Ragan, Don P.

    1999-01-01

    Purpose: To introduce and clinically validate a preprocessing algorithm that allows clinical images from an electronic portal imaging device (EPID) to be displayed on any computer monitor, without loss of clinical usability. The introduction of such a system frees EPI systems from the constraints of fixed viewing workstations and increases mobility of the images in a department. Methods and Materials: The preprocessing algorithm, together with its variable parameters is introduced. Clinically, the algorithm is tested using an observer study of 316 EPID images of the pelvic region in the framework of treatment of carcinoma of the cervix and endometrium. Both anterior-posterior (AP/PA) and latero-lateral (LAT) images were used. The images scored were taken from six different patients, five of whom were obese, female, and postmenopausal. The result is tentatively compared with results from other groups. The scoring system, based on the number of visible landmarks in the port, is proposed and validated. Validation was performed by having the observer panel score images with artificially induced noise levels. A comparative study was undertaken with a standard automatic window and leveling display technique. Finally, some case studies using different image sites and EPI detectors are presented. Results: The image quality for all images in this study was deemed to be clinically useful (mean score > 1). Most of the images received a score which was second highest (AP/PA landmarks ≥ 6 and LAT landmarks ≥ 5). Obesity, which has been an important factor determining the image quality, was not seen to be a factor here. Compared to standard techniques a highly significant improvement was determined with regard to clinical usefulness. The algorithm performs fast (less than 9 seconds) and needs no additional user interaction in most of the cases. The algorithm works well on both direct detection portal imagers and camera-based imagers whether analog or digital cameras

  10. Traumatic subarachnoid pleural fistula in children: case report, algorithm and classification proposal

    Directory of Open Access Journals (Sweden)

    Moscote-Salazar Luis Rafael

    2016-06-01

    Full Text Available Subarachnoid pleural fistulas are rare. They have been described as complications of thoracic surgery, penetrating injuries and spinal surgery, among others. We present the case of a 3-year-old girl who suffered spinal cord trauma secondary to a car accident and developed a posterior subarachnoid pleural fistula. To our knowledge, this is the first reported case of a pediatric patient with a subarachnoid pleural fistula resulting from closed trauma and requiring intensive multimodal management. We also present a management algorithm and a proposed classification. The diagnosis of this pathology is difficult when it is not associated with neurological deficit. A high degree of suspicion, multidisciplinary management and timely surgical intervention allow optimal management.

  11. SU-E-T-347: Validation of the Condensed History Algorithm of Geant4 Using the Fano Test

    International Nuclear Information System (INIS)

    Lee, H; Mathis, M; Sawakuchi, G

    2014-01-01

    Purpose: To validate the condensed history algorithm and physics of the Geant4 Monte Carlo toolkit for simulations of ionization chambers (ICs). This study is the first step to validate Geant4 for calculations of photon beam quality correction factors under the presence of a strong magnetic field for magnetic resonance guided linac system applications. Methods: The electron transport and boundary crossing algorithms of Geant4 version 9.6.p02 were tested under Fano conditions using the Geant4 example/application FanoCavity. User-defined parameters of the condensed history and multiple scattering algorithms were investigated under Fano test conditions for three scattering models (physics lists): G4UrbanMscModel95 (PhysListEmStandard-option3), G4GoudsmitSaundersonMsc (PhysListEmStandard-GS), and G4WentzelVIModel/G4CoulombScattering (PhysListEmStandard-WVI). Simulations were conducted using monoenergetic photon beams, ranging from 0.5 to 7 MeV and emphasizing energies from 0.8 to 3 MeV. Results: The GS and WVI physics lists provided consistent Fano test results (within ±0.5%) for maximum step sizes under 0.01 mm at 1.25 MeV, with improved performance at 3 MeV (within ±0.25%). The option3 physics list provided consistent Fano test results (within ±0.5%) for maximum step sizes above 1 mm. Optimal parameters for the option3 physics list were 10 km maximum step size with default values for other user-defined parameters: 0.2 dRoverRange, 0.01 mm final range, 0.04 range factor, 2.5 geometrical factor, and 1 skin. Simulations using the option3 physics list were ∼70 – 100 times faster compared to GS and WVI under optimal parameters. Conclusion: This work indicated that the option3 physics list passes the Fano test within ±0.5% when using a maximum step size of 10 km for energies suitable for IC calculations in a 6 MV spectrum without extensive computational times. Optimal user-defined parameters using the option3 physics list will be used in future IC simulations to

  12. A Graph-Algorithmic Approach for the Study of Metastability in Markov Chains

    Science.gov (United States)

    Gan, Tingyue; Cameron, Maria

    2017-06-01

    Large continuous-time Markov chains with exponentially small transition rates arise in modeling complex systems in physics, chemistry, and biology. We propose a constructive graph-algorithmic approach to determine the sequence of critical timescales at which the qualitative behavior of a given Markov chain changes, and give an effective description of the dynamics on each of them. This approach is valid for both time-reversible and time-irreversible Markov processes, with or without symmetry. Central to this approach are two graph algorithms, Algorithm 1 and Algorithm 2, for obtaining the sequences of the critical timescales and the hierarchies of Typical Transition Graphs or T-graphs indicating the most likely transitions in the system without and with symmetry, respectively. The sequence of critical timescales includes the subsequence of the reciprocals of the real parts of eigenvalues. Under a certain assumption, we prove sharp asymptotic estimates for eigenvalues (including pre-factors) and show how one can extract them from the output of Algorithm 1. We discuss the relationship between Algorithms 1 and 2 and explain how one needs to interpret the output of Algorithm 1 if it is applied in the case with symmetry instead of Algorithm 2. Finally, we analyze an example motivated by R. D. Astumian's model of the dynamics of kinesin, a molecular motor, by means of Algorithm 2.

  13. Graph Colouring Algorithms

    DEFF Research Database (Denmark)

    Husfeldt, Thore

    2015-01-01

    This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...
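
    As one concrete example of the sequential, worst-case-guaranteed algorithms such a chapter covers, the sketch below implements greedy (first-fit) vertex colouring, which never uses more than Δ + 1 colours for a graph of maximum degree Δ; the adjacency-list format is illustrative.

```python
def greedy_colouring(adjacency):
    """First-fit colouring: visit vertices in order and give each the smallest colour
    not used by an already-coloured neighbour (uses at most max_degree + 1 colours)."""
    colour = {}
    for v in adjacency:
        used = {colour[u] for u in adjacency[v] if u in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return colour

# 5-cycle: greedy uses 3 colours here, which matches its chromatic number.
graph = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(greedy_colouring(graph))
```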

  14. Can the Cancer-related Fatigue Case-definition Criteria Be Applied to Chronic Medical Illness? A Comparison between Breast Cancer and Systemic Sclerosis.

    Science.gov (United States)

    Kwakkenbos, Linda; Minton, Ollie; Stone, Patrick C; Alexander, Susanna; Baron, Murray; Hudson, Marie; Thombs, Brett D

    2015-07-01

    Fatigue is a crucial determinant of quality of life across rheumatic diseases, but the lack of agreed-upon standards for identifying clinically significant fatigue hinders research and clinical management. Case definition criteria for cancer-related fatigue were proposed for inclusion in the International Classification of Diseases. The objective was to evaluate whether the cancer-related fatigue case definition performed equivalently in women with breast cancer and systemic sclerosis (SSc) and could be used to identify patients with chronic illness-related fatigue. The cancer-related fatigue interview (case definition criteria met if ≥ 5 of 9 fatigue-related symptoms present with functional impairment) was completed by 291 women with SSc and 278 women successfully treated for breast cancer. Differential item functioning was assessed with the multiple indicator multiple cause model. Items 3 (concentration) and 10 (short-term memory) were endorsed significantly less often by women with SSc compared with cancer, controlling for responses on other items. Omitting these 2 items from the case definition and requiring 4 out of the 7 remaining symptoms resulted in a similar overall prevalence of cancer-related fatigue in the cancer sample compared with the original criteria (37.4% vs 37.8%, respectively), with 97.5% of patients diagnosed identically with both definitions. Prevalence of chronic illness-related fatigue was 36.1% in SSc using 4 of 7 symptoms. The cancer-related fatigue criteria can be used equivalently to identify patients with chronic illness-related fatigue when 2 cognitive fatigue symptoms are omitted. Harmonized definitions and measurement of clinically significant fatigue will advance research and clinical management of fatigue in rheumatic diseases and other conditions.
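
    The counting logic described above is straightforward to express in code. The sketch below assumes symptom data are already collected as a set of item labels (the labels are illustrative, not the instrument's exact wording) and applies both the original >=5-of-9 rule and the modified >=4-of-7 rule with the two cognitive items omitted.

```python
# Sketch of the case-definition counting logic; symptom names are illustrative labels.

COGNITIVE_ITEMS = {"concentration", "short_term_memory"}

def meets_original_criteria(symptoms_present, functional_impairment):
    """Original cancer-related fatigue definition: >= 5 of 9 symptoms plus impairment."""
    return functional_impairment and len(set(symptoms_present)) >= 5

def meets_modified_criteria(symptoms_present, functional_impairment):
    """Modified definition: drop the two cognitive items and require >= 4 of the remaining 7."""
    remaining = set(symptoms_present) - COGNITIVE_ITEMS
    return functional_impairment and len(remaining) >= 4

reported = {"generalized_weakness", "diminished_motivation", "unrefreshing_sleep",
            "post_exertional_malaise", "concentration"}
print(meets_original_criteria(reported, True), meets_modified_criteria(reported, True))
```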

  15. An explicit multi-time-stepping algorithm for aerodynamic flows

    NARCIS (Netherlands)

    Niemann-Tuitman, B.E.; Veldman, A.E.P.

    1997-01-01

    An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, in different parts of the computational domain different time steps are taken, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for

  16. Engineering Definitional Interpreters

    DEFF Research Database (Denmark)

    Midtgaard, Jan; Ramsay, Norman; Larsen, Bradford

    2013-01-01

    A definitional interpreter should be clear and easy to write, but it may run 4--10 times slower than a well-crafted bytecode interpreter. In a case study focused on implementation choices, we explore ways of making definitional interpreters faster without expending much programming effort. We imp...
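
    As an illustration of what "definitional interpreter" means here, the sketch below evaluates a tiny expression language by mirroring its definition case by case; the representation of expressions as tagged tuples is an assumption made for the example, not the paper's setup.

```python
# A tiny definitional interpreter: the evaluator follows the language definition directly.
# Expressions are tagged tuples, e.g. ("add", ("num", 2), ("mul", ("num", 3), ("num", 4))).

def evaluate(expr, env=None):
    env = env or {}
    tag = expr[0]
    if tag == "num":
        return expr[1]
    if tag == "var":
        return env[expr[1]]
    if tag == "add":
        return evaluate(expr[1], env) + evaluate(expr[2], env)
    if tag == "mul":
        return evaluate(expr[1], env) * evaluate(expr[2], env)
    if tag == "let":                      # ("let", name, bound_expr, body)
        _, name, bound, body = expr
        return evaluate(body, {**env, name: evaluate(bound, env)})
    raise ValueError(f"unknown expression form: {tag!r}")

print(evaluate(("let", "x", ("num", 3), ("add", ("var", "x"), ("num", 4)))))  # prints 7
```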

  17. A controllable sensor management algorithm capable of learning

    Science.gov (United States)

    Osadciw, Lisa A.; Veeramacheneni, Kalyan K.

    2005-03-01

    Sensor management technology progress is challenged by the geographic space it spans, the heterogeneity of the sensors, and the real-time timeframes within which plans controlling the assets are executed. This paper presents a new sensor management paradigm and demonstrates its application in a sensor management algorithm designed for a biometric access control system. This approach consists of an artificial intelligence (AI) algorithm focused on uncertainty measures, which makes the high level decisions to reduce uncertainties and interfaces with the user, integrated cohesively with a bottom up evolutionary algorithm, which optimizes the sensor network's operation as determined by the AI algorithm. The sensor management algorithm presented is composed of a Bayesian network, the AI algorithm component, and a swarm optimization algorithm, the evolutionary algorithm. Thus, the algorithm can change its own performance goals in real-time and will modify its own decisions based on observed measures within the sensor network. The definition of the measures as well as the Bayesian network determine the robustness of the algorithm and its utility in reacting dynamically to changes in the global system.

  18. Improved algorithm for quantum separability and entanglement detection

    International Nuclear Information System (INIS)

    Ioannou, L.M.; Ekert, A.K.; Travaglione, B.C.; Cheung, D.

    2004-01-01

    Determining whether a quantum state is separable or entangled is a problem of fundamental importance in quantum information science. It has recently been shown that this problem is NP-hard, suggesting that an efficient, general solution does not exist. There is a highly inefficient 'basic algorithm' for solving the quantum separability problem which follows from the definition of a separable state. By exploiting specific properties of the set of separable states, we introduce a classical algorithm that solves the problem significantly faster than the 'basic algorithm', allowing a feasible separability test where none previously existed, e.g., in 3x3-dimensional systems. Our algorithm also provides a unique tool in the experimental detection of entanglement

  19. Exploring Stakeholder Definitions within the Aerospace Industry: A Qualitative Case Study

    Science.gov (United States)

    Hebert, Jonathan R.

    A best practice in the discipline of project management is to identify all key project stakeholders prior to the execution of a project. When stakeholders are properly identified, they can be consulted to provide expert advice on project activities so that the project manager can ensure the project stays within the budget and schedule constraints. The problem addressed by this study is that managers fail to properly identify key project stakeholders when using stakeholder theory because there are multiple conflicting definitions for the term stakeholder. Poor stakeholder identification has been linked to multiple negative project outcomes such as budget and schedule overruns, and this problem is heightened in certain industries such as aerospace. The purpose of this qualitative study was to explore project managers' and project stakeholders' perceptions of how they define and use the term stakeholder within the aerospace industry. This qualitative exploratory single-case study had two embedded units of analysis: project managers and project stakeholders. Six aerospace project managers and five aerospace project stakeholders were purposively selected for this study. Data were collected through individual semi-structured interviews with both project managers and project stakeholders. All data were analyzed using Yin's (2011) five-phased cycle approach for qualitative research. The results indicated that the aerospace project managers and project stakeholders define the term stakeholder as "those who do the work of a company." The participants built upon this well-known concept by adding that "a company should list specific job titles" that correspond to their company-specific stakeholder definition. Results also indicated that the definition of the term stakeholder is used when management is assigning human resources to a project to mitigate or control project risk. Results showed that project managers tended to include the customer in their stakeholder definitions

  20. A high-performance spatial database based approach for pathology imaging algorithm evaluation.

    Science.gov (United States)

    Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A D; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J; Saltz, Joel H

    2013-01-01

    Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The validated data were formatted based on the PAIS data model and