WorldWideScience

Sample records for accuracy relevance timeliness

  1. Insider Trading B-side: relevance, timeliness and position influence

    Directory of Open Access Journals (Sweden)

    Luiz Felipe de A. Pontes Girão

    2015-12-01

Full Text Available Objective – Our main objective is to analyze the impact of insider trading on stock investment decisions. Design/methodology/approach – We used an online survey, obtaining 271 valid answers. To analyze our data, we used parametric (t and F/ANOVA) and non-parametric (Mann-Whitney and Kruskal-Wallis) techniques. Findings – We find that insider trades are relevant to investment decisions, and that timeliness also influences this kind of decision, especially for abnormal trades. Practical implications – In practical terms, our results suggest that the Brazilian Securities and Exchange Commission (CVM) must update the Brazilian insider trading regulation to achieve its objective of protecting investors. From the investors’ point of view, such an update could improve their ability to monitor insiders, follow their activities and mimic their trades. Originality/value – The originality of our paper is an analysis of relevance, timeliness and the influence of position in a firm as “determinants” of investment decisions. We use these three specific characteristics to criticize the Brazilian insider trading regulation.

  2. Whether Audit Committee Financial Expertise Is the Only Relevant Expertise: A Review of Audit Committee Expertise and Timeliness of Financial Reporting

    Directory of Open Access Journals (Sweden)

    Saeed Rabea Baatwah

    2013-06-01

Full Text Available This study reviews the literature on audit committee expertise and financial reporting timeliness. Financial reporting timeliness and audit committee expertise are two areas of research gaining the attention of a large number of stakeholders because they contribute to the reliability and relevance of financial reporting. Indeed, the focus of this review is primarily on recent developments in the pertinent literature, in order to show the limitations of such research and encourage future research to overcome these limitations. By also looking at the development of the audit committee expertise literature, this study concludes that (1) like most audit committee literature, the financial reporting timeliness literature continues to assume that no expertise other than financial expertise contributes, and it ignores the role of the audit committee chair; and (2) most of this literature fails to find a significant effect because it ignores the interaction among corporate governance mechanisms. Accordingly, this study posits that if future research ignores the issues raised here, major mistakes could follow in reforms relating to how the quality of financial reporting can be enhanced.

  3. Physiologically-based, predictive analytics using the heart-rate-to-Systolic-Ratio significantly improves the timeliness and accuracy of sepsis prediction compared to SIRS.

    Science.gov (United States)

    Danner, Omar K; Hendren, Sandra; Santiago, Ethel; Nye, Brittany; Abraham, Prasad

    2017-04-01

Enhancing the efficiency of diagnosis and treatment of severe sepsis by using physiologically-based, predictive analytical strategies has not been fully explored. We hypothesized that assessment of the heart-rate-to-systolic ratio significantly increases the timeliness and accuracy of sepsis prediction after emergency department (ED) presentation. We evaluated the records of 53,313 ED patients from a large, urban teaching hospital between January and June 2015. The HR-to-systolic ratio was compared to SIRS criteria for sepsis prediction. There were 884 patients with discharge diagnoses of sepsis, severe sepsis, and/or septic shock. Variations in three presenting variables, heart rate, systolic BP and temperature, were determined to be primary early predictors of sepsis, with 74% (654/884) accuracy compared to 34% (304/884) using SIRS criteria (p < 0.0001) in confirmed septic patients. Physiologically-based predictive analytics improved the accuracy and expediency of sepsis identification via detection of variations in the HR-to-systolic ratio. This approach may lead to earlier sepsis workup and life-saving interventions. Copyright © 2017 Elsevier Inc. All rights reserved.
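
    A minimal sketch of the kind of screening rule this abstract describes, assuming a shock-index-style threshold; the cutoff value and record fields are illustrative assumptions, not figures from the study.

    ```python
    # Illustrative only: flag ED patients whose heart-rate-to-systolic-BP ratio is
    # elevated. The 0.9 threshold and the record layout are assumptions made for
    # demonstration; they are not values reported by the study.
    def hr_systolic_ratio(heart_rate_bpm: float, systolic_bp_mmhg: float) -> float:
        """Return the heart-rate-to-systolic-blood-pressure ratio."""
        return heart_rate_bpm / systolic_bp_mmhg

    def flag_for_sepsis_workup(record: dict, threshold: float = 0.9) -> bool:
        """Flag a presenting patient for early sepsis workup if the ratio is elevated."""
        return hr_systolic_ratio(record["heart_rate"], record["systolic_bp"]) >= threshold

    example = {"heart_rate": 118, "systolic_bp": 102}
    print(flag_for_sepsis_workup(example))  # True: 118/102 ≈ 1.16 exceeds the assumed cutoff
    ```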

  4. IFRS Adoption, Firm Traits and Audit Timeliness: Evidence from Nigeria

    Directory of Open Access Journals (Sweden)

    Musa Inuwa Fodio

    2015-06-01

Full Text Available Audit timeliness is an important ingredient of quality financial reporting. Stale information is of little benefit to stakeholders in their decision-making process. With the recent adoption of the International Financial Reporting Standards in Nigeria, the work of the auditor has seemingly become more complicated. The question then emerges whether such adoption affects the timeliness of audit reports. This study empirically investigates the impact of IFRS adoption and other associated explanatory variables on audit timeliness in Nigerian deposit money banks for the period 2010 to 2013. Panel regression analysis reveals a positive significant impact of IFRS adoption on audit timeliness. Results also indicate that firm age, firm size and auditor firm type are significant predictors of audit timeliness in Nigerian deposit money banks. The study recommends that audit firms make stringent efforts to acclimatize to the complexities of the IFRS transition process so as to reduce audit report delays. Also, reporting agencies should come up with regulations, deadlines and benchmarks for the issuance of independent audit reports.

  5. Timeliness “at a glance”: assessing the turnaround time through the six sigma metrics.

    Science.gov (United States)

    Ialongo, Cristiano; Bernardini, Sergio

    2016-01-01

Almost thirty years of systematic analysis have proven the turnaround time to be a fundamental dimension for the clinical laboratory. Several indicators are available to date for assessing and reporting quality with respect to timeliness, but they sometimes lack communicative immediacy and accuracy. Six Sigma is a paradigm developed within the industrial domain for assessing quality and addressing goals and issues. The sigma level computed through the Z-score method is a simple and straightforward tool which expresses quality on a universal dimensionless scale and can handle non-normal data. Herein we report our preliminary experience in using the sigma level to assess the change in urgent (STAT) test turnaround time due to the implementation of total automation. We found that the Z-score method is a valuable and easy-to-use method for assessing and communicating the quality level of laboratory timeliness, providing good correspondence with the actual change in efficiency that was retrospectively observed.
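
    As a rough illustration of the sigma-level idea described above, the sketch below converts a set of STAT turnaround times into a sigma level against an assumed acceptability limit; the 60-minute limit, the log transformation used to tame right-skewed TAT data, and the conventional 1.5-sigma shift are all assumptions for demonstration, not the authors' exact procedure.

    ```python
    # Illustrative sketch: sigma level of STAT turnaround times (TAT) via a Z-score.
    # The 60-minute upper limit and the log transform are assumptions for demonstration.
    import math
    import statistics

    def sigma_level(tat_minutes, upper_limit_min=60.0):
        """Approximate sigma level of TATs against an upper acceptability limit."""
        logs = [math.log(t) for t in tat_minutes]           # reduce right-skew before standardizing
        mean, sd = statistics.mean(logs), statistics.stdev(logs)
        z_long_term = (math.log(upper_limit_min) - mean) / sd
        return z_long_term + 1.5                            # conventional long-term/short-term shift

    stat_tats = [22, 31, 28, 45, 38, 55, 26, 33, 41, 29, 74, 36]   # minutes, invented data
    print(f"sigma level ≈ {sigma_level(stat_tats):.2f}")
    ```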

  6. Hans Joas & Daniel R. Huebner (eds.), The Timeliness of George Herbert Mead

    OpenAIRE

    Baggio, Guido

    2018-01-01

    The Timeliness of George Herbert Mead is a significant contribution to the recent “Mead renaissance.” It gathers some contributions first presented at the conference celebrating the 150th anniversary of the birth of George Herbert Mead held in April 2013 at the University of Chicago and organized by Hans Joas, Andrew Abbott, Daniel Huebner, and Christopher Takacs. The volume brings scholarship on G. H. Mead up to date highlighting Mead’s relevance for areas of research completely ignored by p...

  7. Pengaruh Faktor Internal dan Eksternal Perusahaan Terhadap Audit Delay dan Timeliness

    Directory of Open Access Journals (Sweden)

    Sistya Rachmawati

    2008-01-01

Full Text Available The objective of this research is to investigate the influence of firm size, profitability, solvency, public accounting firm size and the existence of an internal audit division on audit delay and timeliness in manufacturing companies listed on the Jakarta Stock Exchange. The research sample comprised fifty-nine companies listed on the Jakarta Stock Exchange, selected using the purposive sampling method. Hypotheses were tested using multiple regression; before hypothesis testing, data normality was checked with a P-plot test. The multiple regression model shows that audit delay is influenced by firm size and public accounting firm size, while timeliness is influenced by firm size and solvency. These results are recommended to auditors seeking to increase the effectiveness and efficiency of their audit work, and they contribute to the current literature on auditing. Abstract (translated from Bahasa Indonesia): This study aims to measure the effect of internal factors, namely profitability, solvency, internal audit and firm size, and an external factor, namely public accounting firm size, on audit delay and timeliness in manufacturing companies listed on the Jakarta Stock Exchange. The sample was selected using the purposive sampling method. The multiple regression results for audit delay show an adjusted R2 of 0.123, meaning that the independent variables (profitability, solvency, internal audit, firm size and public accounting firm) explain only 12.3% of the variation in the dependent variable (audit delay). For timeliness, the independent variables explain 7.9% of the variation in the dependent variable. The results of this study can help the public accounting profession improve the efficiency and effectiveness of the audit process with

  8. Relevance of intracellular polarity to accuracy of eukaryotic chemotaxis

    International Nuclear Information System (INIS)

    Hiraiwa, Tetsuya; Nishikawa, Masatoshi; Shibata, Tatsuo; Nagamatsu, Akihiro; Akuzawa, Naohiro

    2014-01-01

    Eukaryotic chemotaxis is usually mediated by intracellular signals that tend to localize at the front or back of the cell. Such intracellular polarities frequently require no extracellular guidance cues, indicating that spontaneous polarization occurs in the signal network. Spontaneous polarization activity is considered relevant to the persistent motions in random cell migrations and chemotaxis. In this study, we propose a theoretical model that connects spontaneous intracellular polarity and motile ability in a chemoattractant solution. We demonstrate that the intracellular polarity can enhance the accuracy of chemotaxis. Chemotactic accuracy should also depend on chemoattractant concentration through the concentration-dependent correlation time in the polarity direction. Both the polarity correlation time and the chemotactic accuracy depend on the degree of responsiveness to the chemical gradient. We show that optimally accurate chemotaxis occurs at an intermediate responsiveness of intracellular polarity. Experimentally, we find that the persistence time of randomly migrating Dictyostelium cells depends on the chemoattractant concentration, as predicted by our theory. At the optimum responsiveness, this ameboid cell can enhance its chemotactic accuracy tenfold. (paper)

  9. Vaccine Education During Pregnancy and Timeliness of Infant Immunization.

    Science.gov (United States)

    Veerasingam, Priya; Grant, Cameron C; Chelimo, Carol; Philipson, Kathryn; Gilchrist, Catherine A; Berry, Sarah; Carr, Polly Atatoa; Camargo, Carlos A; Morton, Susan

    2017-09-01

Pregnant women routinely receive information in support of or opposing infant immunization. We aimed to describe the immunization information sources of future mothers and to determine whether receiving immunization information is associated with infant immunization timeliness. We analyzed data from a child cohort born 2009-2010 in New Zealand. Pregnant women (N = 6822) at a median gestation of 39 weeks described sources of information encouraging or discouraging infant immunization. Immunizations received by cohort infants were determined through linkage with the National Immunization Register (n = 6682 of 6853 [98%]). Independent associations of immunization information received with immunization timeliness were described by using adjusted odds ratios (ORs) and 95% confidence intervals (CIs). Immunization information sources were described by 6182 of 6822 (91%) women. Of these, 2416 (39%) received information encouraging immunization, 846 (14%) received discouraging information, and 565 (9%) received both encouraging and discouraging information. Compared with infants of women who received no immunization information (71% immunized on time), infants of women who received discouraging information only (57% immunized on time, OR = 0.49, 95% CI 0.38-0.64) or both encouraging and discouraging information (61% immunized on time, OR = 0.51, 95% CI 0.42-0.63) were at decreased odds of receiving all immunizations on time. Receipt of encouraging information only was not associated with infant immunization timeliness (73% immunized on time, OR = 1.00, 95% CI 0.87-1.15). Receipt, during pregnancy, of information against immunization was associated with delayed infant immunization regardless of receipt of information supporting immunization. In contrast, receipt of encouraging information was not associated with infant immunization timeliness. Copyright © 2017 by the American Academy of Pediatrics.

  10. Quantifying reporting timeliness to improve outbreak control

    NARCIS (Netherlands)

    Bonačić Marinović, Axel; Swaan, Corien; van Steenbergen, Jim; Kretzschmar, MEE

    The extent to which reporting delays should be reduced to gain substantial improvement in outbreak control is unclear. We developed a model to quantitatively assess reporting timeliness. Using reporting speed data for 6 infectious diseases in the notification system in the Netherlands, we calculated

  11. Timeliness of notification systems for infectious diseases: A systematic literature review.

    Science.gov (United States)

    Swaan, Corien; van den Broek, Anouk; Kretzschmar, Mirjam; Richardus, Jan Hendrik

    2018-01-01

Timely notification of infectious diseases is crucial for prompt response by public health services. Adequate notification systems facilitate timely notification. A systematic literature review was performed to assess outcomes of studies on notification timeliness and to determine which aspects of notification systems are associated with timely notification. Articles reviewing timeliness of notifications published between 2000 and 2017 were searched in Pubmed and Scopus. Using a standardized notification chain, timeliness of the reporting system in each article was defined as either sufficient (≥ 80% of notifications in time), partly sufficient (≥ 50-80%), or insufficient (<50%). Electronic reporting systems were compared with conventional methods (postal mail, fax, telephone, email) and mobile phone reporting. 48 articles were identified. In almost one third of the studies with a predefined timeframe (39), timeliness of notification systems was either sufficient or insufficient (11/39, 28% and 12/39, 31% resp.). Applying the standardized timeframe (45 studies) revealed similar outcomes (13/45, 29%, sufficient notification timeframe, vs 15/45, 33%, insufficient). The disease-specific timeframe was not met in any study. Systems involving reporting by laboratories most often complied sufficiently with predefined or standardized timeframes. Outcomes were not related to electronic or conventional notification systems or to mobile phone reporting. Electronic systems were faster in comparative studies (10/13); this hardly resulted in sufficient timeliness, neither according to predefined nor to standardized timeframes. A minority of notification systems meets either predefined, standardized or disease-specific timeframes. Systems including laboratory reporting are associated with timely notification. Electronic systems reduce reporting delay, but implementation needs considerable effort to comply with notification timeframes. During outbreak threats, patient, doctor and laboratory testing delays need to

  12. Verification of Positional Accuracy of ZVS3003 Geodetic Control ...

    African Journals Online (AJOL)

    The International GPS Service (IGS) has provided GPS orbit products to the scientific community with increased precision and timeliness. Many users interested in geodetic positioning have adopted the IGS precise orbits to achieve centimeter level accuracy and ensure long-term reference frame stability. Positioning with ...

  13. TIMELINESS LAPORAN KEUANGAN DI INDONESIA (STUDI EMPIRIS TERHADAP EMITEN BURSA EFEK JAKARTA)

    OpenAIRE

    Michell Suharli; Sofyan S. Harahap

    2008-01-01

This research examines variables that are predicted to influence the timeliness of financial statements in Indonesia. The factors predicted to influence timeliness in this research are focused on 4 factors: firm scale, profitability, big 4 worldwide accounting firm, and securities return. This research examines the financial statements of 30 companies listed on the Jakarta Stock Exchange for the periods ended December 31, 2002 and December 31, 2003. Data is collected from the Jakarta Stock Exchange and Indon...

  14. Timeliness Laporan Keuangan di Indonesia (Studi Empiris terhadap Emiten Bursa Efek Jakarta)

    OpenAIRE

    Suharli, Michell; Harahap, Sofyan S

    2008-01-01

This research examines variables that are predicted to influence the timeliness of financial statements in Indonesia. The factors predicted to influence timeliness in this research are focused on 4 factors: firm scale, profitability, big 4 worldwide accounting firm, and securities return. This research examines the financial statements of 30 companies listed on the Jakarta Stock Exchange for the periods ended December 31, 2002 and December 31, 2003. Data is collected from the Jakarta Stock Exchange and Indon...

  15. Analysis of timeliness of infectious disease reporting in the Netherlands

    Directory of Open Access Journals (Sweden)

    Kretzschmar Mirjam EE

    2011-05-01

Full Text Available Abstract Background Timely reporting of infectious disease cases to public health authorities is essential to effective public health response. To evaluate the timeliness of reporting to the Dutch Municipal Health Services (MHS), we used as quantitative measures the intervals between onset of symptoms and MHS notification, and between laboratory diagnosis and notification, with regard to six notifiable diseases. Methods We retrieved reporting data from June 2003 to December 2008 from the Dutch national notification system for shigellosis, EHEC/STEC infection, typhoid fever, measles, meningococcal disease, and hepatitis A virus (HAV) infection. For each disease, median intervals between date of onset and MHS notification were calculated and compared with the median incubation period. The median interval between date of laboratory diagnosis and MHS notification was similarly analysed. For the year 2008, we also investigated whether timeliness is improved by MHS agreements with physicians and laboratories that allow direct laboratory reporting. Finally, we investigated whether reports made by post, fax, or e-mail were more timely. Results The percentage of infectious diseases reported within one incubation period varied widely, between 0.4% for shigellosis and 90.3% for HAV infection. Not reported within two incubation periods were 97.1% of shigellosis cases, 76.2% of cases of EHEC/STEC infection, 13.3% of meningococcosis cases, 15.7% of measles cases, and 29.7% of typhoid fever cases. A substantial percentage of infectious disease cases was reported more than three days after laboratory diagnosis, varying between 12% for meningococcosis and 42% for shigellosis. MHS that had agreements with physicians and laboratories showed a significantly shorter notification time compared to MHS without such agreements. Conclusions Over the study period, many cases of the six notifiable diseases were not reported within two incubation periods, and many were

  16. 29 CFR 1611.14 - Exemptions-Office of Inspector General Files.

    Science.gov (United States)

    2010-07-01

    ... determine relevance or necessity of information in the early stages of an investigation. The value of such... its investigations attempting to resolve questions of accuracy, relevance, timeliness and completeness. (4) From subsection (e)(1), because it is often impossible to determine relevance or necessity of...

  17. GPS and Electronic Fence Data Fusion for Positioning within Railway Worksite Scenarios

    DEFF Research Database (Denmark)

    Figueiras, Joao; Grønbæk, Lars Jesper; Ceccarelli, Andrea

    2012-01-01

Context-dependent decisions in safety-critical applications require careful consideration of accuracy and timeliness of the underlying context information. Relevant examples include location-dependent actions in mobile distributed systems. This paper considers localization functions for personali... ... with information from the electronic fences is developed and analyzed. Different accuracy metrics are proposed and the benefit obtained from the fusion with electronic fences is quantitatively analyzed in the scenarios of a single mobile entity: by having fence information, the correct zone estimation can increase by 30%, while false alarms can be reduced by one order of magnitude in the tested scenario.

  18. Vaccination coverage and timeliness in three South African areas: a prospective study

    Directory of Open Access Journals (Sweden)

    Sanders David

    2011-05-01

Full Text Available Abstract Background Timely vaccination is important to induce adequate protective immunity. We measured vaccination timeliness and vaccination coverage in three geographical areas in South Africa. Methods This study used vaccination information from a community-based cluster-randomized trial promoting exclusive breastfeeding in three South African sites (Paarl in the Western Cape Province, and Umlazi and Rietvlei in KwaZulu-Natal) between 2006 and 2008. Five interview visits were carried out between birth and up to 2 years of age (median follow-up time 18 months), and 1137 children were included in the analysis. We used Kaplan-Meier time-to-event analysis to describe vaccination coverage and timeliness in line with the Expanded Program on Immunization for the first eight vaccines. This included Bacillus Calmette-Guérin (BCG), four oral polio vaccines and 3 doses of the pentavalent vaccine, which protects against diphtheria, pertussis, tetanus, hepatitis B and Haemophilus influenzae type B. Results The proportion receiving all eight recommended vaccines was 94% in Paarl (95% confidence interval [CI] 91-96), 62% in Rietvlei (95% CI 54-68) and 88% in Umlazi (95% CI 84-91). Slightly fewer children received all vaccines within the recommended time periods. The situation was worst for the last pentavalent and oral polio vaccines. The hazard ratio for incomplete vaccination was 7.2 (95% CI 4.7-11) for Rietvlei compared to Paarl. Conclusions There were large differences between the South African sites in terms of vaccination coverage and timeliness, with the poorer areas of Rietvlei performing worse than the better-off areas of Paarl. The vaccination coverage was lower for the vaccines given at an older age. There is a need for continued efforts to improve vaccination coverage and timeliness, in particular in rural areas. Trial registration number ClinicalTrials.gov: NCT00397150

  19. Socio-economic determinants and inequities in coverage and timeliness of early childhood immunisation in rural Ghana.

    Science.gov (United States)

    Gram, Lu; Soremekun, Seyi; ten Asbroek, Augustinus; Manu, Alexander; O'Leary, Maureen; Hill, Zelee; Danso, Samuel; Amenga-Etego, Seeba; Owusu-Agyei, Seth; Kirkwood, Betty R

    2014-07-01

    To assess the extent of socio-economic inequity in coverage and timeliness of key childhood immunisations in Ghana. Secondary analysis of vaccination card data collected from babies born between January 2008 and January 2010 who were registered in the surveillance system supporting the ObaapaVita and Newhints Trials was carried out. 20 251 babies had 6 weeks' follow-up, 16 652 had 26 weeks' follow-up, and 5568 had 1 year's follow-up. We performed a descriptive analysis of coverage and timeliness of vaccinations by indicators for urban/rural status, wealth and educational attainment. The association of coverage with socio-economic indicators was tested using a chi-square-test and the association with timeliness using Cox regression. Overall coverage at 1 year of age was high (>95%) for Bacillus Calmette-Guérin (BCG), all three pentavalent diphtheria-pertussis-tetanus-haemophilus influenzae B-hepatitis B (DPTHH) doses and all polio doses except polio at birth (63%). Coverage against measles and yellow fever was 85%. Median delay for BCG was 1.7 weeks. For polio at birth, the median delay was 5 days; all other vaccine doses had median delays of 2-4 weeks. We found substantial health inequity across all socio-economic indicators for all vaccines in terms of timeliness, but not coverage at 1 year. For example, for the last DPTHH dose, the proportion of children delayed more than 8 weeks were 27% for urban children and 31% for rural children (P < 0.001), 21% in the wealthiest quintile and 41% in the poorest quintile (P < 0.001), and 9% in the most educated group and 39% in the least educated group (P < 0.001). However, 1-year coverage of the same dose remained above 90% for all levels of all socio-economic indicators. Ghana has substantial health inequity across urban/rural, socio-economic and educational divides. While overall coverage was high, most vaccines suffered from poor timeliness. We suggest that countries achieving high coverage should include timeliness

  20. Effect of the Adoption of IFRS on the Information Relevance of Accounting Profits in Brazil

    Directory of Open Access Journals (Sweden)

    Mateus Alexandre Costa dos Santos

    2014-12-01

Full Text Available This study aimed to assess the effect of adopting the International Financial Reporting Standards (IFRS) in Brazil on the information relevance of the accounting profits of publicly traded companies. International studies have shown that the adoption of IFRS improves the quality of accounting information compared with domestic accounting standards. Corresponding evidence is sparse in Brazil. Information relevance is understood herein as a multidimensional attribute that is closely related to the quality and usefulness of the information conveyed by accounting profits. The associative capacity and information timeliness of accounting profits in relation to share prices were examined. Furthermore, the level of conditional conservatism present in accounting profits was also analyzed because, according to Basu (1997), this aspect is related to timeliness. The study used pooled regressions and panel data models to analyze the quarterly accounting profits of 246 companies between the first quarter of 1999 and the first quarter of 2013, resulting in 9,558 quarter-company observations. The results indicated that the adoption of IFRS in Brazil (1) increased the associative capacity of accounting profits; (2) reduced information timeliness to non-significant levels; and (3) had no effect on conditional conservatism. The joint analysis of the empirical evidence from the present study precludes conclusively stating that the adoption of IFRS in Brazil contributed to an increase in the information relevance of the accounting profits of publicly traded companies.

  1. Timeliness of notification systems for infectious diseases: A systematic literature review.

    NARCIS (Netherlands)

    Swaan, Corien; van den Broek, Anouk; Kretzschmar, Mirjam; Richardus, Jan Hendrik

    2018-01-01

    Timely notification of infectious diseases is crucial for prompt response by public health services. Adequate notification systems facilitate timely notification. A systematic literature review was performed to assess outcomes of studies on notification timeliness and to determine which aspects of

  2. Clinical relevance of studies on the accuracy of visual inspection for detecting caries lesions

    DEFF Research Database (Denmark)

    Gimenez, Thais; Piovesan, Chaiana; Braga, Mariana M

    2015-01-01

Although visual inspection is the most commonly used method for caries detection, and consequently the most investigated, studies have not been concerned about the clinical relevance of this procedure. Therefore, we conducted a systematic review in order to perform a critical evaluation considering the clinical relevance and methodological quality of studies on the accuracy of visual inspection for assessing caries lesions. Two independent reviewers searched several databases through July 2013 to identify papers/articles published in English. Other sources were checked to identify unpublished literature ... to clinical relevance and the methodological quality of the studies were evaluated. 96 of the 5,578 articles initially identified met the inclusion criteria. In general, most studies failed to consider some clinically relevant aspects: only 1 included study validated activity status of lesions, no study...

  3. 24 CFR 2003.8 - General exemptions.

    Science.gov (United States)

    2010-04-01

    ... its investigations attempting to resolve questions of accuracy, relevance, timeliness and completeness. (4) From subsection (e)(1), because it is often impossible to determine relevance or necessity of information in the early stages of an investigation. The value of such information is a question of judgment...

  4. 24 CFR 2003.9 - Specific exemptions.

    Science.gov (United States)

    2010-04-01

    ... attempting to resolve questions of accuracy, relevance, timeliness and completeness. (4) From subsection (e)(1), because it is often impossible to determine relevance or necessity of information in the early stages of an investigation. The value of such information is a question of judgment and timing; what...

  5. 24 CFR 16.15 - Specific exemptions.

    Science.gov (United States)

    2010-04-01

    ... eligibility attempting to resolve questions of accuracy, relevance, timeliness and completeness. (4) From subsection (e)(1) because it is often impossible to determine relevance or necessity of information in pre-investigative early stages. The value of such information is a question of judgment and timing; what appears...

  6. Providing Mailing Cost Reimbursements: The Effect on Reporting Timeliness of Sexually Transmitted Diseases in Virginia.

    Science.gov (United States)

    Vasiliu, Oana E; Stover, Jeffrey A; Mays, Marissa J E; Bissette, Jennifer M; Dolan, Carrie B; Sirbu, Corina M

    2009-01-01

We investigated the effect of providing mailing cost reimbursements to local health departments on the timeliness of the reporting of sexually transmitted diseases (STDs) in Virginia. The Division of Disease Prevention, Virginia Department of Health, provided mailing cost reimbursements to 31 Virginia health districts from October 2002 to December 2004. The difference (in days) between the diagnosis date (or date the STD paperwork was initiated) and the date the case/STD report was entered into the STD surveillance database was used in a negative binomial regression model against time (divided into three periods: before, during, and after reimbursement) to estimate the effect of providing mailing cost reimbursements on reporting timeliness. We observed significant decreases in the number of days between diagnosis and reporting of a case, which were sustained after the reimbursement period ended, in 25 of the 31 health districts included in the analysis. We observed a significant initial decrease (during the reimbursement period) followed by a significant increase in the after-reimbursement phase in one health district. Two health districts had a significant initial decrease, while one health district had a significant decrease in reporting timeliness in the period after reimbursement. Two health districts showed no significant changes in the number of days to report to the central office. Providing reimbursements for mailing costs was statistically associated with improved STD reporting timeliness in almost all of Virginia's health districts. Sustained improvement after the reimbursement period ended is likely indicative of improved local health department reporting habits.
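
    A minimal sketch of the analysis design described above, assuming a statsmodels negative-binomial GLM and an invented toy data frame; the column names, period labels and counts are illustrative, not the Virginia surveillance data.

    ```python
    # Illustrative sketch: regress days-to-report on reimbursement period with a
    # negative-binomial GLM (statsmodels). The toy data frame and its column names
    # are invented for demonstration; they are not the Virginia surveillance data.
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    toy = pd.DataFrame({
        "days_to_report": [14, 9, 11, 6, 5, 7, 4, 6, 5, 8, 3, 4],
        "period": ["before"] * 4 + ["during"] * 4 + ["after"] * 4,
    })

    model = smf.glm(
        "days_to_report ~ C(period, Treatment(reference='before'))",
        data=toy,
        family=sm.families.NegativeBinomial(),
    ).fit()
    print(model.summary())  # exponentiated coefficients give reporting-delay ratios vs. 'before'
    ```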

  7. The effect of fair disclosure regulation on timeliness and informativeness of earnings announcements

    Directory of Open Access Journals (Sweden)

    Yeonhee Park

    2013-03-01

Full Text Available This paper examines the effect of Korea’s fair disclosure regulation on the timeliness and informativeness of earnings announcements. The present regulation for Korean listed firms requires that if a company’s sales revenue, operating income (or loss) and net income (or loss) have changed by over 30% compared to the prior year, the firm must disclose this information through a preliminary financial report (PFR) even before the company is audited by external auditors. To analyze the effects of this policy, we first investigate the timeliness of preliminary financial report disclosures. We examine the extent to which Korean listed companies actually comply with the requirement for prompt notification of information concerning material changes in financial performance. Second, we investigate the informativeness of preliminary financial reports by analyzing differential stock market reactions to different timings of preliminary financial report disclosures. Our empirical results reveal that more than half of our sample firms release their preliminary financial reports after external audits are completed, thereby potentially invalidating the effectiveness of the regulation. In addition, we find that preliminary financial reports have information value only if they are disclosed prior to annual audit report dates. This finding supports the notion that timeliness increases the informativeness of preliminary financial report disclosure by curbing insiders’ ability to profit from their information advantage.

  8. Timeliness of earnings reported by Romanian listed companies

    Directory of Open Access Journals (Sweden)

    Mihai Carp

    2018-03-01

Full Text Available The paper aims to analyze the quality of financial information by assessing the timeliness of earnings, using information specific to non-financial companies listed on the regulated section of the Bucharest Stock Exchange. The study also seeks to assess the symmetry of the timely recognition of potential gains and losses (components of economic income) and, if there is an asymmetry, to identify the direction of the timing gap. The phenomenon is analysed in conjunction with a number of control factors such as the Romanian Accounting Standards (RAS), the International Financial Reporting Standards (IFRS), the degree of indebtedness and the entities’ field of activity. Quantitative analysis performed through econometric models established in the field, such as Basu (1997) and Ball and Shivakumar (2005), reveals that the companies included in the study provide financial information that meets the quality criterion assessed, namely earnings timeliness. Deepening the analysis made it possible to identify timely recognition of both unrealised gains and potential losses in the tests carried out on the whole sample, with economic losses being included in accounting income ahead of the recognition of economic gains. The presence of disjunctive factors in the analysis generated a number of particular results. In the case of normally indebted companies applying IFRS, timely recognition of economic gains and losses was noted, without the gap specific to conservatism.
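
    For readers unfamiliar with the Basu (1997) specification cited above, its standard textbook form is reproduced below; this is the general formulation, not an equation quoted from the paper itself.

    $$\frac{E_{it}}{P_{it-1}} = \beta_0 + \beta_1 D_{it} + \beta_2 R_{it} + \beta_3 \,(D_{it} \times R_{it}) + \varepsilon_{it}$$

    where $E_{it}$ is the earnings per share of firm $i$ in period $t$, $P_{it-1}$ the opening share price, $R_{it}$ the stock return over the period, and $D_{it}$ a dummy equal to 1 when $R_{it} < 0$. The coefficient $\beta_2$ measures the timeliness of earnings with respect to good news, while a positive $\beta_3$ indicates asymmetrically faster (conservative) recognition of bad news.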

  9. The Relevance of Interoception in Chronic Tinnitus: Analyzing Interoceptive Sensibility and Accuracy

    Directory of Open Access Journals (Sweden)

    Pia Lau

    2015-01-01

Full Text Available In order to better understand tinnitus and the distress associated with tinnitus, psychological variables such as emotional and cognitive processing are a central element in theoretical models of this debilitating condition. Interoception, that is, the perception of internal bodily processes, may be such a psychological factor relevant to tinnitus. Against this background, 20 participants suffering from chronic tinnitus and 20 matched healthy controls completed questionnaires assessing interoceptive sensibility and participated in two tasks assessing interoceptive accuracy: the Schandry task, a heartbeat estimation assignment, and a skin conductance fluctuation perception task assessing the participants’ ability to perceive phasic increases in sympathetic activation. To test stress reactivity, a construct tightly connected to tinnitus onset, we also included a stress induction. No differences between the groups were found for interoceptive accuracy and sensibility. However, the tinnitus group tended to overestimate the occurrence of phasic activation. Loudness of the tinnitus was associated with reduced interoceptive performance under stress. Our results indicate that interoceptive sensibility and accuracy do not play a significant role in tinnitus. However, tinnitus might be associated with a tendency to overestimate physical changes.

  10. Application of EMCAS timeliness model to the safeguards/facility interface

    International Nuclear Information System (INIS)

    Eggers, R.F.; Giese, E.W.

    1987-01-01

The Hanford operating contractor has developed a timeliness model for periodic mass balance tests (MBTs) for loss of special nuclear material (SNM). The purpose of the model is to compute the probability that an adversary will be detected by a periodic MBT before he could escape from a facility with stolen SNM using stealth and deceit to avoid detection. The model considers (a) the loss detection sensitivity of the MBT, (b) the time between MBTs, and (c) the statistical distribution of the total time required to complete stealth and deceit strategies. The model shows whether or not it is cost-effective to conduct frequent MBTs for loss and where improvements should be made. The Evaluation Methods for Material Control and Accountability Safeguards Systems (EMCAS) timeliness model computes the loss detection capability of periodic materials control and accounting (MC&A) tests in terms of (a) the ability of the test to detect the specified target quantity and (b) the probability that the MC&A test will occur before the adversary can complete the sequence of stealth and deceit strategies required to avoid detection

  11. A Smartphone-Based Application Improves the Accuracy, Completeness, and Timeliness of Cattle Disease Reporting and Surveillance in Ethiopia

    Directory of Open Access Journals (Sweden)

    Tariku Jibat Beyene

    2018-01-01

Full Text Available Accurate disease reporting, ideally in near real time, is a prerequisite to detecting disease outbreaks and implementing appropriate measures for their control. This study compared the performance of the traditional paper-based approach to animal disease reporting in Ethiopia with one using an application running on smartphones. In the traditional approach, the total number of cases for each disease or syndrome was aggregated by animal species and reported to each administrative level at monthly intervals; with the smartphone application, demographic information and a detailed list of presenting signs, in addition to the putative disease diagnosis, were immediately available to all administrative levels via a cloud-based server. While the smartphone-based approach resulted in much more timely reporting, there were delays due to limited connectivity; these ranged on average from 2 days (in well-connected areas) up to 13 days (in more rural locations). We outline the challenges that would likely be associated with any widespread rollout of a smartphone-based approach such as the one described in this study, but demonstrate that in the long run the approach offers significant benefits in terms of timeliness of disease reporting, improved data integrity and greatly improved animal disease surveillance.

  12. Evaluation of dynamic message signs and their potential impact on traffic flow : [research summary].

    Science.gov (United States)

    2013-04-01

The objective of this research was to understand the potential impact of DMS messages on traffic flow and evaluate their accuracy, timeliness, relevance and usefulness. Additionally, Bluetooth sensors were used to track and analyze the diversion ...

  13. Task-relevant cognitive and motor functions are prioritized during prolonged speed-accuracy motor task performance.

    Science.gov (United States)

    Solianik, Rima; Satas, Andrius; Mickeviciene, Dalia; Cekanauskaite, Agne; Valanciene, Dovile; Majauskiene, Daiva; Skurvydas, Albertas

    2018-06-01

This study aimed to explore the effect of a prolonged speed-accuracy motor task on indicators of psychological, cognitive, psychomotor and motor function. Ten young men aged 21.1 ± 1.0 years performed a fast and accurate reaching-movement task and a control task. Both tasks were performed for 2 h. Despite decreased motivation and increased perception of effort as well as subjective feeling of fatigue, speed-accuracy motor task performance improved throughout the period of task execution. After the motor task, increased working memory function and prefrontal cortex oxygenation at rest and during conflict detection, and decreased efficiency of incorrect response inhibition and visuomotor tracking, were observed. The speed-accuracy motor task increased the amplitude of motor-evoked potentials, while grip strength was not affected. These findings demonstrate that to sustain performance of a 2-h speed-accuracy task under conditions of self-reported fatigue, task-relevant functions are maintained or even improved, whereas less critical functions are impaired.

  14. A case for inherent geometric and geodetic accuracy in remotely sensed VNIR and SWIR imaging products

    Science.gov (United States)

    Driver, J. M.

    1982-01-01

Significant aberrations can occur in acquired images which, unless compensated on board the spacecraft, can seriously impair throughput and timeliness for typical Earth observation missions. Conceptual compensation options are advanced to enable acquisition of images with inherent geometric and geodetic accuracy. Research needs are identified which, when implemented, can provide inherently accurate images. Aggressive pursuit of these research needs is recommended.

  15. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions.

    Directory of Open Access Journals (Sweden)

    Emma Wells

Full Text Available To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary; however, test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method), then DPD dilution methods (2.4-19% error), then test strips (5.2-48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training-burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values of 5-11. Volunteers found test strips easiest and titration hardest; costs per 100 tests were $14-37 for test strips and $33-609 for titration

  16. Completeness and timeliness of vaccination and determinants for low and late uptake among young children in eastern China

    Science.gov (United States)

    Hu, Yu; Chen, Yaping; Guo, Jing; Tang, Xuewen; Shen, Lingzhi

    2014-01-01

Background: We studied completeness and timeliness of vaccination and the determinants of low and delayed uptake in children born between 2008 and 2009 in Zhejiang province in eastern China. Methods: We used data from a cross-sectional cluster survey conducted in 2011, which included 1146 children born from 1 Jan 2008 to 31 Dec 2009. Vaccination history, socio-demographic factors, and caregivers’ attitude toward and satisfaction with immunization were collected by a standard questionnaire. We used the third dose of HepB, PV, and DPT (HepB3, PV3, and DPT3) as outcome variables for completeness of vaccination and the first dose of HepB, PV, DPT, and MCV (HepB1, PV1, DPT1, and MCV1) as outcome variables for timeliness of vaccination. The χ2 test and logistic regression analysis were applied to identify the determinants of completeness and timeliness of vaccination. Survival analysis by the Kaplan–Meier method was performed to present vaccination timeliness. Results: Coverage for HepB1, HepB3, PV1, PV3, DPT1, DPT3, and MCV1 was 93.22%, 90.15%, 96.42%, 91.63%, 95.80%, 90.16%, and 92.70%, respectively. Timely vaccination occurred in 501/1146 (43.72%) children for HepB1, 520/1146 (45.38%) for PV1, 511/1146 (44.59%) for DPT1, and 679/1146 (59.25%) for MCV1. Completeness of specific vaccines was associated with mothers’ age, immigration status, birth place of the child, maternal education level, maternal occupation status, socio-economic development level of the surveyed areas, satisfaction with the immunization service and distance of the house to the immunization clinic. Timeliness of vaccination for specific vaccines was associated with mothers’ age, maternal education level, immigration status, siblings, birth place, and distance of the house to the immunization clinic. Conclusion: Despite reasonably high vaccination coverage, we observed substantial vaccination delays. We found specific factors associated with low and/or delayed vaccine uptake. These findings

  17. 75 FR 57253 - Submission for OMB Review; Comment Request

    Science.gov (United States)

    2010-09-20

    ... and expanded data on the income and general economic and financial situation of the U.S. population... context of several goals--cost reduction and improved accuracy, relevance, timeliness, reduced burden on... incentive, a newsletter reporting findings from the 2008 SIPP Panel, or no contact between interview periods...

  18. Validation of the Six Sigma Z-score for the quality assessment of clinical laboratory timeliness.

    Science.gov (United States)

    Ialongo, Cristiano; Bernardini, Sergio

    2018-03-28

The International Federation of Clinical Chemistry and Laboratory Medicine has recently introduced the turnaround time (TAT) as a mandatory quality indicator for the postanalytical phase. Classic TAT indicators, namely the average, median, 90th percentile and proportion of acceptable tests (PAT), have been in use for almost 40 years and to date represent the mainstay for gauging laboratory timeliness. In this study, we investigated the performance of the Six Sigma Z-score, which was previously introduced as a device for the quantitative assessment of timeliness. A numerical simulation was obtained by modeling an actual TAT data set using the log-logistic probability density function. Five thousand replicates for each size of the artificial TAT random sample (n=20, 50, 250 and 1000) were generated, and different laboratory conditions were simulated by manipulating the PDF in order to generate more or less variable data. The Z-score and the classic TAT indicators were assessed for precision (%CV), robustness toward right-tailing (precision at different sample variability), sensitivity and specificity. The Z-score showed sensitivity and specificity comparable to PAT (≈80% with n≥250), but superior precision, which ranged within 20% for moderately small samples (n≥50); furthermore, the Z-score was less affected by the value of the cutoff used for setting the acceptable TAT, as well as by the sample variability reflected in the magnitude of right-tailing. The Z-score was a valid indicator of laboratory timeliness and a suitable device to improve as well as to maintain the achieved quality level.
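
    A minimal sketch of the kind of numerical experiment described above, assuming scipy's Fisk (log-logistic) distribution for the artificial TAT samples and a simple coefficient-of-variation comparison between two indicators; the shape and scale parameters and the 60-minute acceptability cutoff are illustrative assumptions, not the study's settings.

    ```python
    # Illustrative sketch: draw artificial TAT samples from a log-logistic (Fisk)
    # distribution and compare the replicate-to-replicate spread (CV) of two
    # timeliness indicators. Shape, scale and the 60-minute cutoff are assumptions.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    shape, scale, cutoff = 4.0, 35.0, 60.0          # illustrative log-logistic parameters (minutes)

    def proportion_acceptable(tat, limit):
        return np.mean(tat <= limit)                 # classic PAT indicator

    def z_score(tat, limit):
        logs = np.log(tat)
        return (np.log(limit) - logs.mean()) / logs.std(ddof=1)

    def cv(values):
        return 100 * np.std(values, ddof=1) / np.mean(values)

    for n in (20, 50, 250, 1000):
        samples = stats.fisk.rvs(c=shape, scale=scale, size=(5000, n), random_state=rng)
        pat = [proportion_acceptable(s, cutoff) for s in samples]
        z = [z_score(s, cutoff) for s in samples]
        print(f"n={n:5d}  CV(PAT)={cv(pat):5.1f}%  CV(Z)={cv(z):5.1f}%")
    ```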

  19. Improving the timeliness of procedures in a pediatric endoscopy suite.

    Science.gov (United States)

    Tomer, Gitit; Choi, Steven; Montalvo, Andrea; Sutton, Sheila; Thompson, John; Rivas, Yolanda

    2014-02-01

Pediatric endoscopic procedures are essential in the evaluation and treatment of gastrointestinal diseases in children. Although pediatric endoscopists are greatly interested in increasing efficiency and throughput in pediatric endoscopy units, there is scarcely any literature on this critical process. The goal of this study was to improve the timeliness of pediatric endoscopy procedures at Children's Hospital at Montefiore. In June 2010, a pediatric endoscopy quality improvement initiative was formed at Children's Hospital at Montefiore. We identified patient-, equipment-, and physician-related causes of case delays. Pareto charts, cause-and-effect diagrams, process flow mapping, and statistical process control charts were used for analysis. From June 2010 to December 2012, we were able to significantly decrease the first-case endoscopy delay from an average of 17 to 10 minutes (P < .001), the second-case delay from 39 to 25 minutes (P = .01), the third-case delay from 61 to 45 minutes (P = .05), and the fourth-case delay from 79 to 51 minutes (P = .05). Total delay time decreased from 196 to 131 minutes, a reduction of 65 minutes (P = .02). From June 2010 to August 2011 (preintervention period), an average of 36% of first endoscopy cases started within 5 minutes, 51% within 10 minutes, and 61% within 15 minutes of the scheduled time. From September 2011 to December 2012 (postintervention period), the percentage of cases starting within 5 minutes, 10 minutes, and 15 minutes increased to 47% (P = .07), 61% (P = .04), and 79% (P = .01), respectively. Applying quality improvement methods and tools helped improve pediatric endoscopy timeliness and significantly decreased total delays.

  20. The Effect of Ratio, Issuance of Stocks and Auditors’ Quality toward the Timeliness of Financial Reporting on the Internet by Consumer Goods Sector Companies in Indonesia

    Directory of Open Access Journals (Sweden)

    Lidiyawati Lidiyawati

    2015-11-01

Full Text Available This study was conducted to analyze the factors that affect the timeliness of financial reporting on the Internet by consumer goods sector companies listed on the Indonesia Stock Exchange (IDX). The variables used were leverage, profitability, company size, the issuance of stock and the quality of auditors. The data analysis method used was logistic regression at the 0.05 significance level. The data were secondary data from consumer goods companies listed on the Indonesia Stock Exchange in 2010-2012. This study tested the effect of leverage, profitability, firm size, stock issuance and auditor quality on the timeliness of financial reporting on the Internet. The results obtained from these tests support an effect of auditor quality on the timeliness of financial reporting on the Internet. However, the other variables, leverage, profitability, firm size and stock issuance, did not show such an effect.

  1. Detection of relevant colonic neoplasms with PET/CT: promising accuracy with minimal CT dose and a standardised PET cut-off

    Energy Technology Data Exchange (ETDEWEB)

    Luboldt, Wolfgang [Multiorgan Screening Foundation, Frankfurt (Germany); University Hospital Frankfurt, Department of Radiology, Frankfurt am Main (Germany); University Hospital Dresden, Clinic and Policlinic of Nuclear Medicine, Dresden (Germany); Volker, Teresa; Zoephel, Klaus; Kotzerke, Joerg [University Hospital Dresden, Clinic and Policlinic of Nuclear Medicine, Dresden (Germany); Wiedemann, Baerbel [University Hospital Dresden, Institute of Medical Informatics and Biometrics, Dresden (Germany); Wehrmann, Ursula [University Hospital Dresden, Clinic and Policlinic of Surgery, Dresden (Germany); Koch, Arne; Abolmaali, Nasreddin [University Hospital Dresden, Oncoray, Dresden (Germany); Toussaint, Todd; Luboldt, Hans-Joachim [Multiorgan Screening Foundation, Frankfurt (Germany); Middendorp, Markus; Gruenwald, Frank [University Hospital Frankfurt, Department of Nuclear Medicine, Frankfurt (Germany); Aust, Daniela [University Hospital Dresden, Department of Pathology, Dresden (Germany); Vogl, Thomas J. [University Hospital Frankfurt, Department of Radiology, Frankfurt am Main (Germany)

    2010-09-15

To determine the performance of FDG-PET/CT in the detection of relevant colorectal neoplasms (adenomas ≥10 mm, with high-grade dysplasia, cancer) in relation to CT dose and contrast administration and to find a PET cut-off. 84 patients, who underwent PET/CT and colonoscopy (n=79)/sigmoidoscopy (n=5) for (79 x 6+5 x 2)=484 colonic segments, were included in a retrospective study. The accuracy of low-dose PET/CT in detecting mass-positive segments was evaluated by ROC analysis by two blinded independent reviewers relative to contrast-enhanced PET/CT. On a per-lesion basis characteristic PET values were tested as cut-offs. Low-dose PET/CT and contrast-enhanced PET/CT provide similar accuracies (area under the curve for the average ROC ratings 0.925 vs. 0.929, respectively). PET demonstrated all carcinomas (n=23) and 83% (30/36) of relevant adenomas. In all carcinomas and adenomas with high-grade dysplasia (n=10) the SUVmax was ≥5. This cut-off resulted in a better per-segment sensitivity and negative predictive value (NPV) than the average PET/CT reviews (sensitivity: 89% vs. 82%; NPV: 99% vs. 98%). All other tested cut-offs were inferior to the SUVmax. FDG-PET/CT provides promising accuracy for colorectal mass detection. Low dose and lack of iodine contrast in the CT component do not impact the accuracy. The PET cut-off SUVmax ≥5 improves the accuracy. (orig.)

  2. Detection of relevant colonic neoplasms with PET/CT: promising accuracy with minimal CT dose and a standardised PET cut-off

    International Nuclear Information System (INIS)

    Luboldt, Wolfgang; Volker, Teresa; Zoephel, Klaus; Kotzerke, Joerg; Wiedemann, Baerbel; Wehrmann, Ursula; Koch, Arne; Abolmaali, Nasreddin; Toussaint, Todd; Luboldt, Hans-Joachim; Middendorp, Markus; Gruenwald, Frank; Aust, Daniela; Vogl, Thomas J.

    2010-01-01

To determine the performance of FDG-PET/CT in the detection of relevant colorectal neoplasms (adenomas ≥10 mm, with high-grade dysplasia, cancer) in relation to CT dose and contrast administration and to find a PET cut-off. 84 patients, who underwent PET/CT and colonoscopy (n=79)/sigmoidoscopy (n=5) for (79 x 6+5 x 2)=484 colonic segments, were included in a retrospective study. The accuracy of low-dose PET/CT in detecting mass-positive segments was evaluated by ROC analysis by two blinded independent reviewers relative to contrast-enhanced PET/CT. On a per-lesion basis characteristic PET values were tested as cut-offs. Low-dose PET/CT and contrast-enhanced PET/CT provide similar accuracies (area under the curve for the average ROC ratings 0.925 vs. 0.929, respectively). PET demonstrated all carcinomas (n=23) and 83% (30/36) of relevant adenomas. In all carcinomas and adenomas with high-grade dysplasia (n=10) the SUVmax was ≥5. This cut-off resulted in a better per-segment sensitivity and negative predictive value (NPV) than the average PET/CT reviews (sensitivity: 89% vs. 82%; NPV: 99% vs. 98%). All other tested cut-offs were inferior to the SUVmax. FDG-PET/CT provides promising accuracy for colorectal mass detection. Low dose and lack of iodine contrast in the CT component do not impact the accuracy. The PET cut-off SUVmax ≥5 improves the accuracy. (orig.)

  3. In journalism, between timeliness and recurrence: a long-term event

    Directory of Open Access Journals (Sweden)

    Angela Zamin

    2011-12-01

    Full Text Available The text presents an analysis exercise concerning the production of a long-term event which, through its presence over time, allows the observation of timeliness and recurrence. It examines the coverage produced by the Colombian reference newspaper El Tiempo, between March 2008 and March 2010, of the diplomatic crisis between Colombia and Ecuador, triggered by the Colombian military incursion into Ecuadorian territory. The analysis also considers the problematic fields that emerge and the return of meaning frames provoked by events that succeed one another.

  4. Socio-economic determinants and inequities in coverage and timeliness of early childhood immunisation in rural Ghana

    NARCIS (Netherlands)

    Gram, Lu; Soremekun, Seyi; ten Asbroek, Augustinus; Manu, Alexander; O'Leary, Maureen; Hill, Zelee; Danso, Samuel; Amenga-Etego, Seeba; Owusu-Agyei, Seth; Kirkwood, Betty R.

    2014-01-01

    To assess the extent of socio-economic inequity in coverage and timeliness of key childhood immunisations in Ghana. Secondary analysis of vaccination card data collected from babies born between January 2008 and January 2010 who were registered in the surveillance system supporting the ObaapaVita

  5. Timeliness of Surveillance during Outbreak of Shiga Toxin–producing Escherichia coli Infection, Germany, 2011

    OpenAIRE

    Altmann, Mathias; Wadl, Maria; Altmann, Doris; Benzler, Justus; Eckmanns, Tim; Krause, Gérard; Spode, Anke; an der Heiden, Matthias

    2011-01-01

    In the context of a large outbreak of Shiga toxin–producing Escherichia coli O104:H4 in Germany, we quantified the timeliness of the German surveillance system for hemolytic uremic syndrome and Shiga toxin–producing E. coli notifiable diseases during 2003–2011. Although reporting occurred faster than required by law, potential for improvement exists at all levels of the information chain.

  6. Identifying and Prioritizing the Effective Parameters on Lack of Timeliness of Operations of Sugarcane Production using Analytical Hierarchy Process (AHP)

    Directory of Open Access Journals (Sweden)

    N Monjezi

    2017-10-01

    Full Text Available Introduction Planning and scheduling of mechanized farming operations is very important. If an operation is not performed on time, yield is reduced; for sugarcane in particular, any delay in planting and harvesting operations reduces the yield. The most useful priority-setting method for agricultural projects is the analytic hierarchy process (AHP). This article therefore presents an introductory application of the Analytical Hierarchy Process (AHP) as a commonly used method of setting priorities for agricultural projects. The Analytic Hierarchy Process (AHP) is a decision-making algorithm developed by Dr. Saaty in 1980. It has many applications, as documented in the Decision Support System literature. Currently, this technique is widely used in complicated management decision making; AHP was preferred over other established methodologies because it does not demand prior knowledge of the utility function, it is based on a hierarchy of criteria and attributes reflecting the understanding of the problem, and it allows relative and absolute comparisons, making it a very robust tool. The purpose of this research is to identify and prioritize the parameters affecting lack of timeliness of sugarcane production operations using AHP in Khuzestan province of Iran. Materials and Methods The parameters affecting lack of timeliness of operations were defined based on experts' opinions. A questionnaire and personal interviews formed the basis of this research. The study was applied to a panel of qualified informants made up of fourteen experts, drawn from the Sugarcane Development and By-products Company in 2013-2014. Then, using the Analytical Hierarchy Process, a questionnaire was designed to define the weight and importance of the parameters affecting lack of timeliness of operations. For this method of evaluation, the three main criteria considered were yield criteria, cost criteria
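
    As a rough illustration of the AHP weighting step mentioned in this abstract, the sketch below derives priority weights for three hypothetical criteria from an invented pairwise comparison matrix using the principal-eigenvector method; the matrix, the criteria names and the consistency check are placeholders, not the study's data.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three criteria
# (e.g. yield, cost, quality); entry [i, j] says how much more important
# criterion i is judged to be than criterion j on Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                  # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                     # normalised priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)         # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]          # random index from Saaty's table
print("weights:", np.round(weights, 3), "consistency ratio:", round(ci / ri, 3))
```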

  7. The effect of nurse navigation on timeliness of breast cancer care at an academic comprehensive cancer center.

    Science.gov (United States)

    Basu, Mohua; Linebarger, Jared; Gabram, Sheryl G A; Patterson, Sharla Gayle; Amin, Miral; Ward, Kevin C

    2013-07-15

    A patient navigation process is required for accreditation by the National Accreditation Program for Breast Centers (NAPBC). Patient navigation has previously been shown to improve timely diagnosis in patients with breast cancer. This study sought to assess the effect of nurse navigation on timeliness of care following the diagnosis of breast cancer by comparing patients who were treated in a comprehensive cancer center with and without the assistance of nurse navigation. Navigation services were initiated at an NAPBC-accredited comprehensive breast center in July 2010. Two 9-month study intervals were chosen for comparison of timeliness of care: October 2009 through June 2010 and October 2010 through June 2011. All patients with breast cancer diagnosed in the cancer center with stage 0 to III disease during the 2 study periods were identified by retrospective cancer registry review. Time from diagnosis to initial oncology consultation was measured in business days, excluding holidays and weekends. Overall, 176 patients met inclusion criteria: 100 patients prior to and 76 patients following nurse navigation implementation. Nurse navigation was found to significantly shorten time to consultation for patients older than 60 years (B = -4.90, P = .0002). There was no change in timeliness for patients 31 to 60 years of age. Short-term analysis following navigation implementation showed decreased time to consultation for older patients, but not younger patients. Further studies are indicated to assess the long-term effects and durability of this quality improvement initiative. © 2013 American Cancer Society.

  8. A strategy for optimizing staffing to improve the timeliness of inpatient phlebotomy collections.

    Science.gov (United States)

    Morrison, Aileen P; Tanasijevic, Milenko J; Torrence-Hill, Joi N; Goonan, Ellen M; Gustafson, Michael L; Melanson, Stacy E F

    2011-12-01

    The timely availability of inpatient test results is a key to physician satisfaction with the clinical laboratory, and in an institution with a phlebotomy service may depend on the timeliness of blood collections. In response to safety reports filed for delayed phlebotomy collections, we applied Lean principles to the inpatient phlebotomy service at our institution. Our goal was to improve service without using additional resources by optimizing our staffing model. To evaluate the effect of a new phlebotomy staffing model on the timeliness of inpatient phlebotomy collections. We compared the median time of morning blood collections and average number of safety reports filed for delayed phlebotomy collections during a 6-month preimplementation period and 5-month postimplementation period. The median time of morning collections was 17 minutes earlier after implementation (7:42 am preimplementation; interquartile range, 6:27-8:48 am; versus 7:25 am postimplementation; interquartile range, 6:20-8:26 am). The frequency of safety reports filed for delayed collections decreased 80% from 10.6 per 30 days to 2.2 per 30 days. Reallocating staff to match the pattern of demand for phlebotomy collections throughout the day represents a strategy for improving the performance of an inpatient phlebotomy service.
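
    A minimal sketch of how the two headline metrics of such a pre/post comparison, the median (IQR) morning collection time and the rate of safety reports per 30 days, could be computed; the timestamps and counts below are invented placeholders, not the institution's data.

```python
import pandas as pd

# Hypothetical morning collection timestamps for one study period (not real data).
times = pd.to_datetime(pd.Series([
    "07:05", "07:42", "06:55", "08:10", "07:30", "06:40", "08:45",
]), format="%H:%M")

minutes = times.dt.hour * 60 + times.dt.minute
median = minutes.median()
q1, q3 = minutes.quantile([0.25, 0.75])
print(f"median {median//60:.0f}:{median%60:02.0f}, "
      f"IQR {q1//60:.0f}:{q1%60:02.0f}-{q3//60:.0f}:{q3%60:02.0f}")

# Safety-report rate normalised to a 30-day month.
reports, days_observed = 11, 155            # hypothetical counts
print("reports per 30 days:", round(reports / days_observed * 30, 1))
```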

  9. The Importance of Accuracy, Stimulating writing, and Relevance in Middle School Science Textbook Writing

    Science.gov (United States)

    Hubisz, John

    2004-05-01

    While accuracy in Middle School science texts is most important, the texts should also read well, stimulating the student to want to go on, and the material must be relevant to the subject at hand as the typical student is not yet prepared to ignore that which is irrelevant. We know that children will read if the material is of interest (witness The Lord of the Rings and the Harry Potter book sales) and so we must write in a way that stimulates the student to want to examine the subject further and eliminate that which adds nothing to the discipline. Examples of the good and the bad will be presented.

  10. THE TIMELINESS OF FINANCIAL REPORTING IN THE CONTEXT OF EUROPEAN UNION’S EMERGING ECONOMIES

    Directory of Open Access Journals (Sweden)

    Andra GAJEVSZKY

    2013-12-01

    Full Text Available Purpose- This research aims to investigate the timeliness of the financial statements of companies across the European Union's emerging economies. Research Design- From the emerging economies of the European Union, the following sample was constituted: the companies listed on the Bucharest Stock Exchange, Warsaw Stock Exchange, Prague Stock Exchange and Budapest Stock Exchange, regardless of tier. The final sample, after eliminating financial institutions and entities that were not listed in all the studied years (2008-2012), consists of 37 companies. Findings- Comparing the results of this research with those from prior literature, a slight decrease in reporting delay (in days) can be noticed for the analyzed emerging economies. Moreover, consistent with other researchers' findings, companies audited by a Big 4 auditor and with a qualified opinion in the auditor's report publish their financial results later than entities with a favourable audit opinion. Value/Practical Implications- This study highlights the importance of the timeliness of financial statements in the context of four European Union emerging economies, which are known for their delay in publishing financial results compared to developed market economies.

  11. Timeliness of Surveillance during Outbreak of Shiga Toxin–producing Escherichia coli Infection, Germany, 2011

    Science.gov (United States)

    Wadl, Maria; Altmann, Doris; Benzler, Justus; Eckmanns, Tim; Krause, Gérard; Spode, Anke; an der Heiden, Matthias

    2011-01-01

    In the context of a large outbreak of Shiga toxin–producing Escherichia coli O104:H4 in Germany, we quantified the timeliness of the German surveillance system for hemolytic uremic syndrome and Shiga toxin–producing E. coli notifiable diseases during 2003–2011. Although reporting occurred faster than required by law, potential for improvement exists at all levels of the information chain. PMID:22000368

  12. Analyzing Influential Factors Against Timeliness of Financial Reporting (Empirical Study of Automotive and Components and Telecommunication Companies Listed on Indonesia Stock Exchange).

    Directory of Open Access Journals (Sweden)

    Joko Suryanto

    2016-12-01

    Full Text Available This research aims to examine the effect of firm size, profitability, solvency, public ownership, and the audit opinion on the timeliness of financial reporting. The dependent variable is the timeliness with which a company delivers its financial statements to the Stock Exchange. The independent variables are firm size, measured by the company's total assets; profitability, measured by the profit margin ratio; solvency, measured by the debt-to-equity ratio; public ownership, measured by the percentage of shares owned by the public; and the audit opinion, measured as unqualified versus other than unqualified. This study uses secondary data, with a population of automotive and components and telecommunication companies and their annual financial statements issued on the Stock Exchange in the period 2010-2012. From the analysis conducted in this study it can be concluded that firm size significantly influences the timeliness of financial reporting, while profitability, solvency, public ownership, and the audit opinion do not affect the timeliness of financial reporting.

  13. A Method for Assessing Reliability Characteristics Relevant to an Assumed Position-Fixing Accuracy in Navigational Positioning Systems

    Directory of Open Access Journals (Sweden)

    Specht Cezary

    2016-09-01

    Full Text Available This paper presents a method which makes it possible to determine reliability characteristics of navigational positioning systems relevant to an assumed value of permissible error in position fixing. The method allows the calculation of availability, reliability and continuity of operation of a position-fixing system for an assumed position-fixing accuracy determined on the basis of formal requirements, both worldwide and national. The proposed mathematical model makes it possible to verify whether any navigational positioning system satisfies not only the position-fixing accuracy requirements of a given navigational application (for air, sea or land traffic) but also the remaining characteristics associated with the technical serviceability of a system.
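
    Without reproducing the paper's model, one simplified reading of availability relative to an assumed permissible position-fixing error is the fraction of epochs whose fix error stays within that limit; the sketch below uses a synthetic error series and an assumed 5 m requirement, both of which are placeholders rather than values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-Hz series of horizontal position-fix errors in metres (not real data).
errors = np.abs(rng.normal(loc=2.0, scale=1.5, size=3600))
permissible_error = 5.0   # assumed accuracy requirement for the application, in metres

fit_for_use = errors <= permissible_error
availability = fit_for_use.mean()

# Mean number of epochs between transitions from "fit" to "unfit",
# a crude stand-in for continuity of the positioning service.
breaks = np.sum(fit_for_use[:-1] & ~fit_for_use[1:])
mean_epochs_between_outages = fit_for_use.sum() / max(breaks, 1)
print(f"availability={availability:.3f}, "
      f"mean epochs between outages={mean_epochs_between_outages:.0f}")
```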

  14. Impact of two interventions on timeliness and data quality of an electronic disease surveillance system in a resource limited setting (Peru): a prospective evaluation

    Directory of Open Access Journals (Sweden)

    Quispe Jose A

    2009-03-01

    Full Text Available Abstract Background A timely detection of outbreaks through surveillance is needed in order to prevent future pandemics. However, current surveillance systems may not be prepared to accomplish this goal, especially in resource limited settings. As data quality and timeliness are attributes that improve outbreak detection capacity, we assessed the effect of two interventions on such attributes in Alerta, an electronic disease surveillance system in the Peruvian Navy. Methods 40 Alerta reporting units (18 clinics and 22 ships) were included in a 12-week prospective evaluation project. After a short refresher course on the notification process, units were randomly assigned to either a phone, visit or control group. Phone group sites were called three hours before the biweekly reporting deadline if they had not sent their report. Visit group sites received supervision visits on weeks 4 & 8, but no phone calls. The control group sites were not contacted by phone or visited. Timeliness and data quality were assessed by calculating the percentage of reports sent on time and percentage of errors per total number of reports, respectively. Results Timeliness improved in the phone group from 64.6% to 84% in clinics (+19.4 [95% CI, +10.3 to +28.6]; p Conclusion Regular phone reminders significantly improved timeliness of reports in clinics and ships, whereas supervision visits led to improved data quality only among clinics. Further investigations are needed to establish the cost-effectiveness and optimal use of each of these strategies.

  15. Utilizing distributional analytics and electronic records to assess timeliness of inpatient blood glucose monitoring in non-critical care wards

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2016-04-01

    Full Text Available Abstract Background Regular and timely monitoring of blood glucose (BG) levels in hospitalized patients with diabetes mellitus is crucial to optimizing inpatient glycaemic control. However, methods to quantify timeliness as a measurement of quality of care are lacking. We propose an analytical approach that utilizes BG measurements from electronic records to assess adherence to an inpatient BG monitoring protocol in hospital wards. Methods We applied our proposed analytical approach to electronic records obtained from 24 non-critical care wards in November and December 2013 from a tertiary care hospital in Singapore. We applied distributional analytics to evaluate daily adherence to BG monitoring timings. A one-sample Kolmogorov-Smirnov (1S-KS) test was performed to test daily BG timings against non-adherence represented by the uniform distribution. This test was performed among wards with high power, determined through simulation. The 1S-KS test was coupled with visualization via the cumulative distribution function (cdf) plot and a two-sample Kolmogorov-Smirnov (2S-KS) test, enabling comparison of the BG timing distributions between two consecutive days. We also applied mixture modelling to identify the key features in daily BG timings. Results We found that 11 out of the 24 wards had high power. Among these wards, 1S-KS test with cdf plots indicated adherence to BG monitoring protocols. Integrating both 1S-KS and 2S-KS information within a moving window consisting of two consecutive days did not suggest frequent potential change from or towards non-adherence to protocol. From mixture modelling among wards with high power, we consistently identified four components with high concentration of BG measurements taken before mealtimes and around bedtime. This agnostic analysis provided additional evidence that the wards were adherent to BG monitoring protocols. Conclusions We demonstrated the utility of our proposed analytical approach as a monitoring
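
    A minimal sketch of the Kolmogorov-Smirnov comparisons described above, assuming hypothetical blood-glucose check times expressed as minutes after midnight; the timings, cluster centres and second-day data are invented, not ward records.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical BG check times for one ward-day, as minutes after midnight,
# clustered around 07:00, 11:30, 17:30 and 22:00 rather than spread uniformly.
bg_minutes = np.concatenate([
    rng.normal(m, 20, size=15) for m in (420, 690, 1050, 1320)
]) % 1440

# One-sample KS test of the observed timings against a uniform distribution
# over the 24-hour day (the "non-adherence" reference described in the abstract).
stat, p = stats.kstest(bg_minutes / 1440.0, "uniform")
print(f"1S-KS statistic={stat:.3f}, p={p:.2e}")

# Two-sample KS test comparing two consecutive days (second day also synthetic).
day2 = (bg_minutes + rng.normal(0, 15, size=bg_minutes.size)) % 1440
stat2, p2 = stats.ks_2samp(bg_minutes, day2)
print(f"2S-KS statistic={stat2:.3f}, p={p2:.3f}")
```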

  16. Evaluation of radiographers’ mammography screen-reading accuracy in Australia

    International Nuclear Information System (INIS)

    Debono, Josephine C; Poulos, Ann E; Houssami, Nehmat; Turner, Robin M; Boyages, John

    2015-01-01

    This study aimed to evaluate the accuracy of radiographers’ screen-reading mammograms. Currently, radiologist workforce shortages may be compromising the BreastScreen Australia screening program goal to detect early breast cancer. The solution to a similar problem in the United Kingdom has successfully encouraged radiographers to take on the role as one of two screen-readers. Prior to consideration of this strategy in Australia, educational and experiential differences between radiographers in the United Kingdom and Australia emphasise the need for an investigation of Australian radiographers’ screen-reading accuracy. Ten radiographers employed by the Westmead Breast Cancer Institute with a range of radiographic (median = 28 years), mammographic (median = 13 years) and BreastScreen (median = 8 years) experience were recruited to blindly and independently screen-read an image test set of 500 mammograms, without formal training. The radiographers indicated the presence of an abnormality using BI-RADS®. Accuracy was determined by comparison with the gold standard of known outcomes of pathology results, interval matching and client 6-year follow-up. Individual sensitivity and specificity levels ranged between 76.0% and 92.0%, and 74.8% and 96.2% respectively. Pooled screen-reader accuracy across the radiographers estimated sensitivity as 82.2% and specificity as 89.5%. Areas under the reading operating characteristic curve ranged between 0.842 and 0.923. This sample of radiographers in an Australian setting have adequate accuracy levels when screen-reading mammograms. It is expected that with formal screen-reading training, accuracy levels will improve, and with support, radiographers have the potential to be one of the two screen-readers in the BreastScreen Australia program, contributing to timeliness and improved program outcomes

  17. Evaluation of radiographers’ mammography screen-reading accuracy in Australia

    Energy Technology Data Exchange (ETDEWEB)

    Debono, Josephine C, E-mail: josephine.debono@bci.org.au [Westmead Breast Cancer Institute, Westmead, New South Wales (Australia); Poulos, Ann E [Discipline of Medical Radiation Sciences, Faculty of Health Sciences, University of Sydney, Lidcombe, New South Wales (Australia); Houssami, Nehmat [Screening and Test Evaluation Program, School of Public Health (A27), Sydney Medical School, University of Sydney, Sydney, New South Wales (Australia); Turner, Robin M [School of Public Health and Community Medicine, University of New South Wales, Sydney, New South Wales (Australia); Boyages, John [Macquarie University Cancer Institute, Macquarie University Hospital, Australian School of Advanced Medicine, Macquarie University, Sydney, New South Wales (Australia); Westmead Breast Cancer Institute, Westmead, New South Wales (Australia)

    2015-03-15

    This study aimed to evaluate the accuracy of radiographers’ screen-reading mammograms. Currently, radiologist workforce shortages may be compromising the BreastScreen Australia screening program goal to detect early breast cancer. The solution to a similar problem in the United Kingdom has successfully encouraged radiographers to take on the role as one of two screen-readers. Prior to consideration of this strategy in Australia, educational and experiential differences between radiographers in the United Kingdom and Australia emphasise the need for an investigation of Australian radiographers’ screen-reading accuracy. Ten radiographers employed by the Westmead Breast Cancer Institute with a range of radiographic (median = 28 years), mammographic (median = 13 years) and BreastScreen (median = 8 years) experience were recruited to blindly and independently screen-read an image test set of 500 mammograms, without formal training. The radiographers indicated the presence of an abnormality using BI-RADS®. Accuracy was determined by comparison with the gold standard of known outcomes of pathology results, interval matching and client 6-year follow-up. Individual sensitivity and specificity levels ranged between 76.0% and 92.0%, and 74.8% and 96.2% respectively. Pooled screen-reader accuracy across the radiographers estimated sensitivity as 82.2% and specificity as 89.5%. Areas under the reading operating characteristic curve ranged between 0.842 and 0.923. This sample of radiographers in an Australian setting have adequate accuracy levels when screen-reading mammograms. It is expected that with formal screen-reading training, accuracy levels will improve, and with support, radiographers have the potential to be one of the two screen-readers in the BreastScreen Australia program, contributing to timeliness and improved program outcomes.
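
    A minimal sketch of how per-reader accuracy and the area under the reading ROC curve could be computed from rating data; the BI-RADS-style scores, ground-truth labels and recall threshold below are invented placeholders, not the study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical reads: BI-RADS-style scores (0-5) given by one radiographer
# to a small test set, and the ground-truth outcome for each case.
scores = np.array([1, 4, 2, 5, 0, 3, 4, 3, 2, 5])
truth  = np.array([0, 1, 0, 1, 0, 1, 1, 0, 0, 1])

recall_positive = scores >= 3          # an assumed threshold for "recall"
tp = np.sum(recall_positive & (truth == 1))
fn = np.sum(~recall_positive & (truth == 1))
tn = np.sum(~recall_positive & (truth == 0))
fp = np.sum(recall_positive & (truth == 0))

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(truth, scores)     # area under the reading ROC curve
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, AUC={auc:.3f}")
```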

  18. Using internet search engines and library catalogs to locate toxicology information.

    Science.gov (United States)

    Wukovitz, L D

    2001-01-12

    The increasing importance of the Internet demands that toxicologists become acquainted with its resources. To find information, researchers must be able to effectively use Internet search engines, directories, subject-oriented websites, and library catalogs. The article will explain these resources, explore their benefits and weaknesses, and identify skills that help the researcher to improve search results and critically evaluate sources for their relevancy, validity, accuracy, and timeliness.

  19. Inequity in Timeliness of MMR Vaccination in Children Living in the Suburbs of Iranian Cities.

    Science.gov (United States)

    Jadidi, Rahmatollah; Mohammadbeigi, Abolfazl; Mohammadsalehi, Narges; Ansari, Hossein; Ghaderi, Ebrahim

    2015-06-01

    High immunization coverage is one indicator of good health-system performance, but timely vaccination is another indicator, one associated with the protective effect of vaccines. The present study aimed at evaluating inequity in timely vaccination, with a focus on inequities in timeliness by gender, birth order, parents' education and place of residence (rural or urban). A historical cohort study was conducted on children of 24-47 months of age who were living in the suburbs of big cities in Iran and were selected through a stratified proportional sampling method. Only children who had vaccine cards - i.e. 3610 children - were included in the data analysis. The primary outcome was age-appropriate vaccination with MMR1. Inequity was measured by the Concentration Index (C) and the Relative Index of Inequity (RII). Inequity indexes were calculated according to the mother's and father's education, child birth order, child's sex and the family's place of residence at the time of vaccination. The overall on-time MMR1 vaccination rate was 70% for Iranians and 54.4% for Non-Iranians. The C index of mother's and father's education for timely MMR vaccination was 0.023 and 0.029 in Iranian children, and 0.044 and 0.019 for non-Iranians, respectively. The C index according to birth order was 0.025 in Iranians and 0.078 in Non-Iranians. On-time vaccination was 0.36% and 0.29% higher in children who lived in cities than in rural areas, and 0.12% and 0.14% higher in male children than in female children, for Iranians and Non-Iranians, respectively. Timeliness of MMR vaccination in Iranian children is higher than that in non-Iranian children. Despite differences in timely vaccination rates among Iranian and Non-Iranian children, no evidence of inequity was observed with respect to parents' education, birth order, gender or place of residence. So, increasing timeliness of vaccination for enhancing the protective effect
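
    The concentration index used in this kind of equity analysis can be computed from individual-level data with the covariance formula C = 2 cov(h, r) / mean(h), where h is the health variable (timely vaccination, 0/1) and r is the fractional rank of the socio-economic variable; the sketch below uses invented values, not the survey data.

```python
import numpy as np

# Hypothetical data: years of maternal education and whether the child
# received MMR1 on time (1) or late/never (0). Not the survey data.
education = np.array([0, 2, 4, 6, 6, 9, 10, 12, 14, 16])
timely    = np.array([0, 1, 0, 1, 1, 1, 0, 1, 1, 1])

def concentration_index(health, ses):
    order = np.argsort(ses, kind="stable")        # rank by socio-economic variable
    h = health[order].astype(float)
    n = h.size
    frac_rank = (np.arange(1, n + 1) - 0.5) / n   # fractional rank in [0, 1]
    return 2.0 * np.cov(h, frac_rank, bias=True)[0, 1] / h.mean()

print("concentration index:", round(concentration_index(timely, education), 3))
```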

  20. Predictors of Uptake and Timeliness of Newly Introduced Pneumococcal and Rotavirus Vaccines, and of Measles Vaccine in Rural Malawi: A Population Cohort Study.

    Directory of Open Access Journals (Sweden)

    Hazzie Mvula

    Full Text Available Malawi introduced pneumococcal conjugate vaccine (PCV13) and monovalent rotavirus vaccine (RV1) in 2011 and 2012 respectively, and is planning the introduction of a second-dose measles vaccine (MV). We assessed predictors of availability, uptake and timeliness of these vaccines in a rural Malawian setting. Commencing on the first date of PCV13 eligibility we conducted a prospective population-based birth cohort study of 2,616 children under demographic surveillance in Karonga District, northern Malawi who were eligible for PCV13, or from the date of RV1 introduction both PCV13 and RV1. Potential predictors of vaccine uptake and timeliness for PCV13, RV1 and MV were analysed respectively using robust Poisson and Cox regression. Vaccine coverage was high for all vaccines, ranging from 86.9% for RV1 dose 2 to 95.4% for PCV13 dose 1. Median time delay for PCV13 dose 1 was 17 days (IQR 7-36), 19 days (IQR 8-36) for RV1 dose 1 and 20 days (IQR 3-46) for MV. Infants born to lower educated or farming mothers and those living further away from the road or clinic were at greater risk of being not fully vaccinated and being vaccinated late. Delays in vaccination were also associated with non-facility birth. Vaccine stock-outs resulted in both a delay in vaccine timeliness and in a decrease in completion of schedule. Despite high vaccination coverage in this setting, delays in vaccination were common. We identified programmatic and socio-demographic risk factors for uptake and timeliness of vaccination. Understanding who remains most vulnerable to be unvaccinated allows for focussed delivery thereby increasing population coverage and maximising the equitable benefits of universal vaccination programmes.

  1. The Impact of System Level Factors on Treatment Timeliness: Utilizing the Toyota Production System to Implement Direct Intake Scheduling in a Semi-Rural Community Mental Health Clinic

    Science.gov (United States)

    Weaver, A.; Greeno, C.G.; Goughler, D.H.; Yarzebinski, K.; Zimmerman, T.; Anderson, C.

    2013-01-01

    This study examined the effect of using the Toyota Production System (TPS) to change intake procedures on treatment timeliness within a semi-rural community mental health clinic. One hundred randomly selected cases opened the year before the change and one hundred randomly selected cases opened the year after the change were reviewed. An analysis of covariance (ANCOVA) demonstrated that changing intake procedures significantly decreased the number of days consumers waited for appointments (F(1,160)=4.9; p=.03) from an average of 11 days to 8 days. The pattern of difference on treatment timeliness was significantly different between adult and child programs (F(1,160)=4.2; p=.04), with children waiting an average of 4 days longer than adults for appointments. Findings suggest that small system level changes may elicit important changes and that TPS offers a valuable model to improve processes within community mental health settings. Results also indicate that different factors drive adult and children’s treatment timeliness. PMID:23576137

  2. The impact of system level factors on treatment timeliness: utilizing the Toyota Production System to implement direct intake scheduling in a semi-rural community mental health clinic.

    Science.gov (United States)

    Weaver, Addie; Greeno, Catherine G; Goughler, Donald H; Yarzebinski, Kathleen; Zimmerman, Tina; Anderson, Carol

    2013-07-01

    This study examined the effect of using the Toyota Production System (TPS) to change intake procedures on treatment timeliness within a semi-rural community mental health clinic. One hundred randomly selected cases opened the year before the change and 100 randomly selected cases opened the year after the change were reviewed. An analysis of covariance demonstrated that changing intake procedures significantly decreased the number of days consumers waited for appointments (F(1,160) = 4.9; p = .03) from an average of 11 to 8 days. The pattern of difference on treatment timeliness was significantly different between adult and child programs (F(1,160) = 4.2; p = .04), with children waiting an average of 4 days longer than adults for appointments. Findings suggest that small system level changes may elicit important changes and that TPS offers a valuable model to improve processes within community mental health settings. Results also indicate that different factors drive adult and children's treatment timeliness.

  3. Vaccine Hesitancy Among Caregivers and Association with Childhood Vaccination Timeliness in Addis Ababa, Ethiopia.

    Science.gov (United States)

    Masters, Nina B; Tefera, Yemesrach A; Wagner, Abram L; Boulton, Matthew L

    2018-05-24

    Vaccines are vital to reducing childhood mortality, and prevent an estimated 2 to 3 million deaths annually which disproportionately occur in the developing world. Overall vaccine coverage is typically used as a metric to evaluate the adequacy of vaccine program performance, though it does not account for untimely administration, which may unnecessarily prolong children's susceptibility to disease. This study explored a hypothesized positive association between increasing vaccine hesitancy and untimeliness of immunizations administered under the Expanded Program on Immunization (EPI) in Addis Ababa, Ethiopia. This cross-sectional survey employed a multistage sampling design, randomly selecting one health center within five sub-cities of Addis Ababa. Caregivers of 3 to 12-month-old infants completed a questionnaire on vaccine hesitancy, and their infants' vaccination cards were examined to assess timeliness of received vaccinations. The sample comprised 350 caregivers. Overall, 82.3% of the surveyed children received all recommended vaccines, although only 55.9% of these vaccinations were timely. Few caregivers (3.4%) reported ever hesitating and 3.7% reported ever refusing a vaccine for their child. Vaccine hesitancy significantly increased the odds of untimely vaccination (AOR 1.94, 95% CI: 1.02, 3.71) in the adjusted analysis. This study found high vaccine coverage among a sample of 350 young children in Addis Ababa, though only half received all recommended vaccines on time. High vaccine hesitancy was strongly associated with infants' untimely vaccination, indicating that increased efforts to educate community members and providers about vaccines may have a beneficial impact on vaccine timeliness in Addis Ababa.

  4. Improving Timeliness of Winter Wheat Production Forecast in United States of America, Ukraine and China Using MODIS Data and NCAR Growing Degree Day

    Science.gov (United States)

    Vermote, E.; Franch, B.; Becker-Reshef, I.; Claverie, M.; Huang, J.; Zhang, J.; Sobrino, J. A.

    2014-12-01

    Wheat is the most important cereal crop traded on international markets and winter wheat constitutes approximately 80% of global wheat production. Thus, accurate and timely forecasts of its production are critical for informing agricultural policies and investments, as well as increasing market efficiency and stability. Becker-Reshef et al. (2010) used an empirical generalized model for forecasting winter wheat production. Their approach combined BRDF-corrected daily surface reflectance from Moderate resolution Imaging Spectroradiometer (MODIS) Climate Modeling Grid (CMG) with detailed official crop statistics and crop type masks. It is based on the relationship between the Normalized Difference Vegetation Index (NDVI) at the peak of the growing season, percent wheat within the CMG pixel, and the final yields. This method predicts the yield approximately one month to six weeks prior to harvest. In this study, we include the Growing Degree Day (GDD) information extracted from NCEP/NCAR reanalysis data in order to improve the winter wheat production forecast by increasing the timeliness of the forecasts while conserving the accuracy of the original model. We apply this modified model to three major wheat-producing countries: United States of America, Ukraine and China from 2001 to 2012. We show that a reliable forecast can be made between one month to a month and a half prior to the peak NDVI (meaning two months to two and a half months prior to harvest) while conserving an accuracy of 10% in the production forecast.
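
    The published forecasting model is not reproduced here; as a simplified stand-in, the sketch below fits final yield against peak-season NDVI with a weighted least-squares regression in which the percent-wheat fraction of each pixel acts as the weight. All values are synthetic placeholders, not the calibration data used by the authors.

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic calibration data: peak-season NDVI per CMG pixel, fraction of the
# pixel planted to winter wheat, and official final yield (t/ha). Not real data.
peak_ndvi = rng.uniform(0.4, 0.8, size=50)
pct_wheat = rng.uniform(0.2, 0.9, size=50)
yield_tha = 1.5 + 6.0 * peak_ndvi + rng.normal(0, 0.3, size=50)

# Weighted least squares: pixels with more wheat contribute more to the fit.
X = np.column_stack([np.ones_like(peak_ndvi), peak_ndvi])
W = np.diag(pct_wheat)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ yield_tha)

forecast = beta[0] + beta[1] * 0.72   # forecast for a pixel with peak NDVI of 0.72
print(f"intercept={beta[0]:.2f}, slope={beta[1]:.2f}, forecast yield={forecast:.2f} t/ha")
```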

  5. The Challenge To Tactical Reconnaissance: Timeliness Through Technology

    Science.gov (United States)

    Stromfors, Richard D.

    1984-12-01

    As you have no doubt gathered from Mr. Henkel's introduction, I have spent over 20 years of my Air Force career involved in the reconnaissance mission either as a tactical reconnaissance pilot, as a tactical reconnaissance inspector, as a writer and speaker on that subject while attending the Air Force Professional Military Education Schools, and currently as the Air Force's operational manager for reconnaissance aircraft. In all of those positions, I've been challenged many times over with what appeared, at first, to be insurmountable problems that upon closer examination weren't irresolvable after all. All of these problems pale, however, when viewed side-by-side with the one challenge that has faced me since I began my military career and, in fact, faces all of us as I talk with you today. That one challenge is the problem of timeliness. Better put: "Getting information to our customers firstest with the mostest." Together we must develop better platforms and sensors to cure this age-old "Achilles heel" in the reconnaissance cycle. Despite all of our best intentions, despite all of the emerging technologies that will be available, and despite all of the dollars that we've thrown at research and development, we in the reconnaissance business still haven't done a good job in this area. We must do better.

  6. The relevance of accuracy of heartbeat perception in noncardiac and cardiac chest pain.

    Science.gov (United States)

    Schroeder, Stefanie; Gerlach, Alexander L; Achenbach, Stephan; Martin, Alexandra

    2015-04-01

    The development and course of noncardiac chest pain are assumed to be influenced by interoceptive processes. It was investigated whether heartbeat perception was enhanced in patients suffering from noncardiac chest pain and to what degree it was associated with self-reported cognitive-perceptual features and chest pain characteristics. A total of 42 patients with noncardiac chest pain (NCCP), 35 patients with cardiac chest pain, and 52 healthy controls were recruited. Heartbeat perception was assessed using the Schandry task and a modified Brener-Kluvitse task. Self-report measures assessed anxiety sensitivity, somatosensory amplification, heart-focused anxiety, and chest pain characteristics. Heartbeat perception was not more accurate in patients with NCCP, compared to patients with cardiac chest pain and healthy controls. However, in patients with NCCP, the error score (Schandry task) was significantly associated with stronger chest pain impairment, and the response bias (Brener-Kluvitse task) was associated with lower chest pain intensity. Against assumptions of current etiological models, heartbeat perception was not enhanced in patients with NCCP. Chest pain characteristics, and particularly their appraisal as threatening, might be more relevant to NCCP than the perceptual accuracy of cardiac sensations and should be a focus of psychological interventions. However, associations with chest pain impairment suggest that cardiac interoception influences the course of NCCP.

  7. Timeliness of vaccination with measles-containing vaccine and barriers to vaccination among migrant children in East China.

    Directory of Open Access Journals (Sweden)

    Yu Hu

    Full Text Available BACKGROUND: The reported coverage rates of the first and second doses of measles-containing vaccine (MCV) are almost 95% in China, while measles cases are constantly being reported. This study evaluated the vaccine coverage, timeliness, and barriers to immunization with MCV1 and MCV2 in children aged 8-48 months. METHODS: We assessed 718 children aged 8-48 months, of whom 499 were aged 18-48 months, in September 2011. Face-to-face interviews were administered to children's mothers to estimate MCV1 and MCV2 coverage rates, their timeliness and barriers to vaccine uptake. RESULTS: The coverage rates were 76.9% for MCV1 and 44.7% for MCV2 on average. Only 47.5% of surveyed children received MCV1 on time; vaccination was postponed by up to one month beyond the stipulated age of 8 months. Even if coverage thus improves with time, postponed vaccination adds to the pool of unprotected children in the population. Being unaware of the necessity for vaccination and its schedule, misunderstanding of vaccine side-effects, and the child being sick during the recommended vaccination period were significant factors preventing both MCV1 and MCV2 vaccination. Having multiple children, mother's education level, household income and children with working mothers were significantly associated with delayed or missed MCV1 immunization. CONCLUSIONS: To avoid future outbreaks, it is crucial to attain high coverage levels through timely vaccination; thus, accurate information should be delivered and a systematic approach should be targeted to high-risk groups.

  8. Novel combined patient instruction and discharge summary tool improves timeliness of documentation and outpatient provider satisfaction

    Directory of Open Access Journals (Sweden)

    Meredith Gilliam

    2017-03-01

    Full Text Available Background: Incomplete or delayed access to discharge information by outpatient providers and patients contributes to discontinuity of care and poor outcomes. Objective: To evaluate the effect of a new electronic discharge summary tool on the timeliness of documentation and communication with outpatient providers. Methods: In June 2012, we implemented an electronic discharge summary tool at our 145-bed university-affiliated Veterans Affairs hospital. The tool facilitates completion of a comprehensive discharge summary note that is available for patients and outpatient medical providers at the time of hospital discharge. Discharge summary note availability, outpatient provider satisfaction, and time between the decision to discharge a patient and discharge note completion were all evaluated before and after implementation of the tool. Results: The percentage of discharge summary notes completed by the time of first post-discharge clinical contact improved from 43% in February 2012 to 100% in September 2012 and was maintained at 100% in 2014. A survey of 22 outpatient providers showed that 90% preferred the new summary and 86% found it comprehensive. Despite increasing required documentation, the time required to discharge a patient, from physician decision to discharge note completion, improved from 5.6 h in 2010 to 4.1 h in 2012 (p = 0.04), and to 2.8 h in 2015 (p < 0.001). Conclusion: The implementation of a novel discharge summary tool improved the timeliness and comprehensiveness of discharge information as needed for the delivery of appropriate, high-quality follow-up care, without adversely affecting the efficiency of the discharge process.

  9. A Sentiment-Enhanced Hybrid Recommender System for Movie Recommendation: A Big Data Analytics Framework

    OpenAIRE

    Wang, Yibo; Wang, Mingming; Xu, Wei

    2018-01-01

    Movie recommendation in mobile environment is critically important for mobile users. It carries out comprehensive aggregation of user’s preferences, reviews, and emotions to help them find suitable movies conveniently. However, it requires both accuracy and timeliness. In this paper, a movie recommendation framework based on a hybrid recommendation model and sentiment analysis on Spark platform is proposed to improve the accuracy and timeliness of mobile movie recommender system. In the propo...

  10. The Effect of Management Accounting Information System Characteristics: Broad Scope, Timeliness, Aggregated, and Integrated on SME Managerial Performance (A Study of SMEs in Wedoro Village, Kab. Sidoarjo)

    Directory of Open Access Journals (Sweden)

    Susi Handayani

    2014-04-01

    Full Text Available SMEs need reliable information systems and a competent entrepreneurial personality, both of which affect managerial performance. According to Chenhall and Morris (1986), a reliable accounting information system is one that has the characteristics of broad scope, timeliness, aggregation and integration. Broad-scope management accounting information covers focus, quantification, and time horizon. The timeliness dimension has two sub-dimensions, namely frequency of reporting and speed of reporting. The aggregation dimension summarizes information by function, time period, and decision model. Integrated information reflects the coordination between segments and other subunits within the organization. This research is quantitative and analyzes causal relationships, that is, how one variable affects changes in other variables. To analyze the data, this study uses Structural Equation Modeling (SEM) with the Partial Least Squares (PLS) approach; data processing was carried out with the WarpPLS software. The results showed that broad-scope, timely, integrated and aggregated management accounting information affects managerial performance, which is measured using a self-rating instrument reflected in four indicators, namely increased revenue, cost savings, improved customer satisfaction and increased asset utilization. This shows that although SMEs are not large businesses, they still require a wide range of timely, integrated and comprehensive information that can assist managers in making informed decisions, improving managerial performance related to cost efficiency while still considering customer satisfaction, and thus increasing the income of the SMEs under conditions of environmental uncertainty. Keywords: SME, SIAM characteristics, Managerial Performance

  11. Diagnostic timeliness in adolescents and young adults with cancer: a cross-sectional analysis of the BRIGHTLIGHT cohort.

    Science.gov (United States)

    Herbert, Annie; Lyratzopoulos, Georgios; Whelan, Jeremy; Taylor, Rachel M; Barber, Julie; Gibson, Faith; Fern, Lorna A

    2018-03-01

    Adolescents and young adults (AYAs) are thought to experience prolonged intervals to cancer diagnosis, but evidence quantifying this hypothesis and identifying high-risk patient subgroups is insufficient. We aimed to investigate diagnostic timeliness in a cohort of AYAs with incident cancers and to identify factors associated with variation in timeliness. We did a cross-sectional analysis of the BRIGHTLIGHT cohort, which included AYAs aged 12-24 years recruited within an average of 6 months from new primary cancer diagnosis from 96 National Health Service hospitals across England between July 1, 2012, and April 30, 2015. Participants completed structured, face-to-face interviews to provide information on their diagnostic experience (eg, month and year of symptom onset, number of consultations before referral to specialist care); demographic information was extracted from case report forms and date of diagnosis and cancer type from the national cancer registry. We analysed these data to assess patient interval (time from symptom onset to first presentation to a general practitioner [GP] or emergency department), the number of prereferral GP consultations, and the symptom onset-to-diagnosis interval (time from symptom onset to diagnosis) by patient characteristic and cancer site, and examined associations using multivariable regression models. Of 1114 participants recruited to the BRIGHTLIGHT cohort, 830 completed a face-to-face interview. Among participants with available information, 204 (27%) of 748 had a patient interval of more than a month and 242 (35%) of 701 consulting a general practitioner had three or more prereferral consultations. The median symptom onset-to-diagnosis interval was 62 days (IQR 29-153). Compared with male AYAs, female AYAs were more likely to have three or more consultations (adjusted odds ratio [OR] 1·6 [95% CI 1·1-2·3], p=0·0093) and longer median symptom onset-to-diagnosis intervals (adjusted median interval longer by 24 days [95

  12. An email-based intervention to improve the number and timeliness of letters sent from the hospital outpatient clinic to the general practitioner : A pair-randomized controlled trial

    NARCIS (Netherlands)

    Medlock, Stephanie; Parlevliet, Juliette L.; Sent, Danielle; Eslami, Saeid; Askari, Marjan; Arts, Derk L.; Hoekstra, Joost B.; de Rooij, Sophia E.; Abu-Hanna, Ameen

    2017-01-01

    Objective: Letters from the hospital to the general practitioner are important for maintaining continuity of care. Although doctors feel letters are important, they are often not written on time. To improve the number and timeliness of letters sent from the hospital outpatient department to the

  13. An email-based intervention to improve the number and timeliness of letters sent from the hospital outpatient clinic to the general practitioner: A pair-randomized controlled trial

    NARCIS (Netherlands)

    Medlock, Stephanie; Parlevliet, Juliette L.; Sent, Danielle; Eslami, Saeid; Askari, Marjan; Arts, Derk L.; Hoekstra, Joost B.; de Rooij, Sophia E.; Abu-Hanna, Ameen

    2017-01-01

    Letters from the hospital to the general practitioner are important for maintaining continuity of care. Although doctors feel letters are important, they are often not written on time. To improve the number and timeliness of letters sent from the hospital outpatient department to the general

  14. Completeness and timeliness of Salmonella notifications in Ireland in 2008: a cross sectional study

    Directory of Open Access Journals (Sweden)

    Cormican Martin

    2010-09-01

    Full Text Available Abstract Background In Ireland, salmonellosis is the second most common cause of bacterial gastroenteritis. A new electronic system for reporting (Computerised Infectious Disease Reporting - CIDR) of Salmonella cases was established in 2004. It collates clinical (and/or laboratory) data on confirmed and probable Salmonella cases. The authors studied the completeness and the timeliness of Salmonella notifications in 2008. Methods This analysis was based upon laboratory confirmed cases of salmonella gastroenteritis. Using data contained in CIDR, we examined completeness for certain non-mandatory fields (country of infection, date of onset of illness, organism, outcome, patient type, and ethnicity). We matched the CIDR data with the dataset provided by the national Salmonella reference laboratory (NSRL) to which all Salmonella spp. isolates are referred for definitive typing. We calculated the main median time intervals in the flow of events of the notification process. Results In total, 416 laboratory confirmed Salmonella cases were captured by the national surveillance system and the NSRL and were included in the analysis. Completeness of non-mandatory fields varied considerably. Organism was the most complete field (98.8%), ethnicity the least (11%). The median time interval between sample collection (first contact of the patient with the healthcare professional) to the first notification to the regional Department of Public Health (either a clinical or a laboratory notification) was 6 days (Interquartile 4-7 days). The median total identification time interval, time between sample collections to availability of serotyping and phage-typing results on the system, was 25 days (Interquartile 19-32 days). Timeliness varied with respect to Salmonella species. Clinical notifications occurred more rapidly than laboratory notifications. Conclusions Further feedback and education should be given to health care professionals to improve completeness of reporting of

  15. Completeness and timeliness of Salmonella notifications in Ireland in 2008: a cross sectional study

    LENUS (Irish Health Repository)

    Nicolay, Nathalie

    2010-09-22

    Abstract Background In Ireland, salmonellosis is the second most common cause of bacterial gastroenteritis. A new electronic system for reporting (Computerised Infectious Disease Reporting - CIDR) of Salmonella cases was established in 2004. It collates clinical (and/or laboratory) data on confirmed and probable Salmonella cases. The authors studied the completeness and the timeliness of Salmonella notifications in 2008. Methods This analysis was based upon laboratory confirmed cases of salmonella gastroenteritis. Using data contained in CIDR, we examined completeness for certain non-mandatory fields (country of infection, date of onset of illness, organism, outcome, patient type, and ethnicity). We matched the CIDR data with the dataset provided by the national Salmonella reference laboratory (NSRL) to which all Salmonella spp. isolates are referred for definitive typing. We calculated the main median time intervals in the flow of events of the notification process. Results In total, 416 laboratory confirmed Salmonella cases were captured by the national surveillance system and the NSRL and were included in the analysis. Completeness of non mandatory fields varied considerably. Organism was the most complete field (98.8%), ethnicity the least (11%). The median time interval between sample collection (first contact of the patient with the healthcare professional) to the first notification to the regional Department of Public Health (either a clinical or a laboratory notification) was 6 days (Interquartile 4-7 days). The median total identification time interval, time between sample collections to availability of serotyping and phage-typing results on the system was 25 days (Interquartile 19-32 days). Timeliness varied with respect to Salmonella species. Clinical notifications occurred more rapidly than laboratory notifications. Conclusions Further feedback and education should be given to health care professionals to improve completeness of reporting of non
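
    The timeliness figures reported here are median intervals between dated events in the notification chain; a minimal sketch with invented dates shows one way such intervals could be summarised from a case line list.

```python
import pandas as pd

# Hypothetical case line list (dates invented): sample collection, first
# notification to the regional Department of Public Health, and final typing.
cases = pd.DataFrame({
    "collected": pd.to_datetime(["2008-03-01", "2008-03-05", "2008-04-02", "2008-05-11"]),
    "notified":  pd.to_datetime(["2008-03-06", "2008-03-12", "2008-04-08", "2008-05-18"]),
    "typed":     pd.to_datetime(["2008-03-24", "2008-04-02", "2008-04-30", "2008-06-10"]),
})

notif_delay = (cases["notified"] - cases["collected"]).dt.days
typing_delay = (cases["typed"] - cases["collected"]).dt.days

for name, delay in [("collection-to-notification", notif_delay),
                    ("collection-to-typing", typing_delay)]:
    q1, med, q3 = delay.quantile([0.25, 0.5, 0.75])
    print(f"{name}: median {med:.0f} days (IQR {q1:.0f}-{q3:.0f})")
```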

  16. Mechanized farming in the humid tropics with special reference to soil tillage, workability and timeliness of farm operations: a case study for the Zanderij area of Suriname

    NARCIS (Netherlands)

    Goense, D.

    1987-01-01

    The reported investigations concern aspects of mechanized farming for the production of rainfed crops on the loamy soils of the Zanderij formation in Suriname and in particular, the effect of tillage on crop yield and soil properties, workability of field operations and timeliness of field

  17. Assessing effects of the e-Chasqui laboratory information system on accuracy and timeliness of bacteriology results in the Peruvian tuberculosis program.

    Science.gov (United States)

    Blaya, Joaquin A; Shin, Sonya S; Yagui, Martin J A; Yale, Gloria; Suarez, Carmen; Asencios, Luis; Fraser, Hamish

    2007-10-11

    We created a web-based laboratory information system, e-Chasqui, to connect public laboratories to health centers to improve communication and analysis. After one year, we performed a pre- and post-assessment of communication delays and found that e-Chasqui maintained the average delay but eliminated delays of over 60 days. Adding digital verification maintained the average delay, but should increase accuracy. We are currently performing a randomized evaluation of the impacts of e-Chasqui.

  18. Timeliness of contact tracing among flight passengers for influenza A/H1N1 2009

    Directory of Open Access Journals (Sweden)

    Swaan Corien M

    2011-12-01

    Full Text Available Abstract Background During the initial containment phase of influenza A/H1N1 2009, close contacts of cases were traced to provide antiviral prophylaxis within 48 h after exposure and to alert them on signs of disease for early diagnosis and treatment. Passengers seated on the same row, two rows in front or behind a patient infectious for influenza, during a flight of ≥ 4 h were considered close contacts. This study evaluates the timeliness of flight-contact tracing (CT) as performed following national and international CT requests addressed to the Center of Infectious Disease Control (CIb/RIVM), and implemented by the Municipal Health Services of Schiphol Airport. Methods Elapsed days between date of flight arrival and the date passenger lists became available (contact details identified - CI) was used as proxy for timeliness of CT. In a retrospective study, dates of flight arrival, onset of illness, laboratory diagnosis, CT request and identification of contacts details through passenger lists, following CT requests to the RIVM for flights landed at Schiphol Airport were collected and analyzed. Results 24 requests for CT were identified. Three of these were declined as over 4 days had elapsed since flight arrival. In 17 out of 21 requests, contact details were obtained within 7 days after arrival (81%). The average delay between arrival and CI was 3.9 days (range 2-7), mainly caused by delay in diagnosis of the index patient after arrival (2.6 days). In four flights (19%), contacts were not identified or only after > 7 days. CI involving Dutch airlines was faster than non-Dutch airlines (P Conclusion CT for influenza A/H1N1 2009 among flight passengers was not successful for timely provision of prophylaxis. CT had little additional value for alerting passengers for disease symptoms, as this information already was provided during and after the flight. Public health authorities should take into account patient delays in seeking medical advice and

  19. Timeliness and completeness of measles vaccination among children in rural areas of Guangxi, China: A stratified three-stage cluster survey.

    Science.gov (United States)

    Tang, Xianyan; Geater, Alan; McNeil, Edward; Zhou, Hongxia; Deng, Qiuyun; Dong, Aihu

    2017-07-01

    Large-scale outbreaks of measles occurred in 2013 and 2014 in rural Guangxi, a region in Southwest China with high coverage for measles-containing vaccine (MCV). This study aimed to estimate the timely vaccination coverage, the timely-and-complete vaccination coverage, and the median delay period for MCV among children aged 18-54 months in rural Guangxi. Based on quartiles of measles incidence during 2011-2013, a stratified three-stage cluster survey was conducted from June through August 2015. Using weighted estimation and finite population correction, vaccination coverage and 95% confidence intervals (CIs) were calculated. Weighted Kaplan-Meier analyses were used to estimate the median delay periods for the first (MCV1) and second (MCV2) doses of the vaccine. A total of 1216 children were surveyed. The timely vaccination coverage rate was 58.4% (95% CI, 54.9%-62.0%) for MCV1, and 76.9% (95% CI, 73.6%-80.0%) for MCV2. The timely-and-complete vaccination coverage rate was 47.4% (95% CI, 44.0%-51.0%). The median delay period was 32 (95% CI, 27-38) days for MCV1, and 159 (95% CI, 118-195) days for MCV2. The timeliness and completeness of measles vaccination was low, and the median delay period was long among children in rural Guangxi. Incorporating the timeliness and completeness into official routine vaccination coverage statistics may help appraise the coverage of vaccination in China. Copyright © 2017 The Authors. Production and hosting by Elsevier B.V. All rights reserved.

  20. Timeliness and completeness of measles vaccination among children in rural areas of Guangxi, China: A stratified three-stage cluster survey

    Directory of Open Access Journals (Sweden)

    Xianyan Tang

    2017-07-01

    Full Text Available Background: Large-scale outbreaks of measles occurred in 2013 and 2014 in rural Guangxi, a region in Southwest China with high coverage for measles-containing vaccine (MCV). This study aimed to estimate the timely vaccination coverage, the timely-and-complete vaccination coverage, and the median delay period for MCV among children aged 18–54 months in rural Guangxi. Methods: Based on quartiles of measles incidence during 2011–2013, a stratified three-stage cluster survey was conducted from June through August 2015. Using weighted estimation and finite population correction, vaccination coverage and 95% confidence intervals (CIs) were calculated. Weighted Kaplan–Meier analyses were used to estimate the median delay periods for the first (MCV1) and second (MCV2) doses of the vaccine. Results: A total of 1216 children were surveyed. The timely vaccination coverage rate was 58.4% (95% CI, 54.9%–62.0%) for MCV1, and 76.9% (95% CI, 73.6%–80.0%) for MCV2. The timely-and-complete vaccination coverage rate was 47.4% (95% CI, 44.0%–51.0%). The median delay period was 32 (95% CI, 27–38) days for MCV1, and 159 (95% CI, 118–195) days for MCV2. Conclusions: The timeliness and completeness of measles vaccination was low, and the median delay period was long among children in rural Guangxi. Incorporating the timeliness and completeness into official routine vaccination coverage statistics may help appraise the coverage of vaccination in China.
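
    The weighted Kaplan–Meier estimate of the median vaccination delay described in this record can be illustrated with a minimal Python sketch. The function, the example delays, vaccination indicators and design weights below are all hypothetical and are not taken from the survey; unvaccinated children are treated as right-censored observations.

        import numpy as np

        def weighted_km_median(delays, vaccinated, weights):
            """Weighted Kaplan-Meier estimate of the median delay (days).
            delays     : days from the recommended age to vaccination (or to the survey date if unvaccinated)
            vaccinated : 1 if the child was vaccinated (event observed), 0 if censored
            weights    : survey/design weights per child
            Returns the first time at which the survival curve drops to <= 0.5, or None."""
            delays, vaccinated, weights = map(np.asarray, (delays, vaccinated, weights))
            order = np.argsort(delays)
            delays, vaccinated, weights = delays[order], vaccinated[order], weights[order]

            at_risk = weights.sum()          # weighted number still unvaccinated
            survival = 1.0
            for t in np.unique(delays):
                mask = delays == t
                events = weights[mask & (vaccinated == 1)].sum()
                if at_risk > 0:
                    survival *= (1.0 - events / at_risk)
                if survival <= 0.5:
                    return t
                at_risk -= weights[mask].sum()
            return None

        # Hypothetical example: delays in days, with design weights from the cluster survey
        delays     = [10, 25, 32, 40, 60, 90, 120, 150]
        vaccinated = [1,  1,  1,  1,  1,  0,  1,   0]   # 0 = still unvaccinated (censored)
        weights    = [1.2, 0.8, 1.0, 1.1, 0.9, 1.0, 1.3, 0.7]
        print(weighted_km_median(delays, vaccinated, weights))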

  1. Physician peer group characteristics and timeliness of breast cancer surgery.

    Science.gov (United States)

    Bachand, Jacqueline; Soulos, Pamela R; Herrin, Jeph; Pollack, Craig E; Xu, Xiao; Ma, Xiaomei; Gross, Cary P

    2018-04-24

    Little is known about how the structure of interdisciplinary groups of physicians affects the timeliness of breast cancer surgery their patients receive. We used social network methods to examine variation in surgical delay across physician peer groups and the association of this delay with group characteristics. We used linked Surveillance, Epidemiology, and End Results-Medicare data to construct physician peer groups based on shared breast cancer patients. We used hierarchical generalized linear models to examine the association of three group characteristics, patient racial composition, provider density (the ratio of potential vs. actual connections between physicians), and provider transitivity (clustering of providers within groups), with delayed surgery. The study sample included 8338 women with breast cancer in 157 physician peer groups. Surgical delay varied widely across physician peer groups (interquartile range 28.2-50.0%). For every 10% increase in the percentage of black patients in a peer group, there was a 41% increase in the odds of delayed surgery for women in that peer group regardless of a patient's own race [odds ratio (OR) 1.41, 95% confidence interval (CI) 1.15-1.73]. Women in physician peer groups with the highest provider density were less likely to receive delayed surgery than those in physician peer groups with the lowest provider density (OR 0.65, 95% CI 0.44-0.98). We did not find an association between provider transitivity and delayed surgery. The likelihood of surgical delay varied substantially across physician peer groups and was associated with provider density and patient racial composition.
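
    The two network measures used in this record, provider density and transitivity (clustering), can be computed with standard graph-theoretic definitions. A minimal sketch using networkx follows; the toy physician network is hypothetical and is not derived from the SEER-Medicare data.

        import networkx as nx

        # Hypothetical physician peer group: nodes are physicians, edges indicate
        # that two physicians shared at least one breast cancer patient.
        G = nx.Graph()
        G.add_edges_from([
            ("surgeon_A", "oncologist_B"),
            ("surgeon_A", "radiologist_C"),
            ("oncologist_B", "radiologist_C"),
            ("surgeon_D", "oncologist_B"),
        ])

        # Density: realized edges divided by the number of possible edges.
        print("density     :", nx.density(G))
        # Transitivity: fraction of connected triples that close into triangles,
        # i.e. how strongly providers cluster within the group.
        print("transitivity:", nx.transitivity(G))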

  2. Inferring feature relevances from metric learning

    DEFF Research Database (Denmark)

    Schulz, Alexander; Mokbel, Bassam; Biehl, Michael

    2015-01-01

    Powerful metric learning algorithms have been proposed in the last years which do not only greatly enhance the accuracy of distance-based classifiers and nearest neighbor database retrieval, but which also enable the interpretability of these operations by assigning explicit relevance weights...

  3. Improving the timeliness and accuracy of injury severity data in road traffic accidents in an emerging economy setting.

    Science.gov (United States)

    Lam, Carlos; Chen, Chang-I; Chuang, Chia-Chang; Wu, Chia-Chieh; Yu, Shih-Hsiang; Chang, Kai-Kuo; Chiu, Wen-Ta

    2018-05-18

    Road traffic injuries (RTIs) are among the leading causes of injury and fatality worldwide. RTI casualties are continually increasing in Taiwan; however, because of a lack of an advanced method for classifying RTI severity data, as well as the fragmentation of data sources, road traffic safety and health agencies encounter difficulties in analyzing RTIs and their burden on the healthcare system and national resources. These difficulties lead to blind spots during policy-making for RTI prevention and control. After compiling classifications applied in various countries, we summarized data sources for RTI severity in Taiwan, through which we identified data fragmentation. Accordingly, we proposed a practical classification for RTI severity, as well as a feasible model for collecting and integrating these data nationwide. This model can provide timely relevant data recorded by medical professionals and is valuable to healthcare providers. The proposed model's pros and cons are also compared to those of other current models.

  4. Enabling multi-level relevance feedback on PubMed by integrating rank learning into DBMS.

    Science.gov (United States)

    Yu, Hwanjo; Kim, Taehoon; Oh, Jinoh; Ko, Ilhwan; Kim, Sungchul; Han, Wook-Shin

    2010-04-16

    Finding relevant articles from PubMed is challenging because it is hard to express the user's specific intention in the given query interface, and a keyword query typically retrieves a large number of results. Researchers have applied machine learning techniques to find relevant articles by ranking the articles according to a learned relevance function. However, the process of learning and ranking is usually done offline, without being integrated with the keyword queries, and users have to provide a large number of training documents to achieve reasonable learning accuracy. This paper proposes a novel multi-level relevance feedback system for PubMed, called RefMed, which supports both ad-hoc keyword queries and multi-level relevance feedback in real time on PubMed. RefMed supports multi-level relevance feedback by using the RankSVM as the learning method, and thus it achieves higher accuracy with less feedback. RefMed "tightly" integrates the RankSVM into the RDBMS to support both keyword queries and multi-level relevance feedback in real time; the tight coupling of the RankSVM and DBMS substantially improves the processing time. An efficient parameter selection method for the RankSVM is also proposed, which tunes the RankSVM parameter without performing validation. Thereby, RefMed achieves a high learning accuracy in real time without performing a validation process. RefMed is accessible at http://dm.postech.ac.kr/refmed. RefMed is the first multi-level relevance feedback system for PubMed, which achieves a high accuracy with less feedback. It effectively learns an accurate relevance function from the user's feedback and efficiently processes the function to return relevant articles in real time.
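
    The core idea behind RankSVM-style learning from multi-level relevance feedback is to convert graded judgments into pairwise preferences and train a linear classifier on feature differences. The sketch below illustrates that idea with scikit-learn on hypothetical article features; it is a simplification and not the RefMed implementation, which couples the RankSVM with the DBMS.

        import numpy as np
        from sklearn.svm import LinearSVC

        def pairwise_transform(X, relevance):
            """Turn multi-level relevance feedback into pairwise preference examples.
            For every pair (i, j) with relevance[i] > relevance[j], emit the difference
            vector X[i] - X[j] with label +1 (and the reverse with -1)."""
            diffs, labels = [], []
            for i in range(len(X)):
                for j in range(len(X)):
                    if relevance[i] > relevance[j]:
                        diffs.append(X[i] - X[j]); labels.append(1)
                        diffs.append(X[j] - X[i]); labels.append(-1)
            return np.array(diffs), np.array(labels)

        # Hypothetical article features (e.g. TF-IDF scores, citation counts)
        # and multi-level user feedback (0 = irrelevant ... 3 = highly relevant).
        X = np.array([[0.9, 0.1], [0.7, 0.3], [0.2, 0.8], [0.1, 0.9]])
        relevance = np.array([3, 2, 1, 0])

        X_pairs, y_pairs = pairwise_transform(X, relevance)
        ranker = LinearSVC(C=1.0).fit(X_pairs, y_pairs)

        # Score unseen articles: higher values should mean "more relevant".
        scores = X @ ranker.coef_.ravel()
        print(np.argsort(-scores))   # indices ordered from most to least relevant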

  5. Analytical Performance Requirements for Systems for Self-Monitoring of Blood Glucose With Focus on System Accuracy: Relevant Differences Among ISO 15197:2003, ISO 15197:2013, and Current FDA Recommendations.

    Science.gov (United States)

    Freckmann, Guido; Schmid, Christina; Baumstark, Annette; Rutschmann, Malte; Haug, Cornelia; Heinemann, Lutz

    2015-07-01

    In the European Union (EU), the ISO (International Organization for Standardization) 15197 standard is applicable for the evaluation of systems for self-monitoring of blood glucose (SMBG) before the market approval. In 2013, a revised version of this standard was published. Relevant revisions in the analytical performance requirements are the inclusion of the evaluation of influence quantities, for example, hematocrit, and some changes in the testing procedures for measurement precision and system accuracy evaluation, for example, number of test strip lots. Regarding system accuracy evaluation, the most important change is the inclusion of more stringent accuracy criteria. In 2014, the Food and Drug Administration (FDA) in the United States published their own guidance document for the premarket evaluation of SMBG systems with even more stringent system accuracy criteria than stipulated by ISO 15197:2013. The establishment of strict accuracy criteria applicable for the premarket evaluation is a possible approach to further improve the measurement quality of SMBG systems. However, the system accuracy testing procedure is quite complex, and some critical aspects, for example, systematic measurement difference between the reference measurement procedure and a higher-order procedure, may potentially limit the apparent accuracy of a given system. Therefore, the implementation of a harmonized reference measurement procedure for which traceability to standards of higher order is verified through an unbroken, documented chain of calibrations is desirable. In addition, the establishment of regular and standardized post-marketing evaluations of distributed test strip lots should be considered as an approach toward an improved measurement quality of available SMBG systems. © 2015 Diabetes Technology Society.
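
    As an illustration of how such a system accuracy criterion is typically applied, the following sketch checks paired SMBG results against the commonly cited ISO 15197:2013 limits (within ±15 mg/dL of the comparison method below 100 mg/dL, within ±15% at or above 100 mg/dL, for at least 95% of results). The thresholds are quoted from secondary literature and should be verified against the standard itself; the paired values are hypothetical.

        def within_iso_15197_2013(measured, reference):
            """True if a single SMBG result meets the commonly cited ISO 15197:2013
            accuracy limits relative to the comparison-method value (mg/dL).
            Below 100 mg/dL: within +/-15 mg/dL; at or above 100 mg/dL: within +/-15%."""
            if reference < 100:
                return abs(measured - reference) <= 15
            return abs(measured - reference) <= 0.15 * reference

        def system_accuracy_pass_rate(pairs):
            """Fraction of (measured, reference) pairs meeting the per-result limits.
            The standard requires at least 95% of results to comply."""
            hits = sum(within_iso_15197_2013(m, r) for m, r in pairs)
            return hits / len(pairs)

        # Hypothetical paired results from a system accuracy evaluation (mg/dL)
        pairs = [(92, 85), (110, 118), (250, 230), (60, 72), (145, 150)]
        rate = system_accuracy_pass_rate(pairs)
        print(f"pass rate: {rate:.0%}  ->  {'meets' if rate >= 0.95 else 'fails'} the 95% criterion")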

  6. RPAS ACCURACY TESTING FOR USING IT IN THE CADASTRE OF REAL ESTATES OF THE CZECH REPUBLIC

    Directory of Open Access Journals (Sweden)

    E. Housarová

    2016-06-01

    Full Text Available In the last few years, interest in the collection of data using remotely piloted aircraft systems (RPAS) has risen sharply. RPAS technology has a very wide area of use; its main advantages are accuracy, timeliness of data, frequency of data collection and low operating costs. In contrast with ordinary aerial photogrammetry, RPAS can be used for the mapping of small, dangerous and inaccessible areas. In the cadastre of real estates of the Czech Republic, it is possible to map areas using aerial photogrammetry, as has been done in the past. However, this is a relatively expensive and complex technology, and therefore new alternatives are being sought. One alternative would be to use RPAS technology for data acquisition. Testing the possibility of using RPAS for the cadastre of real estates of the Czech Republic is the subject of this paper. When evaluating the results, we compared point coordinates measured by the geodetic method, GNSS technology and RPAS technology.
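
    A comparison of the kind described in this record often reduces to computing per-point coordinate differences and an RMSE between the RPAS-derived coordinates and the reference (geodetic/GNSS) coordinates. The following minimal numpy sketch uses hypothetical checkpoint coordinates, not the paper's data.

        import numpy as np

        def horizontal_rmse(reference_xy, test_xy):
            """Root-mean-square error of horizontal positions (same units as input).
            reference_xy: N x 2 array of coordinates from the geodetic/GNSS survey
            test_xy     : N x 2 array of the same points derived from RPAS imagery"""
            reference_xy = np.asarray(reference_xy, dtype=float)
            test_xy = np.asarray(test_xy, dtype=float)
            d = np.linalg.norm(test_xy - reference_xy, axis=1)   # per-point 2D error
            return np.sqrt(np.mean(d ** 2))

        # Hypothetical checkpoint coordinates in metres (local grid)
        geodetic = [[100.00, 200.00], [150.02, 240.01], [180.00, 210.05]]
        rpas     = [[100.03, 199.97], [150.06, 240.05], [179.95, 210.01]]
        print(f"horizontal RMSE: {horizontal_rmse(geodetic, rpas):.3f} m")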

  7. SISTEM INFORMASI DILIHAT DARI ASPEK KUALITAS INFORMASI AKUNTANSI MANAJEMEN

    Directory of Open Access Journals (Sweden)

    Rima Rachmawati

    2016-08-01

    Full Text Available Abstract. Management accounting information is generated by management accounting information systems. A quality information system is capable of producing quality management accounting information. Information system quality is measured by the attributes integration, flexibility, accessibility, formalization and media richness, while management accounting information quality is measured by the attributes scope, timeliness, accuracy, format and relevancy. This study aims to measure how strongly the quality of the management accounting information system influences the quality of management accounting information. The research uses a survey method and is descriptive and verificative; the unit of analysis is the Personnel Directorate of ITB, and the data were analyzed with a regression equation. The results show that the quality of the management accounting information system affects the quality of management accounting information: 98.6% of the variability in management accounting information quality is explained by the quality of the management accounting information system. Keywords: Accounting Information Systems; Management Accounting Information; Management Accounting

  8. Relevant test set using feature selection algorithm for early detection ...

    African Journals Online (AJOL)

    The objective of feature selection is to find the most relevant features for classification. This reduces the dimensionality of the data and may improve classification accuracy. This paper proposes a minimal set of relevant questions that can be used for early detection of dyslexia. In this research, we ...

  9. An Architecture for Improving Timeliness and Relevance of Cyber Incident Notifications

    Science.gov (United States)

    2011-03-01

    the difference between a beginning chess player, an experienced amateur, and a grand master. The beginner sees what his opponent is doing, but is... supplemented sparingly with traditional flowcharts where additional detail is desired. These five are a Use Case diagram, a Class diagram... Figure 35 provides a flowchart example of this process.

  10. SU-E-T-789: Validation of 3DVH Accuracy On Quantifying Delivery Errors Based On Clinical Relevant DVH Metrics

    International Nuclear Information System (INIS)

    Ma, T; Kumaraswamy, L

    2015-01-01

    Purpose: Detection of treatment delivery errors is important in radiation therapy. However, accurate quantification of delivery errors is also of great importance. This study aims to evaluate the 3DVH software's ability to accurately quantify delivery errors. Methods: Three VMAT plans (prostate, H&N and brain) were randomly chosen for this study. First, we evaluated whether delivery errors could be detected by gamma evaluation. Conventional per-beam IMRT QA was performed with the ArcCHECK diode detector for the original plans and for the following modified plans: (1) induced dose difference errors of up to ±4.0%, (2) control point (CP) deletion (3 to 10 CPs were deleted), and (3) gantry angle shift error (3 degree uniform shift). 2D and 3D gamma evaluations were performed for all plans through SNC Patient and 3DVH, respectively. Subsequently, we investigated the accuracy of 3DVH analysis for all cases. This part evaluated, using the Eclipse TPS plans as the standard, whether 3DVH can accurately model the changes in clinically relevant metrics caused by the delivery errors. Results: 2D evaluation seemed to be more sensitive to delivery errors. The average differences between Eclipse-predicted and 3DVH results for each pair of specific DVH constraints were within 2% for all three types of error-induced treatment plans, illustrating that 3DVH is fairly accurate in quantifying the delivery errors. Another interesting observation was that even though the gamma pass rates for the error plans are high, the DVHs showed significant differences between the original plan and the error-induced plans in both Eclipse and 3DVH analyses. Conclusion: The 3DVH software is shown to accurately quantify the error in delivered dose based on clinically relevant DVH metrics, even where a conventional gamma-based pre-treatment QA might not necessarily detect it.
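
    The clinically relevant DVH metrics referred to in this record (for example D95 or V20) can be read off a structure's voxel-dose distribution. The sketch below shows one common way to compute them; the voxel doses are synthetic and the function is not part of the 3DVH software.

        import numpy as np

        def dvh_metrics(dose_voxels, d_percent=95.0, v_dose=20.0):
            """Two common DVH metrics from a structure's voxel doses (Gy).
            D95: minimum dose received by the 'hottest' 95% of the volume.
            V20: fraction of the volume receiving at least 20 Gy."""
            dose = np.sort(np.asarray(dose_voxels, dtype=float))[::-1]  # descending
            n = dose.size
            d95 = dose[int(np.ceil(n * d_percent / 100.0)) - 1]
            v20 = np.mean(dose >= v_dose)
            return d95, v20

        # Hypothetical voxel doses for a target structure
        rng = np.random.default_rng(0)
        planned   = rng.normal(60.0, 1.5, size=10_000)       # e.g. TPS-predicted doses
        delivered = planned - 0.8                             # e.g. with an induced delivery error
        for label, d in [("planned", planned), ("delivered", delivered)]:
            d95, v20 = dvh_metrics(d)
            print(f"{label:9s}  D95 = {d95:5.1f} Gy   V20 = {v20:.1%}")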

  11. Perceived timeliness of referral to hospice palliative care among bereaved family members in Korea.

    Science.gov (United States)

    Jho, Hyun Jung; Chang, Yoon Jung; Song, Hye Young; Choi, Jin Young; Kim, Yeol; Park, Eun Jung; Paek, Soo Jin; Choi, Hee Jae

    2015-09-01

    We aimed to explore the perceived timeliness of referral to a hospice palliative care unit (HPCU) among bereaved family members in Korea, and the factors associated therewith. A cross-sectional questionnaire survey was performed among bereaved family members of patients who had utilized 40 designated HPCUs across Korea. The questionnaire assessed whether admission to the HPCU was "too late" or "appropriate", together with the Good Death Inventory (GDI). A total of 383 questionnaires were analyzed. Of the participants, 25.8% replied that admission to the HPCU was too late. Patients with hepatobiliary cancer, poor performance status, abnormal consciousness level, and unawareness of terminal status were significantly related to the "too late" perception. Younger family members, and those who were children of the patient, were more frequently noted in the "too late" group. Ten out of 18 GDI scores were significantly lower in the "too late" group. Multiple logistic regression analysis revealed that patients' unawareness of terminal status, shorter stay in the HPCU, younger age of the bereaved family member, and lower scores for two GDI items (staying in a favored place, living without concern about death or disease) were significantly associated with the "too late" group. To promote timely HPCU utilization and better quality of end-of-life care, patients need to be informed of their terminal status and their preferences should be respected.

  12. Utility and potential of rapid epidemic intelligence from internet-based sources.

    Science.gov (United States)

    Yan, S J; Chughtai, A A; Macintyre, C R

    2017-10-01

    Rapid epidemic detection is an important objective of surveillance to enable timely intervention, but traditional validated surveillance data may not be available in the required timeframe for acute epidemic control. Increasing volumes of data on the Internet have prompted interest in methods that could use unstructured sources to enhance traditional disease surveillance and gain rapid epidemic intelligence. We aimed to summarise Internet-based methods that use freely-accessible, unstructured data for epidemic surveillance and explore their timeliness and accuracy outcomes. Steps outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist were used to guide a systematic review of research related to the use of informal or unstructured data by Internet-based intelligence methods for surveillance. We identified 84 articles published between 2006 and 2016 relating to Internet-based public health surveillance methods. Studies used search queries, social media posts and approaches derived from existing Internet-based systems for early epidemic alerts and real-time monitoring. Most studies noted improved timeliness compared to official reporting, such as in the 2014 Ebola epidemic where epidemic alerts were generated first from ProMED-mail. Internet-based methods showed variable correlation strength with official datasets, with some methods showing reasonable accuracy. The proliferation of publicly available information on the Internet provided a new avenue for epidemic intelligence. Methodologies have been developed to collect Internet data and some systems are already used to enhance the timeliness of traditional surveillance systems. To improve the utility of Internet-based systems, the key attributes of timeliness and data accuracy should be included in future evaluations of surveillance systems. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  13. Timeliness of Operating Room Case Planning and Time Utilization: Influence of First and To-Follow Cases

    Directory of Open Access Journals (Sweden)

    Konrad Meissner

    2017-04-01

    Full Text Available Resource and cost constraints in hospitals demand thorough planning of operating room schedules. Ideally, exact start times and durations are known in advance for each case. However, aside from the first case's start, most factors are hard to predict. While the role of the start of the first case for optimal room utilization has been shown before, data for to-follow cases are lacking. The present study therefore aimed to analyze all elective surgery cases of a university hospital within 1 year in search of visible patterns. A total of 14,014 cases scheduled on 254 regular working days at a university hospital between September 2015 and August 2016 underwent screening. After eliminating 112 emergencies during regular working hours, 13,547 elective daytime cases were analyzed, of which 4,346 ranked first, 3,723 second, and 5,478 third or higher in the daily schedule. Also, 36% of cases changed start times from the day before to 7:00 a.m., with half of these (52%) resulting in a delay of more than 15 min. After 7:00 a.m., 87% of cases started more than 10 min off schedule, with 26% being early and 74% late. Timeliness was 15 ± 72 min (mean ± SD) for first, 21 ± 84 min for second, and 25 ± 93 min for all to-follow cases, compared to preoperative day planning, and 21 ± 45, 23 ± 61, and 19 ± 74 min compared to 7:00 a.m. status. Start time deviations were also related to procedure duration, with cases of 61–90 min duration being most reliable (deviation 9.8 ± 67 min compared to 7:00 a.m.), regardless of order. In consequence, cases following 61–90 min long cases had the shortest deviations of incision time from schedule (16 ± 66 min). Taken together, start times for elective surgery cases deviate substantially from schedule, with first and second cases falling into the highest mean deviation category, and second cases showing the largest deviations from scheduled times.

  14. Seeking kinetic pathways relevant to the structural evolution of metal nanoparticles

    International Nuclear Information System (INIS)

    Haldar, Paramita; Chatterjee, Abhijit

    2015-01-01

    Understanding the kinetic pathways that cause metal nanoparticles to structurally evolve over time is essential for predicting their shape and size distributions and catalytic properties. Consequently, we need detailed kinetic models that can provide such information. Most kinetic Monte Carlo models used for metal systems contain a fixed catalogue of atomic moves; the catalogue is largely constructed based on our physical understanding of the material. In some situations, it is possible that an incorrect picture of the overall dynamics is obtained when kinetic pathways that are relevant to the dynamics are missing from the catalogue. Hence, a computational framework that can systematically determine the relevant pathways is required. This work intends to fulfil this requirement. Examples involving an Ag nanoparticle are studied to illustrate how molecular dynamics (MD) calculations can be employed to find the relevant pathways in a system. Since pathways that are unlikely to be selected at short timescales can become relevant at longer times, the accuracy of the catalogue is maintained by continually seeking these pathways using MD. We discuss various aspects of our approach, namely, defining the relevance of atomic moves to the dynamics and determining when additional MD is required to ensure the desired accuracy, as well as physical insights into the Ag nanoparticle. (paper)
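
    Kinetic Monte Carlo models of the type discussed here draw one event per step from a catalogue of pathways with Arrhenius rates and advance time by an exponentially distributed increment. The sketch below shows that standard rejection-free step; the move names, barriers and prefactor are hypothetical, and the paper's contribution, finding missing catalogue entries with MD, is not reproduced here.

        import math
        import random

        def kmc_step(catalogue, temperature_k=300.0, prefactor=1e13, kb_ev=8.617e-5):
            """One rejection-free kinetic Monte Carlo step.
            catalogue: list of (move_name, barrier_eV) pairs currently known.
            Returns the chosen move and the time increment (seconds)."""
            rates = [prefactor * math.exp(-barrier / (kb_ev * temperature_k))
                     for _, barrier in catalogue]
            total = sum(rates)

            # Pick a move with probability proportional to its rate.
            r = random.random() * total
            chosen = catalogue[-1][0]            # fallback for floating-point edge cases
            acc = 0.0
            for (name, _), rate in zip(catalogue, rates):
                acc += rate
                if r <= acc:
                    chosen = name
                    break

            # Advance the clock by an exponentially distributed waiting time.
            dt = -math.log(1.0 - random.random()) / total
            return chosen, dt

        # Hypothetical atomic moves for a surface atom on an Ag nanoparticle
        catalogue = [("terrace_hop", 0.45), ("edge_diffusion", 0.30), ("detach", 0.80)]
        move, dt = kmc_step(catalogue, temperature_k=500.0)
        print(move, f"{dt:.2e} s")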

  15. Impact of electronic order management on the timeliness of antibiotic administration in critical care patients.

    Science.gov (United States)

    Cartmill, Randi S; Walker, James M; Blosky, Mary Ann; Brown, Roger L; Djurkovic, Svetolik; Dunham, Deborah B; Gardill, Debra; Haupt, Marilyn T; Parry, Dean; Wetterneck, Tosha B; Wood, Kenneth E; Carayon, Pascale

    2012-11-01

    To examine the effect of implementing electronic order management on the timely administration of antibiotics to critical-care patients. We used a prospective pre-post design, collecting data on first-dose IV antibiotic orders before and after the implementation of an integrated electronic medication-management system, which included computerized provider order entry (CPOE), pharmacy order processing and an electronic medication administration record (eMAR). The research was performed in a 24-bed adult medical/surgical ICU in a large, rural, tertiary medical center. Data on the time of ordering, pharmacy processing and administration were prospectively collected and time intervals for each stage and the overall process were calculated. The overall turnaround time from ordering to administration significantly decreased from a median of 100 min before order management implementation to a median of 64 min after implementation. The first part of the medication use process, i.e., from order entry to pharmacy processing, improved significantly whereas no change was observed in the phase from pharmacy processing to medication administration. The implementation of an electronic order-management system improved the timeliness of antibiotic administration to critical-care patients. Additional system changes are required to further decrease the turnaround time. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  16. Flight Crew State Monitoring Metrics, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — eSky will develop specific crew state metrics based on the timeliness, tempo and accuracy of pilot inputs required by the H-mode Flight Control System (HFCS)....

  17. What do we mean by accuracy in geomagnetic measurements?

    Science.gov (United States)

    Green, A.W.

    1990-01-01

    High accuracy is what distinguishes measurements made at the world's magnetic observatories from other types of geomagnetic measurements. High accuracy in determining the absolute values of the components of the Earth's magnetic field is essential to studying geomagnetic secular variation and processes at the core mantle boundary, as well as some magnetospheric processes. In some applications of geomagnetic data, precision (or resolution) of measurements may also be important. In addition to accuracy and resolution in the amplitude domain, it is necessary to consider these same quantities in the frequency and space domains. New developments in geomagnetic instruments and communications make real-time, high accuracy, global geomagnetic observatory data sets a real possibility. There is a growing realization in the scientific community of the unique relevance of geomagnetic observatory data to the principal contemporary problems in solid Earth and space physics. Together, these factors provide the promise of a 'renaissance' of the world's geomagnetic observatory system. © 1990.

  18. Unfamiliar voice identification: Effect of post-event information on accuracy and voice ratings

    Directory of Open Access Journals (Sweden)

    Harriet Mary Jessica Smith

    2014-04-01

    Full Text Available This study addressed the effect of misleading post-event information (PEI) on voice ratings, identification accuracy, and confidence, as well as the link between verbal recall and accuracy. Participants listened to a dialogue between male and female targets, then read misleading information about voice pitch. Participants engaged in verbal recall, rated voices on a feature checklist, and made a lineup decision. Accuracy rates were low, especially on target-absent lineups. Confidence and accuracy were unrelated, but the number of facts recalled about the voice predicted later lineup accuracy. There was a main effect of misinformation on ratings of target voice pitch, but there was no effect on identification accuracy or confidence ratings. As voice lineup evidence from earwitnesses is used in courts, the findings have potential applied relevance.

  19. Right care, right place, right time: improving the timeliness of health care in New South Wales through a public-private hospital partnership.

    Science.gov (United States)

    Saunders, Carla; Carter, David J

    2017-10-01

    Objective The overall aim of the study was to investigate and assess the feasibility of improving the timeliness of public hospital care through a New South Wales (NSW)-wide public-private hospital partnership. Methods The study reviewed the academic and professional grey literature, and undertook exploratory analyses of secondary data acquired from two national health data repositories informing in-patient access and utilisation across NSW public and private hospitals. Results In 2014-15, the NSW public hospital system was unable to deliver care within the medically recommended time frame for over 27400 people who were awaiting elective surgery. Available information indicates that the annual commissioning of 15% of public in-patient rehabilitation bed days to the private hospital system would potentially free up enough capacity in the NSW public hospital system to enable elective surgery for all public patients within recommended time frames. Conclusions The findings of the study justify a strategic whole-of-health system approach to reducing public patient wait times in NSW and highlight the need for research efforts aimed at securing a better understanding of available hospital capacity across the public and private hospital systems, and identifying and testing workable models that improve the timeliness of public hospital care. What is known about the topic? There are very few studies available to inform public-private hospital service partnerships and the opportunities available to improve timely health care access through such partnerships. What does this paper add? This paper has the potential to open and prompt timely discussion and debate, and generate further fundamental investigation, on public-private hospital service partnerships in Australia where opportunity is available to address elective surgery wait times in a reliable and effective manner. What are the implications for practitioners? The NSW Ministry of Health and its Local Health Districts

  20. The Difference between Right and Wrong: Accuracy of Older and Younger Adults’ Story Recall

    Science.gov (United States)

    Davis, Danielle K.; Alea, Nicole; Bluck, Susan

    2015-01-01

    Sharing stories is an important social activity in everyday life. This study used fine-grained content analysis to investigate the accuracy of recall of two central story elements: the gist and detail of socially-relevant stories. Younger (M age = 28.06) and older (M age = 75.03) American men and women (N = 63) recalled fictional stories that were coded for (i) accuracy of overall gist and specific gist categories and (ii) accuracy of overall detail and specific detail categories. Findings showed no age group differences in accuracy of overall gist or detail, but differences emerged for specific categories. Older adults more accurately recalled the gist of when the event occurred whereas younger adults more accurately recalled the gist of why the event occurred. These differences were related to episodic memory ability and education. For accuracy in recalling details, there were some age differences, but gender differences were more robust. Overall, women remembered details of these social stories more accurately than men, particularly time and perceptual details. Women were also more likely to accurately remember the gist of when the event occurred. The discussion focuses on how accurate recall of socially-relevant stories is not clearly age-dependent but is related to person characteristics such as gender and episodic memory ability/education. PMID:26404344

  1. The Difference between Right and Wrong: Accuracy of Older and Younger Adults' Story Recall.

    Science.gov (United States)

    Davis, Danielle K; Alea, Nicole; Bluck, Susan

    2015-09-02

    Sharing stories is an important social activity in everyday life. This study used fine-grained content analysis to investigate the accuracy of recall of two central story elements: the gist and detail of socially-relevant stories. Younger (M age = 28.06) and older (M age = 75.03) American men and women (N = 63) recalled fictional stories that were coded for (i) accuracy of overall gist and specific gist categories and (ii) accuracy of overall detail and specific detail categories. Findings showed no age group differences in accuracy of overall gist or detail, but differences emerged for specific categories. Older adults more accurately recalled the gist of when the event occurred whereas younger adults more accurately recalled the gist of why the event occurred. These differences were related to episodic memory ability and education. For accuracy in recalling details, there were some age differences, but gender differences were more robust. Overall, women remembered details of these social stories more accurately than men, particularly time and perceptual details. Women were also more likely to accurately remember the gist of when the event occurred. The discussion focuses on how accurate recall of socially-relevant stories is not clearly age-dependent but is related to person characteristics such as gender and episodic memory ability/education.

  2. The Difference between Right and Wrong: Accuracy of Older and Younger Adults’ Story Recall

    Directory of Open Access Journals (Sweden)

    Danielle K. Davis

    2015-09-01

    Full Text Available Sharing stories is an important social activity in everyday life. This study used fine-grained content analysis to investigate the accuracy of recall of two central story elements: the gist and detail of socially-relevant stories. Younger (M age = 28.06) and older (M age = 75.03) American men and women (N = 63) recalled fictional stories that were coded for (i) accuracy of overall gist and specific gist categories and (ii) accuracy of overall detail and specific detail categories. Findings showed no age group differences in accuracy of overall gist or detail, but differences emerged for specific categories. Older adults more accurately recalled the gist of when the event occurred whereas younger adults more accurately recalled the gist of why the event occurred. These differences were related to episodic memory ability and education. For accuracy in recalling details, there were some age differences, but gender differences were more robust. Overall, women remembered details of these social stories more accurately than men, particularly time and perceptual details. Women were also more likely to accurately remember the gist of when the event occurred. The discussion focuses on how accurate recall of socially-relevant stories is not clearly age-dependent but is related to person characteristics such as gender and episodic memory ability/education.

  3. Reference radiology in nephroblastoma: accuracy and relevance for preoperative chemotherapy

    International Nuclear Information System (INIS)

    Schenk, J.P.; Schrader, C.; Zieger, B.; Ley, S.; Troeger, J.; Furtwaengler, R.; Graf, N.; Leuschner, I.

    2006-01-01

    Purpose: A reference radiologic diagnosis was carried out for the purpose of quality control and in order to achieve high diagnostic accuracy in the ongoing trial and study SIOP 2001/GPOH for renal tumors during childhood. The aim of the present study is to evaluate the value of diagnostic imaging and the benefit of reference evaluation at a pediatric radiology center. Materials and Methods: In 2004 the imaging studies of 97 patients suspected of having a renal tumor were presented at the beginning of therapy. Diagnostic imaging was compared to the primary imaging results and the histological findings and was analyzed in regard to the therapeutic consequence (primary chemotherapy without prior histology). 77 MRI, 35 CT and 67 ultrasound examinations of 47 girls and 50 boys (mean age 4 years; one day to 15.87 years old) were analyzed. In addition to the histological findings, the reference pathological results were submitted in 86 cases. Results from the primary imaging corresponding to the histology and results from the reference radiology corresponding to the histology were statistically compared in a binomial test. Results: Of 76 reference-diagnosed Wilms' tumors, 67 were confirmed histologically. In 72 cases preoperative chemotherapy was initiated. In 5 cases neither a Wilms' tumor nor a nephroblastomatosis was found. 16 of 21 cases (76%) with reference-diagnosed non-Wilms' tumors were selected correctly. The results of the primary imaging corresponded to the histology in 71 cases, and those of the reference radiology in 82 cases. The statistical evaluation showed that the results of the reference radiology were significantly better (p=0.03971). (orig.)

  4. High accuracy wavelength calibration for a scanning visible spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Scotti, Filippo; Bell, Ronald E. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States)

    2010-10-15

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ≈0.25 Å has been demonstrated. With the addition of a high resolution (0.075 arc sec) optical encoder on the grating stage, greater precision (≈0.005 Å) is possible, allowing absolute velocity measurements within ≈0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.
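
    For a sine-drive spectrometer, the counter reading is linear in the sine of the grating angle, so wavelength calibration can be cast as a small nonlinear least-squares fit to known reference lines. The sketch below fits such a model with scipy; the model form, parameter names and line positions are illustrative assumptions, not the instrument's actual calibration procedure.

        import numpy as np
        from scipy.optimize import curve_fit

        def sine_drive_model(counts, scale, offset, lam_max):
            """Assumed sine-drive relation: wavelength = lam_max * sin(scale*counts + offset)."""
            return lam_max * np.sin(scale * counts + offset)

        # Hypothetical reference-line data (counter readings vs known wavelengths in Angstrom),
        # generated here from the model itself so the example is self-contained.
        true_params = (4.0e-5, 0.05, 11000.0)
        counts = np.linspace(10000.0, 25000.0, 8)
        known_lines = sine_drive_model(counts, *true_params)

        popt, pcov = curve_fit(sine_drive_model, counts, known_lines, p0=[3e-5, 0.0, 10000.0])
        residuals = known_lines - sine_drive_model(counts, *popt)
        print("fitted parameters:", popt)
        print("max residual (Angstrom): %.4f" % np.max(np.abs(residuals)))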

  5. Diagnostic accuracy of postmortem imaging vs autopsy—A systematic review

    Energy Technology Data Exchange (ETDEWEB)

    Eriksson, Anders, E-mail: anders.eriksson@rmv.se [Section of Forensic Medicine, Dept of Community Medicine and Rehabilitation, Umeå University, PO Box 7016, SE-907 12 Umeå (Sweden); Gustafsson, Torfinn [Section of Forensic Medicine, Dept of Community Medicine and Rehabilitation, Umeå University, PO Box 7016, SE-907 12 Umeå (Sweden); Höistad, Malin; Hultcrantz, Monica [Swedish Agency for Health Technology Assessment and Assessment of Social Services, PO Box 3657, SE-103 59 Stockholm (Sweden); Department of Learning, Informatics, Management and Ethics, Karolinska Institutet, SE-171 77 Stockholm (Sweden); Jacobson, Stella; Mejare, Ingegerd [Swedish Agency for Health Technology Assessment and Assessment of Social Services, PO Box 3657, SE-103 59 Stockholm (Sweden); Persson, Anders [Department of Medical and Health Sciences, Center for Medical Image Science and Visualization (CMIV), Linköping University, SE-581 85, Linköping Sweden (Sweden)

    2017-04-15

    Highlights: • The search generated 340 possibly relevant publications, of which 49 were assessed as having high risk of bias and 22 as moderate risk. • Due to considerable heterogeneity of included studies it was impossible to estimate the diagnostic accuracy of the various findings. • Future studies need larger materials and improved planning and methodological quality, preferentially from multi-center studies. - Abstract: Background: Postmortem imaging has been used for more than a century as a complement to medico-legal autopsies. The technique has also emerged as a possible alternative to compensate for the continuous decline in the number of clinical autopsies. To evaluate the diagnostic accuracy of postmortem imaging for various types of findings, we performed this systematic literature review. Data sources: The literature search was performed in the databases PubMed, Embase and Cochrane Library through January 7, 2015. Relevant publications were assessed for risk of bias using the QUADAS tool and were classified as low, moderate or high risk of bias according to pre-defined criteria. Autopsy and/or histopathology were used as the reference standard. Findings: The search generated 2600 abstracts, of which 340 were assessed as possibly relevant and read in full-text. After further evaluation 71 studies were finally included, of which 49 were assessed as having high risk of bias and 22 as moderate risk of bias. Due to considerable heterogeneity – in populations, techniques, analyses and reporting – of included studies it was impossible to combine data to get a summary estimate of the diagnostic accuracy of the various findings. Individual studies indicate, however, that imaging techniques might be useful for determining organ weights, and that the techniques seem superior to autopsy for detecting gas. Conclusions and implications: In general, based on the current scientific literature, it was not possible to determine the diagnostic accuracy of postmortem imaging.

  6. Diagnostic accuracy of postmortem imaging vs autopsy—A systematic review

    International Nuclear Information System (INIS)

    Eriksson, Anders; Gustafsson, Torfinn; Höistad, Malin; Hultcrantz, Monica; Jacobson, Stella; Mejare, Ingegerd; Persson, Anders

    2017-01-01

    Highlights: • The search generated 340 possibly relevant publications, of which 49 were assessed as having high risk of bias and 22 as moderate risk. • Due to considerable heterogeneity of included studies it was impossible to estimate the diagnostic accuracy of the various findings. • Future studies need larger materials and improved planning and methodological quality, preferentially from multi-center studies. - Abstract: Background: Postmortem imaging has been used for more than a century as a complement to medico-legal autopsies. The technique has also emerged as a possible alternative to compensate for the continuous decline in the number of clinical autopsies. To evaluate the diagnostic accuracy of postmortem imaging for various types of findings, we performed this systematic literature review. Data sources: The literature search was performed in the databases PubMed, Embase and Cochrane Library through January 7, 2015. Relevant publications were assessed for risk of bias using the QUADAS tool and were classified as low, moderate or high risk of bias according to pre-defined criteria. Autopsy and/or histopathology were used as the reference standard. Findings: The search generated 2600 abstracts, of which 340 were assessed as possibly relevant and read in full-text. After further evaluation 71 studies were finally included, of which 49 were assessed as having high risk of bias and 22 as moderate risk of bias. Due to considerable heterogeneity – in populations, techniques, analyses and reporting – of included studies it was impossible to combine data to get a summary estimate of the diagnostic accuracy of the various findings. Individual studies indicate, however, that imaging techniques might be useful for determining organ weights, and that the techniques seem superior to autopsy for detecting gas. Conclusions and implications: In general, based on the current scientific literature, it was not possible to determine the diagnostic accuracy of postmortem imaging.

  7. Accuracy and precision of oscillometric blood pressure in standing conscious horses

    DEFF Research Database (Denmark)

    Olsen, Emil; Pedersen, Tilde Louise Skovgaard; Robinson, Rebecca

    2016-01-01

    from a teaching and research herd. HYPOTHESIS/OBJECTIVE: To evaluate the accuracy and precision of systolic arterial pressure (SAP), diastolic arterial pressure (DAP), and mean arterial pressure (MAP) in conscious horses obtained with an oscillometric NIBP device when compared to invasively measured ... administration. Agreement analysis with replicate measures was utilized to calculate bias (accuracy) and standard deviation (SD) of bias (precision). RESULTS: A total of 252 pairs of invasive arterial BP and NIBP measurements were analyzed. Compared to the direct BP measures, the NIBP MAP had an accuracy of -4 mm Hg and a precision of 10 mm Hg. SAP had an accuracy of -8 mm Hg and a precision of 17 mm Hg, and DAP had an accuracy of -7 mm Hg and a precision of 14 mm Hg. CONCLUSIONS AND CLINICAL RELEVANCE: MAP from the evaluated NIBP monitor is accurate and precise in the adult horse across a range of BP ...
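
    The bias/precision analysis described in this record is essentially a Bland-Altman style agreement analysis (here simplified to ignore replicate measurements). A minimal numpy sketch on hypothetical paired pressures follows.

        import numpy as np

        def agreement(direct, nibp):
            """Bland-Altman style agreement: bias (accuracy) is the mean of the paired
            differences, precision is their standard deviation, and the 95% limits of
            agreement are bias +/- 1.96 * SD."""
            diff = np.asarray(nibp, dtype=float) - np.asarray(direct, dtype=float)
            bias = diff.mean()
            sd = diff.std(ddof=1)
            return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

        # Hypothetical paired mean arterial pressures (mm Hg): invasive vs oscillometric
        direct = [96, 104, 88, 120, 75, 101]
        nibp   = [90, 101, 85, 112, 74,  95]
        bias, sd, loa = agreement(direct, nibp)
        print(f"bias {bias:.1f} mm Hg, precision (SD) {sd:.1f} mm Hg, LoA {loa[0]:.1f} to {loa[1]:.1f} mm Hg")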

  8. BRIEF REPORT: Beyond Clinical Experience: Features of Data Collection and Interpretation That Contribute to Diagnostic Accuracy

    Science.gov (United States)

    Nendaz, Mathieu R; Gut, Anne M; Perrier, Arnaud; Louis-Simonet, Martine; Blondon-Choa, Katherine; Herrmann, François R; Junod, Alain F; Vu, Nu V

    2006-01-01

    BACKGROUND Clinical experience, features of data collection process, or both, affect diagnostic accuracy, but their respective role is unclear. OBJECTIVE, DESIGN Prospective, observational study, to determine the respective contribution of clinical experience and data collection features to diagnostic accuracy. METHODS Six Internists, 6 second year internal medicine residents, and 6 senior medical students worked up the same 7 cases with a standardized patient. Each encounter was audiotaped and immediately assessed by the subjects who indicated the reasons underlying their data collection. We analyzed the encounters according to diagnostic accuracy, information collected, organ systems explored, diagnoses evaluated, and final decisions made, and we determined predictors of diagnostic accuracy by logistic regression models. RESULTS Several features significantly predicted diagnostic accuracy after correction for clinical experience: early exploration of correct diagnosis (odds ratio [OR] 24.35) or of relevant diagnostic hypotheses (OR 2.22) to frame clinical data collection, larger number of diagnostic hypotheses evaluated (OR 1.08), and collection of relevant clinical data (OR 1.19). CONCLUSION Some features of data collection and interpretation are related to diagnostic accuracy beyond clinical experience and should be explicitly included in clinical training and modeled by clinical teachers. Thoroughness in data collection should not be considered a privileged way to diagnostic success. PMID:17105525

  9. A Sentiment-Enhanced Hybrid Recommender System for Movie Recommendation: A Big Data Analytics Framework

    Directory of Open Access Journals (Sweden)

    Yibo Wang

    2018-01-01

    Full Text Available Movie recommendation in a mobile environment is critically important for mobile users. It comprehensively aggregates users' preferences, reviews, and emotions to help them find suitable movies conveniently. However, it requires both accuracy and timeliness. In this paper, a movie recommendation framework based on a hybrid recommendation model and sentiment analysis on the Spark platform is proposed to improve the accuracy and timeliness of a mobile movie recommender system. In the proposed approach, we first use a hybrid recommendation method to generate a preliminary recommendation list. Then sentiment analysis is employed to optimize the list. Finally, the hybrid recommender system with sentiment analysis is implemented on the Spark platform. The hybrid recommendation model with sentiment analysis outperforms the traditional models in terms of various evaluation criteria. Our proposed method makes it convenient and fast for users to obtain useful movie suggestions.
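
    The record's two-stage idea, generating a preliminary list with a hybrid recommender and then letting review sentiment adjust it, can be illustrated with a very small blending sketch. The scores, weights and movie names below are hypothetical, and the sketch does not reproduce the Spark implementation.

        # Hypothetical per-movie scores for one user: a collaborative-filtering
        # prediction (0-5 scale) and an average review sentiment in [-1, 1].
        cf_scores = {"movie_a": 4.2, "movie_b": 3.9, "movie_c": 4.0}
        sentiment = {"movie_a": -0.4, "movie_b": 0.7, "movie_c": 0.2}

        def blended_score(movie, alpha=0.8):
            """Weighted blend: alpha controls how much the CF prediction dominates;
            sentiment (rescaled to 0-5) nudges the preliminary ranking up or down."""
            sentiment_on_rating_scale = (sentiment[movie] + 1.0) * 2.5
            return alpha * cf_scores[movie] + (1.0 - alpha) * sentiment_on_rating_scale

        ranked = sorted(cf_scores, key=blended_score, reverse=True)
        print(ranked)   # movie_b overtakes movie_a on the strength of its reviews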

  10. Identification of children with reading difficulties: Cheap can be adequate

    DEFF Research Database (Denmark)

    Poulsen, Mads; Nielsen, Anne-Mette Veber

    Classification of reading difficulties: Cheap screening can be accurate. Purpose: Three factors are important for identification of students in need of remedial instruction: accuracy, timeliness, and cost. The identification has to be accurate to be of any use, the identification has to be timely ..., inexpensive testing. The present study investigated the classification accuracy of three screening models varying in timeliness and cost. Method: We compared the ROC statistics of three logistic models for predicting end-of-Grade-2 reading difficulties in a sample of 164 students: 1) an early, comprehensive model using a battery of Grade 0 tests, including phoneme awareness, rapid naming, and paired associate learning, 2) a late, comprehensive model adding reading measures from January of Grade 1, and 3) a late, inexpensive model using only group-administered reading measures from January of Grade 1...
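
    Comparing screening models by their ROC statistics, as described in this record, can be sketched with scikit-learn. The data below are entirely synthetic (not the study's sample), and for brevity the AUCs are computed in-sample, which is optimistic compared with a proper cross-validated evaluation.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 164   # sample size matching the study; the data themselves are synthetic

        # Hypothetical predictors: Grade 0 battery (3 tests) and Grade 1 reading measures (2 tests)
        grade0 = rng.normal(size=(n, 3))
        grade1 = rng.normal(size=(n, 2))
        risk = 0.4 * grade0[:, 0] + 1.2 * grade1[:, 0] + rng.normal(scale=1.0, size=n)
        difficulty = (risk > np.quantile(risk, 0.85)).astype(int)   # ~15% poor readers

        models = {
            "early, comprehensive (Grade 0 only)": grade0,
            "late, comprehensive (Grade 0 + Grade 1)": np.hstack([grade0, grade1]),
            "late, inexpensive (Grade 1 only)": grade1,
        }
        for name, X in models.items():
            probs = LogisticRegression().fit(X, difficulty).predict_proba(X)[:, 1]
            print(f"{name:42s} AUC = {roc_auc_score(difficulty, probs):.2f}")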

  11. Key Performance Indicators and Analysts' Earnings Forecast Accuracy: An Application of Content Analysis

    OpenAIRE

    Alireza Dorestani; Zabihollah Rezaee

    2011-01-01

    We examine the association between the extent of change in key performance indicator (KPI) disclosures and the accuracy of forecasts made by analysts. KPIs are regarded as improving both the transparency and relevancy of public financial information. The results of using linear regression models show that, contrary to our prediction and the hypothesis of this paper, there is no significant association between the change in non-financial KPI disclosures and the accuracy of analysts' forecasts....

  12. Safeguards needs in the measurement area: the realm of measurements

    International Nuclear Information System (INIS)

    Hammond, G.; Auerbach, C.

    1978-01-01

    An effective safeguards measurement system must cover a multitude of material forms ranging from essentially pure substances to highly heterogeneous materials. In addition there are varied and sometimes conflicting demands for accuracy and timeliness. Consequently, a judicious and systematic choice must be made between methods based on sampling followed by chemical analysis or nondestructive methods based on nuclear properties. Fundamental advances in analytical chemistry made during the year preceding World War II enabled Manhattan Project scientists to develop methods which contributed to the success of both the immediate goal and the developments which have taken place since. Examples are given of evolutionary developments in the direction of timeliness through varying degrees of automation. Nondestructive methods, first introduced because of the need to measure scrap and other intractable material, are finding broader areas of application. Aided by DOE-sponsored research and development, new techniques providing greater accuracy, versatility and timeliness are being introduced. It is now recognized that an effective safeguards measurement system must make concerted use of both chemical and nondestructive methods. Recent studies have fostered understanding of the relative importance of various process streams in the material balance equations and have highlighted the need for a systematic approach to measurement solutions for safeguarding nuclear materials

  13. Predictive models for Escherichia coli concentrations at inland lake beaches and relationship of model variables to pathogen detection

    Science.gov (United States)

    Methods are needed to improve the timeliness and accuracy of recreational water‐quality assessments. Traditional culture methods require 18–24 h to obtain results and may not reflect current conditions. Predictive models, based on environmental and water quality variables, have been...

  14. Collective animal decisions: preference conflict and decision accuracy.

    Science.gov (United States)

    Conradt, Larissa

    2013-12-06

    Social animals frequently share decisions that involve uncertainty and conflict. It has been suggested that conflict can enhance decision accuracy. In order to judge the practical relevance of such a suggestion, it is necessary to explore how general such findings are. Using a model, I examine whether conflicts between animals in a group with respect to preferences for avoiding false positives versus avoiding false negatives could, in principle, enhance the accuracy of collective decisions. I found that decision accuracy nearly always peaked when there was maximum conflict in groups in which individuals had different preferences. However, groups with no preferences were usually even more accurate. Furthermore, a relatively slight skew towards more animals with a preference for avoiding false negatives decreased the rate of expected false negatives versus false positives considerably (and vice versa), while resulting in only a small loss of decision accuracy. I conclude that in ecological situations in which decision accuracy is crucial for fitness and survival, animals cannot 'afford' preferences with respect to avoiding false positives versus false negatives. When decision accuracy is less crucial, animals might have such preferences. A slight skew in the number of animals with different preferences will result in the group more often avoiding the type of error that the majority of group members prefers to avoid. The model also indicated that knowing the average success rate ('base rate') of a decision option can be very misleading, and that animals should ignore such base rates unless further information is available.
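
    A small simulation makes the record's point concrete: when uninformed individuals vote according to conflicting default preferences, the defaults tend to cancel and the informed votes dominate. The individual accuracy, group sizes and voting rule below are hypothetical modeling choices, not the paper's model.

        import numpy as np

        rng = np.random.default_rng(42)

        def group_accuracy(n_avoid_fp, n_avoid_fn, trials=20_000):
            """Majority-vote accuracy of a group facing a binary choice where the
            'positive' option is truly better half of the time. Individuals who prefer
            avoiding false positives vote 'no' when uninformed; those who prefer
            avoiding false negatives vote 'yes' when uninformed. p_correct is the
            chance an individual reads the situation correctly (hypothetical value)."""
            p_correct = 0.7
            n = n_avoid_fp + n_avoid_fn
            correct = 0
            for _ in range(trials):
                truth = rng.integers(0, 2)                  # 1 = 'yes' is the right call
                informed = rng.random(n) < p_correct
                votes = np.where(informed, truth,
                                 np.r_[np.zeros(n_avoid_fp), np.ones(n_avoid_fn)])
                decision = int(votes.sum() * 2 > n)         # simple majority
                correct += decision == truth
            return correct / trials

        for split in [(10, 0), (5, 5), (0, 10)]:
            print(split, f"{group_accuracy(*split):.3f}")   # the 5/5 (maximum conflict) group scores highest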

  15. Kernel-Based Relevance Analysis with Enhanced Interpretability for Detection of Brain Activity Patterns

    Directory of Open Access Journals (Sweden)

    Andres M. Alvarez-Meza

    2017-10-01

    Full Text Available We introduce Enhanced Kernel-based Relevance Analysis (EKRA), which aims to support the automatic identification of brain activity patterns using electroencephalographic recordings. EKRA is a data-driven strategy that incorporates two kernel functions to take advantage of the available joint information, associating neural responses to a given stimulus condition. Regarding this, a Centered Kernel Alignment functional is adjusted to learn the linear projection that best discriminates the input feature set, optimizing the required free parameters automatically. Our approach is carried out in two scenarios: (i) feature selection by computing a relevance vector from extracted neural features to facilitate the physiological interpretation of a given brain activity task, and (ii) enhanced feature selection to perform an additional transformation of relevant features aiming to improve the overall identification accuracy. Accordingly, we provide an alternative feature relevance analysis strategy that improves the system performance while favoring data interpretability. For validation purposes, EKRA is tested on two well-known brain activity tasks: motor imagery discrimination and epileptic seizure detection. The obtained results show that the EKRA approach estimates a relevant representation space extracted from the provided supervised information, emphasizing the salient input features. As a result, our proposal outperforms the state-of-the-art methods regarding brain activity discrimination accuracy with the benefit of enhanced physiological interpretation about the task at hand.
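
    Centered Kernel Alignment, the functional underlying EKRA, measures the similarity between two centered Gram matrices. The sketch below computes it with numpy for a hypothetical feature kernel and label kernel; it omits EKRA's learned linear projection and relevance weighting.

        import numpy as np

        def centered_kernel_alignment(K, L):
            """Centered Kernel Alignment between two kernel (Gram) matrices:
            CKA(K, L) = <Kc, Lc>_F / (||Kc||_F * ||Lc||_F), where Kc, Lc are the
            doubly centered kernels. Values near 1 indicate strongly aligned kernels."""
            n = K.shape[0]
            H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
            Kc, Lc = H @ K @ H, H @ L @ H
            return np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc))

        # Hypothetical example: a linear kernel on EEG-derived features vs. a label kernel
        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 8))                 # 40 trials x 8 neural features
        y = rng.integers(0, 2, size=40)              # stimulus condition per trial
        K = X @ X.T                                  # feature kernel
        L = np.equal.outer(y, y).astype(float)       # label (ideal) kernel
        print(f"CKA = {centered_kernel_alignment(K, L):.3f}")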

  16. The impact of an early-morning radiologist work shift on the timeliness of communicating urgent imaging findings on portable chest radiography.

    Science.gov (United States)

    Kaewlai, Rathachai; Greene, Reginald E; Asrani, Ashwin V; Abujudeh, Hani H

    2010-09-01

    The aim of this study was to assess the potential impact of staggered radiologist work shifts on the timeliness of communicating urgent imaging findings that are detected on portable overnight chest radiography of hospitalized patients. The authors conducted a retrospective study that compared the interval between the acquisition and communication of urgent findings on portable overnight critical care chest radiography detected by an early-morning shift for radiologists (3 am to 11 am) with historical experience with a standard daytime shift (8 am to 5 pm) in the detection and communication of urgent findings in a similar patient population a year earlier. During a 4-month period, 6,448 portable chest radiographic studies were interpreted on the early-morning radiologist shift. Urgent findings requiring immediate communication were detected in 308 (4.8%) studies. The early-morning shift of radiologists, on average, communicated these findings 2 hours earlier compared with the historical control group, improving the timeliness of communication of urgent findings on portable chest radiography of hospitalized patients. Published by Elsevier Inc.

  17. Achieving Climate Change Absolute Accuracy in Orbit

    Science.gov (United States)

    Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.; hide

    2013-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système Internationale (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.

  18. The effect of pre-existing mental health comorbidities on the stage at diagnosis and timeliness of care of solid tumor malignances in a Veterans Affairs (VA) medical center

    International Nuclear Information System (INIS)

    Wadia, Roxanne J; Yao, Xiaopan; Deng, Yanhong; Li, Jia; Maron, Steven; Connery, Donna; Gunduz-Bruce, Handan; Rose, Michal G

    2015-01-01

    There are limited data on the impact of mental health comorbidities (MHC) on stage at diagnosis and timeliness of cancer care. Axis I MHC affect approximately 30% of Veterans receiving care within the Veterans Affairs (VA) system. The purpose of this study was to compare stage at diagnosis and timeliness of care of solid tumor malignancies among Veterans with and without MHC. We performed a retrospective analysis of 408 charts of Veterans with colorectal, urothelial, and head/neck cancer diagnosed and treated at VA Connecticut Health Care System (VACHS) between 2008 and 2011. We collected demographic data, stage at diagnosis, medical and mental health co-morbidities, treatments received, key time intervals, and number of appointments missed. The study was powered to assess for stage migration of 15–20% from Stage I/II to Stage III/IV. There was no significant change in stage distribution for patients with and without MHC in the entire study group (p = 0.9442) and in each individual tumor type. There were no significant differences in the time intervals from onset of symptoms to initiation of treatment between patients with and without MHC (p = 0.1135, 0.2042 and 0.2352, respectively). We conclude that at VACHS, stage at diagnosis for patients with colorectal, urothelial and head and neck cancers did not differ significantly between patients with and without MHC. Patients with MHC did not experience significant delays in care. Our study indicates that in a medical system in which mental health is integrated into routine care, patients with Axis I MHC do not experience delays in cancer care.

  19. Timeliness of abnormal screening and diagnostic mammography follow-up at facilities serving vulnerable women.

    Science.gov (United States)

    Goldman, L Elizabeth; Walker, Rod; Hubbard, Rebecca; Kerlikowske, Karla

    2013-04-01

    Whether timeliness of follow-up after abnormal mammography differs at facilities serving vulnerable populations, such as women with limited education or income, in rural areas, and racial/ethnic minorities is unknown. We examined receipt of diagnostic evaluation after abnormal mammography using 1998-2006 Breast Cancer Surveillance Consortium-linked Medicare claims. We compared whether time to recommended breast imaging or biopsy depended on whether women attended facilities serving vulnerable populations. We characterized a facility by the proportion of mammograms performed on women with limited education or income, in rural areas, or racial/ethnic minorities. We analyzed 30,874 abnormal screening examinations recommended for follow-up imaging across 142 facilities and 10,049 abnormal diagnostic examinations recommended for biopsy across 114 facilities. Women at facilities serving populations with less education or more racial/ethnic minorities had lower rates of follow-up imaging (4%-5% difference), and women at facilities serving more rural and low-income populations had lower rates of biopsy (4%-5% difference). Women at facilities serving vulnerable populations had longer times until biopsy than those at facilities serving nonvulnerable populations (21.6 vs. 15.6 d; 95% confidence interval for mean difference 4.1-7.7). The proportion of women receiving recommended imaging within 11 months and biopsy within 3 months varied across facilities (interquartile range, 85.5%-96.5% for imaging and 79.4%-87.3% for biopsy). Among Medicare recipients, follow-up rates were slightly lower at facilities serving vulnerable populations, and among those women who returned for diagnostic evaluation, time to follow-up was slightly longer at facilities that served vulnerable populations. Interventions should target variability in follow-up rates across facilities, and evaluate effectiveness particularly at facilities serving vulnerable populations.

  20. Research on E-Commerce Platform-Based Personalized Recommendation Algorithm

    Directory of Open Access Journals (Sweden)

    Zhijun Zhang

    2016-01-01

    Full Text Available To address data sparsity and timeliness in traditional E-commerce collaborative filtering recommendation algorithms, this paper exploits the fact that commodities in an E-commerce system belong to different levels: when constructing the user-item rating matrix, non-rated items are filled in by calculating the RF/IRF of the commodity's corresponding level. In the recommendation prediction stage, to account for the timeliness of the recommendation system, a time-weighted recommendation prediction formula is adopted to design a personalized recommendation model that integrates the level filling method and rating time. Experimental results on a real dataset verify the feasibility and validity of the algorithm, which achieves higher prediction accuracy than existing recommendation algorithms.
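
    A minimal sketch of a time-weighted, user-based collaborative-filtering prediction in the spirit described above (the exponential decay constant, Pearson similarity and toy matrices are assumptions for illustration, not the paper's exact formulation, and the level-based RF/IRF filling step is omitted):

        import numpy as np

        def predict_rating(target, item, R, T, t_now, decay=0.01):
            """Predict R[target, item] from other users, down-weighting old ratings.

            R: user x item rating matrix (0 = unrated); T: matrix of rating timestamps.
            """
            rated = (R[:, item] > 0) & (np.arange(R.shape[0]) != target)
            num = den = 0.0
            for u in np.where(rated)[0]:
                common = (R[target] > 0) & (R[u] > 0)
                if common.sum() < 2:
                    continue
                sim = np.corrcoef(R[target, common], R[u, common])[0, 1]   # Pearson similarity
                if np.isnan(sim) or sim <= 0:
                    continue
                w = sim * np.exp(-decay * (t_now - T[u, item]))            # exponential time decay
                num += w * R[u, item]
                den += w
            return num / den if den > 0 else np.nan

        R = np.array([[5, 3, 0, 4],
                      [4, 0, 4, 5],
                      [5, 4, 5, 0]], dtype=float)
        T = np.array([[10, 12,  0, 30],
                      [40,  0, 35, 42],
                      [ 5,  8, 50,  0]], dtype=float)
        print(predict_rating(target=0, item=2, R=R, T=T, t_now=60))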

  1. Accuracy limit of rigid 3-point water models

    Science.gov (United States)

    Izadi, Saeed; Onufriev, Alexey V.

    2016-08-01

    Classical 3-point rigid water models are most widely used due to their computational efficiency. Recently, we introduced a new approach to constructing classical rigid water models [S. Izadi et al., J. Phys. Chem. Lett. 5, 3863 (2014)], which permits a virtually exhaustive search for globally optimal model parameters in the sub-space that is most relevant to the electrostatic properties of the water molecule in liquid phase. Here we apply the approach to develop a 3-point Optimal Point Charge (OPC3) water model. OPC3 is significantly more accurate than the commonly used water models of the same class (TIP3P and SPCE) in reproducing a comprehensive set of liquid bulk properties, over a wide range of temperatures. Beyond bulk properties, we show that OPC3 predicts the intrinsic charge hydration asymmetry (CHA) of water — a characteristic dependence of hydration free energy on the sign of the solute charge — in very close agreement with experiment. Two other recent 3-point rigid water models, TIP3PFB and H2ODC, each developed by its own, completely different optimization method, approach the global accuracy optimum represented by OPC3 in both the parameter space and accuracy of bulk properties. Thus, we argue that an accuracy limit of practical 3-point rigid non-polarizable models has effectively been reached; remaining accuracy issues are discussed.

  2. Diagnostic accuracy of sonography for pleural effusion: systematic review

    Directory of Open Access Journals (Sweden)

    Alexandre Grimberg

    Full Text Available CONTEXT AND OBJECTIVE: The initial method for evaluating the presence of pleural effusion was chest radiography. Isolated studies have shown that sonography has greater accuracy than radiography for this diagnosis; however, no systematic reviews on this matter are available in the literature. Thus, the aim of this study was to evaluate the accuracy of sonography in detecting pleural effusion, by means of a systematic review of the literature. DESIGN AND SETTING: This was a systematic review with meta-analysis on accuracy studies. This study was conducted in the Department of Diagnostic Imaging and in the Brazilian Cochrane Center, Discipline of Emergency Medicine and Evidence-Based Medicine, Department of Medicine, Universidade Federal de São Paulo (Unifesp), São Paulo, Brazil. METHOD: The following databases were searched: Cochrane Library, Medline, Web of Science, Embase and Literatura Latino-Americana e do Caribe em Ciências da Saúde (Lilacs). The references of relevant studies were also screened for additional citations of interest. Studies in which the accuracy of sonography for detecting pleural effusion was tested, with an acceptable reference standard (computed tomography or thoracic drainage), were included. RESULTS: Four studies were included. All of them showed that sonography had high sensitivity, specificity and accuracy for detecting pleural effusions. The mean sensitivity was 93% (95% confidence interval, CI: 89% to 96%), and specificity was 96% (95% CI: 95% to 98%). CONCLUSIONS: In different populations and clinical settings, sonography showed consistently high sensitivity, specificity and accuracy for detecting fluid in the pleural space.

  3. Alpha power gates relevant information during working memory updating.

    Science.gov (United States)

    Manza, Peter; Hau, Chui Luen Vera; Leung, Hoi-Chung

    2014-04-23

    Human working memory (WM) is inherently limited, so we must filter out irrelevant information in our environment or our mind while retaining limited important relevant contents. Previous work suggests that neural oscillations in the alpha band (8-14 Hz) play an important role in inhibiting incoming distracting information during attention and selective encoding tasks. However, whether alpha power is involved in inhibiting no-longer-relevant content or in representing relevant WM content is still debated. To clarify this issue, we manipulated the amount of relevant/irrelevant information using a task requiring spatial WM updating while measuring neural oscillatory activity via EEG and localized current sources across the scalp using a surface Laplacian transform. An initial memory set of two, four, or six spatial locations was to be memorized over a delay until an updating cue was presented indicating that only one or three locations remained relevant for a subsequent recognition test. Alpha amplitude varied with memory maintenance and updating demands among a cluster of left frontocentral electrodes. Greater postcue alpha power was associated with the high relevant load conditions (six and four dots cued to reduce to three relevant) relative to the lower load conditions (four and two dots reduced to one). Across subjects, this difference in alpha power was correlated with condition differences in performance accuracy. In contrast, no significant effects of irrelevant load were observed. These findings demonstrate that, during WM updating, alpha power reflects maintenance of relevant memory contents rather than suppression of no-longer-relevant memory traces.

  4. Evaluating radiographers' diagnostic accuracy in screen-reading mammograms: what constitutes a quality study?

    International Nuclear Information System (INIS)

    Debono, Josephine C; Poulos, Ann E

    2015-01-01

    The aim of this study was to first evaluate the quality of studies investigating the diagnostic accuracy of radiographers as mammogram screen-readers and then to develop an adapted tool for determining the quality of screen-reading studies. A literature search was used to identify relevant studies and a quality evaluation tool constructed by combining the criteria for quality of Whiting, Rutjes, Dinnes et al. and Brealey and Westwood. This constructed tool was then applied to the studies and subsequently adapted specifically for use in evaluating quality in studies investigating diagnostic accuracy of screen-readers. Eleven studies were identified and the constructed tool applied to evaluate quality. This evaluation resulted in the identification of quality issues with the studies such as potential for bias, applicability of results, study conduct, reporting of the study and observer characteristics. An assessment of the applicability and relevance of the tool for this area of research resulted in adaptations to the criteria and the development of a tool specifically for evaluating diagnostic accuracy in screen-reading. This tool, with further refinement and rigorous validation, can make a significant contribution to promoting well-designed studies in this important area of research and practice.

  5. A Parallel Relational Database Management System Approach to Relevance Feedback in Information Retrieval.

    Science.gov (United States)

    Lundquist, Carol; Frieder, Ophir; Holmes, David O.; Grossman, David

    1999-01-01

    Describes a scalable, parallel, relational database-driven information retrieval engine. To support portability across a wide range of execution environments, all algorithms adhere to the SQL-92 standard. By incorporating relevance feedback algorithms, accuracy is enhanced over prior database-driven information retrieval efforts. Presents…

  6. Effects of accuracy motivation and anchoring on metacomprehension judgment and accuracy.

    Science.gov (United States)

    Zhao, Qin

    2012-01-01

    The current research investigates how accuracy motivation impacts anchoring and adjustment in metacomprehension judgment and how accuracy motivation and anchoring affect metacomprehension accuracy. Participants were randomly assigned to one of six conditions produced by the between-subjects factorial design involving accuracy motivation (incentive or none) and peer performance anchor (95%, 55%, or none). Two studies showed that accuracy motivation did not impact anchoring bias, but the adjustment-from-anchor process occurred. Accuracy incentive increased the anchor-judgment gap for the 95% anchor but not for the 55% anchor, which induced less certainty about the direction of adjustment. The findings offer support to the integrative theory of anchoring. Additionally, the two studies revealed a "power struggle" between accuracy motivation and anchoring in influencing metacomprehension accuracy. Accuracy motivation could improve metacomprehension accuracy in spite of the anchoring effect, but if the anchoring effect is too strong, it could overpower the motivation effect. The implications of the findings were discussed.

  7. International entrepreneurship research in emerging economies : A critical review and research agenda

    NARCIS (Netherlands)

    Kiss, A.N.; Danis, W.D.; Cavusgil, S.T.

    This article systematically reviews and critically examines international entrepreneurship research in emerging economies (IEEE research), and articulates its importance, timeliness and relevance in consideration of the growing influence of emerging markets in the global economy. A systematic

  8. On the automated assessment of nuclear reactor systems code accuracy

    International Nuclear Information System (INIS)

    Kunz, Robert F.; Kasmala, Gerald F.; Mahaffy, John H.; Murray, Christopher J.

    2002-01-01

    An automated code assessment program (ACAP) has been developed to provide quantitative comparisons between nuclear reactor systems (NRS) code results and experimental measurements. The tool provides a suite of metrics for quality of fit to specific data sets, and the means to produce one or more figures of merit (FOM) for a code, based on weighted averages of results from the batch execution of a large number of code-experiment and code-code data comparisons. Accordingly, this tool has the potential to significantly streamline the verification and validation (V and V) processes in NRS code development environments which are characterized by rapidly evolving software, many contributing developers and a large and growing body of validation data. In this paper, a survey of data conditioning and analysis techniques is summarized which focuses on their relevance to NRS code accuracy assessment. A number of methods are considered for their applicability to the automated assessment of the accuracy of NRS code simulations. A variety of data types and computational modeling methods are considered from a spectrum of mathematical and engineering disciplines. The goal of the survey was to identify needs, issues and techniques to be considered in the development of an automated code assessment procedure, to be used in United States Nuclear Regulatory Commission (NRC) advanced thermal-hydraulic (T/H) code consolidation efforts. The ACAP software was designed based in large measure on the findings of this survey. An overview of this tool is summarized and several NRS data applications are provided. The paper is organized as follows: The motivation for this work is first provided by background discussion that summarizes the relevance of this subject matter to the nuclear reactor industry. Next, the spectrum of NRS data types is classified into categories, in order to provide a basis for assessing individual comparison methods. Then, a summary of the survey is provided, where each
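
    A schematic of how per-comparison metrics could be combined into a weighted figure of merit, in the spirit of the tool described (the metric choices and weights are illustrative assumptions, not ACAP's actual suite):

        import numpy as np

        def metrics(code, exp):
            """Quality-of-fit metrics for one code-vs-experiment comparison."""
            code, exp = np.asarray(code, float), np.asarray(exp, float)
            rmse = np.sqrt(np.mean((code - exp) ** 2))
            scale = np.mean(np.abs(exp)) or 1.0                 # avoid division by zero
            return {"norm_rmse": rmse / scale,
                    "corr": float(np.corrcoef(code, exp)[0, 1])}

        def figure_of_merit(comparisons, weights=None):
            """Weighted combination over all comparisons; higher is better with these signs."""
            weights = weights or {"norm_rmse": -0.5, "corr": 0.5}
            per_case = []
            for code, exp in comparisons:
                m = metrics(code, exp)
                per_case.append(sum(w * m[k] for k, w in weights.items()))
            return float(np.mean(per_case))

        x = np.linspace(0, 1, 50)
        print(round(figure_of_merit([(x + 0.02, x), (0.9 * x, x)]), 3))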

  9. Effects of Objective and Subjective Competence on the Reliability of Crowdsourced Relevance Judgments

    Science.gov (United States)

    Samimi, Parnia; Ravana, Sri Devi; Webber, William; Koh, Yun Sing

    2017-01-01

    Introduction: Despite the popularity of crowdsourcing, the reliability of crowdsourced output has been questioned since crowdsourced workers display varied degrees of attention, ability and accuracy. It is important, therefore, to understand the factors that affect the reliability of crowdsourcing. In the context of producing relevance judgments,…

  10. The Web as a Reference Tool: Comparisons with Traditional Sources.

    Science.gov (United States)

    Janes, Joseph; McClure, Charles R.

    1999-01-01

    This preliminary study suggests that the same level of timeliness and accuracy can be obtained for answers to reference questions using resources in freely available World Wide Web sites as with traditional print-based resources. Discusses implications for library collection development, new models of consortia, training needs, and costing and…

  11. Comparative Dose Accuracy of Durable and Patch Insulin Infusion Pumps

    Science.gov (United States)

    Jahn, Luis G.; Capurro, Jorge J.; Levy, Brian L.

    2013-01-01

    Background: As all major insulin pump manufacturers comply with the international infusion pump standard EN 60601-2-24:1998, there may be a general assumption that all pumps are equal in insulin-delivery accuracy. This research investigates single-dose and averaged-dose accuracy of incremental basal deliveries for one patch model and three durable models of insulin pumps. Method: For each pump model, discrete single doses delivered during 0.5 U/h basal rate infusion over a 20 h period were measured using a time-stamped microgravimetric system. Dose accuracy was analyzed by comparing single doses and time-averaged doses to specific accuracy thresholds (±5% to ±30%). Results: The percentage of single doses delivered outside accuracy thresholds of ±5%, ±10%, and ±20% were as follows: Animas OneTouch® Ping® (43.2%, 14.3%, and 1.8%, respectively), Roche Accu-Chek® Combo (50.6%, 24.4%, and 5.5%), Medtronic Paradigm® RevelTM/VeoTM (54.2%, 26.7%, and 6.6%), and Insulet OmniPod® (79.1%, 60.5%, and 34.9%). For 30 min, 1 h, and 2 h averaging windows, the percentage of doses delivered outside a ±15% accuracy were as follows: OneTouch Ping (1.0%, 0.4%, and 0%, respectively), Accu-Chek Combo (4.2%, 3.5%, and 3.1%), Paradigm Revel/Veo (3.9%, 3.1%, and 2.2%), and OmniPod (33.9%, 19.9%, and 10.3%). Conclusions: This technical evaluation demonstrates significant differences in single-dose and averaged-dose accuracy among the insulin pumps tested. Differences in dose accuracy were most evident between the patch pump model and the group of durable pump models. Of the pumps studied, the Animas OneTouch Ping demonstrated the best single-dose and averaged-dose accuracy. Further research on the clinical relevance of these findings is warranted. PMID:23911184
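
    A sketch of the kind of threshold analysis reported above, applied to simulated dose data (the nominal dose, noise level and window sizes are assumptions; real analyses would use the time-stamped microgravimetric measurements):

        import numpy as np

        def pct_outside(doses, nominal, thresholds=(0.05, 0.10, 0.20)):
            """Percentage of doses whose relative error exceeds each accuracy threshold."""
            err = np.abs(np.asarray(doses) / nominal - 1.0)
            return {f"±{int(t * 100)}%": round(100 * np.mean(err > t), 1) for t in thresholds}

        def windowed(doses, window):
            """Average consecutive doses over fixed-size windows (assumes equal spacing)."""
            d = np.asarray(doses, float)
            n = len(d) // window
            return d[:n * window].reshape(n, window).mean(axis=1)

        rng = np.random.default_rng(0)
        nominal = 0.05                                   # U per pulse at 0.5 U/h basal, 6-min pulses (assumed)
        doses = rng.normal(nominal, 0.008, size=200)     # simulated single-dose measurements
        print("single doses:", pct_outside(doses, nominal))
        print("1 h windows :", pct_outside(windowed(doses, 10), nominal, thresholds=(0.15,)))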

  12. EOG feature relevance determination for microsleep detection

    Directory of Open Access Journals (Sweden)

    Golz Martin

    2017-09-01

    Full Text Available Automatic relevance determination (ARD) was applied to two-channel EOG recordings for microsleep event (MSE) recognition. The 10 s immediately before MSE and also before counterexamples of fatigued, but attentive driving were analysed. Two types of signal features were extracted: the maximum cross correlation (MaxCC) and logarithmic power spectral densities (PSD) averaged in spectral bands of 0.5 Hz width ranging between 0 and 8 Hz. Generalised learning vector quantisation (GRLVQ) was used as the ARD method to show the potential of feature reduction. This is compared to support-vector machines (SVM), in which feature reduction plays a much smaller role. Cross validation yielded mean normalised relevancies of PSD features in the range of 1.6 – 4.9 % and 1.9 – 10.4 % for horizontal and vertical EOG, respectively. MaxCC relevancies were 0.002 – 0.006 % and 0.002 – 0.06 %, respectively. This shows that PSD features of vertical EOG are indispensable, whereas MaxCC can be neglected. Mean classification accuracies were estimated at 86.6 ± 1.3 % and 92.3 ± 0.2 % for GRLVQ and SVM, respectively. GRLVQ permits objective feature reduction by inclusion of all processing stages, but is not as accurate as SVM.
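
    A sketch of the two feature types described, computed on synthetic two-channel EOG (the sampling rate, Welch segment length and synthetic signals are assumptions for illustration):

        import numpy as np
        from scipy.signal import welch

        fs = 128                                       # assumed sampling rate [Hz]
        rng = np.random.default_rng(0)
        heog = rng.normal(size=10 * fs)                # 10 s horizontal EOG (synthetic)
        veog = 0.5 * heog + rng.normal(size=10 * fs)   # 10 s vertical EOG (synthetic)

        def log_psd_bands(x, fs, band_width=0.5, fmax=8.0):
            """Log power spectral density averaged in 0.5 Hz bands between 0 and fmax."""
            f, pxx = welch(x, fs=fs, nperseg=4 * fs)
            edges = np.arange(0, fmax + band_width, band_width)
            return np.array([np.log(pxx[(f >= lo) & (f < hi)].mean())
                             for lo, hi in zip(edges[:-1], edges[1:])])

        def max_cross_corr(x, y):
            """Maximum of the normalised cross-correlation between two channels."""
            x = (x - x.mean()) / x.std()
            y = (y - y.mean()) / y.std()
            return np.correlate(x, y, mode="full").max() / len(x)

        features = np.concatenate([log_psd_bands(heog, fs), log_psd_bands(veog, fs),
                                   [max_cross_corr(heog, veog)]])
        print(features.shape)                          # 16 + 16 + 1 features per 10 s segment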

  14. Target Price Accuracy

    Directory of Open Access Journals (Sweden)

    Alexander G. Kerl

    2011-04-01

    Full Text Available This study analyzes the accuracy of forecasted target prices within analysts’ reports. We compute a measure for target price forecast accuracy that evaluates the ability of analysts to exactly forecast the ex-ante (unknown) 12-month stock price. Furthermore, we determine factors that explain this accuracy. Target price accuracy is negatively related to analyst-specific optimism and stock-specific risk (measured by volatility and price-to-book ratio). However, target price accuracy is positively related to the level of detail of each report, company size and the reputation of the investment bank. The potential conflicts of interests between an analyst and a covered company do not bias forecast accuracy.

  15. A multiple relevance feedback strategy with positive and negative models.

    Directory of Open Access Journals (Sweden)

    Yunlong Ma

    Full Text Available A commonly used strategy to improve search accuracy is through feedback techniques. Most existing work on feedback relies on positive information and has been extensively studied in information retrieval. However, when a query topic is difficult and the results from the first-pass retrieval are very poor, it is impossible to extract enough useful terms from a few positive documents. Therefore, the positive feedback strategy is incapable of improving retrieval in this situation. Conversely, there is a relatively large number of negative documents at the top of the result list, and several recent studies have confirmed that negative feedback is an important and useful way of adapting to this scenario. In this paper, we consider a scenario in which the search results are so poor that there are at most three relevant documents in the top twenty documents. We then conduct a novel study of multiple strategies for relevance feedback using both positive and negative examples from the first-pass retrieval to improve retrieval accuracy for such difficult queries. Experimental results on TREC collections show that the proposed language-model-based multiple-model feedback method is generally more effective than both the baseline method and the methods using only a positive or negative model.
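
    A minimal sketch of combining positive and negative feedback over unigram term distributions (a simplified interpolation used only for illustration; the paper's language-model formulation and parameter settings differ):

        from collections import Counter

        def term_dist(docs):
            """Maximum-likelihood unigram model over a set of documents."""
            counts = Counter(w for d in docs for w in d.lower().split())
            total = sum(counts.values())
            return {w: c / total for w, c in counts.items()}

        def feedback_query_model(query_model, pos_docs, neg_docs, alpha=0.5, beta=0.3):
            """Move the query model toward positive documents and away from negative ones."""
            pos, neg = term_dist(pos_docs), term_dist(neg_docs)
            vocab = set(query_model) | set(pos) | set(neg)
            raw = {w: max((1 - alpha) * query_model.get(w, 0.0)
                          + alpha * pos.get(w, 0.0)
                          - beta * neg.get(w, 0.0), 0.0) for w in vocab}
            z = sum(raw.values()) or 1.0
            return {w: p / z for w, p in raw.items() if p > 0}

        q = {"jaguar": 1.0}
        pos = ["jaguar big cat habitat rainforest"]
        neg = ["jaguar car dealership price", "used jaguar car review"]
        updated = feedback_query_model(q, pos, neg)
        print(sorted(updated.items(), key=lambda kv: -kv[1])[:5])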

  16. Using the Characteristics of Documents, Users and Tasks to Predict the Situational Relevance of Health Web Documents

    Directory of Open Access Journals (Sweden)

    Melinda Oroszlányová

    2017-09-01

    Full Text Available Relevance is usually estimated by search engines using document content, disregarding the user behind the search and the characteristics of the task. In this work, we look at relevance as framed in a situational context, calling it situational relevance, and analyze whether it is possible to predict it using document, user and task characteristics. Using an existing dataset composed of health web documents, relevance judgments for information needs, and user and task characteristics, we build a multivariate prediction model for situational relevance. Our model has an accuracy of 77.17%. Our findings provide insights into features that could improve the estimation of relevance by search engines, helping to conciliate the systemic and situational views of relevance. In the near future we will work on the automatic assessment of document, user and task characteristics.
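
    A sketch of a multivariate prediction model of this kind, using document, user and task characteristics to predict a binary relevance judgment (the feature names, simulated data and logistic-regression choice are illustrative assumptions, not the study's actual variables or model):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n = 400
        # hypothetical characteristics: document readability, user health literacy, task difficulty
        X = np.column_stack([rng.uniform(0, 1, n),      # document readability score
                             rng.integers(1, 6, n),     # user health literacy (1-5)
                             rng.integers(1, 4, n)])    # task difficulty (1-3)
        logit = 2.0 * X[:, 0] + 0.6 * X[:, 1] - 0.8 * X[:, 2] - 1.0
        y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)   # simulated relevance judgment

        model = LogisticRegression(max_iter=1000)
        print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean().round(3))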

  17. Diagnostic accuracy of sonoelastography in different diseases

    Directory of Open Access Journals (Sweden)

    Iqra Manzoor

    2018-03-01

    Full Text Available The objective of this study was to evaluate the diagnostic accuracy of sonoelastography in patients in primary and secondary health care settings. Google Scholar, PubMed, Medline, Medscape, Wikipedia and NCBI were searched in October 2017 for all original studies and review articles to identify the relevant material. Two reviewers independently selected articles for evaluation of the diagnostic accuracy of sonoelastography in different diseases based on titles and abstracts retrieved by the literature search. The accuracy of sonoelastography in different diseases was used as the index test, while B-mode sonography, micro pure imaging, surgery and histological findings were used as reference tests. Superficial lymph nodes, neck nodules, malignancy in thyroid nodules, benign and malignant cervical lymph nodes, thyroid nodules, prostate carcinoma, benign and malignant breast abnormalities, liver diseases, parotid and salivary gland masses, pancreatic masses, musculoskeletal diseases and renal disorders were target conditions. The data extracted by the two reviewers concerning selected study characteristics and results were presented in tables and figures. In total, 46 studies were found for breast masses, lymph nodes, prostate carcinoma, liver diseases, salivary and parotid gland diseases, pancreatic masses, musculoskeletal diseases and renal diseases, and the overall sensitivity of sonoelastography in diagnosing all these diseases was 83.14% while specificity was 81.41%. This literature review demonstrates that sonoelastography is characterized by high sensitivity and specificity in diagnosing different disorders of the body.

  18. An explanatory study of the use of e-mail investor communication by South African listed companies

    Directory of Open Access Journals (Sweden)

    Roelof Baard

    2016-12-01

    Objectives: The objectives of the study were to measure the responsiveness, timeliness and relevance of companies’ responses to e-mail requests, and to test for the determinants (size, market-to-book ratio, profitability, leverage and liquidity) thereof. Method: The mystery investor approach and a content analysis were used to study the e-mail handling performance of companies. The associations between company-specific characteristics were statistically tested. Results: It was found that the e-mail handling performance of companies in this study was poor compared with previous studies. Significant relationships between company size and responsiveness and relevance, and between market-to-book ratio and relevance were reported, as well as between the contact method used to request information and relevance and the use of social media and timeliness. Conclusion: Specific areas where companies could improve their investor communications were identified. The need for further research was discussed to explain some of the relationships found, as well as those not found, in contrast to what was expected. Future research is warranted to examine the relationship between the e-mail handling performance of companies and information asymmetry and the cost of equity of companies.

  19. Why relevance theory is relevant for lexicography

    DEFF Research Database (Denmark)

    Bothma, Theo; Tarp, Sven

    2014-01-01

    This article starts by providing a brief summary of relevance theory in information science in relation to the function theory of lexicography, explaining the different types of relevance, viz. objective system relevance and the subjective types of relevance, i.e. topical, cognitive, situational...... that is very important for lexicography as well as for information science, viz. functional relevance. Since all lexicographic work is ultimately aimed at satisfying users’ information needs, the article then discusses why the lexicographer should take note of all these types of relevance when planning a new...... dictionary project, identifying new tasks and responsibilities of the modern lexicographer. The article furthermore discusses how relevance theory impacts on teaching dictionary culture and reference skills. By integrating insights from lexicography and information science, the article contributes to new...

  20. ACCURACY AND RELIABILITY AS CRITERIA OF INFORMATIVENESS IN THE NEWS STORY

    Directory of Open Access Journals (Sweden)

    Melnikova Ekaterina Aleksandrovna

    2014-12-01

    Full Text Available The article clarifies the meaning of the terms accuracy and reliability as applied to the news story and offers a researcher's approach to obtaining objective data that help verify the linguistic means through which accuracy and reliability are realised in the informative structure of the text. The accuracy of the news story is defined as a high degree of correspondence between the reported event and the language representation of its constituents; reliability is viewed as the originality of the news story, established by introducing citations and trustworthy sources of information into the text. Basing the research on an event nominative density identification method, the author compiled nominative charts of 115 news story texts collected from the websites of the BBC and CNN media corporations, distinguished qualitative and quantitative markers of accuracy and reliability in the news story text, and confirmed that accuracy is achieved through terminological clarity in naming event constituents, thematic links between words, and the presence of onyms that help identify the characteristics of the referent event in depth. The reliability of the text rests on eyewitness accounts, quotations, and references to sources considered trustworthy. A close examination of the associations between accuracy, reliability and informing strategies in digital news networks allowed the author to identify two variants of information delivery that differ in their communicative and pragmatic functions: a developed variant (which reports both major and minor details of an event) and a truncated variant (which gives only some details, thereby raising interest in the event and urging the reader to open the full story).

  1. Diagnostic accuracy of postmortem imaging vs autopsy-A systematic review.

    Science.gov (United States)

    Eriksson, Anders; Gustafsson, Torfinn; Höistad, Malin; Hultcrantz, Monica; Jacobson, Stella; Mejare, Ingegerd; Persson, Anders

    2017-04-01

    Background: Postmortem imaging has been used for more than a century as a complement to medico-legal autopsies. The technique has also emerged as a possible alternative to compensate for the continuous decline in the number of clinical autopsies. To evaluate the diagnostic accuracy of postmortem imaging for various types of findings, we performed this systematic literature review. Data sources: The literature search was performed in the databases PubMed, Embase and Cochrane Library through January 7, 2015. Relevant publications were assessed for risk of bias using the QUADAS tool and were classified as low, moderate or high risk of bias according to pre-defined criteria. Autopsy and/or histopathology were used as reference standard. Findings: The search generated 2600 abstracts, of which 340 were assessed as possibly relevant and read in full-text. After further evaluation 71 studies were finally included, of which 49 were assessed as having high risk of bias and 22 as moderate risk of bias. Due to considerable heterogeneity - in populations, techniques, analyses and reporting - of included studies it was impossible to combine data to get a summary estimate of the diagnostic accuracy of the various findings. Individual studies indicate, however, that imaging techniques might be useful for determining organ weights, and that the techniques seem superior to autopsy for detecting gas. Conclusions and Implications: In general, based on the current scientific literature, it was not possible to determine the diagnostic accuracy of postmortem imaging and its usefulness in conjunction with, or as an alternative to autopsy. To correctly determine the usefulness of postmortem imaging, future studies need improved planning, improved methodological quality and larger materials, preferentially obtained from multi-center studies. Copyright © 2016. Published by Elsevier B.V.

  2. A pre-admission program for underrepresented minority and disadvantaged students: application, acceptance, graduation rates and timeliness of graduating from medical school.

    Science.gov (United States)

    Strayhorn, G

    2000-04-01

    To determine whether students' performances in a pre-admission program predicted whether participants would (1) apply to medical school, (2) get accepted, and (3) graduate. Using prospectively collected data from participants in the University of North Carolina at Chapel Hill's Medical Education Development Program (MEDP) and data from the Association of American Medical Colleges Student and Applicant Information Management System, the author identified 371 underrepresented minority (URM) students who were full-time participants and completed the program between 1984 and 1989, prior to their acceptance into medical school. Logistic regression analysis was used to determine whether MEDP performance significantly predicted (after statistically controlling for traditional predictors of these outcomes) the proportions of URM participants who applied to medical school and were accepted, the timeliness of graduating, and the proportion graduating. Odds ratios with 95% confidence intervals were calculated to determine the associations between the independent and outcome variables. In separate logistic regression models, MEDP performance predicted the study's outcomes after statistically controlling for traditional predictors with 95% confidence intervals. Pre-admission programs with similar outcomes can improve the diversity of the physician workforce and the access to health care for underrepresented minority and economically disadvantaged populations.

  3. Diagnostic accuracy of general physician versus emergency medicine specialist in interpretation of chest X-ray suspected for iatrogenic pneumothorax: a brief report

    Directory of Open Access Journals (Sweden)

    Ghane Mohammad-reza

    2012-03-01

    Conclusion: These findings indicate that the diagnostic accuracy of emergency medicine specialists is significantly higher than those of general physicians. The diagnostic accuracy of both physician groups was higher than the values in similar studies that signifies the role of relevant training given in the emergency departments of the Hospital.

  4. Global discriminative learning for higher-accuracy computational gene prediction.

    Directory of Open Access Journals (Sweden)

    Axel Bernal

    2007-03-01

    Full Text Available Most ab initio gene predictors use a probabilistic sequence model, typically a hidden Markov model, to combine separately trained models of genomic signals and content. By combining separate models of relevant genomic features, such gene predictors can exploit small training sets and incomplete annotations, and can be trained fairly efficiently. However, that type of piecewise training does not optimize prediction accuracy and has difficulty in accounting for statistical dependencies among different parts of the gene model. With genomic information being created at an ever-increasing rate, it is worth investigating alternative approaches in which many different types of genomic evidence, with complex statistical dependencies, can be integrated by discriminative learning to maximize annotation accuracy. Among discriminative learning methods, large-margin classifiers have become prominent because of the success of support vector machines (SVM) in many classification tasks. We describe CRAIG, a new program for ab initio gene prediction based on a conditional random field model with semi-Markov structure that is trained with an online large-margin algorithm related to multiclass SVMs. Our experiments on benchmark vertebrate datasets and on regions from the ENCODE project show significant improvements in prediction accuracy over published gene predictors that use intrinsic features only, particularly at the gene level and on genes with long introns.

  5. Accuracy of clinical coding for procedures in oral and maxillofacial surgery.

    Science.gov (United States)

    Khurram, S A; Warner, C; Henry, A M; Kumar, A; Mohammed-Ali, R I

    2016-10-01

    Clinical coding has important financial implications, and discrepancies in the assigned codes can directly affect the funding of a department and hospital. Over the last few years, numerous oversights have been noticed in the coding of oral and maxillofacial (OMF) procedures. To establish the accuracy and completeness of coding, we retrospectively analysed the records of patients during two time periods: March to May 2009 (324 patients), and January to March 2014 (200 patients). Two investigators independently collected and analysed the data to ensure accuracy and remove bias. A large proportion of operations were not assigned all the relevant codes, and only 32% - 33% were correct in both cycles. To our knowledge, this is the first reported audit of clinical coding in OMFS, and it highlights serious shortcomings that have substantial financial implications. Better input by the surgical team and improved communication between the surgical and coding departments will improve accuracy. Copyright © 2016 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  6. Evaluation of Callable Bonds: Finite Difference Methods, Stability and Accuracy.

    OpenAIRE

    Buttler, Hans-Jurg

    1995-01-01

    The purpose of this paper is to evaluate numerically the semi-American callable bond by means of finite difference methods. This study implies three results. First, the numerical error is greater for the callable bond price than for the straight bond price, and too large for real applications. Secondly, the numerical accuracy of the callable bond price computed for the relevant range of interest rates depends entirely on the finite difference scheme which is chosen for the boundary points. Thi...

  7. Effects of presentation modality on team awareness and choice accuracy in a simulated police team task

    NARCIS (Netherlands)

    Streefkerk, J.W.; Wiering, C.; Esch van-Bussemakers, M.; Neerincx, M.

    2008-01-01

    Team awareness is important when asking team members for assistance, for example in the police domain. This paper investigates how presentation modality (visual or auditory) of relevant team information and communication influences team awareness and choice accuracy in a collaborative team task. An

  8. Natural Gas Deliverability Task Force report: A joint FERC/DOE project

    International Nuclear Information System (INIS)

    1992-09-01

    The purpose of the FERC/DOE Natural Gas Deliverability Task Force Report was threefold: (1) to review current deliverability data for utility, accuracy, and timeliness; (2) to identify mechanisms for closing significant gaps in information resulting from changing market structures; and (3) to ensure that technologies are available to meet the needs of the emerging, competitive natural gas industry

  9. Cultural relevance of a fruit and vegetable food frequency questionnaire.

    Science.gov (United States)

    Paisley, Judy; Greenberg, Marlene; Haines, Jess

    2005-01-01

    Canada's multicultural population poses challenges for culturally competent nutrition research and practice. In this qualitative study, the cultural relevance of a widely used semi-quantitative fruit and vegetable food frequency questionnaire (FFQ) was examined among convenience samples of adults from Toronto's Cantonese-, Mandarin-, Portuguese-, and Vietnamese-speaking communities. Eighty-nine participants were recruited through community-based organizations, programs, and advertisements to participate in semi-structured interviews moderated in their native language. Data from the interviews were translated into English and transcribed for analysis using the constant comparative approach. Four main themes emerged from the analysis: the cultural relevance of the foods listed on the FFQ, words with multiple meanings, the need for culturally appropriate portion-size prompts, and the telephone survey as a Western concept. This research highlights the importance of investing resources to develop culturally relevant dietary assessment tools that ensure dietary assessment accuracy and, more important, reduce ethnocentric biases in food and nutrition research and practice. The transferability of findings must be established through further research.

  10. Recommended reporting standards for test accuracy studies of infectious diseases of finfish, amphibians, molluscs and crustaceans: the STRADAS-aquatic checklist

    Science.gov (United States)

    Gardner, Ian A; Whittington, Richard J; Caraguel, Charles G B; Hick, Paul; Moody, Nicholas J G; Corbeil, Serge; Garver, Kyle A.; Warg, Janet V.; Arzul, Isabelle; Purcell, Maureen; St. J. Crane, Mark; Waltzek, Thomas B.; Olesen, Niels J; Lagno, Alicia Gallardo

    2016-01-01

    Complete and transparent reporting of key elements of diagnostic accuracy studies for infectious diseases in cultured and wild aquatic animals benefits end-users of these tests, enabling the rational design of surveillance programs, the assessment of test results from clinical cases and comparisons of diagnostic test performance. Based on deficiencies in the Standards for Reporting of Diagnostic Accuracy (STARD) guidelines identified in a prior finfish study (Gardner et al. 2014), we adapted the Standards for Reporting of Animal Diagnostic Accuracy Studies—paratuberculosis (STRADAS-paraTB) checklist of 25 reporting items to increase their relevance to finfish, amphibians, molluscs, and crustaceans and provided examples and explanations for each item. The checklist, known as STRADAS-aquatic, was developed and refined by an expert group of 14 transdisciplinary scientists with experience in test evaluation studies using field and experimental samples, in operation of reference laboratories for aquatic animal pathogens, and in development of international aquatic animal health policy. The main changes to the STRADAS-paraTB checklist were to nomenclature related to the species, the addition of guidelines for experimental challenge studies, and the designation of some items as relevant only to experimental studies and ante-mortem tests. We believe that adoption of these guidelines will improve reporting of primary studies of test accuracy for aquatic animal diseases and facilitate assessment of their fitness-for-purpose. Given the importance of diagnostic tests to underpin the Sanitary and Phytosanitary agreement of the World Trade Organization, the principles outlined in this paper should be applied to other World Organisation for Animal Health (OIE)-relevant species.

  11. Identifying noncoding risk variants using disease-relevant gene regulatory networks.

    Science.gov (United States)

    Gao, Long; Uzun, Yasin; Gao, Peng; He, Bing; Ma, Xiaoke; Wang, Jiahui; Han, Shizhong; Tan, Kai

    2018-02-16

    Identifying noncoding risk variants remains a challenging task. Because noncoding variants exert their effects in the context of a gene regulatory network (GRN), we hypothesize that explicit use of disease-relevant GRNs can significantly improve the inference accuracy of noncoding risk variants. We describe Annotation of Regulatory Variants using Integrated Networks (ARVIN), a general computational framework for predicting causal noncoding variants. It employs a set of novel regulatory network-based features, combined with sequence-based features to infer noncoding risk variants. Using known causal variants in gene promoters and enhancers in a number of diseases, we show ARVIN outperforms state-of-the-art methods that use sequence-based features alone. Additional experimental validation using reporter assay further demonstrates the accuracy of ARVIN. Application of ARVIN to seven autoimmune diseases provides a holistic view of the gene subnetwork perturbed by the combinatorial action of the entire set of risk noncoding mutations.

  12. Automatic classification and detection of clinically relevant images for diabetic retinopathy

    Science.gov (United States)

    Xu, Xinyu; Li, Baoxin

    2008-03-01

    We proposed a novel approach to automatic classification of Diabetic Retinopathy (DR) images and retrieval of clinically-relevant DR images from a database. Given a query image, our approach first classifies the image into one of the three categories: microaneurysm (MA), neovascularization (NV) and normal, and then it retrieves DR images that are clinically-relevant to the query image from an archival image database. In the classification stage, the query DR images are classified by the Multi-class Multiple-Instance Learning (McMIL) approach, where images are viewed as bags, each of which contains a number of instances corresponding to non-overlapping blocks, and each block is characterized by low-level features including color, texture, histogram of edge directions, and shape. McMIL first learns a collection of instance prototypes for each class that maximizes the Diverse Density function using the Expectation-Maximization algorithm. A nonlinear mapping is then defined using the instance prototypes and maps every bag to a point in a new multi-class bag feature space. Finally a multi-class Support Vector Machine is trained in the multi-class bag feature space. In the retrieval stage, we retrieve images from the archival database that bear the same label as the query image and that are the top K nearest neighbors of the query image in terms of similarity in the multi-class bag feature space. The classification approach achieves high classification accuracy, and the retrieval of clinically-relevant images not only facilitates utilization of the vast amount of hidden diagnostic knowledge in the database, but also improves the efficiency and accuracy of DR lesion diagnosis and assessment.
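
    A simplified sketch of the bag-to-point mapping described above: each image (bag) of block features (instances) is embedded by its maximum similarity to a set of instance prototypes, and a multi-class SVM is trained on the embeddings (here the prototypes are plain k-means centres rather than Diverse Density optima, and all data are synthetic):

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)

        def make_bag(label, n_blocks=12, dim=6):
            """Synthetic bag: block features drawn around a class-specific centre."""
            centre = np.zeros(dim)
            centre[label] = 2.0
            return centre + rng.normal(size=(n_blocks, dim))

        labels = rng.integers(0, 3, size=150)
        bags = [make_bag(l) for l in labels]

        # stand-in prototypes: k-means centres over all instances
        prototypes = KMeans(n_clusters=9, n_init=10, random_state=0).fit(np.vstack(bags)).cluster_centers_

        def embed(bag, prototypes, gamma=0.5):
            """Bag-level feature: max similarity of any instance to each prototype."""
            d2 = ((bag[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
            return np.exp(-gamma * d2).max(axis=0)

        X = np.array([embed(b, prototypes) for b in bags])
        clf = SVC(kernel="rbf").fit(X[:100], labels[:100])
        print("held-out accuracy:", round(clf.score(X[100:], labels[100:]), 3))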

  13. Creation of reliable relevance judgments in information retrieval systems evaluation experimentation through crowdsourcing: a review.

    Science.gov (United States)

    Samimi, Parnia; Ravana, Sri Devi

    2014-01-01

    Test collections are used to evaluate information retrieval systems in laboratory-based evaluation experimentation. In a classic setting, generating relevance judgments involves human assessors and is a costly and time-consuming task. Researchers and practitioners are still being challenged in performing reliable and low-cost evaluation of retrieval systems. Crowdsourcing as a novel method of data acquisition is broadly used in many research fields. It has been proven that crowdsourcing is an inexpensive and quick solution as well as a reliable alternative for creating relevance judgments. One crowdsourcing application in IR is judging the relevance of query-document pairs. In order to have a successful crowdsourcing experiment, the relevance judgment tasks should be designed precisely to emphasize quality control. This paper is intended to explore different factors that influence the accuracy of relevance judgments accomplished by workers and how to improve the reliability of judgments in crowdsourcing experiments.

  14. Application of Multilabel Learning Using the Relevant Feature for Each Label in Chronic Gastritis Syndrome Diagnosis

    Science.gov (United States)

    Liu, Guo-Ping; Yan, Jian-Jun; Wang, Yi-Qin; Fu, Jing-Jing; Xu, Zhao-Xia; Guo, Rui; Qian, Peng

    2012-01-01

    Background. In Traditional Chinese Medicine (TCM), most of the algorithms are used to solve problems of syndrome diagnosis that only focus on one syndrome, that is, single label learning. However, in clinical practice, patients may simultaneously have more than one syndrome, which has its own symptoms (signs). Methods. We employed a multilabel learning using the relevant feature for each label (REAL) algorithm to construct a syndrome diagnostic model for chronic gastritis (CG) in TCM. REAL combines feature selection methods to select the significant symptoms (signs) of CG. The method was tested on 919 patients using the standard scale. Results. The highest prediction accuracy was achieved when 20 features were selected. The features selected with the information gain were more consistent with the TCM theory. The lowest average accuracy was 54% using multi-label neural networks (BP-MLL), whereas the highest was 82% using REAL for constructing the diagnostic model. For coverage, hamming loss, and ranking loss, the values obtained using the REAL algorithm were the lowest at 0.160, 0.142, and 0.177, respectively. Conclusion. REAL extracts the relevant symptoms (signs) for each syndrome and improves its recognition accuracy. Moreover, the studies will provide a reference for constructing syndrome diagnostic models and guide clinical practice. PMID:22719781
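
    A sketch of information-gain scoring of binary symptoms against one syndrome label (toy data; REAL itself handles multiple labels and selects a separate feature subset per label):

        import numpy as np

        def entropy(y):
            """Shannon entropy of a binary label vector."""
            p = np.bincount(y, minlength=2) / len(y)
            p = p[p > 0]
            return -(p * np.log2(p)).sum()

        def information_gain(x, y):
            """Information gain of one binary symptom x with respect to label y."""
            ig = entropy(y)
            for v in (0, 1):
                mask = x == v
                if mask.any():
                    ig -= mask.mean() * entropy(y[mask])
            return ig

        rng = np.random.default_rng(0)
        n, n_symptoms = 300, 15
        X = rng.integers(0, 2, (n, n_symptoms))                               # presence/absence of symptoms
        y = ((X[:, 3] & X[:, 7]) | (rng.uniform(size=n) < 0.05)).astype(int)  # toy syndrome label

        gains = np.array([information_gain(X[:, j], y) for j in range(n_symptoms)])
        top = np.argsort(gains)[::-1][:5]
        print("most relevant symptoms:", top, np.round(gains[top], 3))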

  15. Application of Multilabel Learning Using the Relevant Feature for Each Label in Chronic Gastritis Syndrome Diagnosis

    Directory of Open Access Journals (Sweden)

    Guo-Ping Liu

    2012-01-01

    Full Text Available Background. In Traditional Chinese Medicine (TCM), most of the algorithms are used to solve problems of syndrome diagnosis that only focus on one syndrome, that is, single label learning. However, in clinical practice, patients may simultaneously have more than one syndrome, which has its own symptoms (signs). Methods. We employed a multilabel learning using the relevant feature for each label (REAL) algorithm to construct a syndrome diagnostic model for chronic gastritis (CG) in TCM. REAL combines feature selection methods to select the significant symptoms (signs) of CG. The method was tested on 919 patients using the standard scale. Results. The highest prediction accuracy was achieved when 20 features were selected. The features selected with the information gain were more consistent with the TCM theory. The lowest average accuracy was 54% using multi-label neural networks (BP-MLL), whereas the highest was 82% using REAL for constructing the diagnostic model. For coverage, hamming loss, and ranking loss, the values obtained using the REAL algorithm were the lowest at 0.160, 0.142, and 0.177, respectively. Conclusion. REAL extracts the relevant symptoms (signs) for each syndrome and improves its recognition accuracy. Moreover, the studies will provide a reference for constructing syndrome diagnostic models and guide clinical practice.

  16. THE RELEVANCE OF GOODWILL REPORTING IN AN ISLAMIC CONTEXT

    Directory of Open Access Journals (Sweden)

    Radu-Daniel LOGHIN

    2014-11-01

    Full Text Available In recent years global finance has seen the emergence of Islamic finance as an alternative to the western secular system. While the two systems possess largely similar concepts of social equity and well-being, the major divide between them rests in the distinction between divine and natural law as a source of protection for the downtrodden. As communication barriers between the Arabic and Anglo-European accounting systems start to blur, the question posed to practitioners as to what constitutes a source of equity becomes more and more relevant. In the case of Islamic countries, besides internally generated and acquired goodwill, Islamic instruments such as zakat also provide a source of social equity. In this paper, two value relevance models are tested on a sample of 56 companies in 6 accounting jurisdictions in order to identify the underlying sources of social equity, revealing that zakat disclosures marginally improve the accuracy of the model.

  17. Response moderation models for conditional dependence between response time and response accuracy.

    Science.gov (United States)

    Bolsinova, Maria; Tijmstra, Jesper; Molenaar, Dylan

    2017-05-01

    It is becoming more feasible and common to register response times in the application of psychometric tests. Researchers thus have the opportunity to jointly model response accuracy and response time, which provides users with more relevant information. The most common choice is to use the hierarchical model (van der Linden, 2007, Psychometrika, 72, 287), which assumes conditional independence between response time and accuracy, given a person's speed and ability. However, this assumption may be violated in practice if, for example, persons vary their speed or differ in their response strategies, leading to conditional dependence between response time and accuracy and confounding measurement. We propose six nested hierarchical models for response time and accuracy that allow for conditional dependence, and discuss their relationship to existing models. Unlike existing approaches, the proposed hierarchical models allow for various forms of conditional dependence in the model and allow the effect of continuous residual response time on response accuracy to be item-specific, person-specific, or both. Estimation procedures for the models are proposed, as well as two information criteria that can be used for model selection. Parameter recovery and usefulness of the information criteria are investigated using simulation, indicating that the procedure works well and is likely to select the appropriate model. Two empirical applications are discussed to illustrate the different types of conditional dependence that may occur in practice and how these can be captured using the proposed hierarchical models. © 2016 The British Psychological Society.
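
    A sketch, in assumed notation, of the baseline hierarchical measurement model and one way an item-specific conditional-dependence term can enter it (the six nested models in the paper vary where and how such terms appear; symbols here are illustrative):

        \begin{aligned}
          \Pr(X_{pi}=1 \mid \theta_p) &= \Phi\!\left(a_i \theta_p - b_i\right)
              && \text{(response accuracy)}\\
          \ln T_{pi} &\sim \mathcal{N}\!\left(\lambda_i - \tau_p,\; \sigma_i^2\right)
              && \text{(log response time)}\\
          (\theta_p, \tau_p) &\sim \mathcal{N}_2(\boldsymbol{\mu}_P, \boldsymbol{\Sigma}_P)
              && \text{(person level: ability and speed)}\\
          (a_i, b_i, \lambda_i) &\sim \mathcal{N}_3(\boldsymbol{\mu}_I, \boldsymbol{\Sigma}_I)
              && \text{(item level)}\\[4pt]
          \tilde{t}_{pi} &= \ln t_{pi} - (\lambda_i - \tau_p)
              && \text{(residual log time)}\\
          \Pr(X_{pi}=1 \mid \theta_p, \tilde{t}_{pi}) &= \Phi\!\left(a_i \theta_p - b_i + \delta_i \tilde{t}_{pi}\right)
              && \text{(conditional dependence)}
        \end{aligned}

    Conditional independence corresponds to \delta_i = 0 for all items; person-specific (\delta_p) or doubly indexed (\delta_{pi}) dependence parameters are handled analogously.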

  18. Department of Defense Office of the Inspector General FY 2013 Audit Plan

    Science.gov (United States)

    2012-11-01

    oversight procedures to review KPMG LLP's work; and if applicable disclose instances where KPMG LLP does not comply, in all material respects, with U.S...decisions. Pervasive material internal control weaknesses impact the accuracy, reliability and timeliness of budgetary and accounting data and...reported the same 13 material internal control weaknesses as in the previous year. These pervasive and longstanding financial management challenges

  19. Sampling Molecular Conformers in Solution with Quantum Mechanical Accuracy at a Nearly Molecular-Mechanics Cost.

    Science.gov (United States)

    Rosa, Marta; Micciarelli, Marco; Laio, Alessandro; Baroni, Stefano

    2016-09-13

    We introduce a method to evaluate the relative populations of different conformers of molecular species in solution, aiming at quantum mechanical accuracy, while keeping the computational cost at a nearly molecular-mechanics level. This goal is achieved by combining long classical molecular-dynamics simulations to sample the free-energy landscape of the system, advanced clustering techniques to identify the most relevant conformers, and thermodynamic perturbation theory to correct the resulting populations, using quantum-mechanical energies from density functional theory. A quantitative criterion for assessing the accuracy thus achieved is proposed. The resulting methodology is demonstrated in the specific case of cyanin (cyanidin-3-glucoside) in water solution.
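
    The correction step described above, in which classically sampled conformer populations are adjusted with quantum-mechanical energies via thermodynamic perturbation theory, can be pictured with the short sketch below. The energies, temperature, and populations are placeholders, and the exponential reweighting shown is a generic first-order perturbative correction rather than the authors' exact protocol.

```python
# Reweight molecular-mechanics (MM) conformer populations with
# quantum-mechanics (QM) single-point energies (illustrative values only).
import numpy as np

def reweight_populations(p_mm, e_qm, e_mm, temperature_k=300.0):
    kb = 0.0019872041                      # Boltzmann constant in kcal/(mol*K)
    beta = 1.0 / (kb * temperature_k)
    delta = np.asarray(e_qm) - np.asarray(e_mm)
    w = np.asarray(p_mm) * np.exp(-beta * delta)
    return w / w.sum()

# Example: three conformers sampled classically with populations 0.5/0.3/0.2
print(reweight_populations([0.5, 0.3, 0.2], e_qm=[0.0, 0.4, 1.1], e_mm=[0.0, 0.8, 0.6]))
```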

  20. The philosophy of information quality

    CERN Document Server

    Illari, Phyllis

    2014-01-01

    This work fulfills the need for a conceptual and technical framework to improve understanding of Information Quality (IQ) and Information Quality standards. The meaning and practical implementation of IQ are addressed, as it is relevant to any field where there is a need to handle data and issues such as accessibility, accuracy, completeness, currency, integrity, reliability, timeliness, usability, the role of metrics and so forth are all a part of Information Quality. In order to support the cross-fertilization of theory and practice, the latest research is presented in this book. The perspectives of experts from beyond the origins of IQ in computer science are included: library and information science practitioners and academics, philosophers of information, of engineering and technology, and of science are all contributors to this volume. The chapters in this volume are based on the work of a collaborative research project involving the Arts and Humanities Research Council and Google and led by Professor...

  1. Ideology and Critical Self-Reflection in Information Literacy Instruction

    Science.gov (United States)

    Critten, Jessica

    2015-01-01

    Information literacy instruction traditionally focuses on evaluating a source for bias, relevance, and timeliness, and rightfully so; this critical perspective is vital to a well-formed research process. However, this process is incomplete without a similar focus on the potential biases that the student brings to his or her interactions with…

  2. Can Consumers Trust Web-Based Information About Celiac Disease? Accuracy, Comprehensiveness, Transparency, and Readability of Information on the Internet

    Science.gov (United States)

    McNally, Shawna L; Donohue, Michael C; Newton, Kimberly P; Ogletree, Sandra P; Conner, Kristen K; Ingegneri, Sarah E

    2012-01-01

    98 (52%) websites contained less than 50% of the core celiac disease information that was considered important for inclusion on websites that provide general information about celiac disease. Academic websites were significantly less transparent (P = .005) than commercial websites in attributing authorship, timeliness of information, sources of information, and other important disclosures. The type of website publisher did not predict website accuracy, comprehensiveness, or overall website quality. Only 4 of 98 (4%) websites achieved an overall quality score of 80 or above, which a priori was set as the minimum score for a website to be judged trustworthy and reliable. Conclusions The information on many websites addressing celiac disease was not sufficiently accurate, comprehensive, and transparent, or presented at an appropriate reading grade level, to be considered sufficiently trustworthy and reliable for patients, health care providers, celiac disease support groups, and the general public. This has the potential to adversely affect decision making about important aspects of celiac disease, including its appropriate and proper diagnosis, treatment, and management. PMID:23611901

  3. Can consumers trust web-based information about celiac disease? Accuracy, comprehensiveness, transparency, and readability of information on the internet.

    Science.gov (United States)

    McNally, Shawna L; Donohue, Michael C; Newton, Kimberly P; Ogletree, Sandra P; Conner, Kristen K; Ingegneri, Sarah E; Kagnoff, Martin F

    2012-04-04

    50% of the core celiac disease information that was considered important for inclusion on websites that provide general information about celiac disease. Academic websites were significantly less transparent (P = .005) than commercial websites in attributing authorship, timeliness of information, sources of information, and other important disclosures. The type of website publisher did not predict website accuracy, comprehensiveness, or overall website quality. Only 4 of 98 (4%) websites achieved an overall quality score of 80 or above, which a priori was set as the minimum score for a website to be judged trustworthy and reliable. The information on many websites addressing celiac disease was not sufficiently accurate, comprehensive, and transparent, or presented at an appropriate reading grade level, to be considered sufficiently trustworthy and reliable for patients, health care providers, celiac disease support groups, and the general public. This has the potential to adversely affect decision making about important aspects of celiac disease, including its appropriate and proper diagnosis, treatment, and management.

  4. Prognostic accuracy of electroencephalograms in preterm infants

    DEFF Research Database (Denmark)

    Fogtmann, Emilie Pi; Plomgaard, Anne Mette; Greisen, Gorm

    2017-01-01

    CONTEXT: Brain injury is common in preterm infants, and predictors of neurodevelopmental outcome are relevant. OBJECTIVE: To assess the prognostic test accuracy of the background activity of the EEG recorded as amplitude-integrated EEG (aEEG) or conventional EEG early in life in preterm infants...... for predicting neurodevelopmental outcome. DATA SOURCES: The Cochrane Library, PubMed, Embase, and the Cumulative Index to Nursing and Allied Health Literature. STUDY SELECTION: We included observational studies that had obtained an aEEG or EEG within 7 days of life in preterm infants and reported...... neurodevelopmental outcomes 1 to 10 years later. DATA EXTRACTION: Two reviewers independently performed data extraction with regard to participants, prognostic testing, and outcomes. RESULTS: Thirteen observational studies with a total of 1181 infants were included. A metaanalysis was performed based on 3 studies...

  5. Effects of the audit committee and the fiscal council on earnings quality in Brazil

    Directory of Open Access Journals (Sweden)

    Vitor Gomes Baioco

    2017-03-01

    Full Text Available ABSTRACT This study evaluates the effects of the audit committee and the fiscal council with their different characteristics on earnings quality in Brazil. The proxies of earnings quality used are: relevance of accounting information, timeliness, and conditional conservatism. The sample consists of Brazilian companies listed on the Brazilian Securities, Commodities, and Futures Exchange (BM&FBOVESPA) with annual liquidity above 0.001 within the period from 2010 to 2013. Data were collected from the database Comdinheiro and the Reference Forms of companies available on the website of the Brazilian Securities and Exchange Commission (CVM) or the BM&FBOVESPA. The samples used in the study totaled 718, 688, and 722 observations for the value relevance, timeliness, and conditional conservatism models, respectively. The results indicate that different arrangements of the fiscal council and the existence of the audit committee impact the accounting information properties differently. The presence of the fiscal council positively impacted the relevance of equity, while the presence of the audit committee positively impacted the relevance of earnings. Conditional conservatism is evidenced in the group of companies with a permanent fiscal council, demonstrating that it is significant as a governance mechanism when operating permanently, rather than when installed for temporary operation at shareholders' request in an ordinary general meeting. The presence of both bodies was associated with earnings that were significant for the market but not timely, which restricts the relevance found. Lastly, the fiscal council with expanded powers showed a positive association only with the relevance of equity.

  6. Accuracy of Environmental Monitoring in China: Exploring the Influence of Institutional, Political and Ideological Factors

    Directory of Open Access Journals (Sweden)

    Daniele Brombal

    2017-02-01

    Full Text Available Environmental monitoring data are essential to informing decision-making processes relevant to the management of the environment. Their accuracy is therefore of extreme importance. The credibility of Chinese environmental data has long been questioned by domestic and foreign observers. This paper explores the potential impact of institutional, political, and ideological factors on the accuracy of China’s environmental monitoring data. It contends that the bureaucratic incentive system, conflicting agency goals, particular interests, and ideological structures constitute potential sources of bias in processes of environmental monitoring in China. The current leadership has acknowledged the issue, implementing new measures to strengthen administrative coordination and reinforce the oversight of the central government over local authorities. However, the failure to address the deeper political roots of the problem and the ambivalence over the desirability of public participation to enhance transparency might jeopardize Beijing’s drive for environmental data accuracy.

  7. Propagation of measurement accuracy to biomass soft-sensor estimation and control quality.

    Science.gov (United States)

    Steinwandter, Valentin; Zahel, Thomas; Sagmeister, Patrick; Herwig, Christoph

    2017-01-01

    In biopharmaceutical process development and manufacturing, the online measurement of biomass and derived specific turnover rates is a central task to physiologically monitor and control the process. However, hard-type sensors such as dielectric spectroscopy, broth fluorescence, or permittivity measurement harbor various disadvantages. Therefore, soft-sensors, which use measurements of the off-gas stream and substrate feed to reconcile turnover rates and provide an online estimate of the biomass formation, are smart alternatives. For the reconciliation procedure, mass and energy balances are used together with accuracy estimations of measured conversion rates, which have so far been arbitrarily chosen and kept static over the entire process. In this contribution, we present a novel strategy within the soft-sensor framework (named adaptive soft-sensor) to propagate uncertainties from measurements to conversion rates and demonstrate the benefits: under industrially relevant conditions, the error of the resulting estimated biomass formation rate and specific substrate consumption rate could thereby be decreased by 43% and 64%, respectively, compared to traditional soft-sensor approaches. Moreover, we present a generic workflow to determine the required raw signal accuracy to obtain predefined accuracies of soft-sensor estimations. Thereby, appropriate measurement devices and maintenance intervals can be selected. Furthermore, using this workflow, we demonstrate that the estimation accuracy of the soft-sensor can be additionally and substantially increased.
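
    One simple way to picture the propagation of raw-signal accuracy to a derived conversion rate is a Monte Carlo pass of the sensor noise through the rate calculation. The toy rate function and the assumed standard deviations below are illustrative only; they are not the adaptive soft-sensor algorithm of this record.

```python
# Monte Carlo propagation of raw-signal accuracy to a derived conversion rate.
import numpy as np

rng = np.random.default_rng(0)

def carbon_evolution_rate(co2_out_pct, gas_flow_l_h):
    # toy conversion: CO2 fraction times molar gas flow (22.414 L/mol at STP)
    return (co2_out_pct / 100.0) * gas_flow_l_h / 22.414   # mol/h

n = 10_000
co2 = rng.normal(loc=2.5, scale=0.05, size=n)    # assumed off-gas CO2 reading and noise
flow = rng.normal(loc=60.0, scale=0.6, size=n)   # assumed gas flow reading and noise
cer = carbon_evolution_rate(co2, flow)
print(f"CER = {cer.mean():.3f} +/- {cer.std():.3f} mol/h")
```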

  8. Attitude importance and the accumulation of attitude-relevant knowledge in memory.

    Science.gov (United States)

    Holbrook, Allyson L; Berent, Matthew K; Krosnick, Jon A; Visser, Penny S; Boninger, David S

    2005-05-01

    People who attach personal importance to an attitude are especially knowledgeable about the attitude object. This article tests an explanation for this relation: that importance causes the accumulation of knowledge by inspiring selective exposure to and selective elaboration of relevant information. Nine studies showed that (a) after watching televised debates between presidential candidates, viewers were better able to remember the statements made on policy issues on which they had more personally important attitudes; (b) importance motivated selective exposure and selective elaboration: Greater personal importance was associated with better memory for relevant information encountered under controlled laboratory conditions, and manipulations eliminating opportunities for selective exposure and selective elaboration eliminated the importance-memory accuracy relation; and (c) people do not use perceptions of their knowledge volume to infer how important an attitude is to them, but importance does cause knowledge accumulation.

  9. FDG PET and CT in locally advanced adenocarcinomas of the distal oesophagus. Clinical relevance of a discordant PET finding

    International Nuclear Information System (INIS)

    Stahl, A.; Wieder, H.; Schwaiger, M.; Weber, W.A.; Stollfuss, J.; Ott, K.; Fink, U.

    2005-01-01

    Aim: The incidence of adenocarcinomas of the distal oesophagus (ADE) has dramatically increased in Western countries. The clinical importance of an FDG PET finding discordant with CT was determined in patients with locally advanced ADE. In addition, tumour standardized uptake values (SUV) were correlated with patient survival. Patients, methods: 40 consecutive patients were analyzed retrospectively. All patients underwent an attenuation-corrected FDG PET scan (neck, chest, abdomen) and contrast-enhanced helical CT of the chest and abdomen. PET and CT scans were reviewed independently and concomitantly with respect to metastases in predefined lymph node sites and organs. Any discordance between PET and CT was assessed for clinical relevance. Clinical relevance was defined as a change in the overall therapeutic concept (curative vs. palliative). Follow-up imaging and histological evaluation served as the gold standard. Mean tumour SUVs were determined by 1.5 cm regions of interest placed over the tumour's maximum. Results: When read independently from the CT scan, FDG PET indicated a clinically relevant change in tumour stage in 9/40 patients (23%) and a non-relevant change in 11/40 patients (28%). PET was correct in 5/9 patients (56%) with clinically relevant discordances. In 4/9 patients PET was incorrect (3 false positives due to suspicion of M1 lymph nodes or lung metastases, 1 false negative in disseminated liver metastases). With concomitant reading, PET indicated a clinically relevant change in tumour stage in 6/40 patients (15%) and a non-relevant change in 5/40 patients (13%). PET was correct in 5/6 patients (83%) with clinically relevant discordances. The patient with disseminated liver disease remained the single false negative. Overall, the benefit from PET was based on its higher diagnostic accuracy at organ sites. Tumour SUV did not correlate with patient survival. Conclusion: About half of discordances between FDG PET and CT are clinically relevant

  10. Recovery Audit Contractor medical necessity readiness: one health system's journey.

    Science.gov (United States)

    Scott, Judith A; Camden, Mindy

    2011-01-01

    To develop a sustainable approach to Recovery Audit Contractor medical necessity readiness that mitigates the regulatory and financial risks of the organization. Acute care hospitals. Utilizing the model for improvement and plan-do-study-act methodology, this health system designed and implemented a medical necessity case management program. We focused on 3 areas for improvement: medical necessity review accuracy, review timeliness, and physician adviser participation for secondary reviews. Over several months, we improved accuracy and timeliness of our medical necessity reviews while also generating additional inpatient revenue for the health system. We successfully enhanced regulatory compliance and reduced our financial risks associated with Recovery Audit Contractor medical necessity audits. A successful medical necessity case management program can not only enhance regulatory compliance and reduce the amount of payments recouped by Medicare, but also generate additional inpatient revenue for your organization. With health care reform and accountable care organizations on the horizon, hospitals must find ways to protect and enhance revenue in order to carry out their missions. This is one way for case managers to help in that cause, to advocate for the care of their patients, and to bring value to the organization.

  11. Using novel computer-assisted linguistic analysis techniques to assess the timeliness and impact of FP7 Health’s research – a work in progress report

    Energy Technology Data Exchange (ETDEWEB)

    Stanciauskas, V.; Brozaitis, H.; Manola, N.; Metaxas, O.; Galsworthy, M.

    2016-07-01

    This paper presents the ongoing developments of the ex-post evaluation of the Health theme in FP7, which will be finalised in early 2017. The evaluation was launched by DG Research and Innovation, European Commission. Among other questions, the evaluation asked to assess the structuring effect of FP7 Health on the European Research Area and the timeliness of the research performed. To this end, the evaluation team has applied two innovative computer-assisted linguistic analysis techniques to address these questions: dynamic topic modelling and network analysis of co-publications. The topic model built for this evaluation contributed to a comprehensive mapping of FP7 Health's research activities and to building a dynamic topic model that had not been attempted in previous evaluations of the Framework Programmes. The applied network analysis of co-publications proved to be a powerful tool in determining the structuring effect of FP7 Health to a level of detail which, again, was not implemented in previous evaluations of EU-funded research programmes. (Author)

  12. Combined Loadings and Cross-Dimensional Loadings Timeliness of Presentation of Financial Statements of Local Government

    Science.gov (United States)

    Muda, I.; Dharsuky, A.; Siregar, H. S.; Sadalia, I.

    2017-03-01

    This study examines the timeliness patterns of local government financial statements in North Sumatra, comparing a routine pattern of two (2) months after the fiscal year ends with a pattern of at least three (3) months after the fiscal year ends. This type of research is an explanatory survey with quantitative methods. The population and sample consist of local government officials who prepare local government financial reports. Combined loadings and cross-dimensional loadings analysis is used with the WarpPLS statistical tool. The results show varying patterns in the dimensional accuracy of the financial statements of local governments in North Sumatra.

  13. Corporate against corporate management

    OpenAIRE

    Runcev, Nikolce; Krstev, Boris; Golomeova, Mirjana

    2010-01-01

    In contemporary economic performance, corporate governance is considered an essential prerequisite for building a successful system that creates an attractive investment climate, characterized by competitive, performance-oriented companies and efficient financial markets. Good corporate governance is based on principles of transparency, impartiality, efficiency, timeliness, completeness and accuracy of information at all levels of management. Companies with good corporate governance afford easier acc...

  14. Sex Differences in Timeliness of Reperfusion in Young Patients With ST-Segment-Elevation Myocardial Infarction by Initial Electrocardiographic Characteristics.

    Science.gov (United States)

    Gupta, Aakriti; Barrabes, Jose A; Strait, Kelly; Bueno, Hector; Porta-Sánchez, Andreu; Acosta-Vélez, J Gabriel; Lidón, Rosa-Maria; Spatz, Erica; Geda, Mary; Dreyer, Rachel P; Lorenze, Nancy; Lichtman, Judith; D'Onofrio, Gail; Krumholz, Harlan M

    2018-03-07

    Young women with ST-segment-elevation myocardial infarction experience reperfusion delays more frequently than men. Our aim was to determine the electrocardiographic correlates of delay in reperfusion in young patients with ST-segment-elevation myocardial infarction. We examined sex differences in initial electrocardiographic characteristics among 1359 patients with ST-segment-elevation myocardial infarction in a prospective, observational, cohort study (2008-2012) of 3501 patients with acute myocardial infarction, 18 to 55 years of age, as part of the VIRGO (Variation in Recovery: Role of Gender on Outcomes of Young AMI Patients) study at 103 US and 24 Spanish hospitals enrolling in a 2:1 ratio for women/men. We created a multivariable logistic regression model to assess the relationship between reperfusion delay (door-to-balloon time >90 or >120 minutes for transfer or door-to-needle time >30 minutes) and electrocardiographic characteristics, adjusting for sex, sociodemographic characteristics, and clinical characteristics at presentation. In our study (834 women and 525 men), women were more likely to exceed reperfusion time guidelines than men (42.4% versus 31.5%), while ST elevation in lateral leads was an inverse predictor of reperfusion delay. Sex disparities in timeliness to reperfusion in young patients with ST-segment-elevation myocardial infarction persisted, despite adjusting for initial electrocardiographic characteristics. Left ventricular hypertrophy by voltage criteria and absence of prehospital ECG are strongly positively correlated, and ST elevation in lateral leads is negatively correlated, with reperfusion delay. © 2018 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.

  15. Prediction of Stator Terminal Voltages in IPMSM based on Static and Transient FEM Solution: Trade-off between Accuracy and Speed of Computation

    Directory of Open Access Journals (Sweden)

    Hichem Bouras

    2017-12-01

    Full Text Available The present work deals with the calculation of the time-varying induced emf in permanent magnet synchronous machines from the numerical finite element solution. A review of the existing methods is presented; their intrinsic merits, in terms of accuracy and speed of computation, are compared. The currently used method, which relies on a weighted averaging procedure of the magnetic vector potential (MVP) over the slot area in order to derive the winding flux linkage and the stator induced emf, has been modified to enhance its accuracy. An alternative method, which relies on the magnetic vector potential distribution along the mid-airgap line, is proposed to carry out the same task. This approach has turned out to be very efficient since it enables straightforward data handling, signal reconstruction, filtering and spectrum analysis of the relevant waveforms to be easily implemented in a single post-processing function. Finally, the relevance and efficiency of each method, in terms of accuracy and speed of computation, have been confirmed by the experimental results.
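
    Whichever averaging scheme is used to recover the winding flux linkage from the finite-element solution, the final post-processing step is the same: differentiate the flux-linkage waveform in time to obtain the induced emf. The snippet below is a generic sketch of that step with placeholder data, not the modified slot-averaging or mid-airgap method themselves.

```python
# Induced emf as the negative time derivative of a flux-linkage waveform
# extracted from the FE solution (placeholder 50 Hz waveform).
import numpy as np

t = np.linspace(0.0, 0.02, 400)           # one electrical period, assumed 50 Hz
psi = 0.8 * np.sin(2 * np.pi * 50 * t)    # placeholder winding flux linkage in Wb
emf = -np.gradient(psi, t)                # e(t) = -d(psi)/dt
```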

  16. Stochastic Optimized Relevance Feedback Particle Swarm Optimization for Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2014-01-01

    Full Text Available One of the major challenges for CBIR is to bridge the gap between low level features and high level semantics according to the need of the user. To overcome this gap, relevance feedback (RF) coupled with support vector machine (SVM) has been applied successfully. However, when the feedback sample is small, the performance of the SVM based RF is often poor. To improve the performance of RF, this paper proposes a new technique, namely PSO-SVM-RF, which combines SVM based RF with particle swarm optimization (PSO). The aims of this proposed technique are to enhance the performance of SVM based RF and also to minimize the user interaction with the system by minimizing the number of RF iterations. PSO-SVM-RF was tested on the Corel photo gallery containing 10,908 images. The results obtained from the experiments showed that the proposed PSO-SVM-RF achieved 100% accuracy in 8 feedback iterations for the top 10 retrievals and 80% accuracy in 6 iterations for the top 100 retrievals. This implies that with the PSO-SVM-RF technique a high accuracy rate is achieved in a small number of iterations.
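
    As a hedged illustration of coupling particle swarm optimization with an SVM learner, the sketch below uses a compact PSO loop to tune the SVM's C and gamma against a cross-validation score. The dataset, search bounds, and PSO constants are assumptions chosen for demonstration; the actual PSO-SVM-RF relevance-feedback loop described in this record is more involved.

```python
# Compact PSO over SVM hyper-parameters with cross-validated accuracy as fitness.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(1)

def fitness(params):
    c, gamma = 10.0 ** params                      # search in log10 space
    return cross_val_score(SVC(C=c, gamma=gamma), X, y, cv=3).mean()

n_particles, n_iters = 8, 10
lo, hi = np.array([-1.0, -5.0]), np.array([3.0, -1.0])   # log10 bounds for C, gamma
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()]

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmax()]

print("best log10(C), log10(gamma):", gbest, "cv accuracy:", pbest_f.max())
```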

  17. Maximum relevance, minimum redundancy band selection based on neighborhood rough set for hyperspectral data classification

    International Nuclear Information System (INIS)

    Liu, Yao; Chen, Yuehua; Tan, Kezhu; Xie, Hong; Wang, Liguo; Xie, Wu; Yan, Xiaozhen; Xu, Zhen

    2016-01-01

    Band selection is considered to be an important processing step in handling hyperspectral data. In this work, we selected informative bands according to the maximal relevance minimal redundancy (MRMR) criterion based on neighborhood mutual information. Two measures, MRMR difference and MRMR quotient, were defined, and a forward greedy search for band selection was constructed. The performance of the proposed algorithm, along with a comparison with other methods (a neighborhood dependency measure based algorithm, a genetic algorithm and an uninformative variable elimination algorithm), was studied using the classification accuracy of extreme learning machine (ELM) and random forests (RF) classifiers on soybean hyperspectral datasets. The results show that the proposed MRMR algorithm leads to promising improvement in band selection and classification accuracy. (paper)
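
    The forward greedy search with the MRMR difference measure can be sketched as follows. Ordinary mutual information on discretized bands stands in for the neighborhood mutual information used in the paper; that substitution is an assumption made purely to keep the example self-contained.

```python
# Forward greedy max-relevance, min-redundancy (difference form) band selection.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

def mrmr_difference(X_discrete, y, n_bands):
    relevance = mutual_info_classif(X_discrete, y, discrete_features=True)
    selected, remaining = [], list(range(X_discrete.shape[1]))
    while len(selected) < n_bands and remaining:
        best, best_score = None, -np.inf
        for j in remaining:
            redundancy = (np.mean([mutual_info_score(X_discrete[:, j], X_discrete[:, s])
                                   for s in selected]) if selected else 0.0)
            score = relevance[j] - redundancy       # MRMR "difference" criterion
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
        remaining.remove(best)
    return selected
```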

  18. Using Today's Headlines for Teaching Gerontology

    Science.gov (United States)

    Haber, David

    2008-01-01

    It is a challenge to attract undergraduate students into the gerontology field. Many do not believe the aging field is exciting and at the cutting edge. Students, however, can be convinced of the timeliness, relevance, and excitement of the field by, literally, bringing up today's headlines in class. The author collected over 250 articles during…

  19. Using ANFIS for selection of more relevant parameters to predict dew point temperature

    International Nuclear Information System (INIS)

    Mohammadi, Kasra; Shamshirband, Shahaboddin; Petković, Dalibor; Yee, Por Lip; Mansor, Zulkefli

    2016-01-01

    Highlights: • ANFIS is used to select the most relevant variables for dew point temperature prediction. • Two cities from the central and south central parts of Iran are selected as case studies. • Influence of 5 parameters on dew point temperature is evaluated. • Appropriate selection of input variables has a notable effect on prediction. • Considering the most relevant combination of 2 parameters would be more suitable. - Abstract: In this research work, for the first time, the adaptive neuro fuzzy inference system (ANFIS) is employed to propose an approach for identifying the most significant parameters for prediction of daily dew point temperature (T_dew). The ANFIS process for variable selection is implemented, which includes a number of ways to recognize the parameters offering favorable predictions. According to the physical factors influencing dew formation, 8 variables of daily minimum, maximum and average air temperatures (T_min, T_max and T_avg), relative humidity (R_h), atmospheric pressure (P), water vapor pressure (V_P), sunshine hours (n) and horizontal global solar radiation (H) are considered to investigate their effects on T_dew. The data used include 7 years of daily measured data from two Iranian cities located in the central and south central parts of the country. The results indicate that despite the climate difference between the considered case studies, for both stations V_P is the most influential variable while R_h is the least relevant element. Furthermore, the combination of T_min and V_P is recognized as the most influential set to predict T_dew. The conducted examinations show that there is a remarkable difference between the errors achieved for the most and least relevant input parameters, which highlights the importance of appropriate selection of input parameters. The use of more than two inputs may not be advisable and appropriate; thus, considering the most relevant combination of 2 parameters would be more suitable

  20. Deep learning relevance

    DEFF Research Database (Denmark)

    Lioma, Christina; Larsen, Birger; Petersen, Casper

    2016-01-01

    train a Recurrent Neural Network (RNN) on existing relevant information to that query. We then use the RNN to "deep learn" a single, synthetic, and we assume, relevant document for that query. We design a crowdsourcing experiment to assess how relevant the "deep learned" document is, compared...... to existing relevant documents. Users are shown a query and four wordclouds (of three existing relevant documents and our deep learned synthetic document). The synthetic document is ranked on average most relevant of all....

  1. Meditation experience predicts introspective accuracy.

    Directory of Open Access Journals (Sweden)

    Kieran C R Fox

    Full Text Available The accuracy of subjective reports, especially those involving introspection of one's own internal processes, remains unclear, and research has demonstrated large individual differences in introspective accuracy. It has been hypothesized that introspective accuracy may be heightened in persons who engage in meditation practices, due to the highly introspective nature of such practices. We undertook a preliminary exploration of this hypothesis, examining introspective accuracy in a cross-section of meditation practitioners (1-15,000 hrs experience). Introspective accuracy was assessed by comparing subjective reports of tactile sensitivity for each of 20 body regions during a 'body-scanning' meditation with averaged, objective measures of tactile sensitivity (mean size of body representation area in primary somatosensory cortex; two-point discrimination threshold) as reported in prior research. Expert meditators showed significantly better introspective accuracy than novices; overall meditation experience also significantly predicted individual introspective accuracy. These results suggest that long-term meditators provide more accurate introspective reports than novices.

  2. End-to-end System Performance Simulation: A Data-Centric Approach

    Science.gov (United States)

    Guillaume, Arnaud; Laffitte de Petit, Jean-Luc; Auberger, Xavier

    2013-08-01

    In the early times of space industry, the feasibility of Earth observation missions was directly driven by what could be achieved by the satellite. It was clear to everyone that the ground segment would be able to deal with the small amount of data sent by the payload. Over the years, the amounts of data processed by the spacecrafts have been increasing drastically, leading to put more and more constraints on the ground segment performances - and in particular on timeliness. Nowadays, many space systems require high data throughputs and short response times, with information coming from multiple sources and involving complex algorithms. It has become necessary to perform thorough end-to-end analyses of the full system in order to optimise its cost and efficiency, but even sometimes to assess the feasibility of the mission. This paper presents a novel framework developed by Astrium Satellites in order to meet these needs of timeliness evaluation and optimisation. This framework, named ETOS (for “End-to-end Timeliness Optimisation of Space systems”), provides a modelling process with associated tools, models and GUIs. These are integrated thanks to a common data model and suitable adapters, with the aim of building suitable space systems simulators of the full end-to-end chain. A big challenge of such environment is to integrate heterogeneous tools (each one being well-adapted to part of the chain) into a relevant timeliness simulation.

  3. Development of response inhibition in the context of relevant versus irrelevant emotions

    Directory of Open Access Journals (Sweden)

    Margot A Schel

    2013-07-01

    Full Text Available The present study examined the influence of relevant and irrelevant emotions on response inhibition from childhood to early adulthood. Ninety-four participants between 6 and 25 years of age performed two go/nogo tasks with emotional faces (neutral, happy, and fearful) as stimuli. In one go/nogo task emotion formed a relevant dimension of the task, and in the other go/nogo task emotion was irrelevant and participants had to respond to the color of the faces instead. A special feature of the latter task, in which emotion was irrelevant, was the inclusion of free choice trials, in which participants could freely decide between acting and inhibiting. Results showed a linear increase in response inhibition performance with increasing age, both in relevant and irrelevant affective contexts. Relevant emotions had a pronounced influence on performance across age, whereas irrelevant emotions did not. Overall, participants made more false alarms on trials with fearful faces than happy faces, and happy faces were associated with better performance on go trials (higher percentage correct and faster RTs) than fearful faces. The latter effect was stronger for young children in terms of accuracy. Finally, during the free choice trials participants did not base their decisions on affective context, confirming that irrelevant emotions do not have a strong impact on inhibition. Together, these findings suggest that across development relevant affective context has a larger influence on response inhibition than irrelevant affective context. When emotions are relevant, a context of positive emotions is associated with better performance compared to a context with negative emotions, especially in young children.

  4. Overlay accuracy fundamentals

    Science.gov (United States)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of measurement quality metric, results in optimal overlay accuracy.

  5. On the Accuracy Potential in Underwater/Multimedia Photogrammetry.

    Science.gov (United States)

    Maas, Hans-Gerd

    2015-07-24

    Underwater applications of photogrammetric measurement techniques usually need to deal with multimedia photogrammetry aspects, which are characterized by the necessity of handling optical rays that are refracted at interfaces between optical media with different refractive indices according to Snell's Law. This so-called multimedia geometry has to be incorporated into geometric models in order to achieve correct measurement results. The paper shows a flexible yet strict geometric model for the handling of refraction effects on the optical path, which can be implemented as a module into photogrammetric standard tools such as spatial resection, spatial intersection, bundle adjustment or epipolar line computation. The module is especially well suited for applications, where an object in water is observed by cameras in air through one or more planar glass interfaces, as it allows for some simplifications here. In the second part of the paper, several aspects, which are relevant for an assessment of the accuracy potential in underwater/multimedia photogrammetry, are discussed. These aspects include network geometry and interface planarity issues as well as effects caused by refractive index variations and dispersion and diffusion under water. All these factors contribute to a rather significant degradation of the geometric accuracy potential in underwater/multimedia photogrammetry. In practical experiments, a degradation of the quality of results by a factor two could be determined under relatively favorable conditions.
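
    The core geometric operation behind the multimedia model discussed here is refracting a ray at a planar interface according to Snell's law. A vector form of that operation, with illustrative air-to-water refractive indices, is sketched below; camera modelling, interface orientation estimation, and bundle adjustment are not shown.

```python
# Refraction of a ray direction at a planar interface (vector form of Snell's law).
import numpy as np

def refract(direction, normal, n1, n2):
    d = direction / np.linalg.norm(direction)
    n = normal / np.linalg.norm(normal)
    cos_i = -np.dot(n, d)
    if cos_i < 0.0:                     # flip the normal to oppose the incoming ray
        n, cos_i = -n, -cos_i
    eta = n1 / n2
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        return None                     # total internal reflection, no refracted ray
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# Example: ray travelling downwards from air (n=1.0) into water (n=1.33)
print(refract(np.array([0.3, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]), 1.0, 1.33))
```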

  6. Identifying and exploiting trait-relevant tissues with multiple functional annotations in genome-wide association studies

    Science.gov (United States)

    Zhang, Shujun

    2018-01-01

    Genome-wide association studies (GWASs) have identified many disease associated loci, the majority of which have unknown biological functions. Understanding the mechanism underlying trait associations requires identifying trait-relevant tissues and investigating associations in a trait-specific fashion. Here, we extend the widely used linear mixed model to incorporate multiple SNP functional annotations from omics studies with GWAS summary statistics to facilitate the identification of trait-relevant tissues, with which to further construct powerful association tests. Specifically, we rely on a generalized estimating equation based algorithm for parameter inference, a mixture modeling framework for trait-tissue relevance classification, and a weighted sequence kernel association test constructed based on the identified trait-relevant tissues for powerful association analysis. We refer to our analytic procedure as the Scalable Multiple Annotation integration for trait-Relevant Tissue identification and usage (SMART). With extensive simulations, we show how our method can make use of multiple complementary annotations to improve the accuracy for identifying trait-relevant tissues. In addition, our procedure allows us to make use of the inferred trait-relevant tissues, for the first time, to construct more powerful SNP set tests. We apply our method for an in-depth analysis of 43 traits from 28 GWASs using tissue-specific annotations in 105 tissues derived from ENCODE and Roadmap. Our results reveal new trait-tissue relevance, pinpoint important annotations that are informative of trait-tissue relationship, and illustrate how we can use the inferred trait-relevant tissues to construct more powerful association tests in the Wellcome trust case control consortium study. PMID:29377896

  7. Identifying and exploiting trait-relevant tissues with multiple functional annotations in genome-wide association studies.

    Directory of Open Access Journals (Sweden)

    Xingjie Hao

    2018-01-01

    Full Text Available Genome-wide association studies (GWASs) have identified many disease associated loci, the majority of which have unknown biological functions. Understanding the mechanism underlying trait associations requires identifying trait-relevant tissues and investigating associations in a trait-specific fashion. Here, we extend the widely used linear mixed model to incorporate multiple SNP functional annotations from omics studies with GWAS summary statistics to facilitate the identification of trait-relevant tissues, with which to further construct powerful association tests. Specifically, we rely on a generalized estimating equation based algorithm for parameter inference, a mixture modeling framework for trait-tissue relevance classification, and a weighted sequence kernel association test constructed based on the identified trait-relevant tissues for powerful association analysis. We refer to our analytic procedure as the Scalable Multiple Annotation integration for trait-Relevant Tissue identification and usage (SMART). With extensive simulations, we show how our method can make use of multiple complementary annotations to improve the accuracy for identifying trait-relevant tissues. In addition, our procedure allows us to make use of the inferred trait-relevant tissues, for the first time, to construct more powerful SNP set tests. We apply our method for an in-depth analysis of 43 traits from 28 GWASs using tissue-specific annotations in 105 tissues derived from ENCODE and Roadmap. Our results reveal new trait-tissue relevance, pinpoint important annotations that are informative of trait-tissue relationship, and illustrate how we can use the inferred trait-relevant tissues to construct more powerful association tests in the Wellcome trust case control consortium study.

  8. The mathematical model accuracy estimation of the oil storage tank foundation soil moistening

    Science.gov (United States)

    Gildebrandt, M. I.; Ivanov, R. N.; Gruzin, AV; Antropova, L. B.; Kononov, S. A.

    2018-04-01

    Improving the technologies used to prepare oil storage tank foundations is a relevant objective; achieving it will make it possible to reduce material costs and the time spent preparing the foundation while providing the required operational reliability. Laboratory research revealed the nature of the watering of a sandy soil layer with a given amount of water. The obtained data made it possible to develop a mathematical model of sandy soil layer moistening. The accuracy estimation of the oil storage tank foundation soil moistening mathematical model showed acceptable convergence between the experimental and theoretical results.

  9. Clinical microbiology informatics.

    Science.gov (United States)

    Rhoads, Daniel D; Sintchenko, Vitali; Rauch, Carol A; Pantanowitz, Liron

    2014-10-01

    The clinical microbiology laboratory has responsibilities ranging from characterizing the causative agent in a patient's infection to helping detect global disease outbreaks. All of these processes are increasingly becoming partnered more intimately with informatics. Effective application of informatics tools can increase the accuracy, timeliness, and completeness of microbiology testing while decreasing the laboratory workload, which can lead to optimized laboratory workflow and decreased costs. Informatics is poised to be increasingly relevant in clinical microbiology, with the advent of total laboratory automation, complex instrument interfaces, electronic health records, clinical decision support tools, and the clinical implementation of microbial genome sequencing. This review discusses the diverse informatics aspects that are relevant to the clinical microbiology laboratory, including the following: the microbiology laboratory information system, decision support tools, expert systems, instrument interfaces, total laboratory automation, telemicrobiology, automated image analysis, nucleic acid sequence databases, electronic reporting of infectious agents to public health agencies, and disease outbreak surveillance. The breadth and utility of informatics tools used in clinical microbiology have made them indispensable to contemporary clinical and laboratory practice. Continued advances in technology and development of these informatics tools will further improve patient and public health care in the future. Copyright © 2014, American Society for Microbiology. All Rights Reserved.

  10. Environmental biodosimetry: a biologically relevant tool for ecological risk assessment and biomonitoring

    Energy Technology Data Exchange (ETDEWEB)

    Ulsh, B. E-mail: ulshb@mcmaster.ca; Hinton, T.G.; Congdon, J.D.; Dugan, L.C.; Whicker, F.W.; Bedford, J.S

    2003-07-01

    Biodosimetry, the estimation of received doses by determining the frequency of radiation-induced chromosome aberrations, is widely applied in humans acutely exposed as a result of accidents or for clinical purposes, but biodosimetric techniques have not been utilized in organisms chronically exposed to radionuclides in contaminated environments. The application of biodosimetry to environmental exposure scenarios could greatly improve the accuracy, and reduce the uncertainties, of ecological risk assessments and biomonitoring studies, because no assumptions are required regarding external exposure rates and the movement of organisms into and out of contaminated areas. Furthermore, unlike residue analyses of environmental media, environmental biodosimetry provides a genetically relevant biomarker of cumulative lifetime exposure. Symmetrical chromosome translocations can impact reproductive success, and could therefore prove to be ecologically relevant as well. We describe our experience in studying aberrations in the yellow-bellied slider turtle as an example of environmental biodosimetry.

  11. Evaluation of relevant information for optimal reflector modeling through data assimilation procedures

    International Nuclear Information System (INIS)

    Argaud, J.P.; Bouriquet, B.; Clerc, T.; Lucet-Sanchez, F.; Poncot, A.

    2015-01-01

    The goal of this study is to determine how much information is needed to obtain a relevant parameter optimisation by data assimilation for physical models in neutronic diffusion calculations, and to determine which information best reaches the optimum of accuracy at the cheapest cost. To evaluate the quality of the optimisation, we study the covariance matrix that represents the accuracy of the optimised parameters. This matrix is a classical output of the data assimilation procedure, and it is the main source of information about the accuracy and sensitivity of the optimal parameter determination. We present some results collected in the field of neutronic simulation for PWR type reactors. We seek to optimise the reflector parameters that characterise the neutronic reflector surrounding the whole reactive core. On the basis of the configuration studies, it has been shown that with data assimilation we can determine a global strategy to optimise the quality of the result with respect to the amount of information provided. The consequence of this is a cost reduction in terms of measurement and/or computing time with respect to the basic approach. Another result is that using multi-campaign data rather than data from a single campaign significantly improves the efficiency of the parameter optimisation
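
    In the linear Gaussian setting, the covariance matrix referred to above has a closed form, which makes the trade-off between the amount of information and parameter accuracy easy to visualise: each additional measurement row added to the observation operator shrinks the posterior variances. The matrices below are placeholders, not the reflector-model operators.

```python
# Analysis-error covariance of a linear data-assimilation (BLUE / 3D-Var) step:
# A = (B^-1 + H^T R^-1 H)^-1, with placeholder background, observation operator
# and observation-error covariance.
import numpy as np

B = np.diag([0.5, 0.5])                  # background covariance of 2 reflector parameters
H = np.array([[1.0, 0.2],
              [0.3, 1.0],
              [0.8, 0.5]])               # observation operator (3 measurements)
R = 0.1 * np.eye(3)                      # observation-error covariance

A = np.linalg.inv(np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H)
print("posterior parameter variances:", np.diag(A))
```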

  12. PCA based feature reduction to improve the accuracy of decision tree c4.5 classification

    Science.gov (United States)

    Nasution, M. Z. F.; Sitompul, O. S.; Ramli, M.

    2018-03-01

    Splitting on attributes is a major process in Decision Tree C4.5 classification. However, this process does not have a significant impact on the establishment of the decision tree in terms of removing irrelevant features. A major problem in the decision tree classification process is over-fitting, which results from noisy data and irrelevant features; in turn, over-fitting creates misclassification and data imbalance. Many algorithms have been proposed to overcome misclassification and over-fitting in Decision Tree C4.5 classification. Feature reduction is one of the important issues in classification models; it is intended to remove irrelevant data in order to improve accuracy. The feature reduction framework is used to simplify high dimensional data to low dimensional data with non-correlated attributes. In this research, we propose a framework for selecting relevant and non-correlated feature subsets. We consider principal component analysis (PCA) for feature reduction to perform non-correlated feature selection and the Decision Tree C4.5 algorithm for the classification. From the experiments conducted using the available data set from the UCI cervical cancer data set repository, with 858 instances and 36 attributes, we evaluated the performance of our framework based on accuracy, specificity and precision. Experimental results show that our proposed framework is robust in enhancing classification accuracy, with a 90.70% accuracy rate.
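
    A minimal sketch of the proposed pipeline (standardize, project onto a reduced set of non-correlated principal components, then grow a decision tree) is shown below. scikit-learn's CART implementation stands in for C4.5, and a bundled public dataset stands in for the UCI cervical cancer data; both substitutions are assumptions made so the example runs as-is.

```python
# PCA-based feature reduction feeding a decision-tree classifier.
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
pipeline = make_pipeline(StandardScaler(),
                         PCA(n_components=10),            # non-correlated components
                         DecisionTreeClassifier(random_state=0))
print("cv accuracy: %.3f" % cross_val_score(pipeline, X, y, cv=5).mean())
```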

  13. Ecological Relevance Determines Task Priority in Older Adults' Multitasking.

    Science.gov (United States)

    Doumas, Michail; Krampe, Ralf Th

    2015-05-01

    Multitasking is a challenging aspect of human behavior, especially if the concurrently performed tasks are different in nature. Several studies demonstrated pronounced performance decrements (dual-task costs) in older adults for combinations of cognitive and motor tasks. However, patterns of costs among component tasks differed across studies and reasons for participants' resource allocation strategies remained elusive. We investigated young and older adults' multitasking of a working memory task and two sensorimotor tasks, one with low (finger force control) and one with high ecological relevance (postural control). The tasks were performed in single-, dual-, and triple-task contexts. Working memory accuracy was reduced in dual-task contexts with either sensorimotor task and deteriorated further under triple-task conditions. Postural and force performance deteriorated with age and task difficulty in dual-task contexts. However, in the triple-task context with its maximum resource demands, older adults prioritized postural control over both force control and memory. Our results identify ecological relevance as the key factor in older adults' multitasking. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  14. 100% classification accuracy considered harmful: the normalized information transfer factor explains the accuracy paradox.

    Directory of Open Access Journals (Sweden)

    Francisco J Valverde-Albacete

    Full Text Available The most widespread measure of performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Despite optimizing classification error rate, high accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis where every possible contingency matrix of 2-, 3- and 4-class classifiers is depicted on the entropy triangle, a more reliable information-theoretic tool for classification assessment. Motivated by this, we develop from first principles a measure of classification performance that takes into consideration the information learned by classifiers. We are then able to obtain the entropy-modulated accuracy (EMA), a pessimistic estimate of the expected accuracy with the influence of the input distribution factored out, and the normalized information transfer (NIT) factor, a measure of how efficient the transmission of information from the input to the output set of classes is. The EMA is a more natural measure of classification performance than accuracy when the heuristic to maximize is the transfer of information through the classifier instead of the classification error count. The NIT factor measures the effectiveness of the learning process in classifiers and also makes it harder for them to "cheat" using techniques like specialization, while also promoting the interpretability of results. Their use is demonstrated in a mind-reading task competition that aims at decoding the identity of a video stimulus based on magnetoencephalography recordings. We show how the EMA and the NIT factor reject rankings based on accuracy, choosing more meaningful and interpretable classifiers.
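
    The bookkeeping behind this kind of analysis starts from the mutual information carried by a classifier's contingency (confusion) matrix; the exact EMA and NIT definitions are given in the cited paper and are not reproduced here. The sketch below only shows that a highly specialized classifier can reach high accuracy while transferring very little information.

```python
# Mutual information read off a contingency (confusion) matrix.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def information_transfer(confusion):
    joint = confusion / confusion.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    mi = entropy(px) + entropy(py) - entropy(joint.ravel())
    return mi, mi / entropy(px)          # bits transferred, fraction of input entropy

# 92% accuracy, yet almost no information about the minority class is transferred.
cm = np.array([[90, 0],
               [8, 2]], dtype=float)
print(information_transfer(cm))
```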

  15. The Quality and Accuracy of Mobile Apps to Prevent Driving After Drinking Alcohol.

    Science.gov (United States)

    Wilson, Hollie; Stoyanov, Stoyan R; Gandabhai, Shailen; Baldwin, Alexander

    2016-08-08

    Driving after the consumption of alcohol represents a significant problem globally. Individual prevention countermeasures such as personalized mobile apps aimed at preventing such behavior are widespread, but there is little research on their accuracy and evidence base. There has been no known assessment investigating the quality of such apps. This study aimed to determine the quality and accuracy of apps for drink driving prevention by conducting a review and evaluation of relevant mobile apps. A systematic app search was conducted following PRISMA guidelines. App quality was assessed using the Mobile App Rating Scale (MARS). Apps providing blood alcohol calculators (hereafter "calculators") were reviewed against current alcohol advice for accuracy. A total of 58 apps (30 iOS and 28 Android) met inclusion criteria and were included in the final analysis. Drink driving prevention apps had significantly lower engagement and overall quality scores than alcohol management apps. Most calculators provided conservative blood alcohol content (BAC) time-until-sober calculations. None of the apps had been evaluated to determine their efficacy in changing either drinking or driving behaviors. This novel study demonstrates that most drink driving prevention apps are not engaging and lack accuracy. They could be improved by increasing engagement features, such as gamification. Further research should examine the context and motivations for using apps to prevent driving after drinking in at-risk populations. Development of drink driving prevention apps should incorporate evidence-based information and guidance, which are lacking in current apps.
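
    The calculators reviewed here typically rest on some variant of the Widmark relation between ingested alcohol, body mass, and elimination over time. The sketch below uses textbook approximations of the Widmark factor and elimination rate purely for illustration; it is not any reviewed app's formula and should not be used to decide whether it is safe to drive.

```python
# Widmark-style estimate of blood alcohol concentration (BAC, in g per 100 mL)
# and of the time needed to fall below a given limit. Constants are textbook
# approximations (r ~0.68 for men, ~0.55 for women; elimination ~0.015 g%/h).
def bac_estimate(grams_alcohol, body_weight_kg, r=0.68, hours_elapsed=0.0, beta=0.015):
    peak = grams_alcohol / (body_weight_kg * 1000.0 * r) * 100.0
    return max(peak - beta * hours_elapsed, 0.0)

def hours_until_below(grams_alcohol, body_weight_kg, limit=0.05, r=0.68, beta=0.015):
    peak = bac_estimate(grams_alcohol, body_weight_kg, r=r)
    return max((peak - limit) / beta, 0.0)

# Example: roughly four standard drinks (~40 g ethanol) for a 75 kg person
print(bac_estimate(40, 75), hours_until_below(40, 75))
```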

  16. Diagnosing Eyewitness Accuracy

    OpenAIRE

    Russ, Andrew

    2015-01-01

    Eyewitnesses frequently mistake innocent people for the perpetrator of an observed crime. Such misidentifications have led to the wrongful convictions of many people. Despite this, no reliable method yet exists to determine eyewitness accuracy. This thesis explored two new experimental methods for this purpose. Chapter 2 investigated whether repetition priming can measure prior exposure to a target and compared this with observers’ explicit eyewitness accuracy. Across three experiments slower...

  17. T-ray relevant frequencies for osteosarcoma classification

    Science.gov (United States)

    Withayachumnankul, W.; Ferguson, B.; Rainsford, T.; Findlay, D.; Mickan, S. P.; Abbott, D.

    2006-01-01

    We investigate the classification of the T-ray response of normal human bone cells and human osteosarcoma cells, grown in culture. Given the magnitude and phase responses within a reliable spectral range as features for input vectors, a trained support vector machine can correctly classify the two cell types to some extent. Performance of the support vector machine is degraded by the curse of dimensionality, which results from the comparatively large number of features in the input vectors. Feature subset selection methods are used to select only an optimal number of relevant features as inputs. As a result, an improvement in generalization performance is attainable, and the selected frequencies can be used to further describe the different mechanisms by which the cells respond to T-rays. We demonstrate a consistent classification accuracy of 89.6% while only one fifth of the original features are retained in the data set.

  18. Decision aids for improved accuracy and standardization of mammographic diagnosis

    International Nuclear Information System (INIS)

    D'Orsi, C.J.; Getty, D.J.; Swets, J.A.; Pickett, R.M.; Seltzer, S.E.; McNeil, B.J.

    1990-01-01

    This paper examines the gains in the accuracy of mammographic diagnosis of breast cancer achievable from a pair of decision aids. Twenty-three potentially relevant perceptual features of mammograms were identified through interviews, psychometric tests, and consensus meetings with mammography specialists. Statistical analyses determined the 12 independent features that were most informative diagnostically and assigned a weight to each according to its importance. Two decision aids were developed: a checklist that solicits a scale value from the radiologist for each feature and a computer program that merges those values optimally into an advisory estimate of the probability of malignancy. Six radiologists read a set of 150 cases, first in their usual way and later with the aids

  19. Trait Perception Accuracy and Acquaintance Within Groups: Tracking Accuracy Development.

    Science.gov (United States)

    Brown, Jill A; Bernieri, Frank

    2017-05-01

    Previous work on trait perception has evaluated accuracy at discrete stages of relationships (e.g., strangers, best friends). A relatively limited body of literature has investigated changes in accuracy as acquaintance within a dyad or group increases. Small groups of initially unacquainted individuals spent more than 30 hr participating in a wide range of activities designed to represent common interpersonal contexts (e.g., eating, traveling). We calculated how accurately each participant judged others in their group on the big five traits across three distinct points within the acquaintance process: zero acquaintance, after a getting-to-know-you conversation, and after 10 weeks of interaction and activity. Judgments of all five traits exhibited accuracy above chance levels after 10 weeks. An examination of the trait rating stability revealed that much of the revision in judgments occurred not over the course of the 10-week relationship as suspected, but between zero acquaintance and the getting-to-know-you conversation.

  20. Validating continuous digital light processing (cDLP) additive manufacturing accuracy and tissue engineering utility of a dye-initiator package

    International Nuclear Information System (INIS)

    Wallace, Jonathan; Wang, Martha O; Kim, Kyobum

    2014-01-01

    This study tested the accuracy of tissue engineering scaffold rendering via the continuous digital light processing (cDLP) light-based additive manufacturing technology. High accuracy (i.e., <50 µm) allows the designed performance of features relevant to three scale spaces: cell-scaffold, scaffold-tissue, and tissue-organ interactions. The biodegradable polymer poly (propylene fumarate) was used to render highly accurate scaffolds through the use of a dye-initiator package, TiO2 and bis (2,4,6-trimethylbenzoyl)phenylphosphine oxide. This dye-initiator package facilitates high accuracy in the Z dimension. Linear, round, and right-angle features were measured to gauge accuracy. Most features showed accuracies between 5.4–15% of the design. However, one feature, an 800 µm diameter circular pore, exhibited a 35.7% average reduction of patency. Light scattered in the x, y directions by the dye may have reduced this feature's accuracy. Our new fine-grained understanding of accuracy could be used to make further improvements by including corrections in the scaffold design software. Successful cell attachment occurred with both canine and human mesenchymal stem cells (MSCs). Highly accurate cDLP scaffold rendering is critical to the design of scaffolds that both guide bone regeneration and that fully resorb. Scaffold resorption must occur for regenerated bone to be remodeled and, thereby, achieve optimal strength. (paper)

  1. Validating continuous digital light processing (cDLP) additive manufacturing accuracy and tissue engineering utility of a dye-initiator package.

    Science.gov (United States)

    Wallace, Jonathan; Wang, Martha O; Thompson, Paul; Busso, Mallory; Belle, Vaijayantee; Mammoser, Nicole; Kim, Kyobum; Fisher, John P; Siblani, Ali; Xu, Yueshuo; Welter, Jean F; Lennon, Donald P; Sun, Jiayang; Caplan, Arnold I; Dean, David

    2014-03-01

    This study tested the accuracy of tissue engineering scaffold rendering via the continuous digital light processing (cDLP) light-based additive manufacturing technology. High accuracy (i.e., <50 µm) allows the designed performance of features relevant to three scale spaces: cell-scaffold, scaffold-tissue, and tissue-organ interactions. The biodegradable polymer poly (propylene fumarate) was used to render highly accurate scaffolds through the use of a dye-initiator package, TiO2 and bis (2,4,6-trimethylbenzoyl)phenylphosphine oxide. This dye-initiator package facilitates high accuracy in the Z dimension. Linear, round, and right-angle features were measured to gauge accuracy. Most features showed accuracies between 5.4-15% of the design. However, one feature, an 800 µm diameter circular pore, exhibited a 35.7% average reduction of patency. Light scattered in the x, y directions by the dye may have reduced this feature's accuracy. Our new fine-grained understanding of accuracy could be used to make further improvements by including corrections in the scaffold design software. Successful cell attachment occurred with both canine and human mesenchymal stem cells (MSCs). Highly accurate cDLP scaffold rendering is critical to the design of scaffolds that both guide bone regeneration and that fully resorb. Scaffold resorption must occur for regenerated bone to be remodeled and, thereby, achieve optimal strength.

  2. Serum albumin: accuracy and clinical use.

    Science.gov (United States)

    Infusino, Ilenia; Panteghini, Mauro

    2013-04-18

    Albumin is the major plasma protein and its determination is used for the prognostic assessment of several diseases. Clinical guidelines call for monitoring of serum albumin with specific target cut-offs that are independent of the assay used. This requires accurate and equivalent results among different commercially available methods (i.e., result standardization) through a consistent definition and application of a reference measurement system. This should be associated with the definition of measurement uncertainty goals based on medical relevance of serum albumin to make results reliable for patient management. In this paper, we show that, in the current situation, if one applies analytical goals for serum albumin measurement derived from its biologic variation, the uncertainty budget derived from each step of the albumin traceability chain is probably too high to fulfil established quality levels for albumin measurement and to guarantee the accuracy needed for clinical usefulness of the test. The situation is further worsened if non-specific colorimetric methods are used for albumin measurement as they represent an additional random source of uncertainty. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Relevance vector machine technique for the inverse scattering problem

    International Nuclear Information System (INIS)

    Wang Fang-Fang; Zhang Ye-Rong

    2012-01-01

    A novel method based on the relevance vector machine (RVM) for the inverse scattering problem is presented in this paper. The nonlinearity and the ill-posedness inherent in this problem are simultaneously considered. The nonlinearity is embodied in the relation between the scattered field and the target property, which can be obtained through the RVM training process. In addition, rather than utilizing regularization, the ill-posed nature of the inversion is naturally accounted for because the RVM can produce a probabilistic output. Simulation results reveal that the proposed RVM-based approach provides performance comparable to the support vector machine (SVM) based approach in terms of accuracy, convergence, robustness, and generalization, and improved performance in terms of sparsity. (general)

  4. Impact of Preparers of Accounting Information on Quality of Financial Reporting in Malaysia(*)

    OpenAIRE

    Dandago, Prof. Dr.Kabiru Isa; Edem, Akpan; Tsafe, Dr. Bashir Mande

    2014-01-01

    The primary objective of accounting is to provide information that is useful for decision making purposes. Accounting information that makes information provided useful to users in making economic decisions must possess the following qualities: relevance, reliability, comparability, understandability, neutrality, timeliness and materiality. This paper investigates the factors influencing preparers’ decision to prepare accounting information following the financial reporting rules and regulati...

  5. Simulation assessment center in the service of the company as a factor in the accuracy and validity of the information about the employee

    OpenAIRE

    Borodai V.A.

    2017-01-01

    The article discusses the relevance of the assessment center method for personnel evaluation. The efficiency of the method is examined in terms of the accuracy and validity of employee assessments. Positive factors and problematic aspects of using assessment center technology in a service company are identified.

  6. A Graph is Worth a Thousand Words: How Overconfidence and Graphical Disclosure of Numerical Information Influence Financial Analysts Accuracy on Decision Making.

    Directory of Open Access Journals (Sweden)

    Ricardo Lopes Cardoso

    Full Text Available Previous researches support that graphs are relevant decision aids to tasks related to the interpretation of numerical information. Moreover, literature shows that different types of graphical information can help or harm the accuracy on decision making of accountants and financial analysts. We conducted a 4×2 mixed-design experiment to examine the effects of numerical information disclosure on financial analysts' accuracy, and investigated the role of overconfidence in decision making. Results show that compared to text, column graph enhanced accuracy on decision making, followed by line graphs. No difference was found between table and textual disclosure. Overconfidence harmed accuracy, and both genders behaved overconfidently. Additionally, the type of disclosure (text, table, line graph and column graph) did not affect the overconfidence of individuals, providing evidence that overconfidence is a personal trait. This study makes three contributions. First, it provides evidence from a larger sample size (295) of financial analysts instead of a smaller sample size of students that graphs are relevant decision aids to tasks related to the interpretation of numerical information. Second, it uses the text as a baseline comparison to test how different ways of information disclosure (line and column graphs, and tables) can enhance understandability of information. Third, it brings an internal factor to this process: overconfidence, a personal trait that harms the decision-making process of individuals. At the end of this paper several research paths are highlighted to further study the effect of internal factors (personal traits) on financial analysts' accuracy on decision making regarding numerical information presented in a graphical form. In addition, we offer suggestions concerning some practical implications for professional accountants, auditors, financial analysts and standard setters.

  7. A Graph is Worth a Thousand Words: How Overconfidence and Graphical Disclosure of Numerical Information Influence Financial Analysts Accuracy on Decision Making.

    Science.gov (United States)

    Cardoso, Ricardo Lopes; Leite, Rodrigo Oliveira; de Aquino, André Carlos Busanelli

    2016-01-01

    Previous researches support that graphs are relevant decision aids to tasks related to the interpretation of numerical information. Moreover, literature shows that different types of graphical information can help or harm the accuracy on decision making of accountants and financial analysts. We conducted a 4×2 mixed-design experiment to examine the effects of numerical information disclosure on financial analysts' accuracy, and investigated the role of overconfidence in decision making. Results show that compared to text, column graph enhanced accuracy on decision making, followed by line graphs. No difference was found between table and textual disclosure. Overconfidence harmed accuracy, and both genders behaved overconfidently. Additionally, the type of disclosure (text, table, line graph and column graph) did not affect the overconfidence of individuals, providing evidence that overconfidence is a personal trait. This study makes three contributions. First, it provides evidence from a larger sample size (295) of financial analysts instead of a smaller sample size of students that graphs are relevant decision aids to tasks related to the interpretation of numerical information. Second, it uses the text as a baseline comparison to test how different ways of information disclosure (line and column graphs, and tables) can enhance understandability of information. Third, it brings an internal factor to this process: overconfidence, a personal trait that harms the decision-making process of individuals. At the end of this paper several research paths are highlighted to further study the effect of internal factors (personal traits) on financial analysts' accuracy on decision making regarding numerical information presented in a graphical form. In addition, we offer suggestions concerning some practical implications for professional accountants, auditors, financial analysts and standard setters.

  8. Accuracy and Efficiency of a Coupled Neutronics and Thermal Hydraulics Model

    International Nuclear Information System (INIS)

    Pope, Michael A.; Mousseau, Vincent A.

    2009-01-01

    The accuracy requirements for modern nuclear reactor simulation are steadily increasing due to the cost and regulation of relevant experimental facilities. Because of the increase in the cost of experiments and the decrease in the cost of simulation, simulation will play a much larger role in the design and licensing of new nuclear reactors. Fortunately as the work load of simulation increases, there are better physics models, new numerical techniques, and more powerful computer hardware that will enable modern simulation codes to handle this larger workload. This manuscript will discuss a numerical method where the six equations of two-phase flow, the solid conduction equations, and the two equations that describe neutron diffusion and precursor concentration are solved together in a tightly coupled, nonlinear fashion for a simplified model of a nuclear reactor core. This approach has two important advantages. The first advantage is a higher level of accuracy. Because the equations are solved together in a single nonlinear system, the solution is more accurate than the traditional 'operator split' approach where the two-phase flow equations are solved first, the heat conduction is solved second and the neutron diffusion is solved third, limiting the temporal accuracy to 1st order because the nonlinear coupling between the physics is handled explicitly. The second advantage of the method described in this manuscript is that the time step control in the fully implicit system can be based on the timescale of the solution rather than a stability-based time step restriction like the material Courant. Results are presented from a simulated control rod movement and a rod ejection that address temporal accuracy for the fully coupled solution and demonstrate how the fastest timescale of the problem can change between the state variables of neutronics, conduction and two-phase flow during the course of a transient.
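
    A toy illustration of the coupling issue discussed here: below, a pair of coupled ordinary differential equations (a crude "power with temperature feedback" and "temperature" pair, invented for the example and not the reactor model of the paper) is advanced either with a first-order operator-split step, in which each equation sees the other's old value, or with a fully implicit backward-Euler step in which the coupled nonlinear system is solved with Newton's method.

        import numpy as np

        ALPHA, BETA, GAMMA = 5.0, 1.0, 2.0   # toy feedback, heating and cooling coefficients

        def rhs(p, T):
            # Toy "neutronics" with temperature feedback, and a toy thermal equation.
            return np.array([ALPHA * (1.0 - T) * p, BETA * p - GAMMA * T])

        def step_operator_split(p, T, dt):
            # Explicit coupling: each physics sees the other's old value (first order in time).
            p_new = p + dt * ALPHA * (1.0 - T) * p
            T_new = T + dt * (BETA * p_new - GAMMA * T)
            return p_new, T_new

        def step_fully_implicit(p, T, dt, tol=1e-12, max_iter=20):
            # Backward Euler on the coupled system, solved as one nonlinear system by Newton's method.
            y_old = np.array([p, T])
            y = y_old.copy()
            for _ in range(max_iter):
                residual = y - y_old - dt * rhs(y[0], y[1])
                jacobian = np.array([[1.0 - dt * ALPHA * (1.0 - y[1]), dt * ALPHA * y[0]],
                                     [-dt * BETA, 1.0 + dt * GAMMA]])
                delta = np.linalg.solve(jacobian, -residual)
                y += delta
                if np.linalg.norm(delta) < tol:
                    break
            return y[0], y[1]

        p_s, T_s = 1.0, 0.5
        p_i, T_i = 1.0, 0.5
        for _ in range(100):
            p_s, T_s = step_operator_split(p_s, T_s, dt=0.01)
            p_i, T_i = step_fully_implicit(p_i, T_i, dt=0.01)
        print("operator split :", p_s, T_s)
        print("fully implicit :", p_i, T_i)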

  9. Accuracy and Efficiency of a Coupled Neutronics and Thermal Hydraulics Model

    International Nuclear Information System (INIS)

    Vincent A. Mousseau; Michael A. Pope

    2007-01-01

    The accuracy requirements for modern nuclear reactor simulation are steadily increasing due to the cost and regulation of relevant experimental facilities. Because of the increase in the cost of experiments and the decrease in the cost of simulation, simulation will play a much larger role in the design and licensing of new nuclear reactors. Fortunately as the work load of simulation increases, there are better physics models, new numerical techniques, and more powerful computer hardware that will enable modern simulation codes to handle the larger workload. This manuscript will discuss a numerical method where the six equations of two-phase flow, the solid conduction equations, and the two equations that describe neutron diffusion and precursor concentration are solved together in a tightly coupled, nonlinear fashion for a simplified model of a nuclear reactor core. This approach has two important advantages. The first advantage is a higher level of accuracy. Because the equations are solved together in a single nonlinear system, the solution is more accurate than the traditional 'operator split' approach where the two-phase flow equations are solved first, the heat conduction is solved second and the neutron diffusion is solved third, limiting the temporal accuracy to 1st order because the nonlinear coupling between the physics is handled explicitly. The second advantage of the method described in this manuscript is that the time step control in the fully implicit system can be based on the timescale of the solution rather than a stability-based time step restriction like the material Courant. Results are presented from a simulated control rod movement and a rod ejection that address temporal accuracy for the fully coupled solution and demonstrate how the fastest timescale of the problem can change between the state variables of neutronics, conduction and two-phase flow during the course of a transient

  10. Social Power Increases Interoceptive Accuracy

    Directory of Open Access Journals (Sweden)

    Mehrad Moeini-Jazani

    2017-08-01

    Full Text Available Building on recent psychological research showing that power increases self-focused attention, we propose that having power increases accuracy in perception of bodily signals, a phenomenon known as interoceptive accuracy. Consistent with our proposition, participants in a high-power experimental condition outperformed those in the control and low-power conditions in the Schandry heartbeat-detection task. We demonstrate that the effect of power on interoceptive accuracy is not explained by participants' physiological arousal, affective state, or general intention for accuracy. Rather, consistent with our reasoning that experiencing power shifts attentional resources inward, we show that the effect of power on interoceptive accuracy is dependent on individuals' chronic tendency to focus on their internal sensations. Moreover, we demonstrate that individuals' chronic sense of power also predicts interoceptive accuracy similarly to, and independently of, their situationally induced feeling of power. We therefore provide further support for the relation between power and enhanced perception of bodily signals. Our findings offer a novel perspective, a psychophysiological account, on how power might affect judgments and behavior. We highlight and discuss some of these intriguing possibilities for future research.

  11. Sequence-Based Prediction of RNA-Binding Proteins Using Random Forest with Minimum Redundancy Maximum Relevance Feature Selection

    Directory of Open Access Journals (Sweden)

    Xin Ma

    2015-01-01

    Full Text Available The prediction of RNA-binding proteins is one of the most challenging problems in computational biology. Although some studies have investigated this problem, the accuracy of prediction is still not sufficient. In this study, a highly accurate method was developed to predict RNA-binding proteins from amino acid sequences using random forests with the minimum redundancy maximum relevance (mRMR) method, followed by incremental feature selection (IFS). We incorporated conjoint triad features and three novel features: binding propensity (BP), nonbinding propensity (NBP), and evolutionary information combined with physicochemical properties (EIPP). The results showed that these novel features have important roles in improving the performance of the predictor. Using the mRMR-IFS method, our predictor achieved the best performance (86.62% accuracy and a 0.737 Matthews correlation coefficient). High prediction accuracy and successful prediction performance suggested that our method can be a useful approach to identify RNA-binding proteins from sequence information.
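
    A hedged sketch of the mRMR-plus-random-forest idea on synthetic data is given below; the greedy relevance-minus-redundancy criterion, the correlation-based redundancy proxy, and the synthetic features are simplified stand-ins for the study's actual sequence-derived features and implementation.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import mutual_info_classif
        from sklearn.model_selection import cross_val_score

        X, y = make_classification(n_samples=300, n_features=60, n_informative=12, random_state=1)

        def greedy_mrmr(X, y, n_select):
            """Greedy minimum-redundancy maximum-relevance feature ranking (simplified)."""
            relevance = mutual_info_classif(X, y, random_state=1)      # relevance to the label
            corr = np.abs(np.corrcoef(X, rowvar=False))                # proxy for redundancy
            selected = [int(np.argmax(relevance))]
            while len(selected) < n_select:
                candidates = [j for j in range(X.shape[1]) if j not in selected]
                scores = [relevance[j] - corr[j, selected].mean() for j in candidates]
                selected.append(candidates[int(np.argmax(scores))])
            return selected

        # Incremental feature selection: grow the ranked feature set and keep the best size.
        ranking = greedy_mrmr(X, y, n_select=30)
        best_k, best_acc = 0, 0.0
        for k in range(5, 31, 5):
            acc = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=1),
                                  X[:, ranking[:k]], y, cv=5).mean()
            if acc > best_acc:
                best_k, best_acc = k, acc
        print("best subset size:", best_k, "accuracy: %.3f" % best_acc)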

  12. Identifying the relevant features of the National Digital Cadastral Database (NDCDB) for spatial analysis by using the Delphi Technique

    Science.gov (United States)

    Halim, N. Z. A.; Sulaiman, S. A.; Talib, K.; Ng, E. G.

    2018-02-01

    This paper explains the process carried out in identifying the relevant features of the National Digital Cadastral Database (NDCDB) for spatial analysis. The research was initially a part of a larger research exercise to identify the significance of NDCDB from the legal, technical, role and land-based analysis perspectives. The research methodology of applying the Delphi technique is substantially discussed in this paper. A heterogeneous panel of 14 experts was created to determine the importance of NDCDB from the technical relevance standpoint. Three statements describing the relevant features of NDCDB for spatial analysis were established after three rounds of consensus building. It highlighted the NDCDB’s characteristics such as its spatial accuracy, functions, and criteria as a facilitating tool for spatial analysis. By recognising the relevant features of NDCDB for spatial analysis in this study, practical application of NDCDB for various analysis and purpose can be widely implemented.

  13. Application of all relevant feature selection for failure analysis of parameter-induced simulation crashes in climate models

    Science.gov (United States)

    Paja, W.; Wrzesień, M.; Niemiec, R.; Rudnicki, W. R.

    2015-07-01

    Climate models are extremely complex pieces of software. Although they reflect the best available knowledge of the physical components of the climate, they contain several parameters that are too weakly constrained by observations and can potentially lead to a crash of the simulation. Recently, a study by Lucas et al. (2013) showed that machine learning methods can be used to predict which combinations of parameters can lead to a crash of the simulation, and hence which processes described by these parameters need refined analysis. In the current study we reanalyse the dataset used in that research using a different methodology. We confirm the main conclusion of the original study concerning the suitability of machine learning for the prediction of crashes. We show that only three of the eight parameters indicated in the original study as relevant for predicting a crash are indeed strongly relevant, three others are relevant but redundant, and two are not relevant at all. We also show that the variance due to the split of data between training and validation sets has a large influence on both the accuracy of predictions and the relative importance of variables; hence only a cross-validated approach can deliver robust estimates of prediction performance and variable relevance.
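
    The sketch below illustrates the cross-validation point in a generic way (it is not the authors' all-relevant selection algorithm): both accuracy and permutation-based variable importance are computed separately on each train/validation split of a synthetic stand-in dataset, so that their variability across splits can be inspected.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance
        from sklearn.model_selection import StratifiedKFold

        # Synthetic stand-in for eight candidate model parameters and a crash/no-crash label.
        X, y = make_classification(n_samples=540, n_features=8, n_informative=3,
                                   n_redundant=3, random_state=0)

        accuracies, importances = [], []
        for train, test in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
            model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X[train], y[train])
            accuracies.append(model.score(X[test], y[test]))
            # Permutation importance on the held-out fold, one estimate per split.
            result = permutation_importance(model, X[test], y[test], n_repeats=20, random_state=0)
            importances.append(result.importances_mean)

        importances = np.array(importances)
        print("accuracy over splits: %.3f +/- %.3f" % (np.mean(accuracies), np.std(accuracies)))
        for j in range(X.shape[1]):
            print("parameter %d importance: %.3f +/- %.3f"
                  % (j, importances[:, j].mean(), importances[:, j].std()))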

  14. Joint modeling of genetically correlated diseases and functional annotations increases accuracy of polygenic risk prediction.

    Directory of Open Access Journals (Sweden)

    Yiming Hu

    2017-06-01

    Full Text Available Accurate prediction of disease risk based on genetic factors is an important goal in human genetics research and precision medicine. Advanced prediction models will lead to more effective disease prevention and treatment strategies. Despite the identification of thousands of disease-associated genetic variants through genome-wide association studies (GWAS) in the past decade, accuracy of genetic risk prediction remains moderate for most diseases, which is largely due to the challenges in both identifying all the functionally relevant variants and accurately estimating their effect sizes. In this work, we introduce PleioPred, a principled framework that leverages pleiotropy and functional annotations in genetic risk prediction for complex diseases. PleioPred uses GWAS summary statistics as its input, and jointly models multiple genetically correlated diseases and a variety of external information including linkage disequilibrium and diverse functional annotations to increase the accuracy of risk prediction. Through comprehensive simulations and real data analyses on Crohn's disease, celiac disease and type-II diabetes, we demonstrate that our approach can substantially increase the accuracy of polygenic risk prediction and risk population stratification, i.e. PleioPred can significantly better separate type-II diabetes patients with early and late onset ages, illustrating its potential clinical application. Furthermore, we show that the increment in prediction accuracy is significantly correlated with the genetic correlation between the predicted and jointly modeled diseases.
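
    PleioPred is a published framework with its own software; the minimal sketch below only illustrates the final scoring step shared by most polygenic risk prediction methods, summing per-variant effect sizes over allele dosages and standardizing the scores within a cohort. The variant identifiers, effect sizes and dosages are invented for the example.

        import numpy as np

        # Hypothetical per-variant effect sizes (e.g., posterior means from a joint model).
        effect_sizes = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30, "rs0004": 0.08}

        def polygenic_risk_score(dosages, effects):
            """Sum effect size times allele dosage (0, 1 or 2) over the scored variants."""
            return sum(effects[v] * d for v, d in dosages.items() if v in effects)

        cohort = {
            "person_A": {"rs0001": 2, "rs0002": 1, "rs0003": 0, "rs0004": 2},
            "person_B": {"rs0001": 0, "rs0002": 2, "rs0003": 2, "rs0004": 1},
        }
        scores = {person: polygenic_risk_score(d, effect_sizes) for person, d in cohort.items()}

        # Standardize within the cohort for risk stratification.
        values = np.array(list(scores.values()))
        for person, s in scores.items():
            print(person, "raw %.2f, z-score %.2f" % (s, (s - values.mean()) / values.std()))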

  15. Linear accuracy and reliability of volume data sets acquired by two CBCT-devices and an MSCT using virtual models : A comparative in-vitro study

    NARCIS (Netherlands)

    Wikner, Johannes; Hanken, Henning; Eulenburg, Christine; Heiland, Max; Groebe, Alexander; Assaf, Alexandre Thomas; Riecke, Bjoern; Friedrich, Reinhard E.

    2016-01-01

    Objective. To discriminate clinically relevant aberrance, the accuracy of linear measurements in three-dimensional (3D) reconstructed datasets was investigated. Materials and methods. Three partly edentulous human skulls were examined. Landmarks were defined prior to acquisition. Two CBCT-scanners

  16. A laboratory assessment of the measurement accuracy of weighing type rainfall intensity gauges

    Science.gov (United States)

    Colli, M.; Chan, P. W.; Lanza, L. G.; La Barbera, P.

    2012-04-01

    In recent years the WMO Commission for Instruments and Methods of Observation (CIMO) has fostered noticeable advancements in the accuracy of precipitation measurement by providing recommendations on the standardization of equipment and exposure, instrument calibration and data correction, as a consequence of various comparative campaigns involving manufacturers and national meteorological services from the participating countries (Lanza et al., 2005; Vuerich et al., 2009). Extreme event analysis has been shown to be highly affected by on-site rainfall intensity (RI) measurement accuracy (see e.g. Molini et al., 2004), and the time resolution of the available RI series is another key factor in constructing hyetographs that are representative of real rain events. The OTT Pluvio2 weighing gauge (WG) and the GEONOR T-200 vibrating-wire precipitation gauge demonstrated very good performance in previous constant-flow-rate calibration efforts (Lanza et al., 2005). Although WGs provide better performance than more traditional tipping-bucket rain gauges (TBRs) under continuous and constant reference intensity, dynamic effects appear to affect the accuracy of WG measurements under real-world, time-varying rainfall conditions (Vuerich et al., 2009). The most relevant of these is due to the response time of the acquisition system and the resulting systematic delay of the instrument in assessing the exact weight of the bin containing the accumulated precipitation. This delay plays a relevant role when high-resolution rain intensity time series are sought from the instrument, as is the case in many hydrologic and meteo-climatic applications. This work reports the laboratory evaluation of the accuracy of Pluvio2 and T-200 rainfall intensity measurements. Tests are carried out by simulating different artificial precipitation events, namely non-stationary rainfall intensity, using a highly accurate dynamic rainfall generator. Time series measured by an Ogawa drop counter (DC) at a field test site
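
    As an illustration of the response-time effect described above, the sketch below models the acquisition chain as a first-order lag and then inverts that same model to recover a sharper one-minute rainfall intensity series; the time constant and the intensity values are invented for the example and are not Pluvio2 or T-200 characteristics.

        import numpy as np

        TAU = 2.0   # assumed response time constant of the acquisition chain, in minutes
        DT = 1.0    # sampling interval, in minutes

        # A "true" non-stationary rain intensity event (mm/h), as a dynamic generator might produce.
        true_ri = np.array([0, 0, 20, 60, 120, 80, 40, 10, 0, 0], dtype=float)

        # Simulate the gauge output as a first-order lag of the true signal.
        measured = np.zeros_like(true_ri)
        for k in range(1, len(true_ri)):
            measured[k] = measured[k - 1] + (DT / TAU) * (true_ri[k] - measured[k - 1])

        # Invert the same first-order model to correct the systematic delay.
        corrected = measured.copy()
        corrected[1:] = measured[:-1] + (TAU / DT) * np.diff(measured)

        print("true     :", true_ri)
        print("measured :", np.round(measured, 1))
        print("corrected:", np.round(corrected, 1))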

  17. Requisite accuracy for hot spot factors in fast reactors

    International Nuclear Information System (INIS)

    Miki, Kazuyoshi; Inoue, Kotaro

    1976-01-01

    In the thermal design of a fast reactor, it should be most effective to reduce hot spot factors to the lowest possible level compatible with safety considerations, in order to minimize the design margin for the temperature prevailing in the core. Hot spot factors account for probabilistic and statistic deviations from nominal value of fuel element temperatures, due to uncertainties in the data adopted for estimating various factors including the physical properties. Such temperature deviations necessitate the provision of correspondingly large design margins for temperatures in order to keep within permissible limits the probability of exceeding the allowable temperatures. Evaluation of the desired accuracy for hot spot factors is performed by a method of optimization, which permits determination of the degree of accuracy that should minimize the design margins, to give realistic results with consideration given not only to sensitivity coefficients but also to the present-day uncertainty levels in the data adopted in the calculations. A concept of ''degree of difficulty'' is introduced for the purpose of determining the hot spot factors to be given higher priority for reduction. Application of this method to the core of a prototype fast reactor leads to the conclusion that the hot spot factors to be given the highest priority are those relevant to the power distribution, the flow distribution, the fuel enrichment, the fuel-cladding gap conductance and the fuel thermal conductivity. (auth.)

  18. Integration of genomic information into sport horse breeding programs for optimization of accuracy of selection.

    Science.gov (United States)

    Haberland, A M; König von Borstel, U; Simianer, H; König, S

    2012-09-01

    Reliable selection criteria are required for young riding horses to increase genetic gain by increasing accuracy of selection and decreasing generation intervals. In this study, selection strategies incorporating genomic breeding values (GEBVs) were evaluated. Relevant stages of selection in sport horse breeding programs were analyzed by applying selection index theory. Results in terms of accuracies of indices (r(TI)) and relative selection response indicated that information on single nucleotide polymorphism (SNP) genotypes considerably increases the accuracy of breeding values estimated for young horses without own or progeny performance. In a first scenario, the correlation between the breeding value estimated from the SNP genotype and the true breeding value (= accuracy of GEBV) was fixed to a relatively low value of r(mg) = 0.5. For a low heritability trait (h(2) = 0.15), and an index for a young horse based only on information from both parents, additional genomic information doubles r(TI) from 0.27 to 0.54. Including the conventional information source 'own performance' in the aforementioned index, additional SNP information increases r(TI) by 40%. Thus, particularly with regard to traits of low heritability, genomic information can provide a tool for well-founded selection decisions early in life. In a further approach, different sources of breeding values (e.g. GEBV and estimated breeding values (EBVs) from different countries) were combined into an overall index when altering accuracies of EBVs and correlations between traits. In summary, we showed that genomic selection strategies have the potential to contribute to a substantial reduction in generation intervals in horse breeding programs.
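
    Selection index theory underlies these figures: for standardized information sources, the index weights are b = P^-1 g and the index accuracy is r(TI) = sqrt(b'g), where P holds the (co)variances of the sources and g their covariances with the true breeding value. The sketch below reproduces the parent-average-plus-GEBV scenario under the simplifying assumption, ours rather than the paper's, that the errors of the two sources are independent.

        import numpy as np

        # Accuracies (correlations with the true breeding value) of the two information sources.
        r_parent_avg = 0.27   # index based on sire and dam EBVs (low-heritability trait)
        r_gebv = 0.50         # SNP-based genomic breeding value, as in the first scenario

        # Assume independent errors, so the correlation between the sources is r1 * r2.
        P = np.array([[1.0, r_parent_avg * r_gebv],
                      [r_parent_avg * r_gebv, 1.0]])     # (co)variances of standardized sources
        g = np.array([r_parent_avg, r_gebv])              # covariances with the true breeding value

        b = np.linalg.solve(P, g)                         # index weights b = P^-1 g
        r_TI = np.sqrt(b @ g)                             # accuracy of the combined index
        print("accuracy of combined index: %.2f" % r_TI)  # about 0.54, up from 0.27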

  19. Systematic review of discharge coding accuracy

    Science.gov (United States)

    Burns, E.M.; Rigby, E.; Mamidanna, R.; Bottle, A.; Aylin, P.; Ziprin, P.; Faiz, O.D.

    2012-01-01

    Introduction Routinely collected data sets are increasingly used for research, financial reimbursement and health service planning. High quality data are necessary for reliable analysis. This study aims to assess the published accuracy of routinely collected data sets in Great Britain. Methods Systematic searches of the EMBASE, PUBMED, OVID and Cochrane databases were performed from 1989 to present using defined search terms. Included studies were those that compared routinely collected data sets with case or operative note review and those that compared routinely collected data with clinical registries. Results Thirty-two studies were included. Twenty-five studies compared routinely collected data with case or operation notes. Seven studies compared routinely collected data with clinical registries. The overall median accuracy (routinely collected data sets versus case notes) was 83.2% (IQR: 67.3–92.1%). The median diagnostic accuracy was 80.3% (IQR: 63.3–94.1%) with a median procedure accuracy of 84.2% (IQR: 68.7–88.7%). There was considerable variation in accuracy rates between studies (50.5–97.8%). Since the 2002 introduction of Payment by Results, accuracy has improved in some respects, for example, primary diagnosis accuracy has improved from 73.8% (IQR: 59.3–92.1%) to 96.0% (IQR: 89.3–96.3%), P = 0.020. Conclusion Accuracy rates are improving. Current levels of reported accuracy suggest that routinely collected data are sufficiently robust to support their use for research and managerial decision-making. PMID:21795302

  20. Diagnostic relevance of high field MRI in clinical neuroradiology: the advantages and challenges of driving a sports car

    International Nuclear Information System (INIS)

    Wattjes, Mike P.; Barkhof, Frederik

    2012-01-01

    High field MRI operating at 3 T is increasingly being used in the field of neuroradiology on the grounds that higher magnetic field strength should theoretically lead to a higher diagnostic accuracy in the diagnosis of several disease entities. This Editorial discusses the exhaustive review by Wardlaw and colleagues of research comparing 3 T MRI with 1.5 T MRI in the field of neuroradiology. Interestingly, the authors found no convincing evidence of improved image quality, diagnostic accuracy, or reduced total examination times using 3 T MRI instead of 1.5 T MRI. These findings are highly relevant since a new generation of high field MRI systems operating at 7 T has recently been introduced. (orig.)

  1. Systematic Review of the Diagnostic Accuracy and Therapeutic Effectiveness of Sacroiliac Joint Interventions.

    Science.gov (United States)

    Simopoulos, Thomas T; Manchikanti, Laxmaiah; Gupta, Sanjeeva; Aydin, Steve M; Kim, Chong Hwan; Solanki, Daneshvari; Nampiaparampil, Devi E; Singh, Vijay; Staats, Peter S; Hirsch, Joshua A

    2015-01-01

    The sacroiliac joint is well known as a cause of low back and lower extremity pain. Prevalence estimates are 10% to 25% in patients with persistent axial low back pain without disc herniation, discogenic pain, or radiculitis based on multiple diagnostic studies and systematic reviews. However, at present there are no definitive management options for treating sacroiliac joint pain. To evaluate the diagnostic accuracy and therapeutic effectiveness of sacroiliac joint interventions. A systematic review of the diagnostic accuracy and therapeutic effectiveness of sacroiliac joint interventions. The available literature on diagnostic and therapeutic sacroiliac joint interventions was reviewed. The quality assessment criteria utilized were the Quality Appraisal of Reliability Studies (QAREL) checklist for diagnostic accuracy studies, Cochrane review criteria to assess sources of risk of bias, and Interventional Pain Management Techniques-Quality Appraisal of Reliability and Risk of Bias Assessment (IPM-QRB) criteria for randomized therapeutic trials and Interventional Pain Management Techniques-Quality Appraisal of Reliability and Risk of Bias Assessment for Nonrandomized Studies (IPM-QRBNR) for observational therapeutic assessments. The level of evidence was based on a best evidence synthesis with modified grading of qualitative evidence from Level I to Level V. Data sources included relevant literature published from 1966 through March 2015 that were identified through searches of PubMed and EMBASE, manual searches of the bibliographies of known primary and review articles, and all other sources. For the diagnostic accuracy assessment, and for the therapeutic modalities, the primary outcome measure of pain relief and improvement in functional status were utilized. A total of 11 diagnostic accuracy studies and 14 therapeutic studies were included. The evidence for diagnostic accuracy is Level II for dual diagnostic blocks with at least 70% pain relief as the criterion

  2. Nuclear data for fission reactor core design and safety analysis: Requirements and status of accuracy of nuclear data

    International Nuclear Information System (INIS)

    Rowlands, J.L.

    1984-01-01

    The types of nuclear data required for fission reactor design and safety analysis, and the ways in which the data are represented and approximated for use in reactor calculations, are summarised first. The relative importance of different items of nuclear data in the prediction of reactor parameters is described and ways of investigating the accuracy of these data by evaluating related integral measurements are discussed. The use of sensitivity analysis, together with estimates of the uncertainties in nuclear data and relevant integral measurements, in assessing the accuracy of prediction of reactor parameters is described. The inverse procedure for deciding nuclear data requirements from the target accuracies for prediction of reactor parameters follows on from this. The need for assessments of the uncertainties in nuclear data evaluations and the form of the uncertainty information is discussed. The status of the accuracies of predictions and nuclear data requirements are then summarised. The reactor parameters considered include: (a) Criticality conditions, conversion and burn-up effects. (b) Energy production and deposition, decay heating, irradiation damage, dosimetry and induced radioactivity. (c) Kinetics characteristics and control, including temperature, power and coolant density coefficients, delayed neutrons and control absorbers. (author)
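
    As a generic illustration of the sensitivity-analysis step mentioned here (not a reproduction of any particular evaluation), the sketch below propagates an assumed nuclear-data covariance matrix through a sensitivity vector with the usual sandwich rule, var(R)/R^2 = S'CS, to obtain the relative uncertainty of a predicted reactor parameter; all numbers are invented.

        import numpy as np

        # Relative sensitivities (dR/R per fractional change in the data) of a reactor parameter,
        # e.g. k-eff, to three cross-section groups; values are purely illustrative.
        S = np.array([0.45, -0.20, 0.10])

        # Relative covariance matrix of the corresponding nuclear data (fractional uncertainties).
        C = np.array([[0.02**2,            0.5 * 0.02 * 0.05, 0.0],
                      [0.5 * 0.02 * 0.05,  0.05**2,           0.0],
                      [0.0,                0.0,               0.03**2]])

        # Sandwich rule: relative variance of the predicted parameter.
        rel_var = S @ C @ S
        print("predicted relative uncertainty: %.2f%%" % (100.0 * np.sqrt(rel_var)))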

  3. Data accuracy assessment using enterprise architecture

    Science.gov (United States)

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  4. Accuracy required and achievable in radiotherapy dosimetry: have modern technology and techniques changed our views?

    Science.gov (United States)

    Thwaites, David

    2013-06-01

    In this review of the accuracy required and achievable in radiotherapy dosimetry, older approaches and evidence-based estimates for 3DCRT have been reprised, summarising and drawing together the author's earlier evaluations where still relevant. Available evidence for IMRT uncertainties has been reviewed, selecting information from tolerances, QA, verification measurements, in vivo dosimetry and dose delivery audits, to consider whether achievable uncertainties increase or decrease for current advanced treatments and practice. Overall there is some evidence that they tend to increase, but that similar levels should be achievable. Thus it is concluded that those earlier estimates of achievable dosimetric accuracy are still applicable, despite the changes and advances in technology and techniques. The one exception is where there is significant lung involvement, where it is likely that uncertainties have now improved due to widespread use of more accurate heterogeneity models. Geometric uncertainties have improved with the wide availability of IGRT.

  5. Evaluation of accuracy of intra operative imprint cytology for detection of breast lesions

    International Nuclear Information System (INIS)

    Mahmood, Z.; Shahbaz, A.; Qureshi, A.; Aziz, N.; Niazi, S.; Qureshi, S.; Bukhari, M.H.

    2010-01-01

    Objective: To determine the accuracy of imprint cytology as an intraoperative diagnostic procedure for breast lesions with histopathological correlation. Materials and Methods: This was a descriptive study on 40 cases of breast lesions comprising inflammatory, benign and malignant lesions, including their margins. It was conducted at King Edward Medical University, Lahore in collaboration with all Surgical Departments of Mayo Hospital. Relevant clinical data were recorded in a proforma. Both touch and scrape imprints were prepared from all the lesions and stained with May-Grünwald Giemsa and Haematoxylin and Eosin stains. The imprints were subsequently compared with histopathology sections. Results: When atypical cases were counted as negative, both touch and scrape imprints gave sensitivity, specificity, positive predictive value, negative predictive value and accuracy of 100%. However, when cases with atypia were counted as positive, sensitivity and negative predictive value were 100% with both touch and scrape imprints. Specificity, positive predictive value and accuracy were 71%, 86% and 85.5% respectively with touch imprints, and 78%, 89% and 89% respectively with scrape imprints. No diagnostic difference was noted between the results of both stains. All the imprints were well correlated with histopathological diagnosis. Conclusion: Imprint cytology is an accurate and simple intraoperative method for diagnosing breast lesions. It can provide the surgeons with information regarding immediate clinical and surgical interventions. (author)
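
    For readers who want to reproduce this kind of evaluation, the sketch below computes the reported diagnostic indices from a two-by-two table; the counts are placeholders, not the study's raw data.

        def diagnostic_indices(tp, fp, fn, tn):
            """Standard 2x2-table indices for validating imprint cytology against histopathology."""
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "positive predictive value": tp / (tp + fp),
                "negative predictive value": tn / (tn + fn),
                "accuracy": (tp + tn) / (tp + fp + fn + tn),
            }

        # Placeholder counts for 40 lesions (e.g., treating cases with atypia as positive).
        for name, value in diagnostic_indices(tp=24, fp=3, fn=0, tn=13).items():
            print("%s: %.1f%%" % (name, 100.0 * value))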

  6. Electronic apex locator: A comprehensive literature review — Part II: Effect of different clinical and technical conditions on electronic apex locator's accuracy

    Directory of Open Access Journals (Sweden)

    Hamid Razavian

    2014-01-01

    Full Text Available Introduction: To investigate the effects of different clinical and technical conditions on the accuracy of electronic apex locators (EALs). Materials and Methods: "Tooth apex," "dental instrument," "odontometry," "electronic medical," and "electronic apex locator" were searched as primary identifiers via Medline/PubMed, Cochrane library, and the Scopus database up to 30 July 2013. Original articles that fulfilled the inclusion criteria were selected and reviewed. Results: Out of 402 relevant studies, 183 were selected based on the inclusion criteria. In this part, 75 studies are presented. Pulp vitality conditions and root resorption, types of files and irrigating materials do not affect an EAL's accuracy; however, the file size and foramen diameter can affect its accuracy. Conclusions: Various clinical conditions such as the file size and foramen diameter may affect EALs' accuracy. However, more randomized clinical trials are needed for a definitive conclusion.

  7. Improving shuffler assay accuracy

    International Nuclear Information System (INIS)

    Rinard, P.M.

    1995-01-01

    Drums of uranium waste should be disposed of in an economical and environmentally sound manner. The most accurate possible assays of the uranium masses in the drums are required for proper disposal. The accuracies of assays from a shuffler are affected by the type of matrix material in the drums. Non-hydrogenous matrices have little effect on neutron transport and accuracies are very good. If self-shielding is known to be a minor problem, good accuracies are also obtained with hydrogenous matrices when a polyethylene sleeve is placed around the drums. But for those cases where self-shielding may be a problem, matrices are hydrogenous, and uranium distributions are non-uniform throughout the drums, the accuracies are degraded. They can be greatly improved by determining the distributions of the uranium and then applying correction factors based on the distributions. This paper describes a technique for determining uranium distributions by using the neutron count rates in detector banks around the waste drum and solving a set of overdetermined linear equations. Other approaches were studied to determine the distributions and are described briefly. Implementation of this correction is anticipated on an existing shuffler next year
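
    A minimal sketch of the reconstruction step described here, under assumed numbers: detector-bank count rates are modeled as a linear response to the uranium mass in a few coarse drum regions, and the overdetermined linear system is solved by least squares. The response matrix and count rates below are invented for illustration and are not calibration data for any real shuffler.

        import numpy as np

        # Assumed response matrix: expected count rate in each of six detector banks per unit
        # uranium mass in each of three vertical drum regions (bottom, middle, top).
        R = np.array([[1.0, 0.5, 0.2],
                      [0.9, 0.6, 0.3],
                      [0.5, 1.0, 0.5],
                      [0.4, 0.9, 0.6],
                      [0.2, 0.5, 1.0],
                      [0.3, 0.6, 0.9]])

        counts = np.array([120.0, 118.0, 95.0, 90.0, 60.0, 64.0])   # measured bank count rates

        # Solve the overdetermined system R @ m = counts for the regional uranium masses.
        masses, residuals, rank, _ = np.linalg.lstsq(R, counts, rcond=None)
        masses = np.clip(masses, 0.0, None)    # physical constraint: masses cannot be negative
        print("estimated regional masses:", np.round(masses, 1))
        # A distribution-dependent correction factor could then be chosen, for example by
        # weighting the assay according to how much mass sits in strongly self-shielded regions.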

  8. The control of translational accuracy is a determinant of healthy ageing in yeast.

    Science.gov (United States)

    von der Haar, Tobias; Leadsham, Jane E; Sauvadet, Aimie; Tarrant, Daniel; Adam, Ilectra S; Saromi, Kofo; Laun, Peter; Rinnerthaler, Mark; Breitenbach-Koller, Hannelore; Breitenbach, Michael; Tuite, Mick F; Gourlay, Campbell W

    2017-01-01

    Life requires the maintenance of molecular function in the face of stochastic processes that tend to adversely affect macromolecular integrity. This is particularly relevant during ageing, as many cellular functions decline with age, including growth, mitochondrial function and energy metabolism. Protein synthesis must deliver functional proteins at all times, implying that the effects of protein synthesis errors like amino acid misincorporation and stop-codon read-through must be minimized during ageing. Here we show that loss of translational accuracy accelerates the loss of viability in stationary phase yeast. Since reduced translational accuracy also reduces the folding competence of at least some proteins, we hypothesize that negative interactions between translational errors and age-related protein damage together overwhelm the cellular chaperone network. We further show that multiple cellular signalling networks control basal error rates in yeast cells, including a ROS signal controlled by mitochondrial activity, and the Ras pathway. Together, our findings indicate that signalling pathways regulating growth, protein homeostasis and energy metabolism may jointly safeguard accurate protein synthesis during healthy ageing. © 2017 The Authors.

  9. Investigation on dimensional accuracy and mechanical properties of cylindrical parts by flow forming

    Directory of Open Access Journals (Sweden)

    Xiao Gangfeng

    2015-01-01

    Full Text Available High dimensional accuracy and excellent mechanical properties have become the two most important requirements for structural components. In this paper, experiments using two spinning methods, stagger spinning and counter-roller spinning, were carried out under different thinning ratios of the wall thickness of spun parts. The influence of the spinning method and the total thinning ratio of wall thickness on the dimensional accuracy and mechanical properties of the spun parts was studied. It is shown that the wall thickness deviation and ovality of the spun parts are closely related to the spinning method and the total thinning ratio of wall thickness. The hardness of the spun parts increases with increasing total thinning ratio, and the hardness along the thickness direction of parts manufactured by counter-roller spinning is more homogeneous than that of stagger spinning. The strength and the elongation of the spun parts are mainly influenced by the total thinning ratio, with little relevance to the spinning method.

  10. Relevance of nonlinear effects of uncertainties in the input data on the calculational results

    International Nuclear Information System (INIS)

    Carvalho da Silva, F.; D'Angelo, A.; Gandini, A.; Rado, V.

    1982-01-01

    The second order sensitivity analysis relevant to neutron activations at the end of Fe and Na blocks shows that the discrepancy between the values obtained from the direct calculation and those which take into account the inaccuracy of the input data (average values) can be significant in cases of interest. It has been observed that, for a threshold detector response after a penetration larger than 50 cm in Fe blocks and 100 cm in Na blocks, the magnitude of this discrepancy (from 50% up to 100% of standard deviation) leads to the necessity of improving the existing accuracy of the inelastic cross-sections of Fe and Na. Moreover, the above discrepancy has been evaluated in terms of project parameters relevant to a power fast fission reactor, in particular, the Fe-displacement rate in the Fe/Na shield region and the Na-activation rate in the heat exchanger. (author)

  11. Hybrid Brain–Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review

    Science.gov (United States)

    Hong, Keum-Shik; Khan, Muhammad Jawad

    2017-01-01

    In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain–computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided. PMID:28790910

  12. Hybrid Brain-Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review.

    Science.gov (United States)

    Hong, Keum-Shik; Khan, Muhammad Jawad

    2017-01-01

    In this article, non-invasive hybrid brain-computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain-computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided.

  13. Hybrid Brain–Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review

    Directory of Open Access Journals (Sweden)

    Keum-Shik Hong

    2017-07-01

    Full Text Available In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain–computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided.

  14. Application of Multilayer Perceptron with Automatic Relevance Determination on Weed Mapping Using UAV Multispectral Imagery.

    Science.gov (United States)

    Tamouridou, Afroditi A; Alexandridis, Thomas K; Pantazi, Xanthoula E; Lagopodi, Anastasia L; Kashefi, Javid; Kasampalis, Dimitris; Kontouris, Georgios; Moshou, Dimitrios

    2017-10-11

    Remote sensing techniques are routinely used in plant species discrimination and weed mapping. In the presented work, successful Silybum marianum detection and mapping using multilayer neural networks is demonstrated. A multispectral camera (green-red-near infrared) attached to a fixed-wing unmanned aerial vehicle (UAV) was utilized for the acquisition of high-resolution images (0.1 m resolution). The Multilayer Perceptron with Automatic Relevance Determination (MLP-ARD) was used to identify S. marianum among other vegetation, mostly Avena sterilis L. The three spectral bands of Red, Green, and Near Infrared (NIR) and the texture layer resulting from local variance were used as input. The S. marianum identification rates using MLP-ARD reached an accuracy of 99.54%. The study had a one-year duration, meaning that the results are specific to that year, although the accuracy shows the interesting potential of S. marianum mapping with MLP-ARD on multispectral UAV imagery.
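
    Scikit-learn's MLPClassifier does not implement automatic relevance determination, so the sketch below is only a simplified stand-in for the MLP-ARD pipeline: synthetic pixels described by green, red, near-infrared and a local-variance texture value are classified with a small multilayer perceptron. All band statistics are invented for the example.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n = 500

        # Synthetic pixel samples: [green, red, NIR, local-variance texture] for two classes,
        # standing in for S. marianum versus other vegetation (e.g., Avena sterilis).
        weed = np.column_stack([rng.normal(0.18, 0.03, n), rng.normal(0.12, 0.03, n),
                                rng.normal(0.55, 0.05, n), rng.normal(0.020, 0.005, n)])
        other = np.column_stack([rng.normal(0.15, 0.03, n), rng.normal(0.16, 0.03, n),
                                 rng.normal(0.45, 0.05, n), rng.normal(0.010, 0.005, n)])
        X = np.vstack([weed, other])
        y = np.array([1] * n + [0] * n)

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
        scaler = StandardScaler().fit(X_train)

        clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
        clf.fit(scaler.transform(X_train), y_train)
        print("test accuracy: %.3f" % clf.score(scaler.transform(X_test), y_test))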

  15. Geoid undulation accuracy

    Science.gov (United States)

    Rapp, Richard H.

    1993-01-01

    The determination of the geoid, an equipotential surface of the Earth's gravity field, has long been of interest to geodesists and oceanographers. The geoid provides a surface to which the actual ocean surface can be compared, with the differences carrying information on the circulation patterns of the oceans. For use in oceanographic applications the geoid is ideally needed at both high accuracy and high resolution. There are applications that require geoid undulation information to an accuracy of +/- 10 cm with a resolution of 50 km. We are far from this goal today, but substantial improvement in geoid determination has been made. In 1979 the cumulative geoid undulation error to spherical harmonic degree 20 was +/- 1.4 m for the GEM10 potential coefficient model. Today the corresponding value has been reduced to +/- 25 cm for GEM-T3 or +/- 11 cm for the OSU91A model. Similar improvements are noted by harmonic degree (wavelength) and in resolution. Potential coefficient models now exist to degree 360 based on a combination of data types. This paper discusses the accuracy changes that have taken place in the past 12 years in the determination of geoid undulations.

  16. Feature relevance assessment for the semantic interpretation of 3D point cloud data

    Directory of Open Access Journals (Sweden)

    M. Weinmann

    2013-10-01

    Full Text Available The automatic analysis of large 3D point clouds represents a crucial task in photogrammetry, remote sensing and computer vision. In this paper, we propose a new methodology for the semantic interpretation of such point clouds which involves feature relevance assessment in order to reduce both processing time and memory consumption. Given a standard benchmark dataset with 1.3 million 3D points, we first extract a set of 21 geometric 3D and 2D features. Subsequently, we apply a classifier-independent ranking procedure which involves a general relevance metric in order to derive compact and robust subsets of versatile features which are generally applicable for a large variety of subsequent tasks. This metric is based on 7 different feature selection strategies and thus addresses different intrinsic properties of the given data. For the example of semantically interpreting 3D point cloud data, we demonstrate the great potential of smaller subsets consisting of only the most relevant features with 4 different state-of-the-art classifiers. The results reveal that, instead of including as many features as possible in order to compensate for a lack of knowledge, a crucial task such as scene interpretation can be carried out with only a few versatile features and even improved accuracy.

  17. User perspectives on relevance criteria

    DEFF Research Database (Denmark)

    Maglaughlin, Kelly L.; Sonnenwald, Diane H.

    2002-01-01

    This study investigates the use of criteria to assess relevant, partially relevant, and not-relevant documents. Study participants identified passages within 20 document representations that they used to make relevance judgments; judged each document representation as a whole to be relevant, partially relevant, or not relevant to their information need; and explained their decisions in an interview. Analysis revealed 29 criteria, discussed positively and negatively, that were used by the participants when selecting passages that contributed or detracted from a document's relevance. The criteria related to content (e.g., subject matter, thought catalyst), full text (e.g., audience, novelty, type, possible content, utility), journal/publisher (e.g., novelty, main focus, perceived quality), and personal factors (e.g., competition, time requirements). Results further indicate that multiple criteria are used when making relevant, partially relevant, and not-relevant judgments.

  18. Conditional Dependence between Response Time and Accuracy: An Overview of its Possible Sources and Directions for Distinguishing between Them

    Science.gov (United States)

    Bolsinova, Maria; Tijmstra, Jesper; Molenaar, Dylan; De Boeck, Paul

    2017-01-01

    With the widespread use of computerized tests in educational measurement and cognitive psychology, registration of response times has become feasible in many applications. Considering these response times helps provide a more complete picture of the performance and characteristics of persons beyond what is available based on response accuracy alone. Statistical models such as the hierarchical model (van der Linden, 2007) have been proposed that jointly model response time and accuracy. However, these models make restrictive assumptions about the response processes (RPs) that may not be realistic in practice, such as the assumption that the association between response time and accuracy is fully explained by taking speed and ability into account (conditional independence). Assuming conditional independence forces one to ignore that many relevant individual differences may play a role in the RPs beyond overall speed and ability. In this paper, we critically consider the assumption of conditional independence and the important ways in which it may be violated in practice from a substantive perspective. We consider both conditional dependences that may arise when all persons attempt to solve the items in similar ways (homogeneous RPs) and those that may be due to persons differing in fundamental ways in how they deal with the items (heterogeneous processes). The paper provides an overview of what we can learn from observed conditional dependences. We argue that explaining and modeling these differences in the RPs is crucial to increase both the validity of measurement and our understanding of the relevant RPs. PMID:28261136

  19. Status self-validation of a multifunctional sensor using a multivariate relevance vector machine and predictive filters

    International Nuclear Information System (INIS)

    Shen, Zhengguang; Wang, Qi

    2013-01-01

    A novel strategy by using a multivariable relevance vector machine coupled with predictive filters for status self-validation of a multifunctional sensor is proposed. The working principle and online updating algorithm of predictive filters are emphasized for multiple fault detection, isolation and recovery (FDIR), and the incorrect sensor measurements are validated online. The multivariable relevance vector machine is then employed for the signal reconstruction of the multifunctional sensor to generate the final validated measurement values (VMV) of multiple measured components, in which its advantages of sparse models and multivariable simultaneous outputs are fully used. With all likely uncertainty sources of the multifunctional self-validating sensor taken into account, the uncertainty propagation model is deduced in detail to evaluate the online validated uncertainty (VU) under a fault-free situation while a qualitative uncertainty component is appended to indicate the accuracy changes of VMV under different types of fault. A real experimental system of a multifunctional self-validating sensor is designed to verify the performance of the proposed strategy. From the real-time capacity and fault recovery accuracy of FDIR, and runtime of signal reconstruction under small samples, a performance comparison among different methods is made. Results demonstrate that the proposed scheme provides a better solution to the status self-validation of a multifunctional self-validating sensor under both normal and abnormal situations. (paper)

  20. Time to decision: the drivers of innovation adoption decisions

    Science.gov (United States)

    Ciganek, Andrew Paul; (Dave) Haseman, William; Ramamurthy, K.

    2014-03-01

    Organisations desire timeliness. Timeliness facilitates a better responsiveness to changes in an organisation's external environment to either attain or maintain competitiveness. Despite its importance, decision timeliness has not been explicitly examined. Decision timeliness is measured in this study as the time taken to commit to a decision. The research objective is to identify the drivers of decision timeliness in the context of adopting service-oriented architecture (SOA), an innovation for enterprise computing. A research model rooted in the technology-organisation-environment (TOE) framework is proposed and tested with data collected in a large-scale study. The research variables have been examined before in the context of adoption, but their applicability to the timeliness of innovation decision-making has not received much attention and their salience is unclear. The results support multiple hypothesised relationships, including the finding that a risk-oriented organisational culture as well as normative and coercive pressures accelerates decision timeliness. Top management support as well as the traditional innovation attributes (compatibility, relative advantage and complexity/ease-of-use) were not found to be significant when examining their influence on decision timeliness, which appears inconsistent with generally accepted knowledge and deserves further examination.

  1. Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies: The PRISMA-DTA Statement.

    Science.gov (United States)

    McInnes, Matthew D F; Moher, David; Thombs, Brett D; McGrath, Trevor A; Bossuyt, Patrick M; Clifford, Tammy; Cohen, Jérémie F; Deeks, Jonathan J; Gatsonis, Constantine; Hooft, Lotty; Hunt, Harriet A; Hyde, Christopher J; Korevaar, Daniël A; Leeflang, Mariska M G; Macaskill, Petra; Reitsma, Johannes B; Rodin, Rachel; Rutjes, Anne W S; Salameh, Jean-Paul; Stevens, Adrienne; Takwoingi, Yemisi; Tonelli, Marcello; Weeks, Laura; Whiting, Penny; Willis, Brian H

    2018-01-23

    Systematic reviews of diagnostic test accuracy synthesize data from primary diagnostic studies that have evaluated the accuracy of 1 or more index tests against a reference standard, provide estimates of test performance, allow comparisons of the accuracy of different tests, and facilitate the identification of sources of variability in test accuracy. To develop the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagnostic test accuracy guideline as a stand-alone extension of the PRISMA statement. Modifications to the PRISMA statement reflect the specific requirements for reporting of systematic reviews and meta-analyses of diagnostic test accuracy studies and the abstracts for these reviews. Established standards from the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network were followed for the development of the guideline. The original PRISMA statement was used as a framework on which to modify and add items. A group of 24 multidisciplinary experts used a systematic review of articles on existing reporting guidelines and methods, a 3-round Delphi process, a consensus meeting, pilot testing, and iterative refinement to develop the PRISMA diagnostic test accuracy guideline. The final version of the PRISMA diagnostic test accuracy guideline checklist was approved by the group. The systematic review (produced 64 items) and the Delphi process (provided feedback on 7 proposed items; 1 item was later split into 2 items) identified 71 potentially relevant items for consideration. The Delphi process reduced these to 60 items that were discussed at the consensus meeting. Following the meeting, pilot testing and iterative feedback were used to generate the 27-item PRISMA diagnostic test accuracy checklist. To reflect specific or optimal contemporary systematic review methods for diagnostic test accuracy, 8 of the 27 original PRISMA items were left unchanged, 17 were modified, 2 were added, and 2 were omitted. The 27-item

  2. Accuracy limits on rapid assessment of gently varying bathymetry

    Science.gov (United States)

    McDonald, B. Edward; Holland, Charles

    2002-05-01

    Accuracy limits for rapidly probing shallow water bathymetry are investigated as a function of bottom slope and other relevant parameters. The probe scheme [B. E. McDonald and Charles Holland, J. Acoust. Soc. Am. 110, 2767 (2001)] uses a time-reversed mirror (TRM) to ensonify a thin annulus on the ocean bottom at ranges of a few km from a vertical send/receive array. The annulus is shifted in range by variable bathymetry (perturbation theory shows that the focal annulus experiences a radial shift proportional to the integrated bathymetry along a given azimuth). The range shift implies an azimuth-dependent time of maximum reverberation. Thus the reverberant return contains information that might be inverted to give bathymetric parameters. The parameter range over which the perturbation result is accurate is explored using the RAM code for propagation in arbitrarily range-dependent environments. [Work supported by NRL.]

  3. Analisis Faktor Konfirmatori Kualitas Pelaporan Keuangan

    OpenAIRE

    Fanani, Zaenal

    2011-01-01

    The purpose of this study was to test whether the indicators of accounting-based financial reporting quality (accrual quality, persistence, predictability, and income smoothing) and market-based indicators (value relevance, timeliness, and conservatism) differ from one another and contribute to the formation of financial reporting quality. The study used a sample of manufacturing firms and confirmatory factor analysis as the data analysis technique. The results of this study showed no overlap between the s...

  4. Test expectancy affects metacomprehension accuracy.

    Science.gov (United States)

    Thiede, Keith W; Wiley, Jennifer; Griffin, Thomas D

    2011-06-01

    Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and practice tests. The purpose of the present study was to examine whether the accuracy of metacognitive monitoring was affected by the nature of the test expected. Students (N = 59) were randomly assigned to one of two test expectancy groups (memory vs. inference). Then, after reading texts and judging their learning, they completed both memory and inference tests. Test performance and monitoring accuracy were superior when students received the kind of test they had been led to expect rather than the unexpected test. Tests influence students' perceptions of what constitutes learning. Our findings suggest that this could affect how students prepare for tests and how they monitor their own learning. ©2010 The British Psychological Society.

  5. Test Expectancy Affects Metacomprehension Accuracy

    Science.gov (United States)

    Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2011-01-01

    Background: Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and…

  6. Application of all-relevant feature selection for the failure analysis of parameter-induced simulation crashes in climate models

    Science.gov (United States)

    Paja, Wiesław; Wrzesien, Mariusz; Niemiec, Rafał; Rudnicki, Witold R.

    2016-03-01

    Climate models are extremely complex pieces of software. They reflect the best knowledge on the physical components of the climate; nevertheless, they contain several parameters, which are too weakly constrained by observations, and can potentially lead to a simulation crashing. Recently a study by Lucas et al. (2013) has shown that machine learning methods can be used for predicting which combinations of parameters can lead to the simulation crashing and hence which processes described by these parameters need refined analyses. In the current study we reanalyse the data set used in this research using different methodology. We confirm the main conclusion of the original study concerning the suitability of machine learning for the prediction of crashes. We show that only three of the eight parameters indicated in the original study as relevant for prediction of the crash are indeed strongly relevant, three others are relevant but redundant and two are not relevant at all. We also show that the variance due to the split of data between training and validation sets has a large influence both on the accuracy of predictions and on the relative importance of variables; hence only a cross-validated approach can deliver a robust prediction of performance and relevance of variables.
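    The all-relevant selection question (which variables are strongly relevant, which are redundant, which are irrelevant) can be illustrated with a shadow-feature test in the spirit of Boruta-type algorithms: every original variable is compared against the importance achieved by permuted copies that carry no real signal. The sketch below uses synthetic data and a random forest and is not the authors' exact procedure.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=8, n_informative=3, random_state=0)

# Shadow features: column-wise permutations that destroy any real association with y.
X_shadow = np.apply_along_axis(rng.permutation, 0, X)
X_all = np.hstack([X, X_shadow])

forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_all, y)
imp = forest.feature_importances_
threshold = imp[X.shape[1]:].max()          # best importance achieved by pure noise

relevant = np.where(imp[:X.shape[1]] > threshold)[0]
print("parameters flagged as relevant:", relevant)
```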

  7. Parsimonious relevance models

    NARCIS (Netherlands)

    Meij, E.; Weerkamp, W.; Balog, K.; de Rijke, M.; Myang, S.-H.; Oard, D.W.; Sebastiani, F.; Chua, T.-S.; Leong, M.-K.

    2008-01-01

    We describe a method for applying parsimonious language models to re-estimate the term probabilities assigned by relevance models. We apply our method to six topic sets from test collections in five different genres. Our parsimonious relevance models (i) improve retrieval effectiveness in terms of

  8. Diagnostic accuracy in virtual dermatopathology

    DEFF Research Database (Denmark)

    Mooney, E.; Kempf, W.; Jemec, G.B.E.

    2012-01-01

    Background Virtual microscopy is used for teaching medical students and residents and for in-training and certification examinations in the United States. However, no existing studies compare diagnostic accuracy using virtual slides and photomicrographs. The objective of this study was to compare the diagnostic accuracy of dermatopathologists and pathologists using photomicrographs vs. digitized images, through a self-assessment examination, and to elucidate assessment of virtual dermatopathology. Methods Forty-five dermatopathologists and pathologists received a randomized combination of 15 virtual slides and photomicrographs with corresponding clinical photographs and information in a self-assessment examination format. Descriptive data analysis and comparison of groups were performed using a chi-square test. Results Diagnostic accuracy in dermatopathology using virtual dermatopathology...

  9. TECHNIQUE OF CONSTRUCTION AND ANALYSIS OF GLONASS FIELDS OF ACCURACY IN THE GIVEN ZONE OF AIRSPACE

    Directory of Open Access Journals (Sweden)

    O. N. Skrypnik

    2015-01-01

    Full Text Available Based on a LabVIEW program developed for modelling orbital motion and selecting the working satellite constellation, a methodology for constructing fields of potential GLONASS accuracy in a given zone of airspace is proposed. The method is based on estimating the horizontal (HDOP) and vertical (VDOP) geometric factor values at points chosen with a given latitude and longitude step within the airspace under study. After appropriate error handling, the areas where the HDOP and VDOP values lie within a given range are selected and matched cartographically. Expressions for calculating the geometric factors are listed. The validity of the mathematical model and the accuracy of the results were evaluated by comparing data from real experiments with semi-realistic simulations conducted with the CH-4312 aeronautical receiver and the CH-3803M simulator. Changes in the geometric factors at the initial and final points of a flight route, as well as during the Irkutsk-Moscow flight, were investigated. As an example, GLONASS accuracy fields in the horizontal and vertical planes were built for the airspace between Irkutsk and Moscow at the points in time corresponding to the aircraft's take-off in Irkutsk and its landing in Moscow.
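    The geometric factors mentioned above follow directly from the receiver-satellite geometry. A minimal sketch (assuming unit line-of-sight vectors already expressed in a local east-north-up frame, with made-up satellite directions) is:

```python
import numpy as np

def hdop_vdop(unit_los):
    """HDOP/VDOP from unit line-of-sight vectors (one row per satellite), local ENU frame."""
    # Design matrix: direction cosines plus a clock-bias column.
    G = np.hstack([unit_los, np.ones((unit_los.shape[0], 1))])
    Q = np.linalg.inv(G.T @ G)              # cofactor matrix
    hdop = np.sqrt(Q[0, 0] + Q[1, 1])       # east + north components
    vdop = np.sqrt(Q[2, 2])                 # up component
    return hdop, vdop

# Toy constellation of five satellites (unit vectors from receiver towards satellite).
los = np.array([[0.3, 0.4, 0.87],
                [-0.5, 0.2, 0.84],
                [0.1, -0.7, 0.71],
                [0.6, -0.1, 0.79],
                [-0.2, -0.3, 0.93]])
los /= np.linalg.norm(los, axis=1, keepdims=True)
print(hdop_vdop(los))
```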

  10. Improving Accuracy of Processing Through Active Control

    Directory of Open Access Journals (Sweden)

    N. N. Barbashov

    2016-01-01

    Full Text Available An important task of modern mathematical statistics, with its methods based on probability theory, is the scientific estimation of measurement results. Control itself carries certain costs, and ineffective control, in which a customer receives defective products, raises these costs considerably because of part recalls. When parts are machined, errors shift the scatter of part dimensions towards the tolerance limit. Improving processing accuracy and avoiding defective products therefore requires reducing the components of machining error, i.e. improving the accuracy of the machine and tool, tool life, the rigidity of the system and the accuracy of the adjustment; the machine must also be re-adjusted at the appropriate times. To improve accuracy and machining rate, various in-process gauging devices and controlled machining based on adaptive control systems for process monitoring are currently becoming widely used. In this case the accuracy improvement comes from compensating the majority of technological errors. In-cycle measuring sensors (active control sensors) allow the processing accuracy to be improved by one or two quality grades and provide a capability for the simultaneous operation of several machines. Their efficient use requires the development of methods to control accuracy by providing the appropriate adjustments. Methods based on the moving average appear to be the most promising for accuracy control, since they incorporate data on the change in the last few measured values of the parameter under control.
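    A minimal sketch of the moving-average idea described above: the last few in-cycle measurements are averaged, and a compensating tool offset is issued when the averaged drift from the target exceeds a dead band. The function name, window size and thresholds are illustrative, not taken from the article.

```python
from collections import deque

def moving_average_controller(measurements, target, window=5, dead_band=0.01):
    """Yield the cumulative tool offset after each measured part (toy active-control loop)."""
    recent = deque(maxlen=window)
    offset = 0.0
    for m in measurements:
        recent.append(m + offset)           # dimension as it would appear after correction
        drift = sum(recent) / len(recent) - target
        if len(recent) == window and abs(drift) > dead_band:
            offset -= drift                 # compensate the averaged drift
            recent.clear()                  # restart averaging after an adjustment
        yield offset

# Example: dimensions drifting towards the upper tolerance limit due to tool wear.
sizes = [10.000 + 0.004 * i for i in range(12)]
print([round(o, 4) for o in moving_average_controller(sizes, target=10.000)])
```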

  11. Dokumentation, Kalkulation und Prozessanalyse im DRG-Zeitalter

    OpenAIRE

    Ingenerf, J; Gerdsen, F; Seik, B; Pöppl, SJ; Schreiber, R; Heinemeier, AK; Köppe, K; Bruch, HP

    2005-01-01

    The introduction of the G-DRGs (German Diagnosis Related Groups) in the year 2002 and the gradual use as a reimbursement system beginning from 2005 forces hospitals to meet enormous challenges with respect to organizational and IT-issues. The quality of the basic data set in terms of case and data completeness, accuracy and timeliness has to be ensured because with that the DRG case group and hence, the associated revenue is determined. From the economic point of view the costs of providing t...

  12. An evaluation of safety-critical Java on a Java processor

    OpenAIRE

    Rios Rivas, Juan Ricardo; Schoeberl, Martin

    2014-01-01

    The safety-critical Java (SCJ) specification provides a restricted set of the Java language intended for applications that require certification. In order to test the specification, implementations are emerging and the need to evaluate those implementations in a systematic way is becoming important. In this paper we evaluate our SCJ implementation which is based on the Java Optimized Processor JOP and we measure different performance and timeliness criteria relevant to hard real-time systems....

  13. Accuracy and precision in thermoluminescence dosimetry

    International Nuclear Information System (INIS)

    Marshall, T.O.

    1984-01-01

    The question of accuracy and precision in thermoluminescent dosimetry, particularly in relation to lithium fluoride phosphor, is discussed. The more important sources of error, including those due to the detectors, the reader, annealing and dosemeter design, are identified and methods of reducing their effects on accuracy and precision to a minimum are given. Finally, the accuracy and precision achievable for three quite different applications are discussed, namely, for personal dosimetry, environmental monitoring and for the measurement of photon dose distributions in phantoms. (U.K.)

  14. Model Predictive Engine Air-Ratio Control Using Online Sequential Relevance Vector Machine

    Directory of Open Access Journals (Sweden)

    Hang-cheong Wong

    2012-01-01

    Full Text Available Engine power, brake-specific fuel consumption, and emissions relate closely to air ratio (i.e., lambda) among all the engine variables. An accurate and adaptive model for lambda prediction is essential to effective long-term lambda control. This paper utilizes an emerging technique, the relevance vector machine (RVM), to build a reliable time-dependent lambda model which can be continually updated whenever a sample is added to, or removed from, the estimated lambda model. The paper also presents a new model predictive control (MPC) algorithm for air-ratio regulation based on RVM. This study shows that the accuracy, training, and updating time of the RVM model are superior to the latest modelling methods, such as the diagonal recurrent neural network (DRNN) and the decremental least-squares support vector machine (DLSSVM). Moreover, the control algorithm has been implemented and tested on a real car. Experimental results reveal that the control performance of the proposed relevance vector machine model predictive controller (RVMMPC) is also superior to DRNNMPC, support vector machine-based MPC, and the conventional proportional-integral (PI) controller used in production cars. Therefore, the proposed RVMMPC is a promising scheme to replace the conventional PI controller for engine air-ratio control.

  15. Cadastral Database Positional Accuracy Improvement

    Science.gov (United States)

    Hashim, N. M.; Omar, A. H.; Ramli, S. N. M.; Omar, K. M.; Din, N.

    2017-10-01

    Positional Accuracy Improvement (PAI) is the process of refining the geometry of features in a geospatial dataset to improve their actual position. This actual position relates to the absolute position in a specific coordinate system and the relation to neighbouring features. With the growth of spatially based technology, especially Geographical Information Systems (GIS) and Global Navigation Satellite Systems (GNSS), PAI campaigns are inevitable, especially for legacy cadastral databases. Integration of a legacy dataset with a higher-accuracy dataset such as GNSS observations is a potential solution for improving the legacy dataset. However, merely integrating both datasets will lead to a distortion of the relative geometry. The improved dataset should be further treated to minimize inherent errors and to fit the new, accurate dataset. The main focus of this study is to describe a method of angular-based Least Squares Adjustment (LSA) for the PAI process of legacy datasets. The existing high-accuracy dataset known as the National Digital Cadastral Database (NDCDB) is then used as a benchmark to validate the results. It was found that the proposed technique is feasible for the positional accuracy improvement of legacy spatial datasets.

  16. Assessing the impact of the introduction of an electronic hospital discharge system on the completeness and timeliness of discharge communication: a before and after study.

    Science.gov (United States)

    Mehta, Rajnikant L; Baxendale, Bryn; Roth, Katie; Caswell, Victoria; Le Jeune, Ivan; Hawkins, Jack; Zedan, Haya; Avery, Anthony J

    2017-09-05

    Hospital discharge summaries are a key communication tool ensuring continuity of care between primary and secondary care. Incomplete or untimely communication of information increases risk of hospital readmission and associated complications. The aim of this study was to evaluate whether the introduction of a new electronic discharge system (NewEDS) was associated with improvements in the completeness and timeliness of discharge information, in Nottingham University Hospitals NHS Trust, England. A before and after longitudinal study design was used. Data were collected using the gold standard auditing tool from the Royal College of Physicians (RCP). This tool contains a checklist of 57 items grouped into seven categories, 28 of which are classified as mandatory by RCP. Percentage completeness (out of the 28 mandatory items) was considered to be the primary outcome measure. Data from 773 patients discharged directly from the acute medical unit over eight-week long time periods (four before and four after the change to the NewEDS) from August 2010 to May 2012 were extracted and evaluated. Results were summarised by effect size on completeness before and after changeover to NewEDS respectively. The primary outcome variable was represented with percentage of completeness score and a non-parametric technique was used to compare pre-NewEDS and post-NewEDS scores. The changeover to the NewEDS resulted in an increased completeness of discharge summaries from 60.7% to 75.0% (p communication.

  17. Get the Diagnosis: an evidence-based medicine collaborative Wiki for diagnostic test accuracy.

    Science.gov (United States)

    Hammer, Mark M; Kohlberg, Gavriel D

    2017-04-01

    Despite widespread calls for its use, there are challenges to the implementation of evidence-based medicine (EBM) in clinical practice. In response to the challenges of finding timely, pertinent information on diagnostic test accuracy, we developed an online, crowd-sourced Wiki on diagnostic test accuracy called Get the Diagnosis (GTD, http://www.getthediagnosis.org). Since its launch in November 2008 till October 2015, GTD has accumulated information on 300 diagnoses, with 1617 total diagnostic entries. There are a total of 1097 unique diagnostic tests with a mean of 5.4 tests (range 0-38) per diagnosis. 73% of entries (1182 of 1617) have an associated sensitivity and specificity and 89% of entries (1432 of 1617) have associated peer-reviewed literature citations. Altogether, GTD contains 474 unique literature citations. For a sample of three diagnoses, the search precision (percentage of relevant results in the first 30 entries) in GTD was 100% as compared with a range of 13.3%-63.3% for PubMed and between 6.7% and 76.7% for Google Scholar. GTD offers a fast, precise and efficient way to look up diagnostic test accuracy. On three selected examples, GTD had a greater precision rate compared with PubMed and Google Scholar in identifying diagnostic test information. GTD is a free resource that complements other currently available resources. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  18. Accuracy and Precision of Noninvasive Blood Pressure in Normo-, Hyper-, and Hypotensive Standing and Anesthetized Adult Horses.

    Science.gov (United States)

    Heliczer, N; Lorello, O; Casoni, D; Navas de Solis, C

    2016-05-01

    Blood pressure is relevant to the diagnosis and management of many medical, cardiovascular and critical diseases. The accuracy of many commonly used noninvasive blood pressure (NIBP) monitors and the accuracy of NIBP measurements in hypo- and hypertensive standing horses has not been determined. The objective of this study was to investigate the accuracy of an oscillometric BP monitor in standing horses before and during pharmacologically induced hyper- and hypotension and to compare results in standing and anesthetized horses. Eight standing mares from a research herd (SG) and eight anesthetized horses from a hospital population (AG). Prospective experimental and observational studies. Invasive blood pressure (IBP) and NIBP, corrected to heart level, were measured simultaneously. In the SG hyper- and hypotension were induced by administration of phenylephrine (3 μg/kg/min IV for 15 minutes) and acepromazine (0.05 mg/kg IV), respectively. In the AG NIBP and IBP were recorded during regular hospital procedures. There was a significant correlation between mean NIBP and IBP in standing (R = 0.88, P horses (R = 0.81, P horses, but in the SG significant correlation between NIBP and IBP was only detected for the normotensive phase. While the evaluated oscillometric BP device allowed estimation of BP and adequately differentiated marked trends, the accuracy and precision were low in standing horses. Copyright © 2016 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.

  19. THE ACCURACY AND BIAS EVALUATION OF THE USA UNEMPLOYMENT RATE FORECASTS. METHODS TO IMPROVE THE FORECASTS ACCURACY

    Directory of Open Access Journals (Sweden)

    MIHAELA BRATU (SIMIONESCU)

    2012-12-01

    Full Text Available In this study, alternative forecasts for the USA unemployment rate made by four institutions (the International Monetary Fund (IMF), the Organization for Economic Co-operation and Development (OECD), the Congressional Budget Office (CBO) and Blue Chips (BC)) are evaluated with respect to accuracy and bias. The most accurate predictions on the forecasting horizon 201-2011 were provided by the IMF, followed by the OECD, CBO and BC. These results were obtained using Theil's U1 statistic and a new method that has not been used before in the literature in this context. Multi-criteria ranking was applied to rank the institutions by accuracy, taking five important accuracy measures into account at the same time: mean error, mean squared error, root mean squared error, and Theil's U1 and U2 statistics. The IMF, OECD and CBO predictions are unbiased. Combining the institutions' forecasts is a suitable strategy for improving the accuracy of the IMF and OECD forecasts under all combination schemes, with the INV scheme performing best. Filtering and smoothing the original predictions with the Hodrick-Prescott filter and the Holt-Winters technique, respectively, is a good strategy for improving only the BC expectations. The proposed strategies to improve accuracy do not solve the problem of bias. The assessment and improvement of forecast accuracy make an important contribution to raising the quality of the decision-making process.
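    For reference, Theil's U1 statistic used in this evaluation compares forecasts against actual values and lies between 0 (perfect forecasts) and 1; a small sketch with illustrative numbers only is:

```python
import numpy as np

def theil_u1(actual, forecast):
    """Theil's U1: 0 = perfect forecasts, values near 1 indicate poor accuracy."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    rmse = np.sqrt(np.mean((actual - forecast) ** 2))
    return rmse / (np.sqrt(np.mean(actual ** 2)) + np.sqrt(np.mean(forecast ** 2)))

# Illustrative unemployment-rate forecasts from two hypothetical institutions.
actual = [9.6, 8.9, 8.1]
print(theil_u1(actual, [9.4, 9.0, 8.3]))   # closer to 0 -> more accurate
print(theil_u1(actual, [8.5, 8.5, 8.5]))
```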

  20. You are so beautiful... to me: seeing beyond biases and achieving accuracy in romantic relationships.

    Science.gov (United States)

    Solomon, Brittany C; Vazire, Simine

    2014-09-01

    Do romantic partners see each other realistically, or do they have overly positive perceptions of each other? Research has shown that realism and positivity co-exist in romantic partners' perceptions (Boyes & Fletcher, 2007). The current study takes a novel approach to explaining this seemingly paradoxical effect when it comes to physical attractiveness--a highly evaluative trait that is especially relevant to romantic relationships. Specifically, we argue that people are aware that others do not see their partners as positively as they do. Using both mean differences and correlational approaches, we test the hypothesis that despite their own biased and idiosyncratic perceptions, people have 2 types of partner-knowledge: insight into how their partners see themselves (i.e., identity accuracy) and insight into how others see their partners (i.e., reputation accuracy). Our results suggest that romantic partners have some awareness of each other's identity and reputation for physical attractiveness, supporting theories that couple members' perceptions are driven by motives to fulfill both esteem- and epistemic-related needs (i.e., to see their partners positively and realistically). 2014 APA, all rights reserved

  1. On the accuracy potential of focused plenoptic camera range determination in long distance operation

    Science.gov (United States)

    Sardemann, Hannes; Maas, Hans-Gerd

    2016-04-01

    Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, the development in digital photography, micro-lens fabrication technology and computer hardware has boosted the development and lead to several commercially available ready-to-use cameras. Beyond their popular option of a posteriori image focusing or total focus image generation, their basic ability of generating 3D information from single camera imagery depicts a very beneficial option for certain applications. The paper will first present some fundamentals on the design and history of plenoptic cameras and will describe depth determination from plenoptic camera image data. It will then present an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close range applications, we will focus on mid and long ranges of up to 100 m. This range is especially relevant, if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors in the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much higher than these values were observed in single point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for the application fields of real-time robotics like autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.

  2. Accuracy Assessment of Different Digital Surface Models

    Directory of Open Access Journals (Sweden)

    Ugur Alganci

    2018-03-01

    Full Text Available Digital elevation models (DEMs, which can occur in the form of digital surface models (DSMs or digital terrain models (DTMs, are widely used as important geospatial information sources for various remote sensing applications, including the precise orthorectification of high-resolution satellite images, 3D spatial analyses, multi-criteria decision support systems, and deformation monitoring. The accuracy of DEMs has direct impacts on specific calculations and process chains; therefore, it is important to select the most appropriate DEM by considering the aim, accuracy requirement, and scale of each study. In this research, DSMs obtained from a variety of satellite sensors were compared to analyze their accuracy and performance. For this purpose, freely available Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER 30 m, Shuttle Radar Topography Mission (SRTM 30 m, and Advanced Land Observing Satellite (ALOS 30 m resolution DSM data were obtained. Additionally, 3 m and 1 m resolution DSMs were produced from tri-stereo images from the SPOT 6 and Pleiades high-resolution (PHR 1A satellites, respectively. Elevation reference data provided by the General Command of Mapping, the national mapping agency of Turkey—produced from 30 cm spatial resolution stereo aerial photos, with a 5 m grid spacing and ±3 m or better overall vertical accuracy at the 90% confidence interval (CI—were used to perform accuracy assessments. Gross errors and water surfaces were removed from the reference DSM. The relative accuracies of the different DSMs were tested using a different number of checkpoints determined by different methods. In the first method, 25 checkpoints were selected from bare lands to evaluate the accuracies of the DSMs on terrain surfaces. In the second method, 1000 randomly selected checkpoints were used to evaluate the methods’ accuracies for the whole study area. In addition to the control point approach, vertical cross
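    The checkpoint-based vertical accuracy figures reported in such comparisons reduce to simple error statistics between DSM heights and reference heights at the checkpoints. The following sketch (with made-up heights, not the study data) computes the usual mean error, RMSE and NMAD:

```python
import numpy as np

def vertical_accuracy(dsm_heights, reference_heights):
    """Basic checkpoint statistics: mean error (bias), RMSE and NMAD."""
    err = np.asarray(dsm_heights, float) - np.asarray(reference_heights, float)
    nmad = 1.4826 * np.median(np.abs(err - np.median(err)))   # robust spread estimate
    return {"mean_error": err.mean(), "rmse": np.sqrt(np.mean(err ** 2)), "nmad": nmad}

# Toy checkpoints (metres); real assessments would use hundreds of reference points.
dsm = [102.3, 98.7, 110.4, 95.1, 101.0]
ref = [101.8, 98.9, 109.6, 95.5, 100.2]
print(vertical_accuracy(dsm, ref))
```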

  3. Evaluation of the generality and accuracy of a new mesh morphing procedure for the human femur.

    Science.gov (United States)

    Grassi, Lorenzo; Hraiech, Najah; Schileo, Enrico; Ansaloni, Mauro; Rochette, Michel; Viceconti, Marco

    2011-01-01

    Various papers described mesh morphing techniques for computational biomechanics, but none of them provided a quantitative assessment of generality, robustness, automation, and accuracy in predicting strains. This study aims to quantitatively evaluate the performance of a novel mesh-morphing algorithm. A mesh-morphing algorithm based on radial-basis functions and on manual selection of corresponding landmarks on template and target was developed. The periosteal geometries of 100 femurs were derived from a computed tomography scan database and used to test the algorithm generality in producing finite element (FE) morphed meshes. A published benchmark, consisting of eight femurs for which in vitro strain measurements and standard FE model strain prediction accuracy were available, was used to assess the accuracy of morphed FE models in predicting strains. Relevant parameters were identified to test the algorithm robustness to operative conditions. Time and effort needed were evaluated to define the algorithm degree of automation. Morphing was successful for 95% of the specimens, with mesh quality indicators comparable to those of standard FE meshes. Accuracy of the morphed meshes in predicting strains was good (R(2)>0.9, RMSE%0.05) and partially to the number of landmark used. Producing a morphed mesh starting from the triangularized geometry of the specimen requires on average 10 min. The proposed method is general, robust, automated, and accurate enough to be used in bone FE modelling from diagnostic data, and prospectively in applications such as statistical shape modelling. Copyright © 2010 IPEM. Published by Elsevier Ltd. All rights reserved.
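    The radial-basis-function morphing step can be illustrated in a few lines: landmark displacements between template and target define an RBF interpolant, which is then evaluated at every template mesh node. The sketch below uses a Gaussian kernel with synthetic nodes and landmarks and is not the authors' implementation.

```python
import numpy as np

def rbf_morph(template_nodes, src_landmarks, dst_landmarks, eps=50.0):
    """Warp template nodes so that source landmarks map onto target landmarks (Gaussian RBF)."""
    def kernel(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return np.exp(-(d / eps) ** 2)

    K = kernel(src_landmarks, src_landmarks)
    weights = np.linalg.solve(K, dst_landmarks - src_landmarks)   # one 3D weight per landmark
    return template_nodes + kernel(template_nodes, src_landmarks) @ weights

# Toy femur-like point cloud and four manually picked landmark pairs (all values synthetic).
rng = np.random.default_rng(1)
nodes = rng.random((1000, 3)) * [50, 50, 400]
src = np.array([[25, 25, 0], [25, 25, 400], [0, 25, 200], [50, 25, 200]], float)
dst = src + np.array([[0, 0, 0], [2, -1, 10], [-3, 0, 5], [3, 1, 5]], float)
print(rbf_morph(nodes, src, dst).shape)
```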

  4. Improving coding accuracy in an academic practice.

    Science.gov (United States)

    Nguyen, Dana; O'Mara, Heather; Powell, Robert

    2017-01-01

    Practice management has become an increasingly important component of graduate medical education. This applies to every practice environment: private, academic, and military. One of the most critical aspects of practice management is documentation and coding for physician services, as they directly affect the financial success of any practice. Our quality improvement project aimed to implement a new and innovative method for teaching billing and coding in a longitudinal fashion in a family medicine residency. We hypothesized that implementation of a new teaching strategy would increase coding accuracy rates among residents and faculty. Design: single group, pretest-posttest. Setting: military family medicine residency clinic. Study populations: 7 faculty physicians and 18 resident physicians participated as learners in the project. Educational intervention: monthly structured coding learning sessions in the academic curriculum that involved learner-presented cases, small group case review, and large group discussion. Main outcome measures: overall coding accuracy (compliance) percentage and coding accuracy per year group for the subjects who were able to participate longitudinally. Statistical tests used: average coding accuracy for the population; paired t test to assess improvement between 2 intervention periods, both aggregate and by year group. Overall coding accuracy rates remained stable over the course of time regardless of the modality of the educational intervention. A paired t test was conducted to compare coding accuracy rates at baseline (mean (M)=26.4%, SD=10%) to accuracy rates after all educational interventions were complete (M=26.8%, SD=12%); t(24)=-0.127, P=.90. Didactic teaching and small group discussion sessions did not improve overall coding accuracy in a residency practice. Future interventions could focus on educating providers at the individual level.

  5. Thermodynamics of accuracy in kinetic proofreading: dissipation and efficiency trade-offs

    International Nuclear Information System (INIS)

    Rao, Riccardo; Peliti, Luca

    2015-01-01

    The high accuracy exhibited by biological information transcription processes is due to kinetic proofreading, i.e. by a mechanism which reduces the error rate of the information-handling process by driving it out of equilibrium. We provide a consistent thermodynamic description of enzyme-assisted assembly processes involving competing substrates, in a master equation framework. We introduce and evaluate a measure of the efficiency based on rigorous non-equilibrium inequalities. The performance of several proofreading models are thus analyzed and the related time, dissipation and efficiency versus error trade-offs exhibited for different discrimination regimes. We finally introduce and analyze in the same framework a simple model which takes into account correlations between consecutive enzyme-assisted assembly steps. This work highlights the relevance of the distinction between energetic and kinetic discrimination regimes in enzyme-substrate interactions. (paper)

  6. A preliminary analysis of human factors affecting the recognition accuracy of a discrete word recognizer for C3 systems

    Science.gov (United States)

    Yellen, H. W.

    1983-03-01

    Literature pertaining to Voice Recognition abounds with information relevant to the assessment of transitory speech recognition devices. In the past, engineering requirements have dictated the path this technology followed, but other factors also influence recognition accuracy. This thesis explores the impact of Human Factors on the successful recognition of speech, principally addressing the differences or variability among users. A Threshold Technology T-600 with a 100-utterance vocabulary was used to test 44 subjects. A statistical analysis was conducted on 5 generic categories of Human Factors: Occupational, Operational, Psychological, Physiological and Personal. How the equipment is trained and the experience level of the speaker were found to be key characteristics influencing recognition accuracy. To a lesser extent, computer experience, time of week, accent, vital capacity and rate of air flow, and speaker cooperativeness and anxiety were found to affect overall error rates.

  7. Channelized relevance vector machine as a numerical observer for cardiac perfusion defect detection task

    Science.gov (United States)

    Kalayeh, Mahdi M.; Marin, Thibault; Pretorius, P. Hendrik; Wernick, Miles N.; Yang, Yongyi; Brankov, Jovan G.

    2011-03-01

    In this paper, we present a numerical observer for image quality assessment, aiming to predict human observer accuracy in a cardiac perfusion defect detection task for single-photon emission computed tomography (SPECT). In medical imaging, image quality should be assessed by evaluating the human observer accuracy for a specific diagnostic task. This approach is known as task-based assessment. Such evaluations are important for optimizing and testing imaging devices and algorithms. Unfortunately, human observer studies with expert readers are costly and time-demanding. To address this problem, numerical observers have been developed as a surrogate for human readers to predict human diagnostic performance. The channelized Hotelling observer (CHO) with internal noise model has been found to predict human performance well in some situations, but does not always generalize well to unseen data. We have argued in the past that finding a model to predict human observers could be viewed as a machine learning problem. Following this approach, in this paper we propose a channelized relevance vector machine (CRVM) to predict human diagnostic scores in a detection task. We have previously used channelized support vector machines (CSVM) to predict human scores and have shown that this approach offers better and more robust predictions than the classical CHO method. The comparison of the proposed CRVM with our previously introduced CSVM method suggests that CRVM can achieve similar generalization accuracy, while dramatically reducing model complexity and computation time.

  8. Extending the Matrix Element Method beyond the Born approximation: calculating event weights at next-to-leading order accuracy

    International Nuclear Information System (INIS)

    Martini, Till; Uwer, Peter

    2015-01-01

    In this article we illustrate how event weights for jet events can be calculated efficiently at next-to-leading order (NLO) accuracy in QCD. This is a crucial prerequisite for the application of the Matrix Element Method in NLO. We modify the recombination procedure used in jet algorithms, to allow a factorisation of the phase space for the real corrections into resolved and unresolved regions. Using an appropriate infrared regulator the latter can be integrated numerically. As illustration, we reproduce differential distributions at NLO for two sample processes. As further application and proof of concept, we apply the Matrix Element Method in NLO accuracy to the mass determination of top quarks produced in e+e− annihilation. This analysis is relevant for a future Linear Collider. We observe a significant shift in the extracted mass depending on whether the Matrix Element Method is used in leading or next-to-leading order.

  9. Multiple sequence alignment accuracy and phylogenetic inference.

    Science.gov (United States)

    Ogden, T Heath; Rosenberg, Michael S

    2006-04-01

    Phylogenies are often thought to be more dependent upon the specifics of the sequence alignment rather than on the method of reconstruction. Simulation of sequences containing insertion and deletion events was performed in order to determine the role that alignment accuracy plays during phylogenetic inference. Data sets were simulated for pectinate, balanced, and random tree shapes under different conditions (ultrametric equal branch length, ultrametric random branch length, nonultrametric random branch length). Comparisons between hypothesized alignments and true alignments enabled determination of two measures of alignment accuracy, that of the total data set and that of individual branches. In general, our results indicate that as alignment error increases, topological accuracy decreases. This trend was much more pronounced for data sets derived from more pectinate topologies. In contrast, for balanced, ultrametric, equal branch length tree shapes, alignment inaccuracy had little average effect on tree reconstruction. These conclusions are based on average trends of many analyses under different conditions, and any one specific analysis, independent of the alignment accuracy, may recover very accurate or inaccurate topologies. Maximum likelihood and Bayesian, in general, outperformed neighbor joining and maximum parsimony in terms of tree reconstruction accuracy. Results also indicated that as the length of the branch and of the neighboring branches increase, alignment accuracy decreases, and the length of the neighboring branches is the major factor in topological accuracy. Thus, multiple-sequence alignment can be an important factor in downstream effects on topological reconstruction.

  10. Accuracy Limitations in Optical Linear Algebra Processors

    Science.gov (United States)

    Batsell, Stephen Gordon

    1990-01-01

    One of the limiting factors in applying optical linear algebra processors (OLAPs) to real-world problems has been the poor achievable accuracy of these processors. Little previous research has been done on determining noise sources from a systems perspective which would include noise generated in the multiplication and addition operations, noise from spatial variations across arrays, and from crosstalk. In this dissertation, we propose a second-order statistical model for an OLAP which incorporates all these system noise sources. We now apply this knowledge to determining upper and lower bounds on the achievable accuracy. This is accomplished by first translating the standard definition of accuracy used in electronic digital processors to analog optical processors. We then employ our second-order statistical model. Having determined a general accuracy equation, we consider limiting cases such as for ideal and noisy components. From the ideal case, we find the fundamental limitations on improving analog processor accuracy. From the noisy case, we determine the practical limitations based on both device and system noise sources. These bounds allow system trade-offs to be made both in the choice of architecture and in individual components in such a way as to maximize the accuracy of the processor. Finally, by determining the fundamental limitations, we show the system engineer when the accuracy desired can be achieved from hardware or architecture improvements and when it must come from signal pre-processing and/or post-processing techniques.

  11. Accuracy of simple plain radiographic signs and measures to diagnose acute scapholunate ligament injuries of the wrist

    Energy Technology Data Exchange (ETDEWEB)

    Dornberger, Jenny E. [Unfallkrankenhaus Berlin, Department of Plastic Surgery and Burn Care, Berlin (Germany); Rademacher, Grit; Mutze, Sven [Unfallkrankenhaus Berlin, Institute of Radiology, Berlin (Germany); Eisenschenk, Andreas [Unfallkrankenhaus Berlin, Department of Hand-, Replantation- and Microsurgery, Berlin (Germany); University Medicine Greifswald, Department of Hand Surgery and Microsurgery, Greifswald (Germany); Stengel, Dirk [Unfallkrankenhaus Berlin, Centre for Clinical Research, Berlin (Germany); Charite Medical University Centre, Julius Wolff Institute, Centre for Musculoskeletal Surgery, Berlin (Germany)

    2015-12-15

    To determine the accuracy of common radiological indices for diagnosing ruptures of the scapholunate (SL) ligament, the most relevant soft tissue injury of the wrist. This was a prospective diagnostic accuracy study with independent verification of index test findings by a reference standard (wrist arthroscopy). Bilateral digital radiographs in posteroanterior (pa), lateral and Stecher's projection were evaluated by two independent expert readers. Diagnostic accuracy of radiological signs was expressed as sensitivity, specificity, positive (PPV) and negative (NPV) predictive values with 95 % confidence intervals (CI). The prevalence of significant acute SL tears (grade ≥ III according to Geissler's classification) was 27/72 (38 %, 95 % CI 26-50 %). The SL distance on Stecher's projection proved the most accurate index to rule the presence of an SL rupture in and out. SL distance on plain pa radiographs, Stecher's projection and the radiolunate angle contributed independently to the final diagnostic model. These three simple indices explained 97 % of the diagnostic variance. In the era of computed tomography and magnetic resonance imaging, plain radiographs remain a highly sensitive and specific primary tool to triage patients with a suspected SL tear to further diagnostic work-up and surgical care. (orig.)
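    The accuracy measures quoted here (sensitivity, specificity, PPV, NPV) follow directly from a 2x2 table of index-test findings against the arthroscopic reference standard; a small sketch with illustrative counts (not the study data) is:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Standard 2x2 accuracy measures for an index test against a reference standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts only: 27 arthroscopy-positive and 45 arthroscopy-negative wrists.
print(diagnostic_accuracy(tp=22, fp=6, fn=5, tn=39))
```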

  12. Quantitative accuracy assessment of thermalhydraulic code predictions with SARBM

    International Nuclear Information System (INIS)

    Prosek, A.

    2001-01-01

    In recent years, the nuclear reactor industry has focused significant attention on nuclear reactor systems code accuracy and uncertainty issues. A few methods suitable to quantify code accuracy of thermalhydraulic code calculations were proposed and applied in the past. In this study a Stochastic Approximation Ratio Based Method (SARBM) was adapted and proposed for accuracy quantification. The objective of the study was to qualify the SARBM. The study compare the accuracy obtained by SARBM with the results obtained by widely used Fast Fourier Transform Based Method (FFTBM). The methods were applied to RELAP5/MOD3.2 code calculations of various BETHSY experiments. The obtained results showed that the SARBM was able to satisfactorily predict the accuracy of the calculated trends when visually comparing plots and comparing the results with the qualified FFTBM. The analysis also showed that the new figure-of-merit called accuracy factor (AF) is more convenient than stochastic approximation ratio for combining single variable accuracy's into total accuracy. The accuracy results obtained for the selected tests suggest that the acceptability factors for the SAR method were reasonably defined. The results also indicate that AF is a useful quantitative measure of accuracy.(author)

  13. Forecast Accuracy Uncertainty and Momentum

    OpenAIRE

    Bing Han; Dong Hong; Mitch Warachka

    2009-01-01

    We demonstrate that stock price momentum and earnings momentum can result from uncertainty surrounding the accuracy of cash flow forecasts. Our model has multiple information sources issuing cash flow forecasts for a stock. The investor combines these forecasts into an aggregate cash flow estimate that has minimal mean-squared forecast error. This aggregate estimate weights each cash flow forecast by the estimated accuracy of its issuer, which is obtained from their past forecast errors. Mome...

  14. Classification Accuracy Increase Using Multisensor Data Fusion

    Science.gov (United States)

    Makarau, A.; Palubinskas, G.; Reinartz, P.

    2011-09-01

    The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, Quickbird, GeoEye-1, etc.) but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to the confusion of materials such as different roofs, pavements, roads, etc. and therefore may provide wrong interpretation and use of classification products. Employment of hyperspectral data is another solution, but their low spatial resolution (comparing to multispectral data) restrict their usage for many applications. Another improvement can be achieved by fusion approaches of multisensory data since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach for very high resolution SAR and multispectral data fusion for automatic classification in urban areas. Single polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework allows a relevant way of multisource data combination following consensus theory. The classification is not influenced by the limitations of dimensionality, and the calculation complexity primarily depends on the step of dimensionality reduction. Fusion of single polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allow for different types of urban objects to be classified into predefined classes of interest with increased accuracy. The comparison to classification results of WorldView-2 multispectral data (8 spectral bands) is provided and the numerical evaluation of the method in comparison to

  15. Accuracy of the 14 C-urea breath test for the diagnosis of Helicobacter pylori

    International Nuclear Information System (INIS)

    Gomes, Ana Thereza Britto; Secaf, Marie; Modena, Jose Luiz Pimenta; Troncon, Luiz Ernesto de Almeida; Oliveira, Ricardo Brandt de

    2002-01-01

    The development of simple, accurate and low-expense techniques for detection of Helicobacter pylori infection has great relevance. The objective was to determine the accuracy of a rapid 14 C-urea breath test (UBT) employing a very simple device for breathed air collection. One hundred and thirty-seven adult patients who underwent upper gastrointestinal endoscopy in the Clinical Hospital. The main measurements were histology for Helicobacter pylori (HP); urease test; urea breath test (UBT). One hundred and fifteen patients were infected by HP (HP +) according to both histology and the urease test, and 22 patients were HP-negative (HP-), according to the same two tests. UBT was capable of discriminating between HP + and HP- in a way that was similar to the combination of urease test and histology. When this combination of results is taken as the 'gold standard' for HP infection, the sensitivity and specificity of UBT are both greater than 90% for a range of cut-off points and breathed air collection times. It was concluded that the rapid UBT employing a simple device for air collection has a high accuracy in determining HP infection. (author)

  16. Diagnostic accuracy and measurement sensitivity of digital models for orthodontic purposes: A systematic review.

    Science.gov (United States)

    Rossini, Gabriele; Parrini, Simone; Castroflorio, Tommaso; Deregibus, Andrea; Debernardi, Cesare L

    2016-02-01

    Our objective was to assess the accuracy, validity, and reliability of measurements obtained from virtual dental study models compared with those obtained from plaster models. PubMed, PubMed Central, National Library of Medicine Medline, Embase, Cochrane Central Register of Controlled Clinical trials, Web of Knowledge, Scopus, Google Scholar, and LILACs were searched from January 2000 to November 2014. A grading system described by the Swedish Council on Technology Assessment in Health Care and the Cochrane tool for risk of bias assessment were used to rate the methodologic quality of the articles. Thirty-five relevant articles were selected. The methodologic quality was high. No significant differences were observed for most of the studies in all the measured parameters, with the exception of the American Board of Orthodontics Objective Grading System. Digital models are as reliable as traditional plaster models, with high accuracy, reliability, and reproducibility. Landmark identification, rather than the measuring device or the software, appears to be the greatest limitation. Furthermore, with their advantages in terms of cost, time, and space required, digital models could be considered the new gold standard in current practice. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  17. Early detection of tuberculosis outbreaks among the San Francisco homeless: trade-offs between spatial resolution and temporal scale.

    Directory of Open Access Journals (Sweden)

    Brandon W Higgs

    Full Text Available BACKGROUND: San Francisco has the highest rate of tuberculosis (TB) in the U.S. with recurrent outbreaks among the homeless and marginally housed. It has been shown for syndromic data that when exact geographic coordinates of individual patients are used as the spatial base for outbreak detection, higher detection rates and accuracy are achieved compared to when data are aggregated into administrative regions such as zip codes and census tracts. We examine the effect of varying the spatial resolution in the TB data within the San Francisco homeless population on detection sensitivity, timeliness, and the amount of historical data needed to achieve better performance measures. METHODS AND FINDINGS: We apply a variation of space-time permutation scan statistic to the TB data in which a patient's location is either represented by its exact coordinates or by the centroid of its census tract. We show that the detection sensitivity and timeliness of the method generally improve when exact locations are used to identify real TB outbreaks. When outbreaks are simulated, while the detection timeliness is consistently improved when exact coordinates are used, the detection sensitivity varies depending on the size of the spatial scanning window and the number of tracts in which cases are simulated. Finally, we show that when exact locations are used, a smaller amount of historical data is required for training the model. CONCLUSION: Systematic characterization of the spatio-temporal distribution of TB cases can widely benefit real time surveillance and guide public health investigations of TB outbreaks as to what level of spatial resolution results in improved detection sensitivity and timeliness. Trading higher spatial resolution for better performance is ultimately a tradeoff between maintaining patient confidentiality and improving public health when sharing data. Understanding such tradeoffs is critical to managing the complex interplay between public

  18. Computerization of the safeguards analysis decision process

    International Nuclear Information System (INIS)

    Ehinger, M.H.

    1990-01-01

    This paper reports that safeguards regulations are evolving to meet new demands for timeliness and sensitivity in detecting the loss or unauthorized use of sensitive nuclear materials. The opportunities to meet new rules, particularly in bulk processing plants, involve developing techniques which use modern, computerized process control and information systems. Using these computerized systems in the safeguards analysis involves all the challenges of the man-machine interface experienced in the typical process control application and adds new dimensions to accuracy requirements, data analysis, and alarm resolution in the regulatory environment

  19. Research of preferences of consumers of household filters for water purification by the fokus-grupp method

    OpenAIRE

    Medvedeva, E.; Blyumina, A.; Piskunov, V.

    2013-01-01

    Availability of good-quality water is a minimum guarantee of human health, whether the water is drunk or used only for cleaning and washing dishes. Growing demand and changing consumer preferences make the organization and conduct of the study "Consumer Behaviour in the Market of Household Filters for Water Purification" relevant and timely. The focus-group method was chosen as the main instrument for obtaining information. The article defines the criteria of consumer choice, to ...

  20. "What is relevant in a text document?": An interpretable machine learning approach.

    Directory of Open Access Journals (Sweden)

    Leila Arras

    Full Text Available Text documents can be described by a number of abstract concepts such as semantic category, writing style, or sentiment. Machine learning (ML) models have been trained to automatically map documents to these abstract concepts, making it possible to annotate very large text collections, more than could be processed by a human in a lifetime. Besides predicting the text's category very accurately, it is also highly desirable to understand how and why the categorization process takes place. In this paper, we demonstrate that such understanding can be achieved by tracing the classification decision back to individual words using layer-wise relevance propagation (LRP), a recently developed technique for explaining predictions of complex non-linear classifiers. We train two word-based ML models, a convolutional neural network (CNN) and a bag-of-words SVM classifier, on a topic categorization task and adapt the LRP method to decompose the predictions of these models onto words. Resulting scores indicate how much individual words contribute to the overall classification decision. This enables one to distill relevant information from text documents without an explicit semantic information extraction step. We further use the word-wise relevance scores for generating novel vector-based document representations which capture semantic information. Based on these document vectors, we introduce a measure of model explanatory power and show that, although the SVM and CNN models perform similarly in terms of classification accuracy, the latter exhibits a higher level of explainability which makes it more comprehensible for humans and potentially more useful for other applications.
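    LRP proper propagates a prediction backwards through the layers of a non-linear network; for a plain linear bag-of-words model the word-level contribution reduces to weight times feature count, which conveys the flavour of the per-word scores described above. The vocabulary, weights and document below are made-up assumptions, not the authors' models or data.

      # Word-level contribution scores for a linear bag-of-words classifier
      # (a simplified stand-in for LRP, valid only for the linear case).
      from collections import Counter

      # Hypothetical weights of a linear classifier for class "sports" vs "rest".
      weights = {"stock": -1.2, "market": -0.8, "goal": 1.5, "match": 1.1, "election": -0.9}
      bias = 0.1

      def word_relevances(tokens):
          counts = Counter(t for t in tokens if t in weights)
          return {word: weights[word] * n for word, n in counts.items()}

      doc = "the match ended with a late goal goal".split()
      scores = word_relevances(doc)
      decision = sum(scores.values()) + bias
      print("per-word relevance:", scores)
      print("decision score:", round(decision, 2))  # > 0 -> predicted "sports"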

  1. Accuracies Of Optical Processors For Adaptive Optics

    Science.gov (United States)

    Downie, John D.; Goodman, Joseph W.

    1992-01-01

    Paper presents analysis of accuracies and requirements concerning accuracies of optical linear-algebra processors (OLAP's) in adaptive-optics imaging systems. Much faster than digital electronic processor and eliminate some residual distortion. Question whether errors introduced by analog processing of OLAP overcome advantage of greater speed. Paper addresses issue by presenting estimate of accuracy required in general OLAP that yields smaller average residual aberration of wave front than digital electronic processor computing at given speed.

  2. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions – Changes in Accuracy over Time

    Science.gov (United States)

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2015-01-01

    Background Interest in 3D inertial motion tracking devices (AHRS) has been growing rapidly among the biomechanical community. Although the convenience of such tracking devices seems to open a whole new world of possibilities for evaluation in clinical biomechanics, its limitations haven’t been extensively documented. The objectives of this study are: 1) to assess the change in absolute and relative accuracy of multiple units of 3 commercially available AHRS over time; and 2) to identify different sources of errors affecting AHRS accuracy and to document how they may affect the measurements over time. Methods This study used an instrumented Gimbal table on which AHRS modules were carefully attached and put through a series of velocity-controlled sustained motions including 2 minutes motion trials (2MT) and 12 minutes multiple dynamic phases motion trials (12MDP). Absolute accuracy was assessed by comparison of the AHRS orientation measurements to those of an optical gold standard. Relative accuracy was evaluated using the variation in relative orientation between modules during the trials. Findings Both absolute and relative accuracy decreased over time during 2MT. 12MDP trials showed a significant decrease in accuracy over multiple phases, but accuracy could be enhanced significantly by resetting the reference point and/or compensating for initial Inertial frame estimation reference for each phase. Interpretation The variation in AHRS accuracy observed between the different systems and with time can be attributed in part to the dynamic estimation error, but also and foremost, to the ability of AHRS units to locate the same Inertial frame. Conclusions Mean accuracies obtained under the Gimbal table sustained conditions of motion suggest that AHRS are promising tools for clinical mobility assessment under constrained conditions of use. However, improvement in magnetic compensation and alignment between AHRS modules are desirable in order for AHRS to reach their

  3. Differential impact of relevant and irrelevant dimension primes on rule-based and information-integration category learning.

    Science.gov (United States)

    Grimm, Lisa R; Maddox, W Todd

    2013-11-01

    Research has identified multiple category-learning systems with each being "tuned" for learning categories with different task demands and each governed by different neurobiological systems. Rule-based (RB) classification involves testing verbalizable rules for category membership while information-integration (II) classification requires the implicit learning of stimulus-response mappings. In the first study to directly test rule priming with RB and II category learning, we investigated the influence of the availability of information presented at the beginning of the task. Participants viewed lines that varied in length, orientation, and position on the screen, and were primed to focus on stimulus dimensions that were relevant or irrelevant to the correct classification rule. In Experiment 1, we used an RB category structure, and in Experiment 2, we used an II category structure. Accuracy and model-based analyses suggested that a focus on relevant dimensions improves RB task performance later in learning while a focus on an irrelevant dimension improves II task performance early in learning. © 2013.

  4. Investigating the recording and accuracy of fluid balance monitoring in critically ill patients

    Directory of Open Access Journals (Sweden)

    Annette Diacon

    2014-11-01

    Full Text Available Background. The accurate assessment of fluid balance data collected during physical assessment as well as during monitoring and record-keeping forms an essential part of the baseline patient information that guides medical and nursing interventions aimed at achieving physiological stability in patients. An informal audit of 24-hour fluid balance records in a local intensive care unit (ICU) showed that seven out of ten fluid balance calculations were incorrect. Objective. To identify and describe current clinical nursing practice in fluid balance monitoring and measurement accuracy in ICUs, conducted as part of a broader study in partial fulfilment of a Master of Nursing degree. Methods. A quantitative approach utilising a descriptive, exploratory study design was applied. An audit of 103 ICU records was conducted to establish the current practices and accuracy in recording of fluid balance monitoring. Data were collected using a purpose-designed tool based on relevant literature and practice experience. Results. Of the original recorded fluid balance calculations, 79% deviated by more than 50 mL from the audited calculations. Furthermore, a significant relationship was shown between inaccurate fluid balance calculation and administration of diuretics (p=0.01). Conclusion. The majority of fluid balance records were incorrectly calculated.
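    The audit described above essentially recomputes each 24-hour balance as total intake minus total output and flags charted totals that deviate from the recomputed value by more than 50 mL. A minimal sketch of that check (field names and records are hypothetical, not the audit tool itself):

      # Recompute 24-hour fluid balance and flag charted totals deviating by > 50 mL.
      TOLERANCE_ML = 50

      def audit_record(record):
          recomputed = sum(record["intake_ml"]) - sum(record["output_ml"])
          deviation = record["charted_balance_ml"] - recomputed
          return recomputed, abs(deviation) > TOLERANCE_ML

      records = [
          {"intake_ml": [1200, 500, 250], "output_ml": [900, 400], "charted_balance_ml": 650},
          {"intake_ml": [800, 300], "output_ml": [1000, 250], "charted_balance_ml": -50},
      ]

      for i, rec in enumerate(records, start=1):
          recomputed, flagged = audit_record(rec)
          status = "FLAG (>50 mL deviation)" if flagged else "ok"
          print(f"record {i}: charted {rec['charted_balance_ml']} mL, recomputed {recomputed} mL -> {status}")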

  5. Accuracy and precision of glucose monitoring are relevant to treatment decision-making and clinical outcome in hospitalized patients with diabetes.

    Science.gov (United States)

    Voulgari, Christina; Tentolouris, Nicholas

    2011-07-01

    The accuracy and precision of three blood glucose meters (BGMs) were evaluated in 600 hospitalized patients with type 1 (n = 200) or type 2 (n = 400) diabetes. Capillary blood glucose values were analyzed with Accu-Chek(®) Aviva [Roche (Hellas) S.A., Maroussi, Greece], Precision-Xceed(®) [Abbott Laboratories (Hellas) S.A., Alimos, Greece], and Glucocard X-Sensor(®) (Menarini Diagnostics S.A., Argyroupolis, Greece). At the same time plasma glucose was analyzed using the World Health Organization's glucose oxidase method. Median plasma glucose values (141.2 [range, 13-553] mg/dL) were significantly different from that produced by the BGMs (P diabetes patients. In all cases, the BGMs were unreliable in sensing hypoglycemia. Multivariate linear regression analysis demonstrated that low blood pressure and hematocrit significantly affected glucose measurements obtained with all three BGMs (P diabetes patients, all three frequently used BGMs undersensed hypoglycemia and oversensed hyperglycemia to some extent. Patients and caregivers should be aware of these restrictions of the BGMs.

  6. Making Deferred Taxes Relevant

    NARCIS (Netherlands)

    Brouwer, Arjan; Naarding, Ewout

    2018-01-01

    We analyse the conceptual problems in current accounting for deferred taxes and provide solutions derived from the literature in order to make International Financial Reporting Standards (IFRS) deferred tax numbers value-relevant. In our view, the empirical results concerning the value relevance of

  7. Surgical accuracy of three-dimensional virtual planning

    DEFF Research Database (Denmark)

    Stokbro, Kasper; Aagaard, Esben; Torkov, Peter

    2016-01-01

    This retrospective study evaluated the precision and positional accuracy of different orthognathic procedures following virtual surgical planning in 30 patients. To date, no studies of three-dimensional virtual surgical planning have evaluated the influence of segmentation on positional accuracy...... and transverse expansion. Furthermore, only a few have evaluated the precision and accuracy of genioplasty in placement of the chin segment. The virtual surgical plan was compared with the postsurgical outcome by using three linear and three rotational measurements. The influence of maxillary segmentation...

  8. Extending the Matrix Element Method beyond the Born approximation: calculating event weights at next-to-leading order accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Martini, Till; Uwer, Peter [Humboldt-Universität zu Berlin, Institut für Physik,Newtonstraße 15, 12489 Berlin (Germany)

    2015-09-14

    In this article we illustrate how event weights for jet events can be calculated efficiently at next-to-leading order (NLO) accuracy in QCD. This is a crucial prerequisite for the application of the Matrix Element Method in NLO. We modify the recombination procedure used in jet algorithms, to allow a factorisation of the phase space for the real corrections into resolved and unresolved regions. Using an appropriate infrared regulator the latter can be integrated numerically. As illustration, we reproduce differential distributions at NLO for two sample processes. As further application and proof of concept, we apply the Matrix Element Method in NLO accuracy to the mass determination of top quarks produced in e⁺e⁻ annihilation. This analysis is relevant for a future Linear Collider. We observe a significant shift in the extracted mass depending on whether the Matrix Element Method is used in leading or next-to-leading order.

  9. FIELD ACCURACY TEST OF RPAS PHOTOGRAMMETRY

    Directory of Open Access Journals (Sweden)

    P. Barry

    2013-08-01

    Full Text Available Baseline Surveys Ltd is a company which specialises in the supply of accurate geospatial data, such as cadastral, topographic and engineering survey data to commercial and government bodies. Baseline Surveys Ltd invested in aerial drone photogrammetric technology and had a requirement to establish the spatial accuracy of the geographic data derived from our unmanned aerial vehicle (UAV) photogrammetry before marketing our new aerial mapping service. Having supplied the construction industry with survey data for over 20 years, we felt that it was crucial for our clients to clearly understand the accuracy of our photogrammetry so they can safely make informed spatial decisions, within the known accuracy limitations of our data. This information would also inform us on how and where UAV photogrammetry can be utilised. What we wanted to find out was the actual accuracy that can be reliably achieved using a UAV to collect data under field conditions throughout a 2 Ha site. We flew a UAV over the test area in a "lawnmower track" pattern with an 80% front and 80% side overlap; we placed 45 ground markers as check points and surveyed them in using network Real Time Kinematic Global Positioning System (RTK GPS). We specifically designed the ground markers to meet our accuracy needs. We established 10 separate ground markers as control points and inputted these into our photo modelling software, Agisoft PhotoScan. The remaining GPS coordinated check point data were added later in ArcMap to the completed orthomosaic and digital elevation model so we could accurately compare the UAV photogrammetry XYZ data with the RTK GPS XYZ data at highly reliable common points. The accuracy we achieved throughout the 45 check points was 95% reliably within 41 mm horizontally and 68 mm vertically and with an 11.7 mm ground sample distance taken from a flight altitude above ground level of 90 m. The area covered by one image was 70.2 m × 46.4 m, which equals 0.325 Ha. This
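    The reported figures (95% of check points within 41 mm horizontally and 68 mm vertically) amount to 95th-percentile deviations between photogrammetry-derived and RTK GPS coordinates at the common check points. A sketch of that comparison, using synthetic coordinates rather than the survey's data:

      # 95th-percentile horizontal and vertical deviations at check points (synthetic data).
      import math
      import random

      random.seed(1)
      checkpoints = []
      for _ in range(45):
          x, y, z = random.uniform(0, 200), random.uniform(0, 140), random.uniform(50, 60)
          # photogrammetry value = GPS value + small random error (metres)
          checkpoints.append(((x, y, z),
                              (x + random.gauss(0, 0.02), y + random.gauss(0, 0.02), z + random.gauss(0, 0.03))))

      def percentile(values, p):
          values = sorted(values)
          k = (len(values) - 1) * p
          lo, hi = math.floor(k), math.ceil(k)
          return values[lo] + (values[hi] - values[lo]) * (k - lo)

      horiz = [math.hypot(px - gx, py - gy) for (gx, gy, gz), (px, py, pz) in checkpoints]
      vert = [abs(pz - gz) for (gx, gy, gz), (px, py, pz) in checkpoints]

      print(f"95th percentile horizontal error: {percentile(horiz, 0.95) * 1000:.0f} mm")
      print(f"95th percentile vertical error:   {percentile(vert, 0.95) * 1000:.0f} mm")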

  10. An Enhanced Text-Mining Framework for Extracting Disaster Relevant Data through Social Media and Remote Sensing Data Fusion

    Science.gov (United States)

    Scheele, C. J.; Huang, Q.

    2016-12-01

    In the past decade, the rise in social media has led to the development of a vast number of social media services and applications. Disaster management represents one of such applications leveraging massive data generated for event detection, response, and recovery. In order to find disaster relevant social media data, current approaches utilize natural language processing (NLP) methods based on keywords, or machine learning algorithms relying on text only. However, these approaches cannot be perfectly accurate due to the variability and uncertainty in language used on social media. To improve current methods, the enhanced text-mining framework is proposed to incorporate location information from social media and authoritative remote sensing datasets for detecting disaster relevant social media posts, which are determined by assessing the textual content using common text mining methods and how the post relates spatiotemporally to the disaster event. To assess the framework, geo-tagged Tweets were collected for three different spatial and temporal disaster events: hurricane, flood, and tornado. Remote sensing data and products for each event were then collected using RealEarthTM. Both Naive Bayes and Logistic Regression classifiers were used to compare the accuracy within the enhanced text-mining framework. Finally, the accuracies from the enhanced text-mining framework were compared to the current text-only methods for each of the case study disaster events. The results from this study address the need for more authoritative data when using social media in disaster management applications.

  11. Impact of investigations in general practice on timeliness of referral for patients subsequently diagnosed with cancer: analysis of national primary care audit data.

    Science.gov (United States)

    Rubin, G P; Saunders, C L; Abel, G A; McPhail, S; Lyratzopoulos, G; Neal, R D

    2015-02-17

    For patients with symptoms of possible cancer who do not fulfil the criteria for urgent referral, initial investigation in primary care has been advocated in the United Kingdom and supported by additional resources. The consequence of this strategy for the timeliness of diagnosis is unknown. We analysed data from the English National Audit of Cancer Diagnosis in Primary Care on patients with lung (1494), colorectal (2111), stomach (246), oesophagus (513), pancreas (327), and ovarian (345) cancer relating to the ordering of investigations by the General Practitioner and their nature. Presenting symptoms were categorised according to National Institute for Health and Care Excellence (NICE) guidance on referral for suspected cancer. We used linear regression to estimate the mean difference in primary-care interval by cancer, after adjustment for age, gender, and the symptomatic presentation category. Primary-care investigations were undertaken in 3198/5036 (64%) of cases. The median primary-care interval was 16 days (IQR 5-45) for patients undergoing investigation and 0 days (IQR 0-10) for those not investigated. Among patients whose symptoms mandated urgent referral to secondary care according to NICE guidelines, between 37% (oesophagus) and 75% (pancreas) were first investigated in primary care. In multivariable linear regression analyses stratified by cancer site, adjustment for age, sex, and NICE referral category explained little of the observed prolongation associated with investigation. For six specified cancers, investigation in primary care was associated with later referral for specialist assessment. This effect was independent of the nature of symptoms. Some patients for whom urgent referral is mandated by NICE guidance are nevertheless investigated before referral. Reducing the intervals between test order, test performance, and reporting can help reduce the prolongation of primary-care intervals associated with investigation use. Alternative models of

  12. Accuracy Assessment and Analysis for GPT2

    Directory of Open Access Journals (Sweden)

    YAO Yibin

    2015-07-01

    Full Text Available GPT (global pressure and temperature) is a global empirical model usually used to provide temperature and pressure for the determination of tropospheric delay. GPT has some weaknesses, which have been addressed by a new empirical model named GPT2; GPT2 not only improves the accuracy of temperature and pressure, but also provides specific humidity, water vapor pressure, mapping function coefficients and other tropospheric parameters, yet no accuracy analysis of GPT2 had been made until now. In this paper high-precision meteorological data from ECMWF and NOAA were used to test and analyze the accuracy of temperature, pressure and water vapor pressure expressed by GPT2. Testing results show that the mean Bias of temperature is -0.59℃ and the average RMS is 3.82℃; the absolute values of the average Bias of pressure and water vapor pressure are less than 1 mb; GPT2 pressure has an average RMS of 7 mb, and water vapor pressure no more than 3 mb; accuracy differs across latitudes, and all parameters show obvious seasonality. In conclusion, the GPT2 model has high accuracy and stability on a global scale.
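    The accuracy measures quoted (mean bias and RMS of model values against ECMWF/NOAA references) are straightforward once model and reference series are paired. A minimal sketch with placeholder numbers rather than the paper's data:

      # Bias and RMS of model values against reference meteorological data (placeholder values).
      import math

      def bias_and_rms(model, reference):
          diffs = [m - r for m, r in zip(model, reference)]
          bias = sum(diffs) / len(diffs)
          rms = math.sqrt(sum(d * d for d in diffs) / len(diffs))
          return bias, rms

      gpt2_temperature = [14.2, 20.1, -3.5, 7.8, 25.0]   # deg C, hypothetical
      ecmwf_temperature = [15.0, 21.3, -1.9, 8.4, 24.1]  # deg C, hypothetical

      b, r = bias_and_rms(gpt2_temperature, ecmwf_temperature)
      print(f"temperature bias = {b:+.2f} degC, RMS = {r:.2f} degC")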

  13. Accuracy in Optical Information Processing

    Science.gov (United States)

    Timucin, Dogan Aslan

    Low computational accuracy is an important obstacle for optical processors which blocks their way to becoming a practical reality and a serious challenger for classical computing paradigms. This research presents a comprehensive solution approach to the problem of accuracy enhancement in discrete analog optical information processing systems. Statistical analysis of a generic three-plane optical processor is carried out first, taking into account the effects of diffraction, interchannel crosstalk, and background radiation. Noise sources included in the analysis are photon, excitation, and emission fluctuations in the source array, transmission and polarization fluctuations in the modulator, and photoelectron, gain, dark, shot, and thermal noise in the detector array. Means and mutual coherence and probability density functions are derived for both optical and electrical output signals. Next, statistical models for a number of popular optoelectronic devices are studied. Specific devices considered here are light-emitting and laser diode sources, an ideal noiseless modulator and a Gaussian random-amplitude-transmittance modulator, p-i-n and avalanche photodiode detectors followed by electronic postprocessing, and ideal free-space geometrical -optics propagation and single-lens imaging systems. Output signal statistics are determined for various interesting device combinations by inserting these models into the general formalism. Finally, based on these special-case output statistics, results on accuracy limitations and enhancement in optical processors are presented. Here, starting with the formulation of the accuracy enhancement problem as (1) an optimal detection problem and (2) as a parameter estimation problem, the potential accuracy improvements achievable via the classical multiple-hypothesis -testing and maximum likelihood and Bayesian parameter estimation methods are demonstrated. Merits of using proper normalizing transforms which can potentially stabilize

  14. Diagnostic accuracy of atypical p-ANCA in autoimmune hepatitis using ROC- and multivariate regression analysis.

    Science.gov (United States)

    Terjung, B; Bogsch, F; Klein, R; Söhne, J; Reichel, C; Wasmuth, J-C; Beuers, U; Sauerbruch, T; Spengler, U

    2004-09-29

    Antineutrophil cytoplasmic antibodies (atypical p-ANCA) are detected at high prevalence in sera from patients with autoimmune hepatitis (AIH), but their diagnostic relevance for AIH has not been systematically evaluated so far. Here, we studied sera from 357 patients with autoimmune (autoimmune hepatitis n=175, primary sclerosing cholangitis (PSC) n=35, primary biliary cirrhosis n=45), non-autoimmune chronic liver disease (alcoholic liver cirrhosis n=62; chronic hepatitis C virus infection (HCV) n=21) or healthy controls (n=19) for the presence of various non-organ specific autoantibodies. Atypical p-ANCA, antinuclear antibodies (ANA), antibodies against smooth muscles (SMA), antibodies against liver/kidney microsomes (anti-Lkm1) and antimitochondrial antibodies (AMA) were detected by indirect immunofluorescence microscopy, antibodies against the M2 antigen (anti-M2), antibodies against soluble liver antigen (anti-SLA/LP) and anti-Lkm1 by using enzyme linked immunosorbent assays. To define the diagnostic precision of the autoantibodies, results of autoantibody testing were analyzed by receiver operating characteristics (ROC) and forward conditional logistic regression analysis. Atypical p-ANCA were detected at high prevalence in sera from patients with AIH (81%) and PSC (94%). ROC- and logistic regression analysis revealed atypical p-ANCA and SMA, but not ANA as significant diagnostic seromarkers for AIH (atypical p-ANCA: AUC 0.754+/-0.026, odds ratio [OR] 3.4; SMA: 0.652+/-0.028, OR 4.1). Atypical p-ANCA also emerged as the only diagnostically relevant seromarker for PSC (AUC 0.690+/-0.04, OR 3.4). None of the tested antibodies yielded a significant diagnostic accuracy for patients with alcoholic liver cirrhosis, HCV or healthy controls. Atypical p-ANCA along with SMA represent a seromarker with high diagnostic accuracy for AIH and should be explicitly considered in a revised version of the diagnostic score for AIH.

  15. HOW GOOD ARE FUTURE LAWYERS IN JUDGING THE ACCURACY OF REMINISCENT DETAILS? THE ESTIMATION-OBSERVATION GAP IN REAL EYEWITNESS ACCOUNTS

    Directory of Open Access Journals (Sweden)

    Aileen Oeberst

    2015-07-01

    Full Text Available Research has shown a discrepancy between estimated and actually observed accuracy of reminiscent details in eyewitness accounts. This estimation-observation gap is of particular relevance with regard to the evaluation of eyewitnesses’ accounts in the legal context. To date it has only been demonstrated in non-naturalistic settings, however. In addition, it is not known whether this gap extends to other tasks routinely employed in real-world trials, for instance person-identification tasks. In this study, law students witnessed a staged event and were asked to either recall the event and perform a person identification task or estimate the accuracy of the others’ performance. Additionally, external estimations were obtained from students who had not witnessed the event, but received a written summary instead. The estimation-observation gap was replicated for reminiscent details under naturalistic encoding conditions. This gap was more pronounced when compared to forgotten details, but not significantly so when compared to consistent details. In contrast, accuracy on the person-identification task was not consistently underestimated. The results are discussed in light of their implications for real-world trials and future research.

  16. High Accuracy Transistor Compact Model Calibrations

    Energy Technology Data Exchange (ETDEWEB)

    Hembree, Charles E. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Mar, Alan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robertson, Perry J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performances considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, these models can be used to describe part to part variations as well as an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and uncertainties in these margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.

  17. New approaches to ranking countries for the allocation of development assistance for health: choices, indicators and implications

    Science.gov (United States)

    Ottersen, Trygve; Grépin, Karen A; Henderson, Klara; Pinkstaff, Crossley Beth; Norheim, Ole Frithjof; Røttingen, John-Arne

    2018-01-01

    Abstract The distributions of income and health within and across countries are changing. This challenges the way donors allocate development assistance for health (DAH) and particularly the role of gross national income per capita (GNIpc) in classifying countries to determine whether countries are eligible to receive assistance and how much they receive. Informed by a literature review and stakeholder consultations and interviews, we developed a stepwise approach to the design and assessment of country classification frameworks for the allocation of DAH, with emphasis on critical value choices. We devised 25 frameworks, all which combined GNIpc and at least one other indicator into an index. Indicators were selected and assessed based on relevance, salience, validity, consistency, and availability and timeliness, where relevance concerned the extent to which the indicator represented country’s health needs, domestic capacity, the expected impact of DAH, or equity. We assessed how the use of the different frameworks changed the rankings of low- and middle-income countries relative to a country’s ranking based on GNIpc alone. We found that stakeholders generally considered needs to be the most important concern to be captured by classification frameworks, followed by inequality, expected impact and domestic capacity. We further found that integrating a health-needs indicator with GNIpc makes a significant difference for many countries and country categories—and especially middle-income countries with high burden of unmet health needs—while the choice of specific indicator makes less difference. This together with assessments of relevance, salience, validity, consistency, and availability and timeliness suggest that donors have reasons to include a health-needs indicator in the initial classification of countries. It specifically suggests that life expectancy and disability-adjusted life year rate are indicators worth considering. Indicators related to other

  18. Bias associated with delayed verification in test accuracy studies: accuracy of tests for endometrial hyperplasia may be much higher than we think!

    Directory of Open Access Journals (Sweden)

    Coomarasamy Aravinthan

    2004-05-01

    Full Text Available Abstract Background To empirically evaluate bias in estimation of accuracy associated with delay in verification of diagnosis among studies evaluating tests for predicting endometrial hyperplasia. Methods Systematic reviews of all published research on accuracy of miniature endometrial biopsy and endometrial ultrasonography for diagnosing endometrial hyperplasia identified 27 test accuracy studies (2,982 subjects). Of these, 16 had immediate histological verification of diagnosis while 11 had verification delayed > 24 hrs after testing. The effect of delay in verification of diagnosis on estimates of accuracy was evaluated using meta-regression with diagnostic odds ratio (dOR) as the accuracy measure. This analysis was adjusted for study quality and type of test (miniature endometrial biopsy or endometrial ultrasound). Results Compared to studies with immediate verification of diagnosis (dOR 67.2, 95% CI 21.7–208.8), those with delayed verification (dOR 16.2, 95% CI 8.6–30.5) underestimated the diagnostic accuracy by 74% (95% CI 7%–99%; P value = 0.048). Conclusion Among studies of miniature endometrial biopsy and endometrial ultrasound, diagnostic accuracy is considerably underestimated if there is a delay in histological verification of diagnosis.
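    The diagnostic odds ratio used as the accuracy measure combines sensitivity and specificity into a single number, and the relative underestimation follows from the ratio of the two pooled dORs. A sketch (the 2x2 counts are hypothetical; the pooled dORs are the ones reported above, and the adjusted meta-regression estimate of 74% differs slightly from the crude ratio):

      # Diagnostic odds ratio (dOR) from a 2x2 table, and the relative
      # underestimation implied by two pooled dOR estimates.
      def diagnostic_odds_ratio(tp, fp, fn, tn):
          """dOR = (TP/FN) / (FP/TN)."""
          return (tp / fn) / (fp / tn)

      # Hypothetical single-study table:
      print("example study dOR:", round(diagnostic_odds_ratio(tp=45, fp=5, fn=5, tn=45), 1))

      # Pooled estimates reported in the review:
      dor_immediate, dor_delayed = 67.2, 16.2
      relative_underestimate = 1 - dor_delayed / dor_immediate
      # crude ratio ~76%; the adjusted estimate in the abstract is 74%
      print(f"relative underestimation: {relative_underestimate:.0%}")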

  19. The relevance of "non-relevant metabolites" from plant protection products (PPPs) for drinking water: the German view.

    Science.gov (United States)

    Dieter, Hermann H

    2010-03-01

    "Non-relevant metabolites" are those degradation products of plant protection products (PPPs), which are devoid of the targeted toxicities of the PPP and devoid of genotoxicity. Most often, "non-relevant metabolites" have a high affinity to the aquatic environment, are very mobile within this environment, and, usually, are also persistent. Therefore, from the point of drinking water hygiene, they must be characterized as "relevant for drinking water" like many other hydrophilic/polar environmental contaminants of different origins. "Non-relevant metabolites" may therefore penetrate to water sources used for abstraction of drinking water and may thus ultimately be present in drinking water. The presence of "non-relevant metabolites" and similar trace compounds in the water cycle may endanger drinking water quality on a long-term scale. During oxidative drinking water treatment, "non-relevant metabolites" may also serve as the starting material for toxicologically relevant transformation products similar to processes observed by drinking water disinfection with chlorine. This hypothesis was recently confirmed by the detection of the formation of N-nitroso-dimethylamine from ozone and dimethylsulfamide, a "non-relevant metabolite" of the fungicide tolylfluanide. In order to keep drinking water preferably free of "non-relevant metabolites", the German drinking water advisory board of the Federal Ministry of Health supports limiting their penetration into raw and drinking water to the functionally (agriculturally) unavoidable extent. On this background, the German Federal Environment Agency (UBA) recently has recommended two health related indication values (HRIV) to assess "non-relevant metabolites" from the view of drinking water hygiene. Considering the sometimes incomplete toxicological data base for some "non-relevant metabolites", HRIV also have the role of health related precautionary values. Depending on the completeness and quality of the toxicological

  20. Inference of Altimeter Accuracy on Along-track Gravity Anomaly Recovery

    Directory of Open Access Journals (Sweden)

    LI Yang

    2015-04-01

    Full Text Available A correlation model between along-track gravity anomaly accuracy, spatial resolution and altimeter accuracy is proposed. This new model is based on along-track gravity anomaly recovery and resolution estimation. Firstly, an error propagation formula of along-track gravity anomaly is derived from the principle of satellite altimetry. Then the mathematics between the SNR (signal to noise ratio) and cross spectral coherence is deduced. The analytical correlation between altimeter accuracy and spatial resolution is finally obtained from the results above. Numerical simulation results show that along-track gravity anomaly accuracy is proportional to altimeter accuracy, while spatial resolution has a power relation with altimeter accuracy. e.g., with altimeter accuracy improving m times, gravity anomaly accuracy improves m times while spatial resolution improves m^0.4644 times. This model is verified by real-world data.
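    The stated relation means gravity anomaly accuracy scales linearly with the altimeter improvement factor while spatial resolution scales as a power of it. A small numerical illustration (the exponent 0.4644 is taken from the abstract; the factors below are arbitrary examples):

      # Scaling implied by the reported model: improving altimeter accuracy by a factor m
      # improves gravity anomaly accuracy by m and spatial resolution by m**0.4644.
      def improvements(m, exponent=0.4644):
          return m, m ** exponent

      for m in (2, 5, 10):
          acc, res = improvements(m)
          print(f"altimeter accuracy x{m}: gravity anomaly accuracy x{acc}, resolution x{res:.2f}")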

  1. The spatial accuracy of geographic ecological momentary assessment (GEMA): Error and bias due to subject and environmental characteristics.

    Science.gov (United States)

    Mennis, Jeremy; Mason, Michael; Ambrus, Andreea; Way, Thomas; Henry, Kevin

    2017-09-01

    Geographic ecological momentary assessment (GEMA) combines ecological momentary assessment (EMA) with global positioning systems (GPS) and geographic information systems (GIS). This study evaluates the spatial accuracy of GEMA location data and bias due to subject and environmental data characteristics. Using data for 72 subjects enrolled in a study of urban adolescent substance use, we compared the GPS-based location of EMA responses in which the subject indicated they were at home to the geocoded home address. We calculated the percentage of EMA locations within a sixteenth, eighth, quarter, and half miles from the home, and the percentage within the same tract and block group as the home. We investigated if the accuracy measures were associated with subject demographics, substance use, and emotional dysregulation, as well as environmental characteristics of the home neighborhood. Half of all subjects had more than 88% of their EMA locations within a half mile, 72% within a quarter mile, 55% within an eighth mile, 50% within a sixteenth of a mile, 83% in the correct tract, and 71% in the correct block group. There were no significant associations with subject or environmental characteristics. Results support the use of GEMA for analyzing subjects' exposures to urban environments. Researchers should be aware of the issue of spatial accuracy inherent in GEMA, and interpret results accordingly. Understanding spatial accuracy is particularly relevant for the development of 'ecological momentary interventions' (EMI), which may depend on accurate location information, though issues of privacy protection remain a concern. Copyright © 2017 Elsevier B.V. All rights reserved.
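    The accuracy measures described above reduce to computing the distance between each home-labelled EMA GPS fix and the geocoded home address, then tallying the share of fixes under each distance threshold. A minimal sketch (coordinates are made up; the haversine great-circle distance is one reasonable choice, not necessarily the study's):

      # Share of EMA GPS fixes within given distances of the geocoded home address.
      import math

      def haversine_miles(lat1, lon1, lat2, lon2):
          r = 3958.8  # mean Earth radius in miles
          p1, p2 = math.radians(lat1), math.radians(lat2)
          dphi = math.radians(lat2 - lat1)
          dlmb = math.radians(lon2 - lon1)
          a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
          return 2 * r * math.asin(math.sqrt(a))

      home = (39.9526, -75.1652)                       # hypothetical home location
      fixes = [(39.9525, -75.1650), (39.9531, -75.1660), (39.9600, -75.1700)]

      thresholds = [1 / 16, 1 / 8, 1 / 4, 1 / 2]       # miles, as in the study
      distances = [haversine_miles(*home, *fix) for fix in fixes]
      for t in thresholds:
          share = sum(d <= t for d in distances) / len(distances)
          print(f"within {t:.4f} mi: {share:.0%}")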

  2. The Limits to Relevance

    Science.gov (United States)

    Averill, M.; Briggle, A.

    2006-12-01

    Science policy and knowledge production lately have taken a pragmatic turn. Funding agencies increasingly are requiring scientists to explain the relevance of their work to society. This stems in part from mounting critiques of the "linear model" of knowledge production in which scientists operating according to their own interests or disciplinary standards are presumed to automatically produce knowledge that is of relevance outside of their narrow communities. Many contend that funded scientific research should be linked more directly to societal goals, which implies a shift in the kind of research that will be funded. While both authors support the concept of useful science, we question the exact meaning of "relevance" and the wisdom of allowing it to control research agendas. We hope to contribute to the conversation by thinking more critically about the meaning and limits of the term "relevance" and the trade-offs implicit in a narrow utilitarian approach. The paper will consider which interests tend to be privileged by an emphasis on relevance and address issues such as whose goals ought to be pursued and why, and who gets to decide. We will consider how relevance, narrowly construed, may actually limit the ultimate utility of scientific research. The paper also will reflect on the worthiness of research goals themselves and their relationship to a broader view of what it means to be human and to live in society. Just as there is more to being human than the pragmatic demands of daily life, there is more at issue with knowledge production than finding the most efficient ways to satisfy consumer preferences or fix near-term policy problems. We will conclude by calling for a balanced approach to funding research that addresses society's most pressing needs but also supports innovative research with less immediately apparent application.

  3. Quality of reporting of diagnostic accuracy studies

    NARCIS (Netherlands)

    Smidt, N.; Rutjes, A.W.; Windt - Mens, van der D.A.W.M.; Ostelo, R.W.J.G.; Reitsma, J.B.; Bouter, L.M.; Vet, de H.C.W.

    2005-01-01

    PURPOSE: To evaluate quality of reporting in diagnostic accuracy articles published in 2000 in journals with impact factor of at least 4 by using items of Standards for Reporting of Diagnostic Accuracy (STARD) statement published later in 2003. MATERIALS AND METHODS: English-language articles on

  4. Nostalgia's place among self-relevant emotions.

    Science.gov (United States)

    van Tilburg, Wijnand A P; Wildschut, Tim; Sedikides, Constantine

    2017-07-24

    How is nostalgia positioned among self-relevant emotions? We tested, in six studies, which self-relevant emotions are perceived as most similar versus least similar to nostalgia, and what underlies these similarities/differences. We used multidimensional scaling to chart the perceived similarities/differences among self-relevant emotions, resulting in two-dimensional models. The results were revealing. Nostalgia is positioned among self-relevant emotions characterised by positive valence, an approach orientation, and low arousal. Nostalgia most resembles pride and self-compassion, and least resembles embarrassment and shame. Our research pioneered the integration of nostalgia among self-relevant emotions.

  5. Do technical parameters affect the diagnostic accuracy of virtual bronchoscopy in patients with suspected airways stenosis?

    International Nuclear Information System (INIS)

    Jones, Catherine M.; Athanasiou, Thanos; Nair, Sujit; Aziz, Omer; Purkayastha, Sanjay; Konstantinos, Vlachos; Paraskeva, Paraskevas; Casula, Roberto; Glenville, Brian; Darzi, Ara

    2005-01-01

    Purpose: Virtual bronchoscopy has gained popularity over the past decade as an alternative investigation to conventional bronchoscopy in the diagnosis, grading and monitoring of airway disease. The effect of technical parameters on diagnostic outcome from virtual bronchoscopy has not been determined. This meta-analysis aims to estimate accuracy of virtual compared to conventional bronchoscopy in patients with suspected airway stenosis, and evaluate the influence of technical parameters. Materials and methods: A MEDLINE search was used to identify relevant published studies. The primary endpoint was the 'correct diagnosis' of stenotic lesions on virtual compared to conventional bronchoscopy. Secondary endpoints included the effects of the technical parameters (pitch, collimation, reconstruction interval, rendering method, and scanner type), and date of publication on the diagnostic accuracy of virtual bronchoscopy. Results: Thirteen studies containing 454 patients were identified. Meta-analysis showed good overall diagnostic performance with 85% calculated pooled sensitivity (95% CI 77-91%), 87% specificity (95% CI 81-92%) and area under the curve (AUC) of 0.947. Subgroups included collimation of 3 mm or more (AUC 0.948), pitch of 1 (AUC 0.955), surface rendering technique (AUC 0.935), and reconstruction interval of more than 1.25 mm (AUC 0.914). There was no significant difference in accuracy accounting for publication date, scanner type or any of the above variables. Weighted regression analysis confirmed none of these variables could significantly account for study heterogeneity. Conclusion: Virtual bronchoscopy performs well in the investigation of patients with suspected airway stenosis. Overall sensitivity and specificity and diagnostic odds ratio for diagnosis of airway stenosis were high. The effects of pitch, collimation, reconstruction interval, rendering technique, scanner type, and publication date on diagnostic accuracy were not significant

  6. Improved accuracy of co-morbidity coding over time after the introduction of ICD-10 administrative data.

    Science.gov (United States)

    Januel, Jean-Marie; Luthi, Jean-Christophe; Quan, Hude; Borst, François; Taffé, Patrick; Ghali, William A; Burnand, Bernard

    2011-08-18

    Co-morbidity information derived from administrative data needs to be validated to allow its regular use. We assessed evolution in the accuracy of coding for Charlson and Elixhauser co-morbidities at three time points over a 5-year period, following the introduction of the International Classification of Diseases, 10th Revision (ICD-10), coding of hospital discharges. Cross-sectional time trend evaluation study of coding accuracy using hospital chart data of 3'499 randomly selected patients who were discharged in 1999, 2001 and 2003, from two teaching and one non-teaching hospital in Switzerland. We measured sensitivity, positive predictive values and Kappa values for agreement between administrative data coded with ICD-10 and chart data as the 'reference standard' for recording 36 co-morbidities. For the 17 Charlson co-morbidities, the sensitivity - median (min-max) - was 36.5% (17.4-64.1) in 1999, 42.5% (22.2-64.6) in 2001 and 42.8% (8.4-75.6) in 2003. For the 29 Elixhauser co-morbidities, the sensitivity was 34.2% (1.9-64.1) in 1999, 38.6% (10.5-66.5) in 2001 and 41.6% (5.1-76.5) in 2003. Between 1999 and 2003, sensitivity estimates increased for 30 co-morbidities and decreased for 6 co-morbidities. The increase in sensitivities was statistically significant for six conditions and the decrease significant for one. Kappa values were increased for 29 co-morbidities and decreased for seven. Accuracy of administrative data in recording clinical conditions improved slightly between 1999 and 2003. These findings are of relevance to all jurisdictions introducing new coding systems, because they demonstrate a phenomenon of improved administrative data accuracy that may relate to a coding 'learning curve' with the new coding system.
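    The sensitivity and kappa agreement statistics reported above can be computed per co-morbidity from a 2x2 table of administrative coding versus chart review. A sketch with hypothetical counts, not the study's data:

      # Sensitivity and Cohen's kappa for one co-morbidity: ICD-10 coding vs chart review.
      def sensitivity_and_kappa(tp, fp, fn, tn):
          n = tp + fp + fn + tn
          sensitivity = tp / (tp + fn)
          observed_agreement = (tp + tn) / n
          # Chance agreement from the marginal totals (admin coding x chart review).
          p_yes = ((tp + fp) / n) * ((tp + fn) / n)
          p_no = ((fn + tn) / n) * ((fp + tn) / n)
          expected_agreement = p_yes + p_no
          kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)
          return sensitivity, kappa

      sens, kappa = sensitivity_and_kappa(tp=60, fp=15, fn=80, tn=845)
      print(f"sensitivity = {sens:.1%}, kappa = {kappa:.2f}")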

  7. Analysis of spatial distribution of land cover maps accuracy

    Science.gov (United States)

    Khatami, R.; Mountrakis, G.; Stehman, S. V.

    2017-12-01

    Land cover maps have become one of the most important products of remote sensing science. However, classification errors will exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map. These variations of accuracy will affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach of map accuracy assessment based on an error matrix does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed based on interpolating accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. Incorporation of spectral domain as explanatory feature spaces of classification accuracy interpolation was done for the first time in this research. Performance of the prediction methods was evaluated using 26 test blocks, with 10 km × 10 km dimensions, dispersed throughout the United States. The performance of the predictions was evaluated using the area under the curve (AUC) of the receiver operating characteristic. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements of AUC of 0.15 or greater. Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domain

  8. High accuracy FIONA-AFM hybrid imaging

    International Nuclear Information System (INIS)

    Fronczek, D.N.; Quammen, C.; Wang, H.; Kisker, C.; Superfine, R.; Taylor, R.; Erie, D.A.; Tessmer, I.

    2011-01-01

    Multi-protein complexes are ubiquitous and play essential roles in many biological mechanisms. Single molecule imaging techniques such as electron microscopy (EM) and atomic force microscopy (AFM) are powerful methods for characterizing the structural properties of multi-protein and multi-protein-DNA complexes. However, a significant limitation to these techniques is the ability to distinguish different proteins from one another. Here, we combine high resolution fluorescence microscopy and AFM (FIONA-AFM) to allow the identification of different proteins in such complexes. Using quantum dots as fiducial markers in addition to fluorescently labeled proteins, we are able to align fluorescence and AFM information to ≥8 nm accuracy. This accuracy is sufficient to identify individual fluorescently labeled proteins in most multi-protein complexes. We investigate the limitations of localization precision and accuracy in fluorescence and AFM images separately and their effects on the overall registration accuracy of FIONA-AFM hybrid images. This combination of the two orthogonal techniques (FIONA and AFM) opens a wide spectrum of possible applications to the study of protein interactions, because AFM can yield high resolution (5-10 nm) information about the conformational properties of multi-protein complexes and the fluorescence can indicate spatial relationships of the proteins in the complexes. -- Research highlights: → Integration of fluorescent signals in AFM topography with high (<10 nm) accuracy. → Investigation of limitations and quantitative analysis of fluorescence-AFM image registration using quantum dots. → Fluorescence center tracking and display as localization probability distributions in AFM topography (FIONA-AFM). → Application of FIONA-AFM to a biological sample containing damaged DNA and the DNA repair proteins UvrA and UvrB conjugated to quantum dots.

  9. Audiovisual biofeedback improves motion prediction accuracy.

    Science.gov (United States)

    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho

    2013-04-01

    The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients' respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. An AV biofeedback system combined with real-time respiratory data acquisition and MR images were implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then implemented in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions performed). Prediction error was quantified as the root mean square error (RMSE); the RMSE was calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by the Student's t-test. Prediction accuracy was considerably improved by the implementation of AV biofeedback. Of the 150 respiratory predictions performed, prediction accuracy was improved 69% (103/150) of the time for abdominal wall data, and 78% (117/150) of the time for diaphragm data. The average reduction in RMSE due to AV biofeedback over unguided respiration was 26% (p biofeedback improves prediction accuracy. This would result in increased efficiency of motion management techniques affected by system latencies used in radiotherapy.

  10. Relevance theory: pragmatics and cognition.

    Science.gov (United States)

    Wearing, Catherine J

    2015-01-01

    Relevance Theory is a cognitively oriented theory of pragmatics, i.e., a theory of language use. It builds on the seminal work of H.P. Grice(1) to develop a pragmatic theory which is at once philosophically sensitive and empirically plausible (in both psychological and evolutionary terms). This entry reviews the central commitments and chief contributions of Relevance Theory, including its Gricean commitment to the centrality of intention-reading and inference in communication; the cognitively grounded notion of relevance which provides the mechanism for explaining pragmatic interpretation as an intention-driven, inferential process; and several key applications of the theory (lexical pragmatics, metaphor and irony, procedural meaning). Relevance Theory is an important contribution to our understanding of the pragmatics of communication. © 2014 John Wiley & Sons, Ltd.

  11. Relevant Subspace Clustering

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Günnemann, Stephan

    2009-01-01

    Subspace clustering aims at detecting clusters in any subspace projection of a high dimensional space. As the number of possible subspace projections is exponential in the number of dimensions, the result is often tremendously large. Recent approaches fail to reduce results to relevant subspace...... clusters. Their results are typically highly redundant, i.e. many clusters are detected multiple times in several projections. In this work, we propose a novel model for relevant subspace clustering (RESCU). We present a global optimization which detects the most interesting non-redundant subspace clusters...... achieves top clustering quality while competing approaches show greatly varying performance....

  12. Hospital to Post-Acute Care Facility Transfers: Identifying Targets for Information Exchange Quality Improvement.

    Science.gov (United States)

    Jones, Christine D; Cumbler, Ethan; Honigman, Benjamin; Burke, Robert E; Boxer, Rebecca S; Levy, Cari; Coleman, Eric A; Wald, Heidi L

    2017-01-01

    Information exchange is critical to high-quality care transitions from hospitals to post-acute care (PAC) facilities. We conducted a survey to evaluate the completeness and timeliness of information transfer and communication between a tertiary-care academic hospital and its related PAC facilities. This was a cross-sectional Web-based 36-question survey of 110 PAC clinicians and staff representing 31 PAC facilities conducted between October and December 2013. We received responses from 71 of 110 individuals representing 29 of 31 facilities (65% and 94% response rates). We collapsed 4-point Likert responses into dichotomous variables to reflect completeness (sufficient vs insufficient) and timeliness (timely vs not timely) for information transfer and communication. Among respondents, 32% reported insufficient information about discharge medical conditions and management plan, and 83% reported at least occasionally encountering problems directly related to inadequate information from the hospital. Hospital clinician contact information was the most common insufficient domain. With respect to timeliness, 86% of respondents desired receipt of a discharge summary on or before the day of discharge, but only 58% reported receiving the summary within this time frame. Through free-text responses, several participants expressed the need for paper prescriptions for controlled pain medications to be sent with patients at the time of transfer. Staff and clinicians at PAC facilities perceive substantial deficits in content and timeliness of information exchange between the hospital and facilities. Such deficits are particularly relevant in the context of the increasing prevalence of bundled payments for care across settings as well as forthcoming readmissions penalties for PAC facilities. Targets identified for quality improvement include structuring discharge summary information to include information identified as deficient by respondents, completion of discharge summaries

  13. On the Accuracy of Ancestral Sequence Reconstruction for Ultrametric Trees with Parsimony.

    Science.gov (United States)

    Herbst, Lina; Fischer, Mareike

    2018-04-01

    We examine a mathematical question concerning the reconstruction accuracy of the Fitch algorithm for reconstructing the ancestral sequence of the most recent common ancestor given a phylogenetic tree and sequence data for all taxa under consideration. In particular, for the symmetric four-state substitution model which is also known as Jukes-Cantor model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that for any ultrametric phylogenetic tree and a symmetric model, the Fitch parsimony method using all terminal taxa is more accurate, or at least as accurate, for ancestral state reconstruction than using any particular terminal taxon or any particular pair of taxa. This conjecture had so far only been answered for two-state data by Fischer and Thatte. Here, we focus on answering the biologically more relevant case with four states, which corresponds to ancestral sequence reconstruction from DNA or RNA data.

  14. Accuracy of Parent Identification of Stuttering Occurrence

    Science.gov (United States)

    Einarsdottir, Johanna; Ingham, Roger

    2009-01-01

    Background: Clinicians rely on parents to provide information regarding the onset and development of stuttering in their own children. The accuracy and reliability of their judgments of stuttering is therefore important and is not well researched. Aim: To investigate the accuracy of parent judgements of stuttering in their own children's speech…

  15. Forecasting of integral parameters of solar cosmic ray events according to initial characteristics of an event

    International Nuclear Information System (INIS)

    Belovskij, M.N.; Ochelkov, Yu.P.

    1981-01-01

    The forecasting method for an integral proton flux of solar cosmic rays (SCR) based on the initial characteristics of the phenomenon is proposed. The efficiency of the method is substantiated. The accuracy of forecasting is estimated and the retrospective forecasting of real events is carried out. The parameters of the universal function describing the time progress of the SCR events are presented. The proposed method is suitable for forecasting practically all the SCR events. The timeliness of the given forecasting is not worse than that of the forecasting based on utilization of the SCR propagation models [ru]

  16. Reliability and accuracy of Crystaleye spectrophotometric system.

    Science.gov (United States)

    Chen, Li; Tan, Jian Guo; Zhou, Jian Feng; Yang, Xu; Du, Yang; Wang, Fang Ping

    2010-01-01

    To develop an in vitro shade-measuring model to evaluate the reliability and accuracy of the Crystaleye spectrophotometric system, a newly developed spectrophotometer. Four shade guides, VITA Classical, VITA 3D-Master, Chromascop and Vintage Halo NCC, were measured with the Crystaleye spectrophotometer in a standardised model, ten times for 107 shade tabs. The shade-matching results and the CIE L*a*b* values of the cervical, body and incisal regions for each measurement were automatically analysed using the supporting software. Reliability and accuracy were calculated for each shade tab both in percentage and in colour difference (ΔE). Difference was analysed by one-way ANOVA in the cervical, body and incisal regions. The range of reliability was 88.81% to 98.97% and 0.13 to 0.24 ΔE units, and that of accuracy was 44.05% to 91.25% and 1.03 to 1.89 ΔE units. Significant differences in reliability and accuracy were found between the body region and the cervical and incisal regions. Comparisons made among regions and shade guides revealed that evaluation in ΔE was prone to disclose the differences. Measurements with the Crystaleye spectrophotometer had similar, high reliability in different shade guides and regions, indicating predictable repeated measurements. Accuracy in the body region was high and less variable compared with the cervical and incisal regions.
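
    The colour differences above are CIELAB ΔE values; the record does not state which ΔE formula was applied, but the classic CIE76 difference between two L*a*b* measurements is:

        \Delta E^{*}_{ab} = \sqrt{(\Delta L^{*})^{2} + (\Delta a^{*})^{2} + (\Delta b^{*})^{2}}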

  17. Profiles of Dialogue for Relevance

    Directory of Open Access Journals (Sweden)

    Douglas Walton

    2016-12-01

    Full Text Available This paper uses argument diagrams, argumentation schemes, and some tools from formal argumentation systems developed in artificial intelligence to build a graph-theoretic model of relevance shown to be applicable (with some extensions) as a practical method for helping a third party judge issues of relevance or irrelevance of an argument in real examples. Examples used to illustrate how the method works are drawn from disputes about relevance in natural language discourse, including a criminal trial and a parliamentary debate.

  18. Science and the struggle for relevance

    NARCIS (Netherlands)

    Hessels, L.K.|info:eu-repo/dai/nl/304832863

    2010-01-01

    This thesis deals with struggles for relevance of university researchers, their efforts to make their work correspond with ruling standards of relevance and to influence these standards. Its general research question is: How to understand changes in the struggle for relevance of Dutch academic

  19. Evaluating measurement accuracy a practical approach

    CERN Document Server

    Rabinovich, Semyon G

    2017-01-01

    This book presents a systematic and comprehensive exposition of the theory of measurement accuracy and provides solutions that fill significant and long-standing gaps in the classical theory. It eliminates the shortcomings of the classical theory by including methods for estimating accuracy of single measurements, the most common type of measurement. The book also develops methods of reduction and enumeration for indirect measurements, which do not require Taylor series and produce a precise solution to this problem. It produces grounded methods and recommendations for summation of errors. The monograph also analyzes and critiques two foundation metrological documents, the International Vocabulary of Metrology (VIM) and the Guide to the Expression of Uncertainty in Measurement (GUM), and discusses directions for their revision. This new edition adds a step-by-step guide on how to evaluate measurement accuracy and recommendations on how to calculate systematic error of multiple measurements. There is also an e...

  20. Social class, contextualism, and empathic accuracy.

    Science.gov (United States)

    Kraus, Michael W; Côté, Stéphane; Keltner, Dacher

    2010-11-01

    Recent research suggests that lower-class individuals favor explanations of personal and political outcomes that are oriented to features of the external environment. We extended this work by testing the hypothesis that, as a result, individuals of a lower social class are more empathically accurate in judging the emotions of other people. In three studies, lower-class individuals (compared with upper-class individuals) received higher scores on a test of empathic accuracy (Study 1), judged the emotions of an interaction partner more accurately (Study 2), and made more accurate inferences about emotion from static images of muscle movements in the eyes (Study 3). Moreover, the association between social class and empathic accuracy was explained by the tendency for lower-class individuals to explain social events in terms of features of the external environment. The implications of class-based patterns in empathic accuracy for well-being and relationship outcomes are discussed.

  1. 12 CFR 740.2 - Accuracy of advertising.

    Science.gov (United States)

    2010-01-01

    12 CFR Part 740 (Advertising and Notice of Insured Status), § 740.2 Accuracy of advertising: No insured credit union may use any advertising (which includes print, electronic, or broadcast media, displays and signs, stationery, and other...

  2. Diagnostic accuracy of MRCP in choledocholithiasis

    International Nuclear Information System (INIS)

    Guarise, Alessandro; Mainardi, Paride; Baltieri, Susanna; Faccioli, Niccolo'

    2005-01-01

    Purpose: To evaluate the accuracy of MRCP in diagnosing choledocholithiasis considering Endoscopic Retrograde Cholangiopancreatography (ERCP) as the gold standard. To compare the results achieved during the first two years of use (1999-2000) of Magnetic Resonance Cholangiopancreatography (MRCP) in patients with suspected choledocholithiasis with those achieved during the following two years (2001-2002) in order to establish the repeatability and objectivity of MRCP results. Materials and methods: One hundred and seventy consecutive patients underwent MRCP followed by ERCP within 72 h. In 22/170 (13%) patients ERCP was unsuccessful for different reasons. MRCP was performed using a 1.5 T magnet with both multi-slice HASTE sequences and thick-slice projection technique. Choledocholithiasis was diagnosed in the presence of signal void images in the dependent portion of the duct surrounded by hyperintense bile and detected at least in two projections. The MRCP results, read independently from the ERCP results, were compared in two different and subsequent periods. Results: ERCP confirmed choledocholithiasis in 87 patients. In these cases the results of MRCP were the following: 78 true positives, 53 true negatives, 7 false positives, and 9 false negatives. The sensitivity, specificity and accuracy were 90%, 88% and 89%, respectively. After the exclusion of stones with diameters smaller than 6 mm, the sensitivity, specificity and accuracy were 100%, 99% and 99%, respectively. MRCP accuracy was related to the size of the stones. There was no significant statistical difference between the results obtained in the first two-year period and those obtained in the second period. Conclusions: MRCP is sufficiently accurate to replace ERCP in patients with suspected choledocholithiasis. The results are related to the size of stones. The use of well-defined radiological signs allows good diagnostic accuracy independent of the learning curve [it]
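
    As a quick check, the reported figures follow from the stated counts (TP = 78, TN = 53, FP = 7, FN = 9) and the usual definitions:

        \mathrm{Se} = \frac{TP}{TP+FN} = \frac{78}{87} \approx 0.90, \qquad \mathrm{Sp} = \frac{TN}{TN+FP} = \frac{53}{60} \approx 0.88, \qquad \mathrm{Acc} = \frac{TP+TN}{TP+TN+FP+FN} = \frac{131}{147} \approx 0.89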

  3. [Navigation in implantology: Accuracy assessment regarding the literature].

    Science.gov (United States)

    Barrak, Ibrahim Ádám; Varga, Endre; Piffko, József

    2016-06-01

    Our objective was to assess the literature regarding the accuracy of the different static guided systems. After applying an electronic literature search, we found 661 articles. After reviewing 139 articles, the authors chose 52 articles for full-text evaluation. 24 studies involved accuracy measurements. Fourteen of our selected references were clinical and ten of them were in vitro (model or cadaver). Variance analysis (Tukey's post-hoc test) was applied; the angular deviation was 3.96 degrees. Significant difference could be observed between the two methods of implant placement (partially and fully guided sequence) in terms of deviation at the entry point, apex and angular deviation. Different levels of quality and quantity of evidence were available for assessing the accuracy of the different computer-assisted implant placement systems. The rapidly evolving field of digital dentistry and the new developments will further improve the accuracy of guided implant placement. In the interest of being able to draw dependable conclusions and for the further evaluation of the parameters used for accuracy measurements, randomized, controlled single or multi-centered clinical trials are necessary.

  4. ESTIMATION OF INSULATOR CONTAMINATIONS BY MEANS OF REMOTE SENSING TECHNIQUE

    Directory of Open Access Journals (Sweden)

    G. Han

    2016-06-01

    Full Text Available The accurate estimation of deposits adhering on insulators is critical to prevent pollution flashovers, which cause huge costs worldwide. The traditional evaluation method of insulator contaminations (IC) is based on sparse manual in-situ measurements, resulting in insufficient spatial representativeness and poor timeliness. Filling that gap, we proposed a novel evaluation framework of IC based on remote sensing and data mining. A variety of products derived from satellite data, such as aerosol optical depth (AOD), digital elevation model (DEM), land use and land cover, and normalized difference vegetation index, were obtained to estimate the severity of IC along with the necessary field investigation inventory (pollution sources, ambient atmosphere and meteorological data). Rough set theory was utilized to minimize input sets under the prerequisite that the resultant set is equivalent to the full sets in terms of the decision ability to distinguish severity levels of IC. We found that AOD, the strength of pollution sources and precipitation are the top three decisive factors for estimating insulator contaminations. On that basis, different classification algorithms such as Mahalanobis minimum distance, support vector machine (SVM) and the maximum likelihood method were utilized to estimate severity levels of IC. 10-fold cross-validation was carried out to evaluate the performances of the different methods. SVM yielded the best overall accuracy among the three algorithms. An overall accuracy of more than 70% was achieved, suggesting a promising application of remote sensing in power maintenance. To our knowledge, this is the first trial to introduce remote sensing and the relevant data analysis techniques into the estimation of electrical insulator contaminations.
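
    The classifier comparison described above (an SVM evaluated against other classifiers under 10-fold cross-validation) can be sketched as follows. This is a minimal illustration, not the authors' code; the feature layout and the synthetic data are assumptions standing in for the AOD, pollution-source and precipitation attributes.

        # Minimal sketch: 10-fold cross-validation of an SVM on the three
        # attributes reported as most decisive (synthetic stand-in data).
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import StratifiedKFold, cross_val_score

        rng = np.random.default_rng(0)
        n_sites = 300
        X = rng.normal(size=(n_sites, 3))     # stand-ins for AOD, source strength, precipitation
        y = rng.integers(0, 4, size=n_sites)  # stand-in contamination severity levels 0..3

        model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
        scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
        print(f"10-fold CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")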

  5. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams.

    Science.gov (United States)

    Rouinfar, Amy; Agra, Elise; Larson, Adam M; Rebello, N Sanjay; Loschky, Lester C

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short-duration visual cues which draw attention to solution-relevant information and aid in organizing and integrating it facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions.

  6. Studies on the diagnostic accuracy of lymphography

    International Nuclear Information System (INIS)

    Luening, M.; Stargardt, A.; Abet, L.

    1979-01-01

    Contradictory reports in the literature on the reliability of lymphography stimulated the authors to test the diagnostic accuracy, employing methods which are approximately analogous to practice, using carcinoma of the cervix as the model on which the study was carried out. Using 21 observers it was found that there was no correlation between their experience and on-target accuracy of the diagnosis. Good observers obtained an accuracy of 85% with good proportions between sensitivity in the recognition of detail, specificity and readiness to arrive at a decision on the basis of discriminatory ability. With the help of the concept of the ROC curves, the position taken up by the observers in respect of diagnostic decisions, and a complex manner of assessing the various characteristic factors determining diagnostic accuracy, are demonstrated. This form of test, which permits manipulation of different variants of diagnosis, is recommended, among other things, for performance control at the end of training and continuing education courses in other fields of x-ray diagnosis as well. (orig.) [de

  7. Culturally Relevant Cyberbullying Prevention

    OpenAIRE

    Phillips, Gregory John

    2017-01-01

    In this action research study, I, along with a student intervention committee of 14 members, developed a cyberbullying intervention for a large urban high school on the west coast. This high school contained a predominantly African American student population. I aimed to discover culturally relevant cyberbullying prevention strategies for African American students. The intervention committee selected video safety messages featuring African American actors as the most culturally relevant cyber...

  8. Reactor dosimetry integral reaction rate data in LMFBR Benchmark and standard neutron fields: status, accuracy and implications

    International Nuclear Information System (INIS)

    Fabry, A.; Ceulemans, H.; Vandeplas, P.; McElroy, W.N.; Lippincott, E.P.

    1977-01-01

    This paper provides conclusions that may be drawn regarding the consistency and accuracy of dosimetry cross-section files on the basis of integral reaction rate data measured in U.S. and European benchmark and standard neutron fields. In a discussion of the major experimental facilities CFRMF (Idaho Falls), BIGTEN (Los Alamos), ΣΣ (Mol, Bucharest), NISUS (London), TAPIRO (Roma), FISSION SPECTRA (NBS, Mol, PTB), attention is paid to quantifying the sensitivity of computed integral data relative to the presently evaluated accuracy of the various neutron spectral distributions. The status of available integral data is reviewed and the assigned uncertainties are appraised, including experience gained by interlaboratory comparisons. For all reactions studied and for the various neutron fields, the measured integral data are compared to the ones computed from the ENDF/B-IV and the SAND-II dosimetry cross-section libraries as well as to some other differential data in relevant cases. This comparison, together with the proposed sensitivity and accuracy assessments, is used, whenever possible, to establish how well the best cross-sections evaluated on the basis of differential measurements (category I dosimetry reactions) are reliable in terms of integral reaction rates prediction and, for those reactions for which discrepancies are indicated, in which energy range it is presumed that additional differential measurements might help. For the other reactions (category II), the inconsistencies and trends are examined. The need for further integral measurements and interlaboratory comparisons is also considered

  9. The accuracy of ultrasonography for the evaluation of portal hypertension in patients with cirrhosis: A systematic review

    International Nuclear Information System (INIS)

    Kim, Gaeun; Cho, Youn Zoo; Baik, Soon Koo; Kim, Moon Young; Hong, Won Ki; Kwon, Sang Ok

    2015-01-01

    Studies have presented conflicting results regarding the accuracy of ultrasonography (US) for diagnosing portal hypertension (PH). We sought to identify evidence in the literature regarding the accuracy of US for assessing PH in patients with liver cirrhosis. We conducted a systematic review by searching databases, including MEDLINE, EMBASE, and the Cochrane Library, for relevant studies. A total of 14 studies met our inclusion criteria. The US indices were obtained in the portal vein (n = 9), hepatic artery (n = 6), hepatic vein (HV) (n = 4) and other vessels. Using hepatic venous pressure gradient (HVPG) as the reference, the sensitivity (Se) and specificity (Sp) of the portal venous indices were 69-88% and 67-75%, respectively. The correlation coefficients between HVPG and the portal venous indices were approximately 0.296-0.8. No studies assess the Se and Sp of the hepatic arterial indices. The correlation between HVPG and the hepatic arterial indices ranged from 0.01 to 0.83. The Se and Sp of the hepatic venous indices were 75.9-77.8% and 81.8-100%, respectively. In particular, the Se and Sp of HV arrival time for clinically significant PH were 92.7% and 86.7%, respectively. A statistically significant correlation between HVPG and the hepatic venous indices was observed (0.545-0.649). Some US indices, such as HV, exhibited an increased accuracy for diagnosing PH. These indices may be useful in clinical practice for the detection of significant PH.

  10. The accuracy of ultrasonography for the evaluation of portal hypertension in patients with cirrhosis: A systematic review

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Gaeun; Cho, Youn Zoo; Baik, Soon Koo [College of Nursing, Research Institute for Nursing Science, Keimyung University, Daegu (Korea, Republic of); Kim, Moon Young; Hong, Won Ki; Kwon, Sang Ok [Dept. of Internal Medicine, Wonju Severance Christian Hospital, Yonsei University Wonju College of Medicine, Wonju (Korea, Republic of)

    2015-04-15

    Studies have presented conflicting results regarding the accuracy of ultrasonography (US) for diagnosing portal hypertension (PH). We sought to identify evidence in the literature regarding the accuracy of US for assessing PH in patients with liver cirrhosis. We conducted a systematic review by searching databases, including MEDLINE, EMBASE, and the Cochrane Library, for relevant studies. A total of 14 studies met our inclusion criteria. The US indices were obtained in the portal vein (n = 9), hepatic artery (n = 6), hepatic vein (HV) (n = 4) and other vessels. Using hepatic venous pressure gradient (HVPG) as the reference, the sensitivity (Se) and specificity (Sp) of the portal venous indices were 69-88% and 67-75%, respectively. The correlation coefficients between HVPG and the portal venous indices were approximately 0.296-0.8. No studies assess the Se and Sp of the hepatic arterial indices. The correlation between HVPG and the hepatic arterial indices ranged from 0.01 to 0.83. The Se and Sp of the hepatic venous indices were 75.9-77.8% and 81.8-100%, respectively. In particular, the Se and Sp of HV arrival time for clinically significant PH were 92.7% and 86.7%, respectively. A statistically significant correlation between HVPG and the hepatic venous indices was observed (0.545-0.649). Some US indices, such as HV, exhibited an increased accuracy for diagnosing PH. These indices may be useful in clinical practice for the detection of significant PH.

  11. Increasing of AC compensation method accuracy

    International Nuclear Information System (INIS)

    Havlicek, V.; Pokorny, M.

    2003-01-01

    The original MMF compensation method allows the magnetic properties of single sheets and strips to be measured in the same way as the closed specimen properties. The accuracy of the method is limited due to the finite gain of the feedback loop fulfilling the condition of its stability. Digitalisation of the compensation loop and appropriate processing of the error signal can rapidly improve the accuracy. The basic ideas of this new approach and the experimental results are described in this paper

  12. Increasing of AC compensation method accuracy

    Science.gov (United States)

    Havlíček, V.; Pokorný, M.

    2003-01-01

    The original MMF compensation method allows the magnetic properties of single sheets and strips to be measured in the same way as the closed specimen properties. The accuracy of the method is limited due to the finite gain of the feedback loop fulfilling the condition of its stability. Digitalisation of the compensation loop and appropriate processing of the error signal can rapidly improve the accuracy. The basic ideas of this new approach and the experimental results are described in this paper.

  13. Electron ray tracing with high accuracy

    International Nuclear Information System (INIS)

    Saito, K.; Okubo, T.; Takamoto, K.; Uno, Y.; Kondo, M.

    1986-01-01

    An electron ray tracing program is developed to investigate the overall geometrical and chromatic aberrations in electron optical systems. The program also computes aberrations due to manufacturing errors in lenses and deflectors. Computation accuracy is improved by (1) calculating electrostatic and magnetic scalar potentials using the finite element method with third-order isoparametric elements, and (2) solving the modified ray equation which the aberrations satisfy. Computation accuracy of 4 nm is achieved for calculating optical properties of the system with an electrostatic lens

  14. Bias associated with delayed verification in test accuracy studies: accuracy of tests for endometrial hyperplasia may be much higher than we think!

    OpenAIRE

    Clark, T Justin; ter Riet, Gerben; Coomarasamy, Aravinthan; Khan, Khalid S

    2004-01-01

    Abstract Background To empirically evaluate bias in estimation of accuracy associated with delay in verification of diagnosis among studies evaluating tests for predicting endometrial hyperplasia. Methods Systematic reviews of all published research on accuracy of miniature endometrial biopsy and endometrial ultrasonography for diagnosing endometrial hyperplasia identified 27 test accuracy studies (2,982 subjects). Of these, 16 had immediate histological verification of diagnosis while 11 ha...

  15. Analysis and research of influence factors ranking of fuzzy language translation accuracy in literary works based on catastrophe progression method

    Directory of Open Access Journals (Sweden)

    Wei Dong

    2017-02-01

    Full Text Available This paper addresses the decline in translation accuracy caused by language “vagueness” in literary translation, and proposes using the catastrophe model to rank the importance of the various factors affecting fuzzy-language translation accuracy in literary works, finally giving the order in which factors should be considered before translation. A multi-level evaluation system can be used to construct the relevant catastrophe progression model; the normalization formula can then be used to calculate the relative membership degree of each system and evaluation index, and the evaluation is made against the evaluation criteria table. The results show that, to improve the accuracy of fuzzy-language translation, the indicators should be considered in the following ranking: A2 fuzzy language context → A1 word attributes → A3 specific meaning of digital words; B2 fuzzy semantics, B3 blurred color words → B1 multiple meanings of words → B4 fuzzy digital words; C3 combination with context and cultural background, C4 specific connotation of color words → C1 combination with word emotion, C2 selection of word meaning → C5 combination with digits and language background.
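
    For context, the normalization formulas mentioned above are, in the commonly used catastrophe progression method, root transforms of the control variables; the record does not spell them out, so the standard textbook forms are shown here (the cusp, swallowtail and butterfly models use the first two, three and four of them respectively):

        x_a = a^{1/2}, \quad x_b = b^{1/3}, \quad x_c = c^{1/4}, \quad x_d = d^{1/5}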

  16. A review on the processing accuracy of two-photon polymerization

    Directory of Open Access Journals (Sweden)

    Xiaoqin Zhou

    2015-03-01

    Full Text Available Two-photon polymerization (TPP) is a powerful and promising technology to fabricate true three-dimensional (3D) micro/nanostructures of various materials with subdiffraction-limit resolution. It has been applied to microoptics, electronics, communications, biomedicine, microfluidic devices, MEMS and metamaterials. These applications, such as microoptics and photonic crystals, put forward rigorous requirements on the processing accuracy of TPP, including dimensional accuracy, shape accuracy and surface roughness; the processing accuracy influences their performance and can even invalidate them. In order to fabricate precise 3D micro/nanostructures, the factors influencing the processing accuracy need to be considered comprehensively and systematically. In this paper, we review the basis of TPP micro/nanofabrication, including the mechanism of TPP, the experimental set-up for TPP and the scaling laws of TPP resolution. Then, we discuss the factors influencing the processing accuracy. Finally, we summarize recently reported methods to improve the processing accuracy, from improving the resolution to changing the spatial arrangement of voxels.

  17. A review on the processing accuracy of two-photon polymerization

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Xiaoqin; Hou, Yihong [School of Mechanical Science and Engineering, Jilin University, Changchun, 130022 (China); Lin, Jieqiong, E-mail: linjieqiong@mail.ccut.edu.cn [School of Electromechanical Engineering, Changchun University of Technology, Changchun, 130012 (China)

    2015-03-15

    Two-photon polymerization (TPP) is a powerful and promising technology to fabricate true three-dimensional (3D) micro/nanostructures of various materials with subdiffraction-limit resolution. It has been applied to microoptics, electronics, communications, biomedicine, microfluidic devices, MEMS and metamaterials. These applications, such as microoptics and photonic crystals, put forward rigorous requirements on the processing accuracy of TPP, including dimensional accuracy, shape accuracy and surface roughness; the processing accuracy influences their performance and can even invalidate them. In order to fabricate precise 3D micro/nanostructures, the factors influencing the processing accuracy need to be considered comprehensively and systematically. In this paper, we review the basis of TPP micro/nanofabrication, including the mechanism of TPP, the experimental set-up for TPP and the scaling laws of TPP resolution. Then, we discuss the factors influencing the processing accuracy. Finally, we summarize recently reported methods to improve the processing accuracy, from improving the resolution to changing the spatial arrangement of voxels.

  18. Are multiple-trial experiments appropriate for eyewitness identification studies? Accuracy, choosing, and confidence across trials.

    Science.gov (United States)

    Mansour, J K; Beaudry, J L; Lindsay, R C L

    2017-12-01

    Eyewitness identification experiments typically involve a single trial: A participant views an event and subsequently makes a lineup decision. As compared to this single-trial paradigm, multiple-trial designs are more efficient, but significantly reduce ecological validity and may affect the strategies that participants use to make lineup decisions. We examined the effects of a number of forensically relevant variables (i.e., memory strength, type of disguise, degree of disguise, and lineup type) on eyewitness accuracy, choosing, and confidence across 12 target-present and 12 target-absent lineup trials (N = 349; 8,376 lineup decisions). The rates of correct rejections and choosing (across both target-present and target-absent lineups) did not vary across the 24 trials, as reflected by main effects or interactions with trial number. Trial number had a significant but trivial quadratic effect on correct identifications (OR = 0.99) and interacted significantly, but again trivially, with disguise type (OR = 1.00). Trial number did not significantly influence participants' confidence in correct identifications, confidence in correct rejections, or confidence in target-absent selections. Thus, multiple-trial designs appear to have minimal effects on eyewitness accuracy, choosing, and confidence. Researchers should thus consider using multiple-trial designs for conducting eyewitness identification experiments.

  19. Ultra-wideband ranging precision and accuracy

    International Nuclear Information System (INIS)

    MacGougan, Glenn; O'Keefe, Kyle; Klukas, Richard

    2009-01-01

    This paper provides an overview of ultra-wideband (UWB) in the context of ranging applications and assesses the precision and accuracy of UWB ranging from both a theoretical perspective and a practical perspective using real data. The paper begins with a brief history of UWB technology and the most current definition of what constitutes an UWB signal. The potential precision of UWB ranging is assessed using Cramer–Rao lower bound analysis. UWB ranging methods are described and potential error sources are discussed. Two types of commercially available UWB ranging radios are introduced which are used in testing. Actual ranging accuracy is assessed from line-of-sight testing under benign signal conditions by comparison to high-accuracy electronic distance measurements and to ranges derived from GPS real-time kinematic positioning. Range measurements obtained in outdoor testing with line-of-sight obstructions and strong reflection sources are compared to ranges derived from classically surveyed positions. The paper concludes with a discussion of the potential applications for UWB ranging

  20. Evolutionary relevance facilitates visual information processing.

    Science.gov (United States)

    Jackson, Russell E; Calvillo, Dusti P

    2013-11-03

    Visual search of the environment is a fundamental human behavior that perceptual load affects powerfully. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component to evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.

  1. A Review of Data Quality Assessment Methods for Public Health Information Systems

    Directory of Open Access Journals (Sweden)

    Hong Chen

    2014-05-01

    Full Text Available High quality data and effective data quality assessment are required for accurately evaluating the impact of public health interventions and measuring public health outcomes. Data, data use, and the data collection process, as the three dimensions of data quality, all need to be assessed for overall data quality assessment. We reviewed current data quality assessment methods. Relevant studies were identified in major databases and well-known institutional websites. We found the dimension of data was most frequently assessed. Completeness, accuracy, and timeliness were the three most-used attributes among a total of 49 attributes of data quality. The major quantitative assessment methods were descriptive surveys and data audits, whereas the common qualitative assessment methods were interviews and documentation review. The limitations of the reviewed studies included inattentiveness to data use and the data collection process, inconsistency in the definition of attributes of data quality, failure to address data users’ concerns and a lack of systematic procedures in data quality assessment. This review study is limited by the coverage of the databases and the breadth of public health information systems. Further research could develop consistent data quality definitions and attributes. More research efforts should be given to assessing the quality of data use and the quality of the data collection process.

  2. Improving the quality of clinical coding: a comprehensive audit model

    Directory of Open Access Journals (Sweden)

    Hamid Moghaddasi

    2014-04-01

    Full Text Available Introduction: The review of medical records with the aim of assessing the quality of codes has long been conducted in different countries. Auditing medical coding, as an instructive approach, could help to review the quality of codes objectively using defined attributes, and this in turn would lead to improvement of the quality of codes. Method: The current study aimed to present a model for auditing the quality of clinical codes. The audit model was formed after reviewing other audit models, considering their strengths and weaknesses. A clear definition was presented for each quality attribute and more detailed criteria were then set for assessing the quality of codes. Results: The audit tool (based on the quality attributes of legibility, relevancy, completeness, accuracy, definition and timeliness) led to the development of an audit model for assessing the quality of medical coding. The Delphi technique was then used to confirm the validity of the model. Conclusion: The inclusive audit model designed could provide a reliable and valid basis for assessing the quality of codes, considering more quality attributes and their clear definition. The inter-observer check suggested in the auditing method is of particular importance for ensuring the reliability of coding.

  3. Enhancing spoken connected-digit recognition accuracy by error ...

    Indian Academy of Sciences (India)

    …recognition systems have gained acceptable accuracy levels, the accuracy of recognition of current connected … bar code and ISBN library code to name a few. … Kopec G, Bush M 1985 Network-based connected-digit recognition. IEEE Trans.

  4. Comparison of Accuracy of Contrast Enhanced Computed Tomography with Accuracy of Non-Contrast Magnetic Resonance Imaging in Evaluation of Local Extension of Base of Tongue Malignancies

    Directory of Open Access Journals (Sweden)

    Ketan Rathod

    2018-01-01

    Full Text Available Diagnosis of base of tongue malignancy can be obtained through clinical examination and biopsy. Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) are used to detect its local extension, nodal spread and distant metastases. The main aim of the study was to compare the accuracy of MRI and contrast-enhanced CT in determining the local extent of base of tongue malignancy. Twenty-five patients, biopsy-proven cases of squamous cell carcinoma of the base of tongue, were taken. A 1.5 Tesla Magnetic Resonance unit was used, with T2-weighted axial and coronal images, T1-weighted axial and coronal images, and STIR (short tau inversion recovery) axial and coronal images. A 16-slice Computed Tomography unit was used, with non-contrast and contrast-enhanced images. Accuracy of CT to detect midline crossing: 50%; accuracy of MRI to detect midline crossing: 100%; accuracy of CT to detect anterior extension: 92%; accuracy of MRI to detect anterior extension: 100%; accuracy of CT to detect tonsillar fossa invasion: 83%; accuracy of MRI to detect tonsillar fossa invasion: 100%; accuracy of CT to detect oropharyngeal spread: 83%; accuracy of MRI to detect oropharyngeal spread: 100%; accuracy of CT to detect bone involvement: 20%; accuracy of MRI to detect bone involvement: 100%. MRI proved to be a better investigation than CT in terms of evaluation of depth of invasion, presence of bony involvement, extension to the opposite side, anterior half of tongue, tonsillar fossa, floor of mouth or oropharynx.

  5. Systematic reviews of diagnostic test accuracy

    DEFF Research Database (Denmark)

    Leeflang, Mariska M G; Deeks, Jonathan J; Gatsonis, Constantine

    2008-01-01

    More and more systematic reviews of diagnostic test accuracy studies are being published, but they can be methodologically challenging. In this paper, the authors present some of the recent developments in the methodology for conducting systematic reviews of diagnostic test accuracy studies....... Restrictive electronic search filters are discouraged, as is the use of summary quality scores. Methods for meta-analysis should take into account the paired nature of the estimates and their dependence on threshold. Authors of these reviews are advised to use the hierarchical summary receiver...

  6. An integrated unscented kalman filter and relevance vector regression approach for lithium-ion battery remaining useful life and short-term capacity prediction

    International Nuclear Information System (INIS)

    Zheng, Xiujuan; Fang, Huajing

    2015-01-01

    The gradually decreasing capacity of lithium-ion batteries can serve as a health indicator for tracking the degradation of lithium-ion batteries. It is important to predict the capacity of a lithium-ion battery for future cycles to assess its health condition and remaining useful life (RUL). In this paper, a novel method is developed using the unscented Kalman filter (UKF) with relevance vector regression (RVR) and applied to RUL and short-term capacity prediction of batteries. An RVR model is employed as a nonlinear time-series prediction model to predict the UKF future residuals, which otherwise remain zero during the prediction period. Taking the prediction step into account, the predictive value through the RVR method and the latest real residual value constitute the future evolution of the residuals with a time-varying weighting scheme. Next, the future residuals are utilized by the UKF to recursively estimate the battery parameters for predicting RUL and short-term capacity. Finally, the performance of the proposed method is validated and compared to other predictors with the experimental data. According to the experimental and analysis results, the proposed approach has high reliability and prediction accuracy, which can be applied to battery monitoring and prognostics, as well as generalized to other prognostic applications. - Highlights: • An integrated method is proposed for RUL prediction as well as short-term capacity prediction. • Relevance vector regression model is employed as a nonlinear time-series prediction model. • Unscented Kalman filter is used to recursively update the states for battery model parameters during the prediction. • A time-varying weighting scheme is utilized to improve the accuracy of the RUL prediction. • The proposed method demonstrates high reliability and prediction accuracy.
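
    The residual-blending step described above can be pictured with a short sketch. The linear weight schedule and the function below are illustrative assumptions, not the authors' formulation, which the record does not specify.

        # Minimal sketch: blend an RVR-predicted residual sequence with the latest
        # observed residual over a K-step horizon (weight schedule assumed linear).
        import numpy as np

        def future_residuals(rvr_pred, last_real_residual, K):
            """rvr_pred: length-K residuals predicted by the regression model."""
            k = np.arange(1, K + 1)
            w = k / K  # assumed: weight shifts toward the model prediction with the step
            return w * np.asarray(rvr_pred) + (1.0 - w) * last_real_residual

        # Hypothetical 5-step horizon of capacity residuals (Ah)
        print(future_residuals([0.02, 0.03, 0.03, 0.04, 0.05], 0.01, 5))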

  7. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units

    Directory of Open Access Journals (Sweden)

    Qingzhong Cai

    2016-06-01

    Full Text Available An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10⁻⁶ °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy in five-day inertial navigation can be improved by about 8% by the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and high-order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has a great application potential in future atomic gyro INSs.

  8. Haptic perception accuracy depending on self-produced movement.

    Science.gov (United States)

    Park, Chulwook; Kim, Seonjin

    2014-01-01

    This study measured whether self-produced movement influences haptic perception ability (experiment 1) as well as the factors associated with levels of influence (experiment 2) in racket sports. For experiment 1, the haptic perception accuracy levels of five male table tennis experts and five male novices were examined under two different conditions (no movement vs. movement). For experiment 2, the haptic afferent subsystems of five male table tennis experts and five male novices were investigated in only the self-produced movement-coupled condition. Inferential statistics (ANOVA, t-test) and custom-made devices (shock & vibration sensor, Qualisys Track Manager) of the data were used to determine the haptic perception accuracy (experiment 1, experiment 2) and its association with expertise. The results of this research show that expert-level players acquire higher accuracy with less variability (racket vibration and angle) than novice-level players, especially in their self-produced movement coupled performances. The important finding from this result is that, in terms of accuracy, the skill-associated differences were enlarged during self-produced movement. To explain the origin of this difference between experts and novices, the functional variability of haptic afferent subsystems can serve as a reference. These two factors (self-produced accuracy and the variability of haptic features) as investigated in this study would be useful criteria for educators in racket sports and suggest a broader hypothesis for further research into the effects of the haptic accuracy related to variability.

  9. Accuracy of references and quotations in veterinary journals.

    Science.gov (United States)

    Hinchcliff, K W; Bruce, N J; Powers, J D; Kipp, M L

    1993-02-01

    The accuracy of references and quotations used to substantiate statements of fact in articles published in 6 frequently cited veterinary journals was examined. Three hundred references were randomly selected, and the accuracy of each citation was examined. A subset of 100 references was examined for quotational accuracy; ie, the accuracy with which authors represented the work or assertions of the author being cited. Of the 300 references selected, 295 were located, and 125 major errors were found in 88 (29.8%) of them. Sixty-seven (53.6%) major errors were found involving authors, 12 (9.6%) involved the article title, 14 (11.2%) involved the book or journal title, and 32 (25.6%) involved the volume number, date, or page numbers. Sixty-eight minor errors were detected. The accuracy of 111 quotations from 95 citations in 65 articles was examined. Nine quotations were technical and not classified, 86 (84.3%) were classified as correct, 2 (1.9%) contained minor misquotations, and 14 (13.7%) contained major misquotations. We concluded that misquotations and errors in citations occur frequently in veterinary journals, but at a rate similar to that reported for other biomedical journals.

  10. Assessment Of Accuracies Of Remote-Sensing Maps

    Science.gov (United States)

    Card, Don H.; Strong, Laurence L.

    1992-01-01

    Report describes study of accuracies of classifications of picture elements in map derived by digital processing of Landsat-multispectral-scanner imagery of coastal plain of Arctic National Wildlife Refuge. Accuracies of portions of map analyzed with help of statistical sampling procedure called "stratified plurality sampling", in which all picture elements in given cluster classified in stratum to which plurality of them belong.

  11. Diagnostic accuracy of low-dose CT compared with abdominal radiography in non-traumatic acute abdominal pain: prospective study and systematic review.

    Science.gov (United States)

    Alshamari, Muhammed; Norrman, Eva; Geijer, Mats; Jansson, Kjell; Geijer, Håkan

    2016-06-01

    Abdominal radiography is frequently used in acute abdominal non-traumatic pain despite the availability of more advanced diagnostic modalities. This study evaluates the diagnostic accuracy of low-dose CT compared with abdominal radiography, at similar radiation dose levels. Fifty-eight patients were imaged with both methods and were reviewed independently by three radiologists. The reference standard was obtained from the diagnosis in medical records. Sensitivity and specificity were calculated. A systematic review was performed after a literature search, finding a total of six relevant studies including the present. Overall sensitivity with 95 % CI for CT was 75 % (66-83 %) and 46 % (37-56 %) for radiography. Specificity was 87 % (77-94 %) for both methods. In the systematic review the overall sensitivity for CT varied between 75 and 96 % with specificity from 83 to 95 % while the overall sensitivity for abdominal radiography varied between 30 and 77 % with specificity 75 to 88 %. Based on the current study and available evidence, low-dose CT has higher diagnostic accuracy than abdominal radiography and it should, where logistically possible, replace abdominal radiography in the workup of adult patients with acute non-traumatic abdominal pain. • Low-dose CT has a higher diagnostic accuracy than radiography. • A systematic review shows that CT has better diagnostic accuracy than radiography. • Radiography has no place in the workup of acute non-traumatic abdominal pain.

  12. Comparative accuracy of different techniques in planning radiation therapy of breast cancer

    International Nuclear Information System (INIS)

    Bignardi, M.; Frata, P.; Barbera, F.; Moretti, R.

    1991-01-01

    The authors report the results of the analysis of several factors contributing to the accuracy of treatment planning in the radiation therapy of breast cancer. Different techniques (non-radiological vs CT-based) were used for the acquisition of patients' data; different methods (manual vs computerized) were employed for dose calculation. As for geometric parameters describing the external outline and target volume, mean differences were lower than 4%. Switching from a completely manual method to a CT-based one with computerized calculation, a 3.56% mean decrease in the value of reference isodose (p<0.01) was observed, together with a 3.87% mean increase in the estimated inhomogeneity (p<0.001). The non-CT-based outline of target volume exhibited geographic misses of inner portions of the target in 8/16 patients. Our results demonstrate that treatment planning procedures can be a significant source of clinically relevant inaccuracy, which may affect treatment outcome and tumor control

  13. Evolutionary Relevance Facilitates Visual Information Processing

    Directory of Open Access Journals (Sweden)

    Russell E. Jackson

    2013-07-01

    Full Text Available Visual search of the environment is a fundamental human behavior that perceptual load affects powerfully. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component to evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.

  14. Accuracy Assessment in Determining the Location of Corners of Building Structures Using a Combination of Various Measurement Methods

    Science.gov (United States)

    Krzyżek, Robert; Przewięźlikowska, Anna

    2017-12-01

    When surveys of corners of building structures are carried out, surveyors frequently use a combination of two surveying methods. The first one involves the determination of several corners with reference to a geodetic control using classical methods of surveying field details. The second method relates to the remaining corner points of a structure, which are determined in sequence from distance-distance intersection, using control linear values of the wall faces of the building, the so-called tie distances. This paper assesses the accuracy of coordinates of corner points of a building structure, determined using the method of distance-distance intersection, based on the corners which had previously been determined by the conducted surveys tied to a geodetic control. It should be noted, however, that such a method of surveying the corners of building structures from linear measures is based on the details of the first-order accuracy, while the regulations explicitly allow such measurement only for the details of the second- and third-order accuracy. Therefore, a question arises whether this legal provision is unfounded, or whether surveyors are acting not only against the applicable standards but also without due diligence while performing surveys. This study provides answers to the formulated problem. The main purpose of the study was to verify whether the method actually used in practice for surveying building structures makes it possible to obtain the required accuracy of coordinates of the points being determined, or whether it should be strictly forbidden. The results of the conducted studies clearly demonstrate that the problem is definitely more complex. Eventually, however, it might be assumed that assessment of the accuracy in determining a location of corners of a building using a combination of two different surveying methods will meet the requirements of the regulation (MIA, 2011), subject to compliance with relevant baseline criteria, which have been

  15. Laser measuring scanners and their accuracy limits

    Science.gov (United States)

    Jablonski, Ryszard

    1993-09-01

    Scanning methods have gained in importance in recent years owing to short measuring times and a wide range of applications in flexible manufacturing processes. This paper sums up the author's scientific work in the field of measuring scanners. The research conducted made it possible to elaborate optimal configurations of measuring systems based on the scanning method. An important part of the work was the analysis of a measuring scanner as a transducer of angular rotation into linear displacement, which resulted in much higher accuracy and, finally, in a measuring scanner that eliminates the need for an additional reference standard. The work concludes with an attempt to determine the attainable accuracy limit of scanning measurement of both length and angle. Using a high-stability deflector and a corrected scanning lens, angle determination can be obtained over a range of 30 (or 2 mm) to an accuracy of 0 (or 0 μm) at a measuring rate of 1000 Hz, or over a range of 60 (4 mm) with an accuracy of 0" (0 μm) at a measurement frequency of 6 Hz.

  16. High accuracy autonomous navigation using the global positioning system (GPS)

    Science.gov (United States)

    Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul

    1997-01-01

    The application of global positioning system (GPS) technology to the improvement of the accuracy and economy of spacecraft navigation is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the small satellite technology initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real-time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. It is expected that the position accuracy will be improved to 2 m if corrections are provided by the GPS wide area augmentation system.

  17. Quantifying TOLNet Ozone Lidar Accuracy During the 2014 DISCOVER-AQ and FRAPPE Campaigns

    Science.gov (United States)

    Wang, Lihua; Newchurch, Michael J.; Alvarez, Raul J., II; Berkoff, Timothy A.; Brown, Steven S.; Carrion, William; De Young, Russell J.; Johnson, Bryan J.; Ganoe, Rene; Gronoff, Guillaume; hide

    2017-01-01

    The Tropospheric Ozone Lidar Network (TOLNet) is a unique network of lidar systems that measure high-resolution atmospheric profiles of ozone. The accurate characterization of these lidars is necessary to determine the uniformity of the network calibration. From July to August 2014, three TOLNet lidars, the TROPospheric OZone (TROPOZ) lidar, the Tunable Optical Profiler for Aerosol and oZone (TOPAZ) lidar, and the Langley Mobile Ozone Lidar (LMOL), participated in the Deriving Information on Surface conditions from Column and Vertically Resolved Observations Relevant to Air Quality (DISCOVER-AQ) mission and the Front Range Air Pollution and Photochemistry Experiment (FRAPPE) to measure ozone variations from the boundary layer to the top of the troposphere. This study presents the analysis of the intercomparison between the TROPOZ, TOPAZ, and LMOL lidars, along with comparisons between the lidars and other in situ ozone instruments including ozonesondes and a P-3B airborne chemiluminescence sensor. The TOLNet lidars measured vertical ozone structures with an accuracy generally better than +/-15 % within the troposphere. Larger differences occur at some individual altitudes in both the near-field and far-field range of the lidar systems, largely as expected. In terms of column average, the TOLNet lidars measured ozone with an accuracy better than +/-5 % for both the intercomparison between the lidars and between the lidars and other instruments. These results indicate that these three TOLNet lidars are suitable for use in air quality, satellite validation, and ozone modeling efforts.

  18. Estimating alcohol content of traditional brew in Western Kenya using culturally relevant methods: the case for cost over volume.

    Science.gov (United States)

    Papas, Rebecca K; Sidle, John E; Wamalwa, Emmanuel S; Okumu, Thomas O; Bryant, Kendall L; Goulet, Joseph L; Maisto, Stephen A; Braithwaite, R Scott; Justice, Amy C

    2010-08-01

    Traditional homemade brew is believed to represent the highest proportion of alcohol use in sub-Saharan Africa. In Eldoret, Kenya, two types of brew are common: chang'aa, a spirit, and busaa, a maize beer. Local residents refer to the amount of brew consumed by the amount of money spent, suggesting a culturally relevant estimation method. The purposes of this study were to analyze the ethanol content of chang'aa and busaa and to compare two methods of alcohol estimation: use by cost and use by volume, the latter being the current international standard. Laboratory results showed mean ethanol content was 34% (SD = 14%) for chang'aa and 4% (SD = 1%) for busaa. Standard drink unit equivalents for chang'aa and busaa, respectively, were 2 and 1.3 (US) and 3.5 and 2.3 (Great Britain). Using a computational approach, both methods demonstrated comparable results. We conclude that cost estimation of alcohol content is more culturally relevant and does not differ in accuracy from the international standard.
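
    The conversion from a served volume and a measured ethanol content to standard drink units is simple arithmetic: grams of ethanol equal the volume times the ethanol fraction times the density of ethanol, and standard drinks equal those grams divided by the grams in one standard drink (about 14 g in the US and 8 g in Great Britain). The sketch below illustrates this calculation; the serving volumes used in the example are hypothetical and are not taken from the study.

```python
ETHANOL_DENSITY_G_PER_ML = 0.789   # grams of ethanol per millilitre

def standard_drinks(volume_ml, abv_percent, grams_per_standard_drink):
    """Convert a served volume and measured ethanol content into standard drink units."""
    grams_ethanol = volume_ml * (abv_percent / 100.0) * ETHANOL_DENSITY_G_PER_ML
    return grams_ethanol / grams_per_standard_drink

# mean ethanol contents are taken from the laboratory results above;
# the serving volumes are hypothetical illustrations only
for brew, volume_ml, abv in [("chang'aa", 100.0, 34.0), ("busaa", 500.0, 4.0)]:
    us_units = standard_drinks(volume_ml, abv, 14.0)   # US standard drink: about 14 g ethanol
    uk_units = standard_drinks(volume_ml, abv, 8.0)    # British unit: about 8 g ethanol
    print(f"{brew}: {us_units:.1f} US standard drinks, {uk_units:.1f} British units")
```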

  19. Computed tomography angiogram. Accuracy in renal surgery

    International Nuclear Information System (INIS)

    Rabah, Danny M.; Al-Hathal, Naif; Al-Fuhaid, Turki; Raza, Sayed; Al-Yami, Fahad; Al-Taweel, Waleed; Alomar, Mohamed; Al-Nagshabandi, Nizar

    2009-01-01

    The objective of this study was to determine the sensitivity and specificity of computed tomography angiogram (CTA) in detecting number and location of renal arteries and veins as well as crossing vessels causing uretero-pelvic junction obstruction (UPJO), and to determine if this can be used in decision-making algorithms for treatment of UPJO. A prospective study was carried out in patients undergoing open, laparoscopic and robotic renal surgery from April 2005 until October 2006. All patients were imaged using CTA with 1.25 collimation of arterial and venous phases. Each multi-detector CTA was then read by one radiologist and his results were compared prospectively with the actual intra-operative findings. Overall, 118 patients were included. CTA had 93% sensitivity, 77% specificity and 90% overall accuracy for detecting a single renal artery, and 76% sensitivity, 92% specificity and 90% overall accuracy for detecting two or more renal arteries (Pearson χ2 = 0.001). There was 95% sensitivity, 84% specificity and 85% overall accuracy for detecting the number of renal veins. CTA had 100% overall accuracy in detecting early dividing renal artery (defined as less than 1.5 cm branching from origin), and 83.3% sensitivity, specificity and overall accuracy in detecting crossing vessels at UPJ. The percentage of surgeons stating CTA to be helpful as a pre-operative diagnostic tool was 85%. Computed tomography angiogram is simple, quick and can provide an accurate pre-operative renal vascular anatomy in terms of number and location of renal vessels, early dividing renal arteries and crossing vessels at UPJ. (author)

  20. Does filler database size influence identification accuracy?

    Science.gov (United States)

    Bergold, Amanda N; Heaton, Paul

    2018-06-01

    Police departments increasingly use large photo databases to select lineup fillers using facial recognition software, but this technological shift's implications have been largely unexplored in eyewitness research. Database use, particularly if coupled with facial matching software, could enable lineup constructors to increase filler-suspect similarity and thus enhance eyewitness accuracy (Fitzgerald, Oriet, Price, & Charman, 2013). However, with a large pool of potential fillers, such technologies might theoretically produce lineup fillers too similar to the suspect (Fitzgerald, Oriet, & Price, 2015; Luus & Wells, 1991; Wells, Rydell, & Seelau, 1993). This research proposes a new factor-filler database size-as a lineup feature affecting eyewitness accuracy. In a facial recognition experiment, we select lineup fillers in a legally realistic manner using facial matching software applied to filler databases of 5,000, 25,000, and 125,000 photos, and find that larger databases are associated with a higher objective similarity rating between suspects and fillers and lower overall identification accuracy. In target present lineups, witnesses viewing lineups created from the larger databases were less likely to make correct identifications and more likely to select known innocent fillers. When the target was absent, database size was associated with a lower rate of correct rejections and a higher rate of filler identifications. Higher algorithmic similarity ratings were also associated with decreases in eyewitness identification accuracy. The results suggest that using facial matching software to select fillers from large photograph databases may reduce identification accuracy, and provides support for filler database size as a meaningful system variable. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  1. Precision and accuracy of mechanistic-empirical pavement design

    CSIR Research Space (South Africa)

    Theyse, HL

    2006-09-01

    Full Text Available are discussed in general. The effects of variability and error on the design accuracy and design risk are finally illustrated by means of a simple mechanistic-empirical design problem, showing that the engineering models alone determine the accuracy...

  2. ACCURACY ASSESSMENT OF COASTAL TOPOGRAPHY DERIVED FROM UAV IMAGES

    Directory of Open Access Journals (Sweden)

    N. Long

    2016-06-01

    Full Text Available To monitor coastal environments, an Unmanned Aerial Vehicle (UAV) is a low-cost and easy-to-use solution enabling data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces a Digital Surface Model (DSM) with similar accuracy. To evaluate the DSM accuracy in a coastal environment, a campaign was carried out with a flying wing (eBee) combined with a digital camera. Using the Photoscan software and the photogrammetric process (Structure From Motion algorithm), a DSM and an orthomosaic were produced. The DSM accuracy is estimated by comparison with GNSS surveys. Two parameters are tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of spatial image resolution (4.6 cm vs 2 cm). The results show that this solution is able to reproduce the topography of a coastal area with high vertical accuracy (< 10 cm). Georeferencing of the DSM requires a homogeneous distribution and a large number of GCPs. The accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduces the difference by 4 cm); the required accuracy should depend on the research question. Last, in this particular environment, the presence of very small water surfaces on the sand bank does not allow the accuracy to be improved when the spatial resolution of the images is decreased.

  3. Improving Accuracy of Processing by Adaptive Control Techniques

    Directory of Open Access Journals (Sweden)

    N. N. Barbashov

    2016-01-01

    Full Text Available When machining work-pieces, the range of scatter of the work-piece dimensions is displaced towards the tolerance limit in response to errors. To improve the accuracy of machining and prevent defective products it is necessary to diminish the machining error components, i.e. to improve the accuracy of the machine tool, tool life, rigidity of the system, and accuracy of adjustment. It is also necessary to provide on-machine adjustment after a certain time. However, an increasing number of readjustments reduces performance, and high machine and tool requirements lead to a significant increase in the machining cost. To improve accuracy and machining rate, various devices of active control (in-process gaging devices), as well as controlled machining through adaptive systems for technological process control, are now becoming widely used. The accuracy improvement in this case is reached by compensation of a majority of technological errors. Sensors of active control can improve the accuracy of processing by one or two quality classes and allow simultaneous operation of several machines. For efficient use of sensors of active control it is necessary to develop accuracy control methods that introduce the appropriate adjustments. Methods based on the moving average appear to be the most promising for accuracy control, since they contain information on the change in the last several measured values of the parameter under control. When using the proposed method, the first three members of the sequence of deviations remain unchanged, i.e. x'1 = x1, x'2 = x2, x'3 = x3. Each subsequent i-th member is then calculated as x'i = xi - k*x̄i, where x̄i is the average of the three previous corrected values: x̄i = (x'i-1 + x'i-2 + x'i-3)/3. As a criterion for the estimate of the control
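
    Under the reading of the correction rule given above (the first three deviations pass through unchanged, and each later deviation is corrected by k times the mean of the three preceding corrected values), the adjustment can be sketched in a few lines. The data and the gain k in the example are hypothetical, and the sketch only illustrates that reading; it is not the authors' implementation.

```python
def moving_average_correction(deviations, k):
    """Apply the moving-average adjustment described above: the first three
    deviations pass through unchanged, and each later deviation is corrected
    by k times the mean of the three preceding corrected values."""
    corrected = list(deviations[:3])                 # x'1, x'2, x'3 stay as measured
    for i in range(3, len(deviations)):
        avg_prev = sum(corrected[i - 3:i]) / 3.0     # mean of the three previous corrected values
        corrected.append(deviations[i] - k * avg_prev)
    return corrected

# hypothetical sequence of measured deviations (mm) and gain k
print(moving_average_correction([0.02, 0.03, 0.05, 0.06, 0.04, 0.07], k=0.5))
```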

  4. An evaluation of safety-critical Java on a Java processor

    DEFF Research Database (Denmark)

    Rios Rivas, Juan Ricardo; Schoeberl, Martin

    2014-01-01

    The safety-critical Java (SCJ) specification provides a restricted set of the Java language intended for applications that require certification. In order to test the specification, implementations are emerging, and the need to evaluate those implementations in a systematic way is becoming important. In this paper we evaluate our SCJ implementation, which is based on the Java Optimized Processor (JOP), and we measure different performance and timeliness criteria relevant to hard real-time systems. Our implementation targets Level 0 and Level 1 of the specification, and to test it we use a series of micro

  5. FEATURES OF USING AUGMENTED REALITY TECHNOLOGY TO SUPPORT EDUCATIONAL PROCESSES

    Directory of Open Access Journals (Sweden)

    Yury A. Kravchenko

    2014-01-01

    Full Text Available The paper discusses the concept and technology of augmented reality and gives the rationale for the relevance and timeliness of its use to support educational processes. The paper surveys and studies the possibility of using augmented reality technology in education. An architecture is proposed and algorithms are constructed for a software system that manages QR-code media objects. An overview of the features and uses of augmented reality technology to support educational processes is presented as a new form of visual demonstration of complex objects, models and processes.

  6. Accuracy of endoscopic ultrasonography for diagnosing ulcerative early gastric cancers

    Science.gov (United States)

    Park, Jin-Seok; Kim, Hyungkil; Bang, Byongwook; Kwon, Kyesook; Shin, Youngwoon

    2016-01-01

    Abstract Although endoscopic ultrasonography (EUS) is the first-choice imaging modality for predicting the invasion depth of early gastric cancer (EGC), the prediction accuracy of EUS is significantly decreased when EGC is combined with ulceration. The aim of the present study was to compare the accuracy of EUS and conventional endoscopy (CE) for determining the depth of EGC. In addition, the various clinicopathologic factors affecting the diagnostic accuracy of EUS, with a particular focus on endoscopic ulcer shapes, were evaluated. We retrospectively reviewed data from 236 consecutive patients with ulcerative EGC. All patients underwent EUS for estimating tumor invasion depth, followed by either curative surgery or endoscopic treatment. The diagnostic accuracy of EUS and CE was evaluated by comparison with the final histologic results of the resected specimens. The correlation between the accuracy of EUS and the characteristics of EGC (tumor size, histology, location in the stomach, tumor invasion depth, and endoscopic ulcer shape) was analyzed. Endoscopic ulcer shapes were classified into 3 groups: definite ulcer, superficial ulcer, and ill-defined ulcer. The overall accuracy of EUS and CE for predicting the invasion depth in ulcerative EGC was 68.6% and 55.5%, respectively. Of the 236 patients, 36 were classified as having definite ulcers, 98 superficial ulcers, and 102 ill-defined ulcers. In univariate analysis, EUS accuracy was associated with invasion depth (P = 0.023), tumor size (P = 0.034), and endoscopic ulcer shape (P = 0.001). In multivariate analysis, there was a significant association between superficial ulcer on CE and EUS accuracy (odds ratio: 2.977; 95% confidence interval: 1.255–7.064; P = 0.013). The accuracy of EUS for determining tumor invasion depth in ulcerative EGC was superior to that of CE. In addition, ulcer shape was an important factor that affected EUS accuracy. PMID:27472672

  7. Value Driven Outcomes (VDO): a pragmatic, modular, and extensible software framework for understanding and improving health care costs and outcomes

    Science.gov (United States)

    Kawamoto, Kensaku; Martin, Cary J; Williams, Kip; Tu, Ming-Chieh; Park, Charlton G; Hunter, Cheri; Staes, Catherine J; Bray, Bruce E; Deshmukh, Vikrant G; Holbrook, Reid A; Morris, Scott J; Fedderson, Matthew B; Sletta, Amy; Turnbull, James; Mulvihill, Sean J; Crabtree, Gordon L; Entwistle, David E; McKenna, Quinn L; Strong, Michael B; Pendleton, Robert C; Lee, Vivian S

    2015-01-01

    Objective To develop expeditiously a pragmatic, modular, and extensible software framework for understanding and improving healthcare value (costs relative to outcomes). Materials and methods In 2012, a multidisciplinary team was assembled by the leadership of the University of Utah Health Sciences Center and charged with rapidly developing a pragmatic and actionable analytics framework for understanding and enhancing healthcare value. Based on an analysis of relevant prior work, a value analytics framework known as Value Driven Outcomes (VDO) was developed using an agile methodology. Evaluation consisted of measurement against project objectives, including implementation timeliness, system performance, completeness, accuracy, extensibility, adoption, satisfaction, and the ability to support value improvement. Results A modular, extensible framework was developed to allocate clinical care costs to individual patient encounters. For example, labor costs in a hospital unit are allocated to patients based on the hours they spent in the unit; actual medication acquisition costs are allocated to patients based on utilization; and radiology costs are allocated based on the minutes required for study performance. Relevant process and outcome measures are also available. A visualization layer facilitates the identification of value improvement opportunities, such as high-volume, high-cost case types with high variability in costs across providers. Initial implementation was completed within 6 months, and all project objectives were fulfilled. The framework has been improved iteratively and is now a foundational tool for delivering high-value care. Conclusions The framework described can be expeditiously implemented to provide a pragmatic, modular, and extensible approach to understanding and improving healthcare value. PMID:25324556
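
    The allocation rule described here is proportional: a unit's cost for a period is spread over patient encounters according to each encounter's share of the allocation driver (hours in the unit for labor, minutes of study time for radiology, actual acquisition cost for medications). The sketch below illustrates that proportional allocation with hypothetical encounter identifiers and dollar amounts; it is only an illustration of the idea, not the VDO implementation.

```python
def allocate_unit_cost(unit_cost, driver_by_encounter):
    """Spread a unit's cost over encounters in proportion to each encounter's
    share of the allocation driver (e.g. hours in the unit, minutes of study time)."""
    total = sum(driver_by_encounter.values())
    return {encounter: unit_cost * share / total
            for encounter, share in driver_by_encounter.items()}

# hypothetical example: 12,000 of nursing labor cost split by hours spent in the unit
print(allocate_unit_cost(12000.0, {"enc-001": 10.0, "enc-002": 30.0, "enc-003": 20.0}))
```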

  8. Accuracy optimization with wavelength tunability in overlay imaging technology

    Science.gov (United States)

    Lee, Honggoo; Kang, Yoonshik; Han, Sangjoon; Shim, Kyuchan; Hong, Minhyung; Kim, Seungyoung; Lee, Jieun; Lee, Dongyoung; Oh, Eungryong; Choi, Ahlin; Kim, Youngsik; Marciano, Tal; Klein, Dana; Hajaj, Eitan M.; Aharon, Sharon; Ben-Dov, Guy; Lilach, Saltoun; Serero, Dan; Golotsvan, Anna

    2018-03-01

    As semiconductor manufacturing technology progresses and the dimensions of integrated circuit elements shrink, overlay budget is accordingly being reduced. Overlay budget closely approaches the scale of measurement inaccuracies due to both optical imperfections of the measurement system and the interaction of light with geometrical asymmetries of the measured targets. Measurement inaccuracies can no longer be ignored due to their significant effect on the resulting device yield. In this paper we investigate a new approach for imaging based overlay (IBO) measurements by optimizing accuracy rather than contrast precision, including its effect over the total target performance, using wavelength tunable overlay imaging metrology. We present new accuracy metrics based on theoretical development and present their quality in identifying the measurement accuracy when compared to CD-SEM overlay measurements. The paper presents the theoretical considerations and simulation work, as well as measurement data, for which tunability combined with the new accuracy metrics is shown to improve accuracy performance.

  9. Testing an Automated Accuracy Assessment Method on Bibliographic Data

    Directory of Open Access Journals (Sweden)

    Marlies Olensky

    2014-12-01

    Full Text Available This study investigates automated data accuracy assessment as described in data quality literature for its suitability to assess bibliographic data. The data samples comprise the publications of two Nobel Prize winners in the field of Chemistry for a 10-year-publication period retrieved from the two bibliometric data sources, Web of Science and Scopus. The bibliographic records are assessed against the original publication (gold standard and an automatic assessment method is compared to a manual one. The results show that the manual assessment method reflects truer accuracy scores. The automated assessment method would need to be extended by additional rules that reflect specific characteristics of bibliographic data. Both data sources had higher accuracy scores per field than accumulated per record. This study contributes to the research on finding a standardized assessment method of bibliographic data accuracy as well as defining the impact of data accuracy on the citation matching process.
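
    Field-level versus record-level accuracy, as compared here, can be scored with a simple procedure: check each field of a record against the gold standard, then count a record as fully accurate only if every field matches. The sketch below illustrates this scoring with hypothetical title/year/journal fields; the actual assessment rules for bibliographic data are more elaborate, as the study notes.

```python
def accuracy_scores(records, gold):
    """Score records field by field against a gold standard; return per-field
    accuracy and the share of records in which every field is correct."""
    fields = list(gold[0].keys())
    field_hits = {f: 0 for f in fields}
    perfect_records = 0
    for rec, ref in zip(records, gold):
        matches = {f: rec.get(f) == ref[f] for f in fields}
        for f in fields:
            field_hits[f] += matches[f]
        perfect_records += all(matches.values())
    n = len(gold)
    return {f: hits / n for f, hits in field_hits.items()}, perfect_records / n

# hypothetical bibliographic records with three fields each
gold = [{"title": "A", "year": 2001, "journal": "X"},
        {"title": "B", "year": 2003, "journal": "Y"}]
recs = [{"title": "A", "year": 2001, "journal": "X"},
        {"title": "B", "year": 2004, "journal": "Y"}]
per_field, per_record = accuracy_scores(recs, gold)
print(per_field)    # each field scored separately
print(per_record)   # a record counts only if all of its fields match
```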

  10. Evidence for enhanced interoceptive accuracy in professional musicians

    Directory of Open Access Journals (Sweden)

    Katharina eSchirmer-Mokwa

    2015-12-01

    Full Text Available Interoception is defined as the perceptual activity involved in the processing of internal bodily signals. While the ability of internal perception is considered a relatively stable trait, recent data suggest that learning to integrate multisensory information can modulate it. Making music is a uniquely rich multisensory experience that has been shown to alter motor, sensory, and multimodal representations in the brain of musicians. We hypothesize that musical training also heightens interoceptive accuracy comparable to other perceptual modalities. Thirteen professional singers, twelve string players, and thirteen matched non-musicians were examined using a well-established heartbeat discrimination paradigm complemented by self-reported dispositional traits. Results revealed that both groups of musicians displayed higher interoceptive accuracy than non-musicians, whereas no differences were found between singers and string players. Regression analyses showed that accumulated musical practice explained about 49% of the variation in heartbeat perception accuracy in singers but not in string players. Psychometric data yielded a number of psychologically plausible inter-correlations in musicians related to performance anxiety. However, dispositional traits were not a confounding factor for heartbeat discrimination accuracy. Together, these data provide first evidence indicating that professional musicians show enhanced interoceptive accuracy compared to non-musicians. We argue that musical training largely accounted for this effect.

  11. Reliability and accuracy of four dental shade-matching devices.

    Science.gov (United States)

    Kim-Pusateri, Seungyee; Brewer, Jane D; Davis, Elaine L; Wee, Alvin G

    2009-03-01

    There are several electronic shade-matching instruments available for clinical use, but the reliability and accuracy of these instruments have not been thoroughly investigated. The purpose of this in vitro study was to evaluate the reliability and accuracy of 4 dental shade-matching instruments in a standardized environment. Four shade-matching devices were tested: SpectroShade, ShadeVision, VITA Easyshade, and ShadeScan. Color measurements were made of 3 commercial shade guides (Vitapan Classical, Vitapan 3D-Master, and Chromascop). Shade tabs were placed in the middle of a gingival matrix (Shofu GUMY) with shade tabs of the same nominal shade from additional shade guides placed on both sides. Measurements were made of the central region of the shade tab positioned inside a black box. For the reliability assessment, each shade tab from each of the 3 shade guide types was measured 10 times. For the accuracy assessment, each shade tab from 10 guides of each of the 3 types evaluated was measured once. Differences in reliability and accuracy were evaluated using the Standard Normal z test (2 sided) (alpha=.05) with Bonferroni correction. Reliability of devices was as follows: ShadeVision, 99.0%; SpectroShade, 96.9%; VITA Easyshade, 96.4%; and ShadeScan, 87.4%. A significant difference in reliability was found between ShadeVision and ShadeScan (P=.008). All other comparisons showed similar reliability. Accuracy of devices was as follows: VITA Easyshade, 92.6%; ShadeVision, 84.8%; SpectroShade, 80.2%; and ShadeScan, 66.8%. Significant differences in accuracy were found between all device pairs. Reliability was high (over 96% for most devices), indicating predictable shade values from repeated measurements. However, there was more variability in accuracy among devices (67-93%), and differences in accuracy were seen with most device comparisons.

  12. Using inferred probabilities to measure the accuracy of imprecise forecasts

    Directory of Open Access Journals (Sweden)

    Paul Lehner

    2012-11-01

    Full Text Available Research on forecasting is effectively limited to forecasts that are expressed with clarity, which is to say that the forecasted event must be sufficiently well-defined so that it can be clearly resolved whether or not the event occurred, and that forecast certainties are expressed as quantitative probabilities. When forecasts are expressed with clarity, quantitative measures (scoring rules, calibration, discrimination, etc.) can be used to measure forecast accuracy, which in turn can be used to measure the comparative accuracy of different forecasting methods. Unfortunately most real-world forecasts are not expressed clearly. This lack of clarity extends both to the description of the forecast event and to the use of vague language to express forecast certainty. It is thus difficult to assess the accuracy of most real-world forecasts, and consequently the accuracy of the methods used to generate them. This paper addresses this deficiency by presenting an approach to measuring the accuracy of imprecise real-world forecasts using the same quantitative metrics routinely used to measure the accuracy of well-defined forecasts. To demonstrate applicability, the Inferred Probability Method is applied to measure the accuracy of forecasts in fourteen documents examining complex political domains. Key words: inferred probability, imputed probability, judgment-based forecasting, forecast accuracy, imprecise forecasts, political forecasting, verbal probability, probability calibration.
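
    Once a vague forecast has been mapped to an inferred probability, the standard quantitative machinery applies. The sketch below shows two of the metrics named above, a Brier score and a simple calibration table, computed over a hypothetical set of inferred probabilities and event outcomes; it is a generic illustration, not the Inferred Probability Method itself.

```python
def brier_score(probabilities, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probabilities, outcomes)) / len(outcomes)

def calibration_table(probabilities, outcomes, edges=(0.0, 0.2, 0.4, 0.6, 0.8, 1.01)):
    """Group forecasts into probability bins and compare the mean forecast
    probability with the observed event frequency in each bin."""
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = [(p, o) for p, o in zip(probabilities, outcomes) if lo <= p < hi]
        if in_bin:
            mean_p = sum(p for p, _ in in_bin) / len(in_bin)
            freq = sum(o for _, o in in_bin) / len(in_bin)
            rows.append((lo, hi, mean_p, freq, len(in_bin)))
    return rows

# hypothetical inferred probabilities for vague forecasts and whether each event occurred
probs = [0.8, 0.3, 0.6, 0.9, 0.2, 0.7]
occurred = [1, 0, 1, 1, 0, 0]
print(brier_score(probs, occurred))
print(calibration_table(probs, occurred))
```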

  13. Design and Implementation of a Prototype Microcomputer Database Management System for the Standardization of Data Elements for the Department of Defense

    Science.gov (United States)

    1990-09-01

    Justification Cat: Left Timeliness Identifier: Qwe (FI] Domain I Def Text: -Press [F3 to nove in/out of the fields below. Use ARROW keys to scroll- Rec: Host...Delete a record Elemont Creator ID: Justification Cat: Left Timeliness Identifier: Qwe [FI) Domain Def Text: - Press (F3) to move in/out of the...Number: 1 Alias Name: Accounting Code Data Value Type ID: QL Max Length Characters: 34 Timeliness ID: Qwe Justification Category: Left Creator ID: Domain

  14. Accuracy and precision of 3 intraoral scanners and accuracy of conventional impressions: A novel in vivo analysis method.

    Science.gov (United States)

    Nedelcu, R; Olsson, P; Nyström, I; Rydén, J; Thor, A

    2018-02-01

    To evaluate a novel methodology using industrial scanners as a reference, and assess in vivo accuracy of 3 intraoral scanners (IOS) and conventional impressions. Further, to evaluate IOS precision in vivo. Four reference-bodies were bonded to the buccal surfaces of upper premolars and incisors in five subjects. After three reference-scans, ATOS Core 80 (ATOS), subjects were scanned three times with three IOS systems: 3M True Definition (3M), CEREC Omnicam (OMNI) and Trios 3 (TRIOS). One conventional impression (IMPR) was taken, 3M Impregum Penta Soft, and poured models were digitized with laboratory scanner 3shape D1000 (D1000). Best-fit alignment of reference-bodies and 3D Compare Analysis was performed. Precision of ATOS and D1000 was assessed for quantitative evaluation and comparison. Accuracy of IOS and IMPR was analyzed using ATOS as reference. Precision of IOS was evaluated through intra-system comparison. Precision of ATOS reference scanner (mean 0.6 μm) and D1000 (mean 0.5 μm) was high. Pairwise multiple comparisons of reference-bodies located in different tooth positions displayed a statistically significant difference of accuracy between two scanner-groups: 3M and TRIOS, over OMNI (p value range 0.0001 to 0.0006). IMPR did not show any statistically significant difference to IOS. However, deviations of IOS and IMPR were within a similar magnitude. No statistical difference was found for IOS precision. The methodology can be used for assessing accuracy of IOS and IMPR in vivo in up to five units bilaterally from midline. 3M and TRIOS had a higher accuracy than OMNI. IMPR overlapped both groups. Intraoral scanners can be used as a replacement for conventional impressions when restoring up to ten units without extended edentulous spans. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  15. Understanding the delayed-keyword effect on metacomprehension accuracy.

    Science.gov (United States)

    Thiede, Keith W; Dunlosky, John; Griffin, Thomas D; Wiley, Jennifer

    2005-11-01

    The typical finding from research on metacomprehension is that accuracy is quite low. However, recent studies have shown robust accuracy improvements when judgments follow certain generation tasks (summarizing or keyword listing) but only when these tasks are performed at a delay rather than immediately after reading (K. W. Thiede & M. C. M. Anderson, 2003; K. W. Thiede, M. C. M. Anderson, & D. Therriault, 2003). The delayed and immediate conditions in these studies confounded the delay between reading and generation tasks with other task lags, including the lag between multiple generation tasks and the lag between generation tasks and judgments. The first 2 experiments disentangle these confounded manipulations and provide clear evidence that the delay between reading and keyword generation is the only lag critical to improving metacomprehension accuracy. The 3rd and 4th experiments show that not all delayed tasks produce improvements and suggest that delayed generative tasks provide necessary diagnostic cues about comprehension for improving metacomprehension accuracy.

  16. The Influence of Motor Skills on Measurement Accuracy

    Science.gov (United States)

    Brychta, Petr; Sadílek, Marek; Brychta, Josef

    2016-10-01

    This innovative study attempts an interdisciplinary interface between two fields that at first view seem quite different: kinanthropology and mechanical engineering. A motor skill is described as an action which involves the movement of muscles in the body. Gross motor skills permit functions such as running, jumping, walking, punching, lifting and throwing a ball, maintaining body balance, and coordinating movements. Fine motor skills capture smaller neuromuscular actions, such as holding an object between the thumb and a finger. In mechanical inspection, the accuracy of measurement is the most important aspect. The accuracy of measurement is, to some extent, also dependent upon the sense of sight or sense of touch associated with fine motor skills. It is therefore clear that the level of motor skills will affect the precision and accuracy of measurement in metrology. The aim of this study is a literature review to establish the fine motor skill levels of individuals and to determine the potential effect of differences in fine motor skill performance on the precision and accuracy of measurement in mechanical engineering.

  17. Matters of Accuracy and Conventionality: Prior Accuracy Guides Children's Evaluations of Others' Actions

    Science.gov (United States)

    Scofield, Jason; Gilpin, Ansley Tullos; Pierucci, Jillian; Morgan, Reed

    2013-01-01

    Studies show that children trust previously reliable sources over previously unreliable ones (e.g., Koenig, Clement, & Harris, 2004). However, it is unclear from these studies whether children rely on accuracy or conventionality to determine the reliability and, ultimately, the trustworthiness of a particular source. In the current study, 3- and…

  18. Climatic associations of British species distributions show good transferability in time but low predictive accuracy for range change.

    Directory of Open Access Journals (Sweden)

    Giovanni Rapacciuolo

    Full Text Available Conservation planners often wish to predict how species distributions will change in response to environmental changes. Species distribution models (SDMs) are the primary tool for making such predictions. Many methods are widely used; however, they all make simplifying assumptions, and predictions can therefore be subject to high uncertainty. With global change well underway, field records of observed range shifts are increasingly being used for testing SDM transferability. We used an unprecedented distribution dataset documenting recent range changes of British vascular plants, birds, and butterflies to test whether correlative SDMs based on climate change provide useful approximations of potential distribution shifts. We modelled past species distributions from climate using nine single techniques and a consensus approach, and projected the geographical extent of these models to a more recent time period based on climate change; we then compared model predictions with recent observed distributions in order to estimate the temporal transferability and prediction accuracy of our models. We also evaluated the relative effect of methodological and taxonomic variation on the performance of SDMs. Models showed good transferability in time when assessed using widespread metrics of accuracy. However, models had low accuracy in predicting where occupancy status changed between time periods, especially for declining species. Model performance varied greatly among species within major taxa, but there was also considerable variation among modelling frameworks. Past climatic associations of British species distributions retain a high explanatory power when transferred to a recent time period, owing to their accuracy in predicting the large areas retained by species, but fail to capture relevant predictors of change. We strongly emphasize the need for caution when using SDMs to predict shifts in species distributions: high explanatory power on temporally-independent records

  19. Speed-Accuracy Tradeoff in Olfaction

    National Research Council Canada - National Science Library

    Rinberg, Dmitry; Koulakov, ALexel; Gelperin, Alan

    2006-01-01

    The basic psychophysical principle of speed-accuracy tradeoff (SAT) has been used to understand key aspects of neuronal information processing in vision and audition, but the principle of SAT is still debated in olfaction...

  20. Final Technical Report: Increasing Prediction Accuracy.

    Energy Technology Data Exchange (ETDEWEB)

    King, Bruce Hardison [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stein, Joshua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-12-01

    PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.

  1. Photon caliper to achieve submillimeter positioning accuracy

    Science.gov (United States)

    Gallagher, Kyle J.; Wong, Jennifer; Zhang, Junan

    2017-09-01

    The purpose of this study was to demonstrate the feasibility of using a commercial two-dimensional (2D) detector array with an inherent detector spacing of 5 mm to achieve submillimeter accuracy in localizing the radiation isocenter. This was accomplished by delivering the Vernier ‘dose’ caliper to a 2D detector array where the nominal scale was the 2D detector array and the non-nominal Vernier scale was the radiation dose strips produced by the high-definition (HD) multileaf collimators (MLCs) of the linear accelerator. Because the HD MLC sequence was similar to the picket fence test, we called this procedure the Vernier picket fence (VPF) test. We confirmed the accuracy of the VPF test by offsetting the HD MLC bank by known increments and comparing the known offset with the VPF test result. The VPF test was able to determine the known offset within 0.02 mm. We also cross-validated the accuracy of the VPF test in an evaluation of couch hysteresis. This was done by using both the VPF test and the ExacTrac optical tracking system to evaluate the couch position. We showed that the VPF test was in agreement with the ExacTrac optical tracking system within a root-mean-square value of 0.07 mm for both the lateral and longitudinal directions. In conclusion, we demonstrated the VPF test can determine the offset between a 2D detector array and the radiation isocenter with submillimeter accuracy. Until now, no method to locate the radiation isocenter using a 2D detector array has been able to achieve such accuracy.

  2. Illusory expectations can affect retrieval-monitoring accuracy.

    Science.gov (United States)

    McDonough, Ian M; Gallo, David A

    2012-03-01

    The present study investigated how expectations, even when illusory, can affect the accuracy of memory decisions. Participants studied words presented in large or small font for subsequent memory tests. Replicating prior work, judgments of learning indicated that participants expected to remember large words better than small words, even though memory for these words was equivalent on a standard test of recognition memory and subjective judgments. Critically, we also included tests that instructed participants to selectively search memory for either large or small words, thereby allowing different memorial expectations to contribute to performance. On these tests we found reduced false recognition when searching memory for large words relative to small words, such that the size illusion paradoxically affected accuracy measures (d' scores) in the absence of actual memory differences. Additional evidence for the role of illusory expectations was that (a) the accuracy effect was obtained only when participants searched memory for the aspect of the stimuli corresponding to illusory expectations (size instead of color) and (b) the accuracy effect was eliminated on a forced-choice test that prevented the influence of memorial expectations. These findings demonstrate the critical role of memorial expectations in the retrieval-monitoring process. 2012 APA, all rights reserved

  3. Accuracy of Digital vs. Conventional Implant Impressions

    Science.gov (United States)

    Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.

    2015-01-01

    The accuracy of digital impressions greatly influences the clinical viability in implant restorations. The aim of this study is to compare the accuracy of gypsum models acquired from the conventional implant impression to digitally milled models created from direct digitalization by three-dimensional analysis. Thirty gypsum and 30 digitally milled models impressed directly from a reference model were prepared. The models were scanned by a laboratory scanner and 30 STL datasets from each group were imported to an inspection software. The datasets were aligned to the reference dataset by a repeated best fit algorithm and 10 specified contact locations of interest were measured in mean volumetric deviations. The areas were pooled by cusps, fossae, interproximal contacts, horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model to investigate the mean volumetric deviations accounting for accuracy and standard deviations for precision. Milled models from digital impressions had comparable accuracy to gypsum models from conventional impressions. However, differences in fossae and vertical displacement of the implant position from the gypsum and digitally milled models compared to the reference model, exhibited statistical significance (p<0.001, p=0.020 respectively). PMID:24720423

  4. High accuracy mantle convection simulation through modern numerical methods

    KAUST Repository

    Kronbichler, Martin

    2012-08-21

    Numerical simulation of the processes in the Earth's mantle is a key piece in understanding its dynamics, composition, history and interaction with the lithosphere and the Earth's core. However, doing so presents many practical difficulties related to the numerical methods that can accurately represent these processes at relevant scales. This paper presents an overview of the state of the art in algorithms for high-Rayleigh number flows such as those in the Earth's mantle, and discusses their implementation in the Open Source code Aspect (Advanced Solver for Problems in Earth's ConvecTion). Specifically, we show how an interconnected set of methods for adaptive mesh refinement (AMR), higher order spatial and temporal discretizations, advection stabilization and efficient linear solvers can provide high accuracy at a numerical cost unachievable with traditional methods, and how these methods can be designed in a way so that they scale to large numbers of processors on compute clusters. Aspect relies on the numerical software packages deal.II and Trilinos, enabling us to focus on high level code and keeping our implementation compact. We present results from validation tests using widely used benchmarks for our code, as well as scaling results from parallel runs. © 2012 The Authors Geophysical Journal International © 2012 RAS.

  5. Impact of the frequency of online verifications on the patient set-up accuracy and set-up margins

    International Nuclear Information System (INIS)

    Rudat, Volker; Hammoud, Mohamed; Pillay, Yogin; Alaradi, Abdul Aziz; Mohamed, Adel; Altuwaijri, Saleh

    2011-01-01

    The purpose of the study was to evaluate the patient set-up error of different anatomical sites, to estimate the effect of different frequencies of online verifications on the patient set-up accuracy, and to calculate margins to accommodate for the patient set-up error (ICRU set-up margin, SM). Alignment data of 148 patients treated with inversed planned intensity modulated radiotherapy (IMRT) or three-dimensional conformal radiotherapy (3D-CRT) of the head and neck (n = 31), chest (n = 72), abdomen (n = 15), and pelvis (n = 30) were evaluated. The patient set-up accuracy was assessed using orthogonal megavoltage electronic portal images of 2328 fractions of 173 planning target volumes (PTV). In 25 patients, two PTVs were analyzed where the PTVs were located in different anatomical sites and treated in two different radiotherapy courses. The patient set-up error and the corresponding SM were retrospectively determined assuming no online verification, online verification once a week and online verification every other day. The SM could be effectively reduced with increasing frequency of online verifications. However, a significant frequency of relevant set-up errors remained even after online verification every other day. For example, residual set-up errors larger than 5 mm were observed on average in 18% to 27% of all fractions of patients treated in the chest, abdomen and pelvis, and in 10% of fractions of patients treated in the head and neck after online verification every other day. In patients where high set-up accuracy is desired, daily online verification is highly recommended

  6. Accuracy of clinical diagnosis versus the World Health Organization case definition in the Amoy Garden SARS cohort.

    Science.gov (United States)

    Wong, W N; Sek, Antonio C H; Lau, Rick F L; Li, K M; Leung, Joe K S; Tse, M L; Ng, Andy H W; Stenstrom, Robert

    2003-11-01

    To compare the diagnostic accuracy of emergency department (ED) physicians with the World Health Organization (WHO) case definition in a large community-based SARS (severe acute respiratory syndrome) cohort. This was a cohort study of all patients from Hong Kong's Amoy Garden complex who presented to an ED SARS screening clinic during a 2-month outbreak. Clinical findings and WHO case definition criteria were recorded, along with ED diagnoses. Final diagnoses were established independently based on relevant diagnostic tests performed after the ED visit. Emergency physician diagnostic accuracy was compared with that of the WHO SARS case definition. Sensitivity, specificity, predictive values and likelihood ratios were calculated using standard formulae. During the study period, 818 patients presented with SARS-like symptoms, including 205 confirmed SARS, 35 undetermined SARS and 578 non-SARS. Sensitivity, specificity and accuracy were 91%, 96% and 94% for ED clinical diagnosis, versus 42%, 86% and 75% for the WHO case definition. Positive likelihood ratios (LR+) were 21.1 for physician judgement and 3.1 for the WHO criteria. Negative likelihood ratios (LR-) were 0.10 for physician judgement and 0.67 for the WHO criteria, indicating that clinician judgement was a much more powerful predictor than the WHO criteria. Physician clinical judgement was more accurate than the WHO case definition. Reliance on the WHO case definition as a SARS screening tool may lead to an unacceptable rate of misdiagnosis. The SARS case definition must be revised if it is to be used as a screening tool in emergency departments and primary care settings.
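
    The likelihood ratios reported here follow directly from the standard formulas LR+ = sensitivity / (1 - specificity) and LR- = (1 - sensitivity) / specificity. The short sketch below reproduces them from the rounded sensitivity and specificity values quoted in the abstract; small discrepancies with the published figures (e.g., 21.1 versus roughly 22.8 for clinical judgement) are consistent with rounding of the inputs.

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from sensitivity and specificity."""
    lr_positive = sensitivity / (1.0 - specificity)
    lr_negative = (1.0 - sensitivity) / specificity
    return lr_positive, lr_negative

# rounded sensitivity/specificity values reported above
print(likelihood_ratios(0.91, 0.96))  # ED clinical diagnosis: roughly (22.8, 0.09)
print(likelihood_ratios(0.42, 0.86))  # WHO case definition:  roughly (3.0, 0.67)
```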

  7. The confidence-accuracy relationship for eyewitness identification decisions: Effects of exposure duration, retention interval, and divided attention.

    Science.gov (United States)

    Palmer, Matthew A; Brewer, Neil; Weber, Nathan; Nagesh, Ambika

    2013-03-01

    Prior research points to a meaningful confidence-accuracy (CA) relationship for positive identification decisions. However, there are theoretical grounds for expecting that different aspects of the CA relationship (calibration, resolution, and over/underconfidence) might be undermined in some circumstances. This research investigated whether the CA relationship for eyewitness identification decisions is affected by three, forensically relevant variables: exposure duration, retention interval, and divided attention at encoding. In Study 1 (N = 986), a field experiment, we examined the effects of exposure duration (5 s vs. 90 s) and retention interval (immediate testing vs. a 1-week delay) on the CA relationship. In Study 2 (N = 502), we examined the effects of attention during encoding on the CA relationship by reanalyzing data from a laboratory experiment in which participants viewed a stimulus video under full or divided attention conditions and then attempted to identify two targets from separate lineups. Across both studies, all three manipulations affected identification accuracy. The central analyses concerned the CA relation for positive identification decisions. For the manipulations of exposure duration and retention interval, overconfidence was greater in the more difficult conditions (shorter exposure; delayed testing) than the easier conditions. Only the exposure duration manipulation influenced resolution (which was better for 5 s than 90 s), and only the retention interval manipulation affected calibration (which was better for immediate testing than delayed testing). In all experimental conditions, accuracy and diagnosticity increased with confidence, particularly at the upper end of the confidence scale. Implications for theory and forensic settings are discussed.

  8. The use of low density high accuracy (LDHA) data for correction of high density low accuracy (HDLA) point cloud

    Science.gov (United States)

    Rak, Michal Bartosz; Wozniak, Adam; Mayer, J. R. R.

    2016-06-01

    Coordinate measuring techniques rely on computer processing of coordinate values of points gathered from physical surfaces using contact or non-contact methods. Contact measurements are characterized by low density and high accuracy. On the other hand, optical methods gather high-density data of the whole object in a short time but with accuracy at least one order of magnitude lower than for contact measurements. Thus the drawback of contact methods is low density of data, while for non-contact methods it is low accuracy. In this paper a method is presented for the fusion of data from two measurements of fundamentally different nature, high density low accuracy (HDLA) and low density high accuracy (LDHA), to overcome the limitations of both measuring methods. In the proposed method the concept of virtual markers is used to find a representation of pairs of corresponding characteristic points in both sets of data. In each pair the coordinates of the point from the contact measurement are treated as a reference for the corresponding point from the non-contact measurement. A transformation enabling displacement of the characteristic points from the optical measurement to their matches from the contact measurement is determined and applied to the whole point cloud. The efficiency of the proposed algorithm was evaluated by comparison with data from a coordinate measuring machine (CMM). Three surfaces were used for this evaluation: a plane, a turbine blade and an engine cover. For the planar surface the achieved improvement was around 200 μm. Similar results were obtained for the turbine blade, but for the engine cover the improvement was smaller. For both freeform surfaces the improvement was higher for raw data than for data after creation of a mesh of triangles.
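
    The abstract does not specify the exact transformation used, only that it moves the characteristic points of the optical (HDLA) cloud onto their contact-measured (LDHA) references and is then applied to the whole cloud. A common choice for this step is a least-squares rigid transformation (Kabsch/Procrustes) estimated from the corresponding point pairs; the sketch below illustrates that generic approach with hypothetical marker coordinates and should not be read as the authors' actual algorithm.

```python
import numpy as np

def rigid_transform(source_pts, target_pts):
    """Least-squares rotation R and translation t mapping source_pts onto
    target_pts (Kabsch algorithm); both inputs are N x 3 arrays of
    corresponding points."""
    src_centroid = source_pts.mean(axis=0)
    tgt_centroid = target_pts.mean(axis=0)
    H = (source_pts - src_centroid).T @ (target_pts - tgt_centroid)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_centroid - R @ src_centroid
    return R, t

# hypothetical characteristic points: optical (HDLA) markers and their contact (LDHA) references
optical = np.array([[0.00, 0.00, 0.00], [10.00, 0.10, 0.00],
                    [10.10, 10.00, 0.20], [0.00, 10.00, 0.10]])
contact = np.array([[0.05, 0.02, 0.00], [10.02, 0.00, 0.01],
                    [10.00, 9.98, 0.00], [0.01, 10.01, 0.00]])
R, t = rigid_transform(optical, contact)

dense_cloud = np.random.rand(1000, 3) * 10.0          # stand-in for the full optical point cloud
corrected_cloud = dense_cloud @ R.T + t               # the same transform applied to every point
```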

  9. The Development of Relevance in Information Retrieval

    Directory of Open Access Journals (Sweden)

    Mu-hsuan Huang

    1997-12-01

    Full Text Available This article attempts to investigate the notion of relevance in information retrieval. It discusses various definitions for relevance from historical viewpoints and the characteristics of relevance judgments. Also, it introduces empirical results of important related research. [Article content in Chinese]

  10. Aspect-based Relevance Learning for Image Retrieval

    NARCIS (Netherlands)

    M.J. Huiskes (Mark)

    2005-01-01

    We analyze the special structure of the relevance feedback learning problem, focusing particularly on the effects of image selection by partial relevance on the clustering behavior of feedback examples. We propose a scheme, aspect-based relevance learning, which guarantees that feedback

  11. Analyzing thematic maps and mapping for accuracy

    Science.gov (United States)

    Rosenfield, G.H.

    1982-01-01

    Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given. classification category. The results of an accuracy test are presented in a contingency table sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors by commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by
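
    With rows holding the interpreted classes and columns the verified classes, as described above, overall accuracy is the sum of the diagonal divided by the table total, the commission error for a class is the off-diagonal share of its row, and the omission error is the off-diagonal share of its column. The sketch below computes these quantities for a small, hypothetical error matrix; the class names and counts are illustrative only.

```python
import numpy as np

# hypothetical classification error matrix: rows = interpretation, columns = verification
classes = ["forest", "water", "urban"]
matrix = np.array([[50,  3,  2],
                   [ 4, 40,  1],
                   [ 6,  2, 30]])

overall_accuracy = np.trace(matrix) / matrix.sum()

for i, name in enumerate(classes):
    commission = 1.0 - matrix[i, i] / matrix[i, :].sum()  # row errors: wrongly included
    omission = 1.0 - matrix[i, i] / matrix[:, i].sum()    # column errors: wrongly excluded
    print(f"{name}: commission error {commission:.2%}, omission error {omission:.2%}")

print(f"overall accuracy {overall_accuracy:.2%}")
```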

  12. Implementation and results of an integrated data quality assurance protocol in a randomized controlled trial in Uttar Pradesh, India.

    Science.gov (United States)

    Gass, Jonathon D; Misra, Anamika; Yadav, Mahendra Nath Singh; Sana, Fatima; Singh, Chetna; Mankar, Anup; Neal, Brandon J; Fisher-Bowman, Jennifer; Maisonneuve, Jenny; Delaney, Megan Marx; Kumar, Krishan; Singh, Vinay Pratap; Sharma, Narender; Gawande, Atul; Semrau, Katherine; Hirschhorn, Lisa R

    2017-09-07

    There are few published standards or methodological guidelines for integrating Data Quality Assurance (DQA) protocols into large-scale health systems research trials, especially in resource-limited settings. The BetterBirth Trial is a matched-pair, cluster-randomized controlled trial (RCT) of the BetterBirth Program, which seeks to improve quality of facility-based deliveries and reduce 7-day maternal and neonatal mortality and maternal morbidity in Uttar Pradesh, India. In the trial, over 6300 deliveries were observed and over 153,000 mother-baby pairs across 120 study sites were followed to assess health outcomes. We designed and implemented a robust and integrated DQA system to sustain high-quality data throughout the trial. We designed the Data Quality Monitoring and Improvement System (DQMIS) to reinforce six dimensions of data quality: accuracy, reliability, timeliness, completeness, precision, and integrity. The DQMIS was comprised of five functional components: 1) a monitoring and evaluation team to support the system; 2) a DQA protocol, including data collection audits and targets, rapid data feedback, and supportive supervision; 3) training; 4) standard operating procedures for data collection; and 5) an electronic data collection and reporting system. Routine audits by supervisors included double data entry, simultaneous delivery observations, and review of recorded calls to patients. Data feedback reports identified errors automatically, facilitating supportive supervision through a continuous quality improvement model. The five functional components of the DQMIS successfully reinforced data reliability, timeliness, completeness, precision, and integrity. The DQMIS also resulted in 98.33% accuracy across all data collection activities in the trial. All data collection activities demonstrated improvement in accuracy throughout implementation. Data collectors demonstrated a statistically significant (p = 0.0004) increase in accuracy throughout

  13. A feedback-retransmission based asynchronous frequency hopping MAC protocol for military aeronautical ad hoc networks

    Directory of Open Access Journals (Sweden)

    Jinhui TANG

    2018-05-01

    Full Text Available Attacking time-sensitive targets has rigid demands for the timeliness and reliability of information transmission, while typical Media Access Control (MAC) protocols designed for this application work well only in very light-load scenarios; as a consequence, system throughput and channel utilization are degraded. For this problem, a feedback-retransmission based asynchronous FRequency hopping Media Access (FRMA) control protocol is proposed. Burst communication, asynchronous Frequency Hopping (FH), channel coding, and feedback retransmission are utilized in FRMA. With the mechanism of asynchronous FH, immediate packet transmission and multi-packet reception can be realized, and thus the timeliness is improved. Furthermore, reliability can be achieved via channel coding and feedback retransmission. Using queuing theory, a Markov model, a packet-collision model, and the discrete Laplace transform, formulas for the packet success probability, system throughput, average packet end-to-end delay, and delay distribution are obtained. The approximation accuracy of the theoretical derivation is verified by experimental results. Within a light-load network, the proposed FRMA achieves millisecond-level delay and 99% reliability, and outperforms the non-feedback-retransmission based asynchronous frequency hopping MAC protocol. Keywords: Ad hoc networks, Aeronautical communications, Frequency hopping, Media Access Control (MAC), Time-sensitive

  14. Factors affecting GEBV accuracy with single-step Bayesian models.

    Science.gov (United States)

    Zhou, Lei; Mrode, Raphael; Zhang, Shengli; Zhang, Qin; Li, Bugao; Liu, Jian-Feng

    2018-01-01

    A single-step approach to obtain genomic prediction was first proposed in 2009. Many studies have investigated the components of GEBV accuracy in genomic selection. However, it is still unclear how the population structure and the relationships between training and validation populations influence GEBV accuracy in terms of single-step analysis. Here, we explored the components of GEBV accuracy in single-step Bayesian analysis with a simulation study. Three scenarios with various numbers of QTL (5, 50, and 500) were simulated. Three models were implemented to analyze the simulated data: single-step genomic best linear unbiased prediction (GBLUP; SSGBLUP), single-step BayesA (SS-BayesA), and single-step BayesB (SS-BayesB). According to our results, GEBV accuracy was influenced by the relationships between the training and validation populations more significantly for ungenotyped animals than for genotyped animals. SS-BayesA/BayesB showed an obvious advantage over SSGBLUP with the scenarios of 5 and 50 QTL. SS-BayesB model obtained the lowest accuracy with the 500 QTL in the simulation. SS-BayesA model was the most efficient and robust considering all QTL scenarios. Generally, both the relationships between training and validation populations and LD between markers and QTL contributed to GEBV accuracy in the single-step analysis, and the advantages of single-step Bayesian models were more apparent when the trait is controlled by fewer QTL.

  15. English Verb Accuracy of Bilingual Cantonese-English Preschoolers

    Science.gov (United States)

    Rezzonico, Stefano; Goldberg, Ahuva; Milburn, Trelani; Belletti, Adriana; Girolametto, Luigi

    2017-01-01

    Purpose: Knowledge of verb development in typically developing bilingual preschoolers may inform clinicians about verb accuracy rates during the 1st 2 years of English instruction. This study aimed to investigate tensed verb accuracy in 2 assessment contexts in 4- and 5-year-old Cantonese-English bilingual preschoolers. Method: The sample included…

  16. EMD self-adaptive selecting relevant modes algorithm for FBG spectrum signal

    Science.gov (United States)

    Chen, Yong; Wu, Chun-ting; Liu, Huan-lin

    2017-07-01

    Noise may reduce the demodulation accuracy of fiber Bragg grating (FBG) sensing signal so as to affect the quality of sensing detection. Thus, the recovery of a signal from observed noisy data is necessary. In this paper, a precise self-adaptive algorithm of selecting relevant modes is proposed to remove the noise of signal. Empirical mode decomposition (EMD) is first used to decompose a signal into a set of modes. The pseudo modes cancellation is introduced to identify and eliminate false modes, and then the Mutual Information (MI) of partial modes is calculated. MI is used to estimate the critical point of high and low frequency components. Simulation results show that the proposed algorithm estimates the critical point more accurately than the traditional algorithms for FBG spectral signal. While, compared to the similar algorithms, the signal noise ratio of the signal can be improved more than 10 dB after processing by the proposed algorithm, and correlation coefficient can be increased by 0.5, so it demonstrates better de-noising effect.
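
    The sketch below illustrates the general idea (EMD decomposition followed by a mutual-information criterion for separating noise-dominated from signal-dominated modes). It assumes the third-party PyEMD package and scikit-learn, uses a synthetic stand-in for an FBG spectrum, and applies a simple MI threshold rather than the paper's exact pseudo-mode cancellation step.

```python
import numpy as np
from PyEMD import EMD                      # assumes the PyEMD (EMD-signal) package is installed
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048)
signal = np.exp(-((t - 0.5) / 0.05) ** 2)          # stand-in for an FBG spectral peak
noisy = signal + 0.05 * rng.standard_normal(t.size)

imfs = EMD().emd(noisy)                    # modes ordered from high to low frequency

# Mutual information between each mode and the noisy signal; low-MI leading modes
# are treated as noise-dominated (a simplification of the paper's MI criterion).
mi = np.array([
    mutual_info_regression(imf.reshape(-1, 1), noisy, random_state=0)[0]
    for imf in imfs
])
critical = int(np.argmax(mi > 0.5 * mi.max()))     # first "relevant" mode (heuristic threshold)
denoised = imfs[critical:].sum(axis=0)
print(f"{len(imfs)} modes, keeping modes {critical}..{len(imfs) - 1}")
```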

  17. Coordinate metrology accuracy of systems and measurements

    CERN Document Server

    Sładek, Jerzy A

    2016-01-01

    This book focuses on effective methods for assessing the accuracy of both coordinate measuring systems and coordinate measurements. It mainly reports on original research work conducted by Sladek’s team at Cracow University of Technology’s Laboratory of Coordinate Metrology. The book describes the implementation of different methods, including artificial neural networks, the Matrix Method, the Monte Carlo method and the virtual CMM (Coordinate Measuring Machine), and demonstrates how these methods can be effectively used in practice to gauge the accuracy of coordinate measurements. Moreover, the book includes an introduction to the theory of measurement uncertainty and to key techniques for assessing measurement accuracy. All methods and tools are presented in detail, using suitable mathematical formulations and illustrated with numerous examples. The book fills an important gap in the literature, providing readers with an advanced text on a topic that has been rapidly developing in recent years. The book...

  18. Filtered selection coupled with support vector machines generate a functionally relevant prediction model for colorectal cancer

    Directory of Open Access Journals (Sweden)

    Gabere MN

    2016-06-01

    Full Text Available Musa Nur Gabere,1 Mohamed Aly Hussein,1 Mohammad Azhar Aziz2 1Department of Bioinformatics, King Abdullah International Medical Research Center/King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; 2Colorectal Cancer Research Program, Department of Medical Genomics, King Abdullah International Medical Research Center, Riyadh, Saudi Arabia Purpose: There has been considerable interest in using whole-genome expression profiles for the classification of colorectal cancer (CRC). The selection of important features is a crucial step before training a classifier. Methods: In this study, we built a model that uses a support vector machine (SVM) to classify cancer and normal samples using Affymetrix exon microarray data obtained from 90 samples of 48 patients diagnosed with CRC. From the 22,011 genes, we selected the 20, 30, 50, 100, 200, 300, and 500 genes most relevant to CRC using the minimum-redundancy–maximum-relevance (mRMR) technique. With these gene sets, an SVM model was designed using four different kernel types (linear, polynomial, radial basis function [RBF], and sigmoid). Results: The best model, which used 30 genes and the RBF kernel, outperformed other combinations; it had an accuracy of 84% for both tenfold and leave-one-out cross-validations in discriminating the cancer samples from the normal samples. With this 30-gene set from mRMR, six classifiers were trained using random forest (RF), Bayes net (BN), multilayer perceptron (MLP), naïve Bayes (NB), reduced error pruning tree (REPT), and SVM. Two hybrids, mRMR + SVM and mRMR + BN, were the best models when tested on other datasets, and they achieved a prediction accuracy of 95.27% and 91.99%, respectively, compared to other mRMR hybrid models (mRMR + RF, mRMR + NB, mRMR + REPT, and mRMR + MLP). Ingenuity pathway analysis was used to analyze the functions of the 30 genes selected for this model and their potential association with CRC: CDH3, CEACAM7, CLDN1, IL8, IL6R, MMP1
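
    A rough sketch of the filtered-selection-plus-SVM pipeline follows. Since mRMR is not part of scikit-learn, a univariate ANOVA filter (SelectKBest) is used here as a simple stand-in for the mRMR step; the data are synthetic, and the 30-feature, RBF-kernel, tenfold cross-validation setup mirrors the abstract only in outline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the exon microarray data (90 samples, many genes).
X, y = make_classification(n_samples=90, n_features=2000, n_informative=30,
                           random_state=0)

# Filter 30 features, then train an RBF-kernel SVM; the filter and classifier sit
# inside one pipeline so feature selection is re-fitted within each CV fold.
model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=30),
                      SVC(kernel="rbf", C=1.0, gamma="scale"))

scores = cross_val_score(model, X, y, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.2%} ± {scores.std():.2%}")
```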

  19. Effects of an Automated Maintenance Management System on organizational communication

    International Nuclear Information System (INIS)

    Bauman, M.B.; VanCott, H.P.

    1988-01-01

    The primary purpose of the project was to evaluate the effectiveness of two techniques for improving organizational communication: (1) an Automated Maintenance Management System (AMMS) and (2) Interdepartmental Coordination Meetings. Additional objectives concerned the preparation of functional requirements for an AMMS, and training modules to improve group communication skills. Four nuclear power plants participated in the evaluation. Two plants installed AMMSs, one plant instituted interdepartmental job coordination meetings, and the fourth plant served as a control for the evaluation. Questionnaires and interviews were used to collect evaluative data. The evaluation focused on five communication or information criteria: timeliness, redundancy, withholding or gatekeeping, feedback, and accuracy/amount

  20. The Impact of the Reliability of Teleinformation Systems on the Quality of Transmitted Information

    Directory of Open Access Journals (Sweden)

    Stawowy Marek

    2016-10-01

    Full Text Available The work describes the impact of reliability on information quality (IQ) in information and communication systems. Reliability is one of the component properties of IQ, alongside relevance, accuracy, timeliness, completeness, consistency, adequacy, accessibility, credibility, and congruence. Each of these components of IQ is independent, and to properly estimate the value of IQ one of the methods for modeling uncertainty must be used. In this article, we used a hybrid method that was developed jointly by one of the authors. This method is based on the mathematical theory of evidence known as Dempster-Shafer (DS) theory and on serial links of the dependent hybrid model, named IQ (hyb).
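
    As a minimal sketch of the evidence-theory machinery mentioned above, the code below implements Dempster's rule of combination for two mass functions over a tiny frame of discernment ("good" vs. "poor" quality); the masses are illustrative only and this is not the authors' serial hybrid IQ model.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: combine two mass functions given as {frozenset: mass}."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass assigned to contradictory focal elements
    if conflict >= 1.0:
        raise ValueError("total conflict; sources cannot be combined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two sources rating whether a data item is of "good" (G) or "poor" (P) quality.
G, P, GP = frozenset("G"), frozenset("P"), frozenset("GP")
m_accuracy   = {G: 0.7, P: 0.1, GP: 0.2}   # illustrative masses only
m_timeliness = {G: 0.6, P: 0.2, GP: 0.2}
print(combine(m_accuracy, m_timeliness))
```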

  1. ACCURACY ANALYSIS OF KINECT DEPTH DATA

    Directory of Open Access Journals (Sweden)

    K. Khoshelham

    2012-09-01

    Full Text Available This paper presents an investigation of the geometric quality of depth data obtained by the Kinect sensor. Based on the mathematical model of depth measurement by the sensor a theoretical error analysis is presented, which provides an insight into the factors influencing the accuracy of the data. Experimental results show that the random error of depth measurement increases with increasing distance to the sensor, and ranges from a few millimetres up to about 4 cm at the maximum range of the sensor. The accuracy of the data is also found to be influenced by the low resolution of the depth measurements.
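
    The distance dependence reported above can be illustrated with a small fit: for a triangulation-based depth sensor the random error is expected to grow roughly with the square of the distance, so the sketch below fits σ(Z) = a·Z² to hypothetical per-distance standard deviations (all numbers invented, chosen only to reproduce a figure of about 4 cm at maximum range).

```python
import numpy as np

# Hypothetical per-distance standard deviations of repeated Kinect depth readings (metres).
distance = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sigma    = np.array([0.0015, 0.0062, 0.014, 0.025, 0.040])

# For a triangulation sensor the random depth error grows roughly with Z^2,
# so fit sigma = a * Z**2 by least squares.
a = np.linalg.lstsq(distance[:, None] ** 2, sigma, rcond=None)[0][0]
print(f"sigma(Z) ≈ {a * 1000:.2f} mm * (Z in m)^2")
print(f"predicted random error at 5 m: {a * 5.0**2 * 100:.1f} cm")
```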

  2. Revolutionizing radiographic diagnostic accuracy in periodontics

    Directory of Open Access Journals (Sweden)

    Brijesh Sharma

    2016-01-01

    Full Text Available Effective diagnostic accuracy has in some way been the missing link between periodontal diagnosis and treatment. Most of the clinicians rely on the conventional two-dimensional (2D radiographs. But being a 2D image, it has its own limitations. 2D images at times can give an incomplete picture about the severity or type of disease and can further affect the treatment plan. Cone beam computed tomography (CBCT has a better potential for detecting periodontal bone defects with accuracy. The purpose here is to describe how CBCT imaging is beneficial in accurate diagnosis and will lead to a precise treatment plan.

  3. Improving Accuracy for Image Fusion in Abdominal Ultrasonography

    Directory of Open Access Journals (Sweden)

    Caroline Ewertsen

    2012-08-01

    Full Text Available Image fusion involving real-time ultrasound (US) is a technique where previously recorded computed tomography (CT) or magnetic resonance images (MRI) are reformatted in a projection to fit the real-time US images after an initial co-registration. The co-registration aligns the images by means of common planes or points. We evaluated the accuracy of the alignment when varying parameters such as patient position, respiratory phase, and distance from the co-registration points/planes. We performed a total of 80 co-registrations and obtained the highest accuracy when the respiratory phase for the co-registration procedure was the same as when the CT or MRI was obtained. Furthermore, choosing co-registration points/planes close to the area of interest also improved the accuracy. With all settings optimized, a mean error of 3.2 mm was obtained. We conclude that image fusion involving real-time US is an accurate method for abdominal examinations and that the accuracy is influenced by various adjustable factors that should be kept in mind.

  4. Safeguards Accountability Network accountability and materials management

    International Nuclear Information System (INIS)

    Carnival, G.J.; Meredith, E.M.

    1985-01-01

    The Safeguards Accountability Network (SAN) is a computerized on-line accountability system for the safeguards accountability control of nuclear materials inventories at Rocky Flats Plant. SAN is a dedicated accountability system utilizing source documents filled out on the shop floor as its base. The system incorporates double entry accounting and is developed around the Material Balance Area (MBA) concept. MBA custodians enter transaction information from source documents prepared by personnel in the process areas directly into the SAN system. This provides a somewhat near-real time perpetual inventory system which has limited interaction with MBA custodians. MBA custodians are permitted to inquire into the system and status items on inventory. They are also responsible for the accuracy of the accountability information used as input to the system for their MBA. Monthly audits by the Nuclear Materials Control group assure the timeliness and accuracy of SAN accountability information

  5. 31 CFR 10.22 - Diligence as to accuracy.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Diligence as to accuracy. 10.22... § 10.22 Diligence as to accuracy. (a) In general. A practitioner must exercise due diligence— (1) In... provided in §§ 10.34, 10.35, and 10.37, a practitioner will be presumed to have exercised due diligence for...

  6. [Clinical research IV. Relevancy of the statistical test chosen].

    Science.gov (United States)

    Talavera, Juan O; Rivas-Ruiz, Rodolfo

    2011-01-01

    When we look at the difference between two therapies or the association of a risk factor or prognostic indicator with its outcome, we need to evaluate the accuracy of the result. This assessment is based on a judgment that uses information about the study design and the statistical management of the information. This paper specifically addresses the relevance of the statistical test selected. Statistical tests are chosen mainly on the basis of two characteristics: the objective of the study and the type of variables. The objective can be divided into three test groups: a) those in which you want to show differences between groups or within a group before and after a maneuver, b) those that seek to show the relationship (correlation) between variables, and c) those that aim to predict an outcome. The types of variables are divided in two: quantitative (continuous and discontinuous) and qualitative (ordinal and dichotomous). For example, if we seek to demonstrate differences in age (quantitative variable) among patients with systemic lupus erythematosus (SLE) with and without neurological disease (two groups), the appropriate test is the Student t test for independent samples. But if the comparison is about the frequency of females (binomial variable), then the appropriate statistical test is the χ² test.
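
    The two examples in the abstract translate directly into code; the sketch below runs an independent-samples t test on a quantitative variable and a χ² test on a 2×2 table of a dichotomous variable, using invented data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Quantitative outcome, two independent groups -> Student's t test.
age_with_nd    = rng.normal(38, 9, size=40)   # SLE with neurological disease (hypothetical ages)
age_without_nd = rng.normal(34, 9, size=55)   # SLE without neurological disease
t, p_t = stats.ttest_ind(age_with_nd, age_without_nd)

# Dichotomous outcome (female yes/no) in two groups -> chi-squared test on a 2x2 table.
table = np.array([[34, 6],    # with neurological disease: females, males (hypothetical counts)
                  [45, 10]])  # without neurological disease
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)

print(f"t test:     t = {t:.2f}, p = {p_t:.3f}")
print(f"chi-square: chi2 = {chi2:.2f}, df = {dof}, p = {p_chi2:.3f}")
```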

  7. Indigenous Past Climate Knowledge as Cultural Built-in Object and Its Accuracy

    Directory of Open Access Journals (Sweden)

    Christian Leclerc

    2013-12-01

    Full Text Available In studying indigenous climate knowledge, two approaches can be envisioned. In the first, traditional knowledge is a cultural built-in object; conceived as a whole, its relevance can be assessed by referring to other cultural, economic, or technical components at work within an indigenous society. In the second, the accuracy of indigenous climate knowledge is assessed with western science knowledge used as an external reference. However, assessing the accuracy of indigenous climate knowledge remains a largely untapped area. We aim to show how accurate the culturally built indigenous climate knowledge of extreme climatic events is, and how amenable it is to fuzzy logic. A retrospective survey was carried out individually and randomly among 195 Eastern African farmers on climatic reasons for loss of on-farm crop diversity from 1961 to 2006. More than 3000 crop loss events were recorded, and reasons given by farmers were mainly related to droughts or heavy rainfall. Chi-square statistics computed by Monte Carlo simulations based on 999 replicates clearly rejected independence between indigenous knowledge of drought and heavy rainfall that occurred in the past and rainfall records. The fuzzy logic nature of indigenous climatic knowledge appears in the clear association of drought or heavy rainfall events, as perceived by farmers, with corresponding extreme rainfall values, contrasting with a fuzzy picture in the intermediate climatic situations. We discuss how the cultural built-in knowledge helps farmers in perceiving and remembering past climate variations, considering the specificity of the contexts where extreme climatic events were experienced. The integration of indigenous and scientific climate knowledge could allow development of drought monitoring that considers both climatic and contextual data.
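
    A chi-square test of independence with a Monte Carlo p-value from 999 replicates, as described above, can be sketched as a permutation test: rebuild individual-level pairs from a (here hypothetical) cross-tabulation, shuffle one variable, and compare the observed χ² statistic with the simulated null distribution.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

# Hypothetical cross-tabulation: farmer-reported cause of crop loss vs. rainfall record.
#                    record: dry   record: wet
observed = np.array([[120,  30],    # farmers reported drought
                     [ 25,  95]])   # farmers reported heavy rainfall

chi2_obs = chi2_contingency(observed, correction=False)[0]

# Rebuild individual-level pairs, then permute one variable 999 times to simulate
# the null hypothesis of independence (a simple Monte Carlo/permutation scheme).
rows = np.repeat([0, 0, 1, 1], observed.ravel())
cols = np.repeat([0, 1, 0, 1], observed.ravel())
chi2_null = []
for _ in range(999):
    perm = rng.permutation(cols)
    table = np.zeros((2, 2), dtype=int)
    np.add.at(table, (rows, perm), 1)
    chi2_null.append(chi2_contingency(table, correction=False)[0])

p_mc = (1 + np.sum(np.array(chi2_null) >= chi2_obs)) / (999 + 1)
print(f"observed chi2 = {chi2_obs:.1f}, Monte Carlo p = {p_mc:.3f}")
```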

  8. Final Report on the Audit of the Administration of the Contract Closeout Process at the Defense Contract Management Region, Dallas

    National Research Council Canada - National Science Library

    1990-01-01

    .... The audit was made from January to October 1989. The objectives of the audit were to determine the timeliness of the contract closeout process, the validity of unliquidated obligations on contracts awaiting closeout, and the timeliness...

  9. Astrophysical relevance of γ transition energies

    International Nuclear Information System (INIS)

    Rauscher, Thomas

    2008-01-01

    The relevant γ energy range is explicitly identified where additional γ strength must be located to have an impact on astrophysically relevant reactions. It is shown that folding the energy dependences of the transmission coefficients and the level density leads to maximal contributions for γ energies of 2 ≤ Eγ ≤ 4 unless quantum selection rules allow isolated states to contribute. Under this condition, electric dipole transitions dominate. These findings allow us to more accurately judge the relevance of modifications of the γ strength for astrophysics.

  10. The Personal Relevance of the Social Studies.

    Science.gov (United States)

    VanSickle, Ronald L.

    1990-01-01

    Conceptualizes a personal-relevance framework derived from Ronald L. VanSickle's five areas of life integrated with four general motivating goals from Abraham Maslow's hierarchy of needs and Richard and Patricia Schmuck's social motivation theory. Illustrates ways to apply the personal relevance framework to make social studies more relevant to…

  11. Accuracy of a wireless localization system for radiotherapy

    International Nuclear Information System (INIS)

    Balter, James M.; Wright, J. Nelson; Newell, Laurence J.; Friemel, Barry; Dimmer, Steven; Cheng, Yuki; Wong, John; Vertatschitsch, Edward; Mate, Timothy P.

    2005-01-01

    Purpose: A system has been developed for patient positioning based on real-time localization of implanted electromagnetic transponders (beacons). This study demonstrated the accuracy of the system before clinical trials. Methods and materials: We describe the overall system. The localization component consists of beacons and a source array. A rigid phantom was constructed to place the beacons at known offsets from a localization array. Tests were performed at distances of 80 and 270 mm from the array and at positions in the array plane of up to 8 cm offset. Tests were performed in air and saline to assess the effect of tissue conductivity and with multiple transponders to evaluate crosstalk. Tracking was tested using a dynamic phantom creating a circular path at varying speeds. Results: Submillimeter accuracy was maintained throughout all experiments. Precision was greater proximal to the source plane (σx = 0.006 mm, σy = 0.01 mm, σz = 0.006 mm), but continued to be submillimeter at the end of the designed tracking range at 270 mm from the array (σx = 0.27 mm, σy = 0.36 mm, σz = 0.48 mm). The introduction of saline and the use of multiple beacons did not affect accuracy. Submillimeter accuracy was maintained using the dynamic phantom at speeds of up to 3 cm/s. Conclusion: This system has demonstrated the accuracy needed for localization and monitoring of position during treatment

  12. Compact Intraoperative MRI: Stereotactic Accuracy and Future Directions.

    Science.gov (United States)

    Markowitz, Daniel; Lin, Dishen; Salas, Sussan; Kohn, Nina; Schulder, Michael

    2017-01-01

    Intraoperative imaging must supply data that can be used for accurate stereotactic navigation. This information should be at least as accurate as that acquired from diagnostic imagers. The aim of this study was to compare the stereotactic accuracy of an updated compact intraoperative MRI (iMRI) device based on a 0.15-T magnet to standard surgical navigation on a 1.5-T diagnostic scan MRI and to navigation with an earlier model of the same system. The accuracy of each system was assessed using a water-filled phantom model of the brain. Data collected with the new system were compared to those obtained in a previous study assessing the older system. The accuracy of the new iMRI was measured against standard surgical navigation on a 1.5-T MRI using T1-weighted (W) images. The mean error with the iMRI using T1W images was lower than that based on images from the 1.5-T scan (1.24 vs. 2.43 mm). T2W images from the newer iMRI yielded a lower navigation error than those acquired with the prior model (1.28 vs. 3.15 mm). Improvements in magnet design can yield progressive increases in accuracy, validating the concept of compact, low-field iMRI. Avoiding the need for registration between image and surgical space increases navigation accuracy. © 2017 S. Karger AG, Basel.

  13. Speed and accuracy of visual image discrimination by rats

    Directory of Open Access Journals (Sweden)

    Pamela eReinagel

    2013-12-01

    Full Text Available The trade-off between speed and accuracy of sensory discrimination has most often been studied using sensory stimuli that evolve over time, such as random dot motion discrimination tasks. We previously reported that when rats perform motion discrimination, correct trials have longer reaction times than errors, accuracy increases with reaction time, and reaction time increases with stimulus ambiguity. In such experiments, new sensory information is continually presented, which could partly explain interactions between reaction time and accuracy. The present study shows that a changing physical stimulus is not essential to those findings. Freely behaving rats were trained to discriminate between two static visual images in a self-paced, 2-alternative forced-choice (2AFC) reaction time task. Each trial was initiated by the rat, and the two images were presented simultaneously and persisted until the rat responded, with no time limit. Reaction times were longer in correct trials than in error trials, and accuracy increased with reaction time, comparable to results previously reported for rats performing motion discrimination. In the motion task, coherence has been used to vary discrimination difficulty. Here, morphs between the previously learned images were used to parametrically vary the image similarity. In randomly interleaved trials, rats took more time on average to respond in trials in which they had to discriminate more similar stimuli. For both the motion and image tasks, the dependence of reaction time on ambiguity is weak, as if rats prioritized speed over accuracy. Therefore, we asked whether rats can change the priority of speed and accuracy adaptively in response to a change in reward contingencies. For two rats, the penalty delay was increased from two to six seconds. When the penalty was longer, reaction times increased, and accuracy improved. This demonstrates that rats can flexibly adjust their behavioral strategy in response to the

  14. Sensitivity and accuracy of atomic absorption spectrophotometry for trace elements in marine biological samples

    International Nuclear Information System (INIS)

    Fukai, R.; Oregioni, B.

    1976-01-01

    During the course of 1974-75, atomic absorption spectrophotometry (AAS) was used extensively in our laboratory for measuring various trace elements in marine biological materials, in order to conduct homogeneity tests on the intercalibration samples for trace metal analysis as well as to obtain baseline data for trace elements in various kinds of marine organisms collected from different locations in the Mediterranean Sea. Several series of test experiments have been conducted on the current methodology in use in our laboratory to ensure satisfactory analytical performance in measuring a number of trace elements for which analytical problems have not been completely solved. Sensitivities of the techniques used were repeatedly checked for various elements, and the accuracy of the analyses was always critically evaluated by analyzing standard reference materials. The results of these test experiments have uncovered critical points relevant to the application of AAS to routine analysis

  15. ACCURACY ANALYSIS OF A LOW-COST PLATFORM FOR POSITIONING AND NAVIGATION

    Directory of Open Access Journals (Sweden)

    S. Hofmann

    2012-07-01

    Full Text Available This paper presents an accuracy analysis of a platform based on low-cost components for landmark-based navigation intended for research and teaching purposes. The proposed platform includes a LEGO MINDSTORMS NXT 2.0 kit, an Android-based Smartphone as well as a compact laser scanner Hokuyo URG-04LX. The robot is used in a small indoor environment, where GNSS is not available. Therefore, a landmark map was produced in advance, with the landmark positions provided to the robot. All steps of procedure to set up the platform are shown. The main focus of this paper is the reachable positioning accuracy, which was analyzed in this type of scenario depending on the accuracy of the reference landmarks and the directional and distance measuring accuracy of the laser scanner. Several experiments were carried out, demonstrating the practically achievable positioning accuracy. To evaluate the accuracy, ground truth was acquired using a total station. These results are compared to the theoretically achievable accuracies and the laser scanner’s characteristics.

  16. Relevance: An Interdisciplinary and Information Science Perspective

    Directory of Open Access Journals (Sweden)

    Howard Greisdorf

    2000-01-01

    Full Text Available Although relevance has represented a key concept in the field of information science for evaluating information retrieval effectiveness, the broader context established by interdisciplinary frameworks could provide greater depth and breadth to on-going research in the field. This work provides an overview of the nature of relevance in the field of information science with a cursory view of how cross-disciplinary approaches to relevance could represent avenues for further investigation into the evaluative characteristics of relevance as a means for enhanced understanding of human information behavior.

  17. Indicators of Accuracy of Consumer Health Information on the Internet

    Science.gov (United States)

    Fallis, Don; Frické, Martin

    2002-01-01

    Objectives: To identify indicators of accuracy for consumer health information on the Internet. The results will help lay people distinguish accurate from inaccurate health information on the Internet. Design: Several popular search engines (Yahoo, AltaVista, and Google) were used to find Web pages on the treatment of fever in children. The accuracy and completeness of these Web pages was determined by comparing their content with that of an instrument developed from authoritative sources on treating fever in children. The presence on these Web pages of a number of proposed indicators of accuracy, taken from published guidelines for evaluating the quality of health information on the Internet, was noted. Main Outcome Measures: Correlation between the accuracy of Web pages on treating fever in children and the presence of proposed indicators of accuracy on these pages. Likelihood ratios for the presence (and absence) of these proposed indicators. Results: One hundred Web pages were identified and characterized as “more accurate” or “less accurate.” Three indicators correlated with accuracy: displaying the HONcode logo, having an organization domain, and displaying a copyright. Many proposed indicators taken from published guidelines did not correlate with accuracy (e.g., the author being identified and the author having medical credentials) or inaccuracy (e.g., lack of currency and advertising). Conclusions: This method provides a systematic way of identifying indicators that are correlated with the accuracy (or inaccuracy) of health information on the Internet. Three such indicators have been identified in this study. Identifying such indicators and informing the providers and consumers of health information about them would be valuable for public health care. PMID:11751805
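
    For reference, the likelihood ratios mentioned above follow directly from the sensitivity and specificity of an indicator with respect to page accuracy; the sketch below uses invented counts for a single indicator.

```python
# Hypothetical 2x2 table for one indicator (e.g. "displays a copyright notice"):
#                      (more accurate pages, less accurate pages)
indicator_present = (35, 15)
indicator_absent  = (15, 35)

sens = indicator_present[0] / (indicator_present[0] + indicator_absent[0])   # P(present | accurate page)
spec = indicator_absent[1] / (indicator_present[1] + indicator_absent[1])    # P(absent | inaccurate page)

lr_positive = sens / (1 - spec)          # how much presence raises the odds that a page is accurate
lr_negative = (1 - sens) / spec          # how much absence lowers the odds that a page is accurate
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
print(f"LR+ = {lr_positive:.2f}, LR- = {lr_negative:.2f}")
```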

  18. On the Accuracy of Language Trees

    Science.gov (United States)

    Pompei, Simone; Loreto, Vittorio; Tria, Francesca

    2011-01-01

    Historical linguistics aims at inferring the most likely language phylogenetic tree starting from information concerning the evolutionary relatedness of languages. The available information are typically lists of homologous (lexical, phonological, syntactic) features or characters for many different languages: a set of parallel corpora whose compilation represents a paramount achievement in linguistics. From this perspective the reconstruction of language trees is an example of inverse problems: starting from present, incomplete and often noisy, information, one aims at inferring the most likely past evolutionary history. A fundamental issue in inverse problems is the evaluation of the inference made. A standard way of dealing with this question is to generate data with artificial models in order to have full access to the evolutionary process one is going to infer. This procedure presents an intrinsic limitation: when dealing with real data sets, one typically does not know which model of evolution is the most suitable for them. A possible way out is to compare algorithmic inference with expert classifications. This is the point of view we take here by conducting a thorough survey of the accuracy of reconstruction methods as compared with the Ethnologue expert classifications. We focus in particular on state-of-the-art distance-based methods for phylogeny reconstruction using worldwide linguistic databases. In order to assess the accuracy of the inferred trees we introduce and characterize two generalizations of standard definitions of distances between trees. Based on these scores we quantify the relative performances of the distance-based algorithms considered. Further we quantify how the completeness and the coverage of the available databases affect the accuracy of the reconstruction. Finally we draw some conclusions about where the accuracy of the reconstructions in historical linguistics stands and about the leading directions to improve it. PMID:21674034

  19. On the accuracy of language trees.

    Directory of Open Access Journals (Sweden)

    Simone Pompei

    Full Text Available Historical linguistics aims at inferring the most likely language phylogenetic tree starting from information concerning the evolutionary relatedness of languages. The available information are typically lists of homologous (lexical, phonological, syntactic features or characters for many different languages: a set of parallel corpora whose compilation represents a paramount achievement in linguistics. From this perspective the reconstruction of language trees is an example of inverse problems: starting from present, incomplete and often noisy, information, one aims at inferring the most likely past evolutionary history. A fundamental issue in inverse problems is the evaluation of the inference made. A standard way of dealing with this question is to generate data with artificial models in order to have full access to the evolutionary process one is going to infer. This procedure presents an intrinsic limitation: when dealing with real data sets, one typically does not know which model of evolution is the most suitable for them. A possible way out is to compare algorithmic inference with expert classifications. This is the point of view we take here by conducting a thorough survey of the accuracy of reconstruction methods as compared with the Ethnologue expert classifications. We focus in particular on state-of-the-art distance-based methods for phylogeny reconstruction using worldwide linguistic databases. In order to assess the accuracy of the inferred trees we introduce and characterize two generalizations of standard definitions of distances between trees. Based on these scores we quantify the relative performances of the distance-based algorithms considered. Further we quantify how the completeness and the coverage of the available databases affect the accuracy of the reconstruction. Finally we draw some conclusions about where the accuracy of the reconstructions in historical linguistics stands and about the leading directions to improve

  20. Potential of accuracy profile for method validation in inductively coupled plasma spectrochemistry

    International Nuclear Information System (INIS)

    Mermet, J.M.; Granier, G.

    2012-01-01

    Method validation is usually performed over a range of concentrations for which analytical criteria must be verified. One important criterion in quantitative analysis is accuracy, i.e. the contribution of both trueness and precision. The study of accuracy over this range is called an accuracy profile and provides experimental tolerance intervals. Comparison with acceptability limits fixed by the end user defines a validity domain. This work describes the computation involved in the building of the tolerance intervals, particularly for the intermediate precision with within-laboratory experiments and for the reproducibility with interlaboratory studies. Computation is based on ISO 5725‐4 and on previously published work. Moreover, the bias uncertainty is also computed to verify the bias contribution to accuracy. The various types of accuracy profile behavior are exemplified with results obtained by using ICP-MS and ICP-AES. This procedure allows the analyst to define unambiguously a validity domain for a given accuracy. However, because the experiments are time-consuming, the accuracy profile method is mainly dedicated to method validation. - Highlights: ► An analytical method is defined by its accuracy, i.e. both trueness and precision. ► The accuracy as a function of an analyte concentration is an accuracy profile. ► Profile basic concepts are explained for trueness and intermediate precision. ► Profile-based tolerance intervals have to be compared with acceptability limits. ► Typical accuracy profiles are given for both ICP-AES and ICP-MS techniques.

  1. Diagnostic Accuracy of Imaging Modalities and Injection Techniques for the Diagnosis of Femoroacetabular Impingement/Labral Tear

    DEFF Research Database (Denmark)

    Reiman, Michael P.; Thorborg, Kristian; Goode, Adam P.

    2017-01-01

    Background: Diagnosing femoroacetabular impingement/acetabular labral tear (FAI/ALT) and subsequently making a decision regarding surgery are based primarily on diagnostic imaging and intra-articular hip joint injection techniques of unknown accuracy. Purpose: Summarize and evaluate the diagnostic...... probability of disease was demonstrated. Positive imaging findings increased the probability that a labral tear existed by a minimal to small degree with the use of magnetic resonance imaging/magnetic resonance angiogram (MRI/MRA) and ultrasound (US) and by a moderate degree for CTA. Negative imaging findings...... decreased the probability that a labral tear existed by a minimal degree with the use of MRI and US, a small to moderate degree with MRA, and a moderate degree with CTA. Clinical Relevance: Although findings of the included studies suggested potentially favorable use of these modalities for the diagnosis...

  2. Effects of cognitive training on change in accuracy in inductive reasoning ability.

    Science.gov (United States)

    Boron, Julie Blaskewicz; Turiano, Nicholas A; Willis, Sherry L; Schaie, K Warner

    2007-05-01

    We investigated cognitive training effects on accuracy and number of items attempted in inductive reasoning performance in a sample of 335 older participants (M = 72.78 years) from the Seattle Longitudinal Study. We assessed the impact of individual characteristics, including chronic disease. The reasoning training group showed significantly greater gain in accuracy and number of attempted items than did the comparison group; gain was primarily due to enhanced accuracy. Reasoning training effects involved a complex interaction of gender, prior cognitive status, and chronic disease. Women with prior decline on reasoning but no heart disease showed the greatest accuracy increase. In addition, stable reasoning-trained women with heart disease demonstrated significant accuracy gain. Comorbidity was associated with less change in accuracy. The results support the effectiveness of cognitive training on improving the accuracy of reasoning performance.

  3. Image Positioning Accuracy Analysis for Super Low Altitude Remote Sensing Satellites

    Directory of Open Access Journals (Sweden)

    Ming Xu

    2012-10-01

    Full Text Available Super low altitude remote sensing satellites maintain lower flight altitudes by means of ion propulsion in order to improve image resolution and positioning accuracy. The use of engineering data in design for achieving image positioning accuracy is discussed in this paper based on the principles of photogrammetry theory. Exact rebuilding of the line of sight of each detection element, and the precise intersection of this direction with the Earth's ellipsoid while the camera on the satellite is imaging, are both ensured by the combined design of key parameters. These parameters include: orbit determination accuracy, attitude determination accuracy, camera exposure time, accurately synchronizing the reception of ephemeris with attitude data, geometric calibration and precise orbit verification. Precise simulation calculations show that the image positioning accuracy of super low altitude remote sensing satellites is not obviously improved. The attitude determination error of a satellite still restricts its positioning accuracy.

  4. Classification Accuracy Is Not Enough

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    A recent review of the research literature evaluating music genre recognition (MGR) systems over the past two decades shows that most works (81%) measure the capacity of a system to recognize genre by its classification accuracy. We show here, by implementing and testing three categorically...

  5. Coorientational Accuracy and Differentiation in the Management of Conflict.

    Science.gov (United States)

    Papa, Michael J.; Pood, Elliott A.

    1988-01-01

    Investigates the relationship between coorientational accuracy and differentiation time and two dimensions of conflict (interaction satisfaction and assertiveness of influence strategies). Suggests that entering a conflict with high coorientational accuracy leads to less differentiation and fewer assertive strategies during the confrontation and…

  6. Quantitative coronary CT angiography: absolute lumen sizing rather than %stenosis predicts hemodynamically relevant stenosis

    Energy Technology Data Exchange (ETDEWEB)

    Plank, Fabian [Innsbruck Medical University, Department of Radiology, Innsbruck (Austria); Innsbruck Medical University, Department of Internal Medicine III - Cardiology, Innsbruck (Austria); Burghard, Philipp; Mayr, Agnes; Klauser, Andrea; Feuchtner, Gudrun [Innsbruck Medical University, Department of Radiology, Innsbruck (Austria); Friedrich, Guy; Dichtl, Wolfgang [Innsbruck Medical University, Department of Internal Medicine III - Cardiology, Innsbruck (Austria); Wolf, Florian [Vienna Medical University, Department of Cardiovascular and Interventional Radiology, Vienna (Austria)

    2016-11-15

    To identify the most accurate quantitative coronary stenosis parameter by CTA for prediction of functional significant coronary stenosis resulting in coronary revascularization. 160 consecutive patients were prospectively examined with CTA. Proximal coronary stenosis was quantified by minimal lumen area (MLA) and minimal lumen diameter (MLD), %area and %diameter stenosis. Lesion length (LL) was measured. The reference standard was invasive coronary angiography (ICA) (>70 % stenosis, FFR <0.8). 210 coronary segments were included (59 % positive). MLA of ≤1.8 mm² was identified as the optimal cut-off (c = 0.97, p < 0.001; 95 % CI 0.94-0.99) (sensitivity 90.9 %, specificity 89.3 %) for prediction of functional-relevant stenosis (for MLA >2.1 mm² sensitivity was 100 %). The optimal cut-off for MLD was 1.2 mm (c = 0.92; p < 0.001; 95 % CI 0.88-0.95) (sensitivity 90.9 %, specificity 85.2 %), while %area and %diameter stenosis were less accurate (c = 0.89; 95 % CI 0.84-0.93, c = 0.87; 95 % CI 0.82-0.92, respectively, with thresholds at 73 % and 61 % stenosis). Accuracy for LL was c = 0.74 (95 % CI 0.67-0.81), and for the LL/MLA and LL/MLD ratios c = 0.90 and c = 0.84. MLA ≤1.8 mm² and MLD ≤1.2 mm are the most accurate cut-offs for prediction of haemodynamically significant stenosis by ICA, with a higher accuracy than relative % stenosis. (orig.)
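
    A hedged sketch of how such a cut-off and c-statistic can be obtained follows: an ROC analysis on synthetic MLA values (smaller MLA meaning more disease), with the operating point chosen by the Youden index. The numbers are invented and do not reproduce the study's measurements.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Synthetic minimal lumen areas (mm^2): stenotic lesions tend to have smaller MLA.
mla_negative = rng.normal(3.5, 1.0, 120).clip(0.5)   # no functionally relevant stenosis
mla_positive = rng.normal(1.4, 0.5, 120).clip(0.2)   # functionally relevant stenosis
mla = np.concatenate([mla_negative, mla_positive])
y   = np.concatenate([np.zeros(120), np.ones(120)])

# Smaller MLA means more disease, so use -MLA as the classification score.
auc = roc_auc_score(y, -mla)
fpr, tpr, thresholds = roc_curve(y, -mla)
best = np.argmax(tpr - fpr)                    # Youden index picks the operating point
print(f"c-statistic = {auc:.2f}, optimal cutoff: MLA <= {-thresholds[best]:.2f} mm^2")
print(f"sensitivity {tpr[best]:.2%}, specificity {1 - fpr[best]:.2%}")
```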

  7. [SOX10 mutation is relevant to inner ear malformation in patients with Waardenburg syndrome].

    Science.gov (United States)

    Xu, G Y; Hao, Q Q; Zhong, L L; Ren, W; Yan, Y; Liu, R Y; Li, J N; Guo, W W; Zhao, H; Yang, S M

    2016-11-07

    Objective: To determine the association between the SOX10 mutation and inner ear abnormality in Waardenburg syndrome (WS) by analyzing the inner ear imaging results and the molecular genetic results of WS patients with the SOX10 mutation. Methods: This study included 36 WS inpatients treated between 2001 and 2015 in the Department of Otorhinolaryngology Head and Neck Surgery, Chinese People's Liberation Army General Hospital. The condition of the inner ear of each patient was assessed by analyzing HRCT scans of the temporal bone and MRI scans of the brain and internal auditory canal. Meanwhile, the possible pathogenic genes of WS, including SOX10, MITF, and PAX3, were also screened. Patients were divided into two groups according to SOX10 mutation status. Fisher's exact test was used to determine the statistical difference in the incidence of inner ear deformation between the two groups. Results: Among all 36 patients, 12 were found to have inner ear abnormality. Most abnormalities were posterior semicircular canal deformations, some accompanied by cochlear deformation and an enlarged vestibule. Nine patients were SOX10 heterozygous mutation carriers, of whom six showed bilateral inner ear abnormality. Fisher's exact test suggested a significant association between the SOX10 mutation and inner ear abnormality in WS patients (P = 0.036). Conclusion: This study found that WS patients with the SOX10 mutation are more likely to have deformed inner ears than WS patients without the SOX10 mutation.
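
    The reported association can be illustrated with Fisher's exact test on a 2×2 table reconstructed from the counts in the abstract (6 of 9 carriers and 6 of 27 non-carriers with inner ear abnormality); this arrangement of the table is an assumption.

```python
from scipy.stats import fisher_exact

#              abnormal inner ear   normal inner ear
table = [[ 6,  3],    # SOX10 mutation carriers (9 patients)
         [ 6, 21]]    # non-carriers (27 patients)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, Fisher exact p = {p_value:.3f}")
```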

  8. Explaining citizens’ perceptions of international climate-policy relevance

    International Nuclear Information System (INIS)

    Schleich, Joachim; Faure, Corinne

    2017-01-01

    This paper empirically analyses the antecedents of citizens’ perceptions of the relevance of international climate policy. Its use of representative surveys in the USA, China and Germany controls for different environmental attitudes and socio-economic factors between countries. The findings of the micro-econometric analysis suggest that the perceived relevance of international climate policy is positively affected by its perceived effectiveness, approval of the key topics discussed at international climate conferences, and environmental attitudes, but is not affected by perceived procedural justice. A higher level of perceived trust in international climate policy was positively related to perceived relevance in the USA and in China, but not in Germany. Citizens who felt that they were well informed and that their position was represented at climate summits were more likely to perceive international climate policy as relevant in China in particular. Generally, the results show only weak evidence of socio-demographic effects. - Highlights: • Perceptions of climate-policy relevance increase with perceptions of effectiveness. • In China and the USA, trust increases perceptions of climate-policy relevance. • Environmental attitudes are related to perceptions of climate-policy relevance. • In China, well-informed citizens perceive climate policy as more relevant. • Socio-demographics only weakly affect perceptions of climate-policy relevance.

  9. Using the Relevance Vector Machine Model Combined with Local Phase Quantization to Predict Protein-Protein Interactions from Protein Sequences

    Directory of Open Access Journals (Sweden)

    Ji-Yong An

    2016-01-01

    Full Text Available We propose a novel computational method known as RVM-LPQ that combines the Relevance Vector Machine (RVM) model and Local Phase Quantization (LPQ) to predict PPIs from protein sequences. The main improvements are the results of representing protein sequences using the LPQ feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using a Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM)-based classifier. We perform 5-fold cross-validation experiments on Yeast and Human datasets, and we achieve very high accuracies of 92.65% and 97.62%, respectively, which is significantly better than previous works. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the Yeast dataset. The experimental results demonstrate that our RVM-LPQ method is obviously better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can be an automatic decision support tool for future proteomics research.

  10. Systematic bias of correlation coefficient may explain negative accuracy of genomic prediction.

    Science.gov (United States)

    Zhou, Yao; Vales, M Isabel; Wang, Aoxue; Zhang, Zhiwu

    2017-09-01

    Accuracy of genomic prediction is commonly calculated as the Pearson correlation coefficient between the predicted and observed phenotypes in the inference population by using cross-validation analysis. More frequently than expected, significant negative accuracies of genomic prediction have been reported in genomic selection studies. These negative values are surprising, given that the minimum value for prediction accuracy should hover around zero when randomly permuted data sets are analyzed. We reviewed the two common approaches for calculating the Pearson correlation and hypothesized that these negative accuracy values reflect potential bias owing to artifacts caused by the mathematical formulas used to calculate prediction accuracy. The first approach, Instant accuracy, calculates correlations for each fold and reports prediction accuracy as the mean of correlations across folds. The other approach, Hold accuracy, predicts all phenotypes in all folds and calculates the correlation between the observed and predicted phenotypes at the end of the cross-validation process. Using simulated and real data, we demonstrated that our hypothesis is true. Both approaches are biased downward under certain conditions. The biases become larger when more folds are employed and when the expected accuracy is low. The bias of Instant accuracy can be corrected using a modified formula. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
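
    The two definitions contrast as follows in code: "Instant" accuracy averages one correlation per fold, while "Hold" accuracy computes a single correlation over the pooled held-out predictions. The sketch uses a generic ridge regressor on synthetic data with a weak signal; it is not the genomic model used in the study.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p = 200, 500
X = rng.standard_normal((n, p))
y = X[:, :10].sum(axis=1) + 5.0 * rng.standard_normal(n)   # weak signal, low expected accuracy

fold_r, y_hat = [], np.empty(n)
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = Ridge(alpha=100.0).fit(X[train], y[train])
    pred = model.predict(X[test])
    y_hat[test] = pred
    fold_r.append(np.corrcoef(pred, y[test])[0, 1])         # one correlation per fold

instant = np.mean(fold_r)                                   # "Instant": mean of per-fold correlations
hold = np.corrcoef(y_hat, y)[0, 1]                          # "Hold": one correlation on pooled predictions
print(f"Instant accuracy = {instant:.3f}, Hold accuracy = {hold:.3f}")
```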

  11. Assessment of the thematic accuracy of land cover maps

    DEFF Research Database (Denmark)

    Høhle, Joachim

    2015-01-01

    Several land cover maps are generated from aerial imagery and assessed by different approaches. The test site is an urban area in Europe for which six classes (‘building’, ‘hedge and bush’, ‘grass’, ‘road and parking lot’, ‘tree’, ‘wall and car port’) had to be derived. Two classification methods were applied (‘Decision Tree’ and ‘Support Vector Machine’) using only two attributes (height above ground and normalized difference vegetation index) which both are derived from the images. The assessment of the thematic accuracy applied a stratified design and was based on accuracy measures... methods perform equally for five classes. Trees are classified with a much better accuracy and a smaller confidence interval by means of the decision tree method. Buildings are classified by both methods with an accuracy of 99% (95% CI: 95%-100%) using independent 3D checkpoints. The average width...

  12. Systematic Review and Meta-Analysis of Diagnostic Accuracy of Serum Refractometry and Brix Refractometry for the Diagnosis of Inadequate Transfer of Passive Immunity in Calves.

    Science.gov (United States)

    Buczinski, S; Gicquel, E; Fecteau, G; Takwoingi, Y; Chigerwe, M; Vandeweerd, J M

    2018-01-01

    Transfer of passive immunity in calves can be assessed by direct measurement of immunoglobulin G (IgG) by methods such as radial immunodiffusion (RID) or turbidimetric immunoassay (TIA). IgG can also be measured indirectly by methods such as serum refractometry (REF) or Brix refractometry (BRIX). To determine the accuracy of REF and BRIX for assessment of inadequate transfer of passive immunity (ITPI) in calves. Systematic review and meta-analysis of diagnostic accuracy studies. Databases (PubMed and CAB Abstract, Searchable Proceedings of Animal Science) and Google Scholar were searched for relevant studies. Studies were eligible if the accuracy (sensitivity and specificity) of REF or BRIX was determined using direct measurement of IgG by RID or turbidimetry as the reference standard. The study population included calves. Data on the accuracy of refractometry, including the optimal cutoff, are sparse (especially for BRIX). When using REF to rule out ITPI in herds, the 5.5 g/dL cutoff may be used, whereas for ruling in ITPI, the 5.2 g/dL cutoff may be used. Copyright © 2017 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.

  13. [The effect of profitability and public ownership on the timeliness of financial statement submission]

    Directory of Open Access Journals (Sweden)

    Denny Andriana

    2015-08-01

    Full Text Available This study aims to determine the effect of profitability and public ownership on the timeliness of financial reporting. The sample was determined using a purposive sampling method, yielding a total of 363 companies listed on the Indonesia Stock Exchange for the period 2011 to 2013. The data analysis technique used in this study is logistic regression. The test results show that profitability significantly influences the timeliness of financial statement submission, while public ownership has no significant influence on the timeliness of financial statement submission.
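
    A minimal sketch of the logistic regression described above follows, with timeliness (on-time = 1) regressed on profitability and public ownership; the firm-level data and the assumed coefficients are synthetic, and statsmodels is used so that coefficient p-values are reported.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 363                                            # sample size reported in the abstract

# Synthetic firm-year data: return on assets and fraction of publicly held shares.
roa    = rng.normal(0.05, 0.08, n)
public = rng.uniform(0.0, 0.6, n)
logit_p = -0.5 + 8.0 * roa + 0.3 * public          # assumed relationship, for illustration only
on_time = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([roa, public]))
fit = sm.Logit(on_time, X).fit(disp=False)
print(fit.summary(xname=["const", "profitability", "public_ownership"]))
```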

  14. Impact of the frequency of online verifications on the patient set-up accuracy and set-up margins

    Directory of Open Access Journals (Sweden)

    Mohamed Adel

    2011-08-01

    Full Text Available Purpose: The purpose of the study was to evaluate the patient set-up error of different anatomical sites, to estimate the effect of different frequencies of online verifications on the patient set-up accuracy, and to calculate margins to accommodate for the patient set-up error (ICRU set-up margin, SM). Methods and materials: Alignment data of 148 patients treated with inversely planned intensity-modulated radiotherapy (IMRT) or three-dimensional conformal radiotherapy (3D-CRT) of the head and neck (n = 31), chest (n = 72), abdomen (n = 15), and pelvis (n = 30) were evaluated. The patient set-up accuracy was assessed using orthogonal megavoltage electronic portal images of 2328 fractions of 173 planning target volumes (PTV). In 25 patients, two PTVs were analyzed, where the PTVs were located in different anatomical sites and treated in two different radiotherapy courses. The patient set-up error and the corresponding SM were retrospectively determined assuming no online verification, online verification once a week, and online verification every other day. Results: The SM could be effectively reduced with increasing frequency of online verifications. However, a significant frequency of relevant set-up errors remained even after online verification every other day. For example, residual set-up errors larger than 5 mm were observed on average in 18% to 27% of all fractions of patients treated in the chest, abdomen and pelvis, and in 10% of fractions of patients treated in the head and neck after online verification every other day. Conclusion: In patients where high set-up accuracy is desired, daily online verification is highly recommended.

  15. Data reporting constraints for the lymphatic filariasis mass drug administration activities in two districts in Ghana: A qualitative study

    Directory of Open Access Journals (Sweden)

    Frances Baaba da-Costa Vroom

    2015-07-01

    Full Text Available Objectives: Timely and accurate health data are important for objective decision making and policy formulation. However, little evidence exists to explain why poor quality routine health data persist. This study examined the constraints to data reporting for the lymphatic filariasis mass drug administration programme in two districts in Ghana. This qualitative study focused on the timeliness and accuracy of mass drug administration reports submitted by community health volunteers. Methods: The study is nested within a larger study focusing on the feasibility of mobile phone technology for the lymphatic filariasis programme. Using an exploratory study design, data were obtained through in-depth interviews (n = 7) with programme supervisors and focus group discussions (n = 4) with community health volunteers. Results were analysed using thematic content analysis. Results: Reasons for delays in reporting were attributed to poor numeracy skills among community health volunteers, difficult physical access to communities, high supervisor workload, poor adherence to reporting deadlines, difficulty in reaching communities within the allocated time and untimely release of programme funds. Poor accuracy of data was mainly attributed to inadequate motivation for community health volunteers and difficulty calculating summaries. Conclusion: This study has shown that there are relevant issues that need to be addressed in order to improve the quality of lymphatic filariasis treatment coverage reports. Some of the factors identified are problems within the health system; others are specific to the community health volunteers and the lymphatic filariasis programme. Steps such as training on data reporting should be intensified for community health volunteers, allowances for community health volunteers should be re-evaluated and other non-monetary incentives should be provided for community health volunteers.

  16. Technique for Increasing Accuracy of Positioning System of Machine Tools

    Directory of Open Access Journals (Sweden)

    Sh. Ji

    2014-01-01

    Full Text Available The aim of this research is to improve the accuracy of the positioning and processing system using a technique for optimization of the pressure diagrams of guides in machine tools. The machining quality is directly related to the machine's accuracy, which characterizes the impact of various machine errors. The accuracy of the positioning system is one of the most significant machining characteristics, which allows accuracy evaluation of processed parts. The literature describes that the working area of the machine layout is rather informative for characterizing the effect of the positioning system on the macro-geometry of the part surfaces to be processed. To enhance the static accuracy of the studied machine, in principle, two groups of measures are possible. One of them aims at decreasing the cutting force component that overturns the slider moments. Another group of measures is related to changing the sizes of the guide facets, which may lead to a change of their pressure profile. The study was based on mathematical modeling and optimization of the cutting zone coordinates, and a formula was derived to determine the surface pressure of the guides. The selected optimization parameters are the cutting force vector and the dimensions of the slides and guides. The obtained results show that a technique for optimization of the coordinates of the cutting zone is necessary to increase processing accuracy. The research has established that, to define the optimal coordinates of the cutting zone, the sizes of the slides and the value and coordinates of the applied forces have to be changed, equalizing the pressure and improving the accuracy of the positioning system of machine tools. A vector of forces is applied at different points of the workspace, pressure diagrams that take into account the changes in the parameters of the positioning system are found, and the pressure diagram equalization that provides the highest accuracy of the machine tool is achieved.

  17. Accuracy improvement of irradiation data by combining ground and satellite measurements

    Energy Technology Data Exchange (ETDEWEB)

    Betcke, J. [Energy and Semiconductor Research Laboratory, Carl von Ossietzky University, Oldenburg (Germany); Beyer, H.G. [Department of Electrical Engineering, University of Applied Science (F.H.) Magdeburg-Stendal, Magdeburg (Germany)

    2004-07-01

    Accurate and site-specific irradiation data are essential input for optimal planning, monitoring and operation of solar energy technologies. A concrete example is the performance check of grid-connected PV systems with the PVSAT-2 procedure. This procedure detects system faults at an early stage by a daily comparison of an individual reference yield with the actual yield. Calculation of the reference yield requires hourly irradiation data with a known accuracy. A field test of the predecessor PVSAT-1 procedure showed that the accuracy of the irradiation input is the determining factor for the overall accuracy of the yield calculation. In this paper we will investigate whether it is possible to improve the accuracy of site-specific irradiation data by combining accurate localised pyranometer data with semi-continuous satellite data. We will therefore introduce the 'Kriging of Differences' data fusion method. Kriging of Differences also offers the possibility to estimate its own accuracy. The obtainable accuracy gain and the effectiveness of the accuracy prediction will be investigated by validation on monthly and daily irradiation datasets. Results will be compared with the Heliosat method and interpolation of ground data. (orig.)
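The core idea behind "Kriging of Differences", interpolating the ground-minus-satellite residuals at the pyranometer stations and adding the interpolated residual field back onto the satellite estimate, can be sketched as follows. This is a toy ordinary-kriging implementation with an assumed exponential covariance model and made-up station data; the operational method and its accuracy estimation are considerably more involved.

```python
import numpy as np

def exp_cov(h, sill=1.0, corr_range=50.0):
    """Assumed exponential covariance model (distances in km)."""
    return sill * np.exp(-h / corr_range)

def krige_residual(stations, residuals, target, sill=1.0, corr_range=50.0):
    """Ordinary kriging of ground-minus-satellite residuals at one target point."""
    n = len(stations)
    d = np.linalg.norm(stations[:, None, :] - stations[None, :, :], axis=2)
    # Kriging system with a Lagrange multiplier for the unbiasedness constraint.
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = exp_cov(d, sill, corr_range)
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    b = np.zeros(n + 1)
    b[:n] = exp_cov(np.linalg.norm(stations - target, axis=1), sill, corr_range)
    b[n] = 1.0
    w = np.linalg.solve(A, b)[:n]
    return w @ residuals

# Hypothetical station coordinates (km), ground and satellite irradiation (kWh/m^2).
stations = np.array([[0.0, 0.0], [30.0, 10.0], [10.0, 40.0], [50.0, 50.0]])
ground = np.array([4.8, 5.1, 4.6, 5.3])
satellite_at_stations = np.array([4.5, 5.4, 4.9, 5.0])
residuals = ground - satellite_at_stations

target = np.array([25.0, 25.0])   # site of interest
satellite_at_target = 5.0         # satellite estimate at that site
corrected = satellite_at_target + krige_residual(stations, residuals, target)
print(f"corrected irradiation at target: {corrected:.2f} kWh/m^2")
```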

  18. Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image

    Science.gov (United States)

    Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.

    2018-04-01

    At present, in the inspection and acceptance of high spatial resolution remotely sensed orthophoto images, horizontal accuracy detection tests and evaluates the accuracy of the images, mostly based on a set of testing points with the same accuracy and reliability. However, it is difficult to obtain such a set of testing points in areas where field measurement is difficult and high-accuracy reference data are scarce, and consequently difficult to test and evaluate the horizontal accuracy of the orthophoto image. The uncertainty of the horizontal accuracy has become a bottleneck for the application of satellite-borne high-resolution remote sensing imagery and for expanding its scope of service. Therefore, this paper proposes a new method to test the horizontal accuracy of orthophoto images. This method uses testing points with different accuracy and reliability, derived from high-accuracy reference data and field measurements. The new method solves the horizontal accuracy detection of the orthophoto image in difficult areas and provides the basis for supplying reliable orthophoto images to users.

  19. Accuracy of MFCC-Based Speaker Recognition in Series 60 Device

    Directory of Open Access Journals (Sweden)

    Pasi Fränti

    2005-10-01

    Full Text Available A fixed-point implementation of speaker recognition based on MFCC signal processing is considered. We analyze the numerical error of the MFCC and its effect on the recognition accuracy. Techniques to reduce the information loss in a converted fixed-point implementation are introduced. We increase the signal processing accuracy by adjusting the ratio of the presentation accuracy of the operators and the signal. The signal processing error is found to be more important to the speaker recognition accuracy than the error in the classification algorithm. The results are verified by applying the alternative technique to speech data. We also discuss the specific programming requirements set by Symbian and the Series 60 platform.
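To give a feel for the kind of representation error analyzed here, the sketch below quantizes a signal to Q15 fixed point and measures the resulting signal-to-quantization-noise ratio. It is a generic illustration with synthetic data, not the Series 60 MFCC implementation itself.

```python
import numpy as np

def to_q15(x):
    """Quantize a float signal in [-1, 1) to Q15 fixed point and back."""
    q = np.clip(np.round(x * 32768.0), -32768, 32767)
    return q / 32768.0

rng = np.random.default_rng(0)
# Synthetic 1-second "speech-like" signal: a tone plus a little noise.
signal = 0.5 * np.sin(2 * np.pi * 440 * np.arange(16000) / 16000.0) \
         + 0.05 * rng.standard_normal(16000)
signal = np.clip(signal, -1.0, 1.0 - 2 ** -15)

quantized = to_q15(signal)
noise = quantized - signal
sqnr_db = 10 * np.log10(np.sum(signal ** 2) / np.sum(noise ** 2))
print(f"signal-to-quantization-noise ratio: {sqnr_db:.1f} dB")
```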

  20. National inventory of Global Change relevant research in Norway; Nasjonal kartlegging av global change-relevant forskning

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-05-01

    The Norwegian Global Change Committee has made an inventory of global change research (GCR) projects funded by the Research Council of Norway (RCN) in 2001. In the absence of a rigid definition, GCR was defined as research that can be considered relevant to the science agenda of the four major international global change programmes DIVERSITAS, IGBP, IHDP and WCRP. Relevance was judged based on the objectives stated for each of the international programmes and their core projects. No attempt was made to check whether the projects had any kind of link to the programmes for which they were considered relevant. The grants provided by the RCN in 2001 to GCR as defined above amount to about 77 mill. NOK. Based on a recent survey on climate change research it is reasonable to estimate that the RCN finances between 30 and 40% of all GCR in Norway. Accordingly, the total value of Norwegian research relevant to the four international global change programmes in 2001 can be estimated at 192 - 254 mill. NOK.

  1. Do Shared Interests Affect the Accuracy of Budgets?

    Directory of Open Access Journals (Sweden)

    Ilse Maria Beuren

    2015-04-01

    Full Text Available The creation of budgetary slack is a phenomenon associated with various behavioral aspects. This study focuses on accuracy in budgeting when the benefit of the slack is shared between the unit manager and his/her assistant. In this study, accuracy is measured by the level of slack in the budget, and the benefit of slack represents a financial consideration for the manager and the assistant. The study aims to test how shared interests in budgetary slack affect the accuracy of budget reports in an organization. To this end, an experimental study was conducted with a sample of 90 employees in management and other leadership positions at a cooperative that has a variable compensation plan based on the achievement of organizational goals. The experiment conducted in this study is based on the study of Church, Hannan and Kuang (2012), which was conducted with a sample of undergraduate students in the United States, and a quantitative approach was used to analyze the results. In the first part of the experiment, the results show that when budgetary slack is not shared, managers tend to create greater slack when the assistant is not aware of the creation of slack; these managers thus generate a lower accuracy index than managers whose assistants are aware of the creation of slack. When budgetary slack is shared, there is higher average slack when the assistant is aware of the creation of slack. In the second part of the experiment, the accuracy index is higher for managers who prepare the budget with the knowledge that their assistants prefer larger slack values. However, the accuracy level differs between managers who know that their assistants prefer maximizing slack values and managers who do not know their assistants' preference regarding slack. These results contribute to the literature by presenting evidence of managers' behavior in the creation of budgetary slack in scenarios in which they share the benefits of slack with their assistants.

  2. Impact of consensus contours from multiple PET segmentation methods on the accuracy of functional volume delineation

    Energy Technology Data Exchange (ETDEWEB)

    Schaefer, A. [Saarland University Medical Centre, Department of Nuclear Medicine, Homburg (Germany); Vermandel, M. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); CHU Lille, Nuclear Medicine Department, Lille (France); Baillet, C. [CHU Lille, Nuclear Medicine Department, Lille (France); Dewalle-Vignion, A.S. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); Modzelewski, R.; Vera, P.; Gardin, I. [Centre Henri-Becquerel and LITIS EA4108, Rouen (France); Massoptier, L.; Parcq, C.; Gibon, D. [AQUILAB, Research and Innovation Department, Loos Les Lille (France); Fechter, T.; Nestle, U. [University Medical Center Freiburg, Department for Radiation Oncology, Freiburg (Germany); German Cancer Consortium (DKTK) Freiburg and German Cancer Research Center (DKFZ), Heidelberg (Germany); Nemer, U. [University Medical Center Freiburg, Department of Nuclear Medicine, Freiburg (Germany)

    2016-05-15

    The aim of this study was to evaluate the impact of consensus algorithms on segmentation results when applied to clinical PET images. In particular, whether the use of the majority vote or STAPLE algorithm could improve the accuracy and reproducibility of the segmentation provided by the combination of three semiautomatic segmentation algorithms was investigated. Three published segmentation methods (contrast-oriented, possibility theory and adaptive thresholding) and two consensus algorithms (majority vote and STAPLE) were implemented in a single software platform (Artiview®). Four clinical datasets including different locations (thorax, breast, abdomen) or pathologies (primary NSCLC tumours, metastasis, lymphoma) were used to evaluate accuracy and reproducibility of the consensus approach in comparison with pathology as the ground truth or CT as a ground truth surrogate. Variability in the performance of the individual segmentation algorithms for lesions of different tumour entities reflected the variability in PET images in terms of resolution, contrast and noise. Independent of location and pathology of the lesion, however, the consensus method resulted in improved accuracy in volume segmentation compared with the worst-performing individual method in the majority of cases and was close to the best-performing method in many cases. In addition, the implementation revealed high reproducibility in the segmentation results with small changes in the respective starting conditions. There were no significant differences in the results with the STAPLE algorithm and the majority vote algorithm. This study showed that combining different PET segmentation methods by the use of a consensus algorithm offers robustness against the variable performance of individual segmentation methods and this approach would therefore be useful in radiation oncology. It might also be relevant for other scenarios such as the merging of expert recommendations in clinical routine.
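A majority-vote consensus of binary segmentation masks, one of the two consensus algorithms compared in the study, can be written in a few lines. The masks below are made up, and the STAPLE algorithm (an EM-based weighting of the individual methods) is not reproduced here.

```python
import numpy as np

def majority_vote(masks):
    """Voxel-wise majority vote over a list of binary segmentation masks."""
    stack = np.stack(masks, axis=0)
    return (stack.sum(axis=0) * 2 > len(masks)).astype(np.uint8)

# Three hypothetical 4x4 binary masks from different PET segmentation methods.
m1 = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]], dtype=np.uint8)
m2 = np.array([[0, 1, 1, 1], [0, 1, 1, 1], [0, 1, 1, 0], [0, 0, 0, 0]], dtype=np.uint8)
m3 = np.array([[0, 0, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 0, 0]], dtype=np.uint8)

consensus = majority_vote([m1, m2, m3])
print(consensus)
```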

  3. Martial arts striking hand peak acceleration, accuracy and consistency.

    Science.gov (United States)

    Neto, Osmar Pinto; Marzullo, Ana Carolina De Miranda; Bolander, Richard P; Bir, Cynthia A

    2013-01-01

    The goal of this paper was to investigate the possible trade-off between peak hand acceleration and accuracy and consistency of hand strikes performed by martial artists of different training experiences. Ten male martial artists with training experience ranging from one to nine years volunteered to participate in the experiment. Each participant performed 12 maximum effort goal-directed strikes. Hand acceleration during the strikes was obtained using a tri-axial accelerometer block. A pressure sensor matrix was used to determine the accuracy and consistency of the strikes. Accuracy was estimated by the radial distance between the centroid of each subject's 12 strikes and the target, whereas consistency was estimated by the square root of the 12 strikes' mean squared distance from their centroid. We found that training experience was significantly correlated to hand peak acceleration prior to impact (r² = 0.456, p = 0.032) and accuracy (r² = 0.621, p = 0.012). These correlations suggest that more experienced participants exhibited higher hand peak accelerations and at the same time were more accurate. Training experience, however, was not correlated to consistency (r² = 0.085, p = 0.413). Overall, our results suggest that martial arts training may lead practitioners to achieve higher striking hand accelerations with better accuracy and no change in striking consistency.
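The accuracy and consistency definitions given in the abstract translate directly into code; the impact coordinates below are hypothetical stand-ins for the pressure sensor matrix data.

```python
import numpy as np

def accuracy_and_consistency(strikes, target):
    """Accuracy: distance from the strikes' centroid to the target.
    Consistency: root of the mean squared distance of strikes from their centroid."""
    strikes = np.asarray(strikes, dtype=float)
    centroid = strikes.mean(axis=0)
    accuracy = np.linalg.norm(centroid - target)
    consistency = np.sqrt(np.mean(np.sum((strikes - centroid) ** 2, axis=1)))
    return accuracy, consistency

# Twelve hypothetical impact points (cm) relative to the target at the origin.
rng = np.random.default_rng(1)
strikes = rng.normal(loc=[1.0, -0.5], scale=1.2, size=(12, 2))
target = np.array([0.0, 0.0])

acc, cons = accuracy_and_consistency(strikes, target)
print(f"accuracy = {acc:.2f} cm, consistency = {cons:.2f} cm")
```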

  4. Acquisition of decision making criteria: reward rate ultimately beats accuracy.

    Science.gov (United States)

    Balci, Fuat; Simen, Patrick; Niyogi, Ritwik; Saxe, Andrew; Hughes, Jessica A; Holmes, Philip; Cohen, Jonathan D

    2011-02-01

    Speed-accuracy trade-offs strongly influence the rate of reward that can be earned in many decision-making tasks. Previous reports suggest that human participants often adopt suboptimal speed-accuracy trade-offs in single session, two-alternative forced-choice tasks. We investigated whether humans acquired optimal speed-accuracy trade-offs when extensively trained with multiple signal qualities. When performance was characterized in terms of decision time and accuracy, our participants eventually performed nearly optimally in the case of higher signal qualities. Rather than adopting decision criteria that were individually optimal for each signal quality, participants adopted a single threshold that was nearly optimal for most signal qualities. However, setting a single threshold for different coherence conditions resulted in only negligible decrements in the maximum possible reward rate. Finally, we tested two hypotheses regarding the possible sources of suboptimal performance: (1) favoring accuracy over reward rate and (2) misestimating the reward rate due to timing uncertainty. Our findings provide support for both hypotheses, but also for the hypothesis that participants can learn to approach optimality. We find specifically that an accuracy bias dominates early performance, but diminishes greatly with practice. The residual discrepancy between optimal and observed performance can be explained by an adaptive response to uncertainty in time estimation.

  5. INFLUENCE OF STRUCTURE COMPONENTS ON MACHINE TOOL ACCURACY

    Directory of Open Access Journals (Sweden)

    Constantin SANDU

    2017-11-01

    Full Text Available For machine tools, the parts of the machine tool structure (after roughing) should be subjected to stress relief and natural or artificial aging. The accuracy currently achieved by machine tools, in terms of linearity or flatness, has been above 5 μm/m; below this value there are great difficulties. When manufacturing structural parts of machine tools with a flatness or linearity accuracy of about 2 μm/m, significant form deviations of the semi-finished parts remain. This article deals with the influence of form errors of semi-finished parts on the machined parts and their shape, and especially with what happens to the machine tool structure when its components are assembled.

  6. Inoculating Relevance Feedback Against Poison Pills

    NARCIS (Netherlands)

    Dehghani, Mostafa; Azarbonyad, Hosein; Kamps, Jaap; Hiemstra, Djoerd; Marx, Maarten

    2016-01-01

    Relevance Feedback is a common approach for enriching queries, given a set of explicitly or implicitly judged documents, to improve the performance of the retrieval. Although it has been shown that, on average, the overall performance of retrieval will be improved after relevance feedback, for some queries individual feedback documents can actually hurt performance; such harmful documents are the 'poison pills' of the title.

  7. Three-dimensional repositioning accuracy of semiadjustable articulator cast mounting systems.

    Science.gov (United States)

    Tan, Ming Yi; Ung, Justina Youlin; Low, Ada Hui Yin; Tan, En En; Tan, Keson Beng Choon

    2014-10-01

    In spite of its importance in prosthesis precision and quality, the 3-dimensional repositioning accuracy of cast mounting systems has not been reported in detail. The purpose of this study was to quantify the 3-dimensional repositioning accuracy of 6 selected cast mounting systems. Five magnetic mounting systems were compared with a conventional screw-on system. Six systems on 3 semiadjustable articulators were evaluated: Denar Mark II with conventional screw-on mounting plates (DENSCR) and magnetic mounting system with converter plates (DENCON); Denar Mark 330 with in-built magnetic mounting system (DENMAG) and disposable mounting plates; and Artex CP with blue (ARTBLU), white (ARTWHI), and black (ARTBLA) magnetic mounting plates. Test casts with 3 high-precision ceramic ball bearings at the mandibular central incisor (Point I) and the right and left second molar (Point R; Point L) positions were mounted on 5 mounting plates (n=5) for all 6 systems. Each cast was repositioned 10 times by 4 operators in random order. Nine linear (Ix, Iy, Iz; Rx, Ry, Rz; Lx, Ly, Lz) and 3 angular (anteroposterior, mediolateral, twisting) displacements were measured with a coordinate measuring machine. The mean standard deviations of the linear and angular displacements defined repositioning accuracy. Anteroposterior linear repositioning accuracy ranged from 23.8 ±3.7 μm (DENCON) to 4.9 ±3.2 μm (DENSCR). Mediolateral linear repositioning accuracy ranged from 46.0 ±8.0 μm (DENCON) to 3.7 ±1.5 μm (ARTBLU), and vertical linear repositioning accuracy ranged from 7.2 ±9.6 μm (DENMAG) to 1.5 ±0.9 μm (ARTBLU). Anteroposterior angular repositioning accuracy ranged from 0.0084 ±0.0080 degrees (DENCON) to 0.0020 ±0.0006 degrees (ARTBLU), and mediolateral angular repositioning accuracy ranged from 0.0120 ±0.0111 degrees (ARTWHI) to 0.0027 ±0.0008 degrees (ARTBLU). Twisting angular repositioning accuracy ranged from 0.0419 ±0.0176 degrees (DENCON) to 0.0042 ±0.0038 degrees
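The repositioning accuracy reported here is, per the abstract, the mean of the standard deviations of repeated displacement measurements across mounting plates. A small sketch of that computation with made-up data:

```python
import numpy as np

# Hypothetical anteroposterior displacements (micrometres) of point I:
# 10 repositionings on each of 5 mounting plates of one system.
rng = np.random.default_rng(2)
displacements = rng.normal(loc=0.0, scale=5.0, size=(5, 10))

# SD over the 10 repositionings per plate, then the mean over the 5 plates.
per_plate_sd = displacements.std(axis=1, ddof=1)
repositioning_accuracy = per_plate_sd.mean()
print(f"anteroposterior repositioning accuracy: {repositioning_accuracy:.1f} um")
```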

  8. New developments in measurements technology relevant to the studies of deep geological repositories in bedded salt

    International Nuclear Information System (INIS)

    Mao, N.; Ramirez, A.L.

    1980-01-01

    This report presents new developments in measurement technology relevant to the studies of deep geological repositories for nuclear waste disposal during all phases of development, i.e., site selection, site characterization, construction, operation, and decommission. Emphasis has been placed on geophysics and geotechnics with special attention to those techniques applicable to bedded salt. The techniques are grouped into sections as follows: tectonic environment, state of stress, subsurface structures, fractures, stress changes, deformation, thermal properties, fluid transport properties, and other approaches. Several areas that merit further research and developments are identified. These areas are: in situ thermal measurement techniques, fracture detection and characterization, in situ stress measurements, and creep behavior. The available instrumentation should generally be improved to provide better resolution and accuracy, enhanced instrument survivability, and reliability over extended time periods in a hostile environment.

  9. The Collection of Event Data and its Relevance to the Optimisation of Decay Heat Rejection Systems

    International Nuclear Information System (INIS)

    Roughley, R.; Jones, N.

    1975-01-01

    The precision with which the reliability of DHR (Decay Heat Rejection) systems for nuclear reactors can be predicted depends not only upon model representation but also on the accuracy of the data used. In the preliminary design stages when models are being used to arrive at major engineering decisions in relation to plant configuration, the best the designer can do is use the data available at the time. With the present state of the art it is acknowledged that some degree of judgement will have to be exercised particularly for plant involving sodium technology where a large amount of operational experience has not yet been generated. This paper reviews the current efforts being deployed in the acquisition of field data relevant to DHR systems so that improvements in reliability predictions may be realised

  10. New developments in measurements technology relevant to the studies of deep geological repositories in bedded salt

    Science.gov (United States)

    Mao, N. H.; Ramirez, A. L.

    1980-10-01

    Developments in measurement technology are presented which are relevant to the studies of deep geological repositories for nuclear waste disposal during all phases of development, i.e., site selection, site characterization, construction, operation, and decommission. Emphasis was placed on geophysics and geotechnics with special attention to those techniques applicable to bedded salt. The techniques are grouped into sections as follows: tectonic environment, state of stress, subsurface structures, fractures, stress changes, deformation, thermal properties, fluid transport properties, and other approaches. Several areas that merit further research and developments are identified. These areas are: in situ thermal measurement techniques, fracture detection and characterization, in situ stress measurements, and creep behavior. The available instrumentation should generally be improved to provide better resolution and accuracy, enhanced instrument survivability, and reliability over extended time periods in a hostile environment.

  11. Autonomia e relevância dos regimes The autonomy and relevance of regimes

    Directory of Open Access Journals (Sweden)

    Gustavo Seignemartin de Carvalho

    2005-12-01

    Full Text Available Institutionalist theories in the discipline of International Relations usually define regimes as a set of formal or informal norms and rules that allow the convergence of expectations or the standardization of the behaviour of their participants in a given issue area, with the aim of solving coordination problems that would tend to produce non-Pareto-efficient outcomes. Since these definitions, based merely on the "efficiency" of regimes, do not seem sufficient to explain their effectiveness, this article proposes a different definition of regimes: political arrangements that allow the redistribution of the gains from cooperation among the participants in a given issue area in a context of interdependence. Regimes would derive their effectiveness from their autonomy and relevance, that is, from having an objective existence autonomous from that of their participants and from influencing their behaviour and expectations in ways that cannot be reduced to the individual action of any of them. The article begins with a brief discussion of the terminological difficulties associated with the study of regimes and the definition of the concepts of autonomy and relevance. It then classifies the various authors participating in the debate into two distinct perspectives, one that denies (non-autonomists) and another that attributes (autonomists) autonomy and relevance to regimes, and briefly analyses the authors and traditions most significant to the debate, focusing on the autonomists and on the arguments that reinforce the hypothesis presented here. Finally, the article proposes an analytical decomposition of regimes into the four main elements that give them autonomy and relevance: normativity, actors, specificity of the issue area, and complex interdependence with the context.

  12. Quantifying the Accuracy of a Diagnostic Test or Marker

    NARCIS (Netherlands)

    Linnet, Kristian; Bossuyt, Patrick M. M.; Moons, Karel G. M.; Reitsma, Johannes B. R.

    2012-01-01

    BACKGROUND: In recent years, increasing focus has been directed to the methodology for evaluating (new) tests or biomarkers. A key step in the evaluation of a diagnostic test is the investigation into its accuracy. CONTENT: We reviewed the literature on how to assess the accuracy of diagnostic tests.

  13. Perceptual Load Affects Eyewitness Accuracy & Susceptibility to Leading Questions

    Directory of Open Access Journals (Sweden)

    Gillian Murphy

    2016-08-01

    Full Text Available Load Theory (Lavie, 1995; 2005) states that the level of perceptual load in a task (i.e. the amount of information involved in processing task-relevant stimuli) determines the efficiency of selective attention. There is evidence that perceptual load affects distractor processing, with increased inattentional blindness under high load. Given that high load can result in individuals failing to report seeing obvious objects, it is conceivable that load may also impair memory for the scene. The current study is the first to assess the effect of perceptual load on eyewitness memory. Across three experiments (two video-based and one in a driving simulator), the effect of perceptual load on eyewitness memory was assessed. The results showed that eyewitnesses were less accurate under high load, in particular for peripheral details. For example, memory for the central character in the video was not affected by load but memory for a witness who passed by the window at the edge of the scene was significantly worse under high load. High load memories were also more open to suggestion, showing increased susceptibility to leading questions. High visual perceptual load also affected recall for auditory information, illustrating a possible cross-modal perceptual load effect on memory accuracy. These results have implications for eyewitness memory researchers and forensic professionals.

  14. Error Estimation and Accuracy Improvements in Nodal Transport Methods

    International Nuclear Information System (INIS)

    Zamonsky, O.M.

    2000-01-01

    The accuracy of the solutions produced by the discrete ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by the decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.

  15. Clinical relevance in anesthesia journals

    DEFF Research Database (Denmark)

    Lauritsen, Jakob; Møller, Ann M

    2006-01-01

    The purpose of this review is to present the latest knowledge and research on the definition and distribution of clinically relevant articles in anesthesia journals. It will also discuss the importance of the chosen methodology and outcome of articles.

  16. Causal and Epistemic Relevance in Appeals to Authority

    Directory of Open Access Journals (Sweden)

    Sebastiano Lommi

    2015-05-01

    Full Text Available Appeals to authority have a long tradition in the history of argumentation theory. During the Middle Ages they were considered legitimate and sound arguments, but after Locke's treatment in the Essay Concerning Human Understanding their legitimacy has come under question. Traditionally, arguments from authority were considered informal arguments, but since the important work of Charles Hamblin (Hamblin, 1970) many attempts to provide a form for them have been made. The most convincing of them is the presumptive form developed by Douglas Walton and John Woods (Woods, Walton, 1974) that aims at taking into account the relevant contextual aspects in assessing the provisional validity of an appeal to authority. The soundness of an appeal depends on its meeting the adequacy conditions set to scrutinize all the relevant questions. I want to claim that this approach is compatible with the analysis of arguments in terms of relevance advanced by David Hitchcock (Hitchcock, 1992). He claims that relevance is a triadic relation between two items and a context: the first item is relevant to the second one in a given context. Different types of relevance relation exist, namely causal relevance and epistemic relevance. "Something is [causally] relevant to an outcome in a given situation if it helps to cause that outcome in the situation" (Hitchcock, 1992, p. 253), whereas it is epistemically relevant when it helps to achieve an epistemic goal in a given situation. I claim that we can adapt this conception to Walton and Krabbe's theory of dialogue types (Walton, Krabbe, 1995), seeing the items of a relevance relation as the argument and its consequence, and the context as the type of dialogue in which these arguments are advanced. According to this perspective, an argument from authority that meets the adequacy conditions has to be considered legitimate because it is an epistemically relevant relation. Therefore, my conclusion is that an analysis of appeals to authority in terms of relevance is warranted.

  17. Making academic research more relevant: A few suggestions

    Directory of Open Access Journals (Sweden)

    Abinash Panda

    2014-09-01

    Full Text Available Academic research in the domain of management scholarship, though steeped in scientific and methodological rigour, is generally found to be of little relevance to practice. The authors of this paper have revisited the rigour-relevance debate in light of recent developments and with special reference to the management research scenario in India. The central thesis of the argument is that the gulf between rigour and relevance needs to be bridged to make academic research more relevant to business organizations and practitioners. They have offered some suggestions to enhance the relevance of academic research to practice.

  18. Perceptual Load Affects Eyewitness Accuracy and Susceptibility to Leading Questions.

    Science.gov (United States)

    Murphy, Gillian; Greene, Ciara M

    2016-01-01

    Load Theory (Lavie, 1995, 2005) states that the level of perceptual load in a task (i.e., the amount of information involved in processing task-relevant stimuli) determines the efficiency of selective attention. There is evidence that perceptual load affects distractor processing, with increased inattentional blindness under high load. Given that high load can result in individuals failing to report seeing obvious objects, it is conceivable that load may also impair memory for the scene. The current study is the first to assess the effect of perceptual load on eyewitness memory. Across three experiments (two video-based and one in a driving simulator), the effect of perceptual load on eyewitness memory was assessed. The results showed that eyewitnesses were less accurate under high load, in particular for peripheral details. For example, memory for the central character in the video was not affected by load but memory for a witness who passed by the window at the edge of the scene was significantly worse under high load. High load memories were also more open to suggestion, showing increased susceptibility to leading questions. High visual perceptual load also affected recall for auditory information, illustrating a possible cross-modal perceptual load effect on memory accuracy. These results have implications for eyewitness memory researchers and forensic professionals.

  19. The hidden KPI registration accuracy.

    Science.gov (United States)

    Shorrosh, Paul

    2011-09-01

    Determining the registration accuracy rate is fundamental to improving revenue cycle key performance indicators. A registration quality assurance (QA) process allows errors to be corrected before bills are sent and helps registrars learn from their mistakes. Tools are available to help patient access staff who perform registration QA manually.

  20. Assessing the Accuracy of Ancestral Protein Reconstruction Methods

    OpenAIRE

    Williams, Paul D; Pollock, David D; Blackburne, Benjamin P; Goldstein, Richard A

    2006-01-01

    The phylogenetic inference of ancestral protein sequences is a powerful technique for the study of molecular evolution, but any conclusions drawn from such studies are only as good as the accuracy of the reconstruction method. Every inference method leads to errors in the ancestral protein sequence, resulting in potentially misleading estimates of the ancestral protein's properties. To assess the accuracy of ancestral protein reconstruction methods, we performed computational population evolu...

  1. Accuracy of stereolithographic models of human anatomy

    International Nuclear Information System (INIS)

    Barker, T.M.; Earwaker, W.J.S.; Lisle, D.A.

    1994-01-01

    A study was undertaken to determine the dimensional accuracy of anatomical replicas derived from X-ray 3D computed tomography (CT) images and produced using the rapid prototyping technique of stereolithography (SLA). A dry bone skull and geometric phantom were scanned, and replicas were produced. Distance measurements were obtained to compare the original objects and the resulting replicas. Repeated measurements between anatomical landmarks were used for comparison of the original skull and replica. Results for the geometric phantom demonstrate a mean difference of +0.47mm, representing an accuracy of 97.7-99.12%. Measurements of the skull produced a range of absolute differences (maximum +4.62mm, minimum +0.1mm, mean +0.85mm). These results support the use of SLA models of human anatomical structures in such areas as pre-operative planning of complex surgical procedures. For applications where higher accuracy is required, improvements can be expected by utilizing smaller pixel resolution in the CT images. Stereolithographic models can now be confidently employed as accurate, three-dimensional replicas of complex, anatomical structures. 14 refs., 2 tabs., 8 figs

  2. Hadroproduction of t anti-t pair in association with an isolated photon at NLO accuracy matched with parton shower

    Science.gov (United States)

    Kardos, Adam; Trócsányi, Zoltán

    2015-05-01

    We simulate the hadroproduction of a t anti-t pair in association with a hard photon at the LHC using the PowHel package. These events are almost fully inclusive with respect to the photon, allowing for any physically relevant isolation of the photon. We use the generated events, stored according to the Les Houches event format, to make predictions for differential distributions formally at next-to-leading order (NLO) accuracy, and we compare these to existing predictions accurate at NLO using the smooth isolation prescription of Frixione. Our fixed-order predictions include the direct-photon contribution only. We also make predictions for distributions after full parton shower and hadronization using the standard experimental cone-isolation of the photon.

  3. Concept Mapping Improves Metacomprehension Accuracy among 7th Graders

    Science.gov (United States)

    Redford, Joshua S.; Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2012-01-01

    Two experiments explored concept map construction as a useful intervention to improve metacomprehension accuracy among 7th grade students. In the first experiment, metacomprehension was marginally better for a concept mapping group than for a rereading group. In the second experiment, metacomprehension accuracy was significantly greater for a…

  4. Constructing Better Classifier Ensemble Based on Weighted Accuracy and Diversity Measure

    Directory of Open Access Journals (Sweden)

    Xiaodong Zeng

    2014-01-01

    Full Text Available A weighted accuracy and diversity (WAD) method is presented, a novel measure used to evaluate the quality of the classifier ensemble, assisting in the ensemble selection task. The proposed measure is motivated by a commonly accepted hypothesis, namely that a robust classifier ensemble should not only be accurate but also different from every other member. In fact, accuracy and diversity are mutually restraining factors: an ensemble with high accuracy may have low diversity, and an overly diverse ensemble may negatively affect accuracy. This study proposes a method to find the balance between accuracy and diversity that enhances the predictive ability of an ensemble for unknown data. The quality assessment for an ensemble is performed such that the final score is achieved by computing the harmonic mean of accuracy and diversity, where two weight parameters are used to balance them. The measure is compared to two representative measures, Kappa-Error and GenDiv, and to two threshold measures that consider only accuracy or diversity, with two heuristic search algorithms, a genetic algorithm and a forward hill-climbing algorithm, in ensemble selection tasks performed on 15 UCI benchmark datasets. The empirical results demonstrate that the WAD measure is superior to the others in most cases.
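Assuming the final score is a weighted harmonic mean of ensemble accuracy and diversity, as the abstract suggests, a sketch could look like this. The exact weighting scheme and the diversity definition used by the authors are assumptions here, and the candidate values are invented.

```python
def wad_score(accuracy, diversity, w_acc=0.5, w_div=0.5):
    """Weighted harmonic mean of ensemble accuracy and diversity (both in (0, 1])."""
    if accuracy <= 0 or diversity <= 0:
        return 0.0
    return (w_acc + w_div) / (w_acc / accuracy + w_div / diversity)

# Hypothetical candidate ensembles: (accuracy, pairwise diversity).
candidates = {"A": (0.92, 0.15), "B": (0.88, 0.40), "C": (0.81, 0.55)}

for name, (acc, div) in candidates.items():
    print(name, round(wad_score(acc, div), 3))

best = max(candidates, key=lambda k: wad_score(*candidates[k]))
print("selected ensemble:", best)
```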

  5. Assessing accuracy of an electronic provincial medication repository

    Directory of Open Access Journals (Sweden)

    Price Morgan

    2012-05-01

    Full Text Available Abstract Background Jurisdictional drug information systems are being implemented in many regions around the world. British Columbia, Canada has had a provincial medication dispensing record system, PharmaNet, since 1995. Little is known about how accurately PharmaNet reflects actual medication usage. Methods This prospective, multi-centre study compared pharmacist-collected Best Possible Medication Histories (BPMH) to PharmaNet profiles to assess accuracy of the PharmaNet profiles for patients receiving a BPMH as part of clinical care. A review panel examined the anonymized BPMHs and discrepancies to estimate the clinical significance of discrepancies. Results 16% of medication profiles were accurate, with 48% of the discrepant profiles considered potentially clinically significant by the clinical review panel. Cardiac medications tended to be more accurate (e.g. ramipril was accurate >90% of the time), while insulin, warfarin, salbutamol and pain relief medications were often inaccurate (80–85% of the time). 1215 sequential BPMHs were collected and reviewed for this study. Conclusions The PharmaNet medication repository has a low accuracy and should be used in conjunction with other sources for medication histories for clinical or research purposes. This finding is consistent with other, smaller medication repository accuracy studies in other jurisdictions. Our study highlights specific medications that tend to be lower in accuracy.

  6. Accuracy of radiographer reporting of paediatric brain CT

    International Nuclear Information System (INIS)

    Brandt, Andrew; Louw, Brand; Dekker, Gerrit; Andronikou, Savvas; Wieselthaler, Nicki; Kilborn, Tracy; Bertelsman, Jessica; Dreyer, Catherine

    2007-01-01

    Radiographer reporting has been studied for plain films and for ultrasonography, but not in paediatric brain CT in the emergency setting. To study the accuracy of radiographer reporting in paediatric brain CT. We prospectively collected 100 paediatric brain CT examinations. Films were read from hard copies using a prescribed tick sheet. Radiographers with 12 years' and 3 years' experience, respectively, were blinded to the history and were not trained in diagnostic film interpretation. The radiographers' results were compared with those of a consultant radiologist. Three categories were defined: abnormal scans, significant abnormalities and insignificant abnormalities. Both radiographers had an accuracy of 89.5% in reading a scan correctly as abnormal, and radiographer 1 had a sensitivity of 87.8% and radiographer 2 a sensitivity of 96%. Radiographer 1 had an accuracy in detecting a significant abnormality of 75% and radiographer 2 an accuracy of 48.6%, and the sensitivities for this category were 61.6% and 52.9%, respectively. Results for detecting the insignificant abnormalities were poorer. Selected radiographers could play an effective screening role, but lacking the sensitivity required for detecting significant abnormality, they could not be the final diagnostician. We recommend that the study be repeated after both radiographers have received formal training in interpretation of paediatric brain CT. (orig.)

  7. The Attentional Demand of Automobile Driving Revisited: Occlusion Distance as a Function of Task-Relevant Event Density in Realistic Driving Scenarios.

    Science.gov (United States)

    Kujala, Tuomo; Mäkelä, Jakke; Kotilainen, Ilkka; Tokkonen, Timo

    2016-02-01

    We studied the utility of occlusion distance as a function of task-relevant event density in realistic traffic scenarios with self-controlled speed. The visual occlusion technique is an established method for assessing visual demands of driving. However, occlusion time is not a highly informative measure of environmental task-relevant event density in self-paced driving scenarios because it partials out the effects of changes in driving speed. Self-determined occlusion times and distances of 97 drivers with varying backgrounds were analyzed in driving scenarios simulating real Finnish suburban and highway traffic environments with self-determined vehicle speed. Occlusion distances varied systematically with the expected environmental demands of the manipulated driving scenarios whereas the distributions of occlusion times remained more static across the scenarios. Systematic individual differences in the preferred occlusion distances were observed. More experienced drivers achieved better lane-keeping accuracy than inexperienced drivers with similar occlusion distances; however, driving experience was unexpectedly not a major factor for the preferred occlusion distances. Occlusion distance seems to be an informative measure for assessing task-relevant event density in realistic traffic scenarios with self-controlled speed. Occlusion time measures the visual demand of driving as the task-relevant event rate in time intervals, whereas occlusion distance measures the experienced task-relevant event density in distance intervals. The findings can be utilized in context-aware distraction mitigation systems, human-automated vehicle interaction, road speed prediction and design, as well as in the testing of visual in-vehicle tasks for inappropriate in-vehicle glancing behaviors in any dynamic traffic scenario for which appropriate individual occlusion distances can be defined. © 2015, Human Factors and Ergonomics Society.
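Occlusion distance is simply the distance the vehicle travels while the driver's view is occluded, i.e. speed integrated over the occlusion duration (approximated below by assuming constant speed during each occlusion). The durations and speeds are hypothetical.

```python
# Hypothetical self-determined occlusion durations (s) and vehicle speeds (km/h).
occlusions = [(1.8, 50.0), (2.4, 30.0), (1.1, 80.0)]

for duration_s, speed_kmh in occlusions:
    speed_ms = speed_kmh / 3.6                 # convert km/h to m/s
    occlusion_distance = speed_ms * duration_s  # metres travelled while occluded
    print(f"{duration_s:.1f} s at {speed_kmh:.0f} km/h -> {occlusion_distance:.1f} m occluded")
```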

  8. A Compositional Relevance Model for Adaptive Information Retrieval

    Science.gov (United States)

    Mathe, Nathalie; Chen, James; Lu, Henry, Jr. (Technical Monitor)

    1994-01-01

    There is a growing need for rapid and effective access to information in large electronic documentation systems. Access can be facilitated if information relevant in the current problem solving context can be automatically supplied to the user. This includes information relevant to particular user profiles, tasks being performed, and problems being solved. However most of this knowledge on contextual relevance is not found within the contents of documents, and current hypermedia tools do not provide any easy mechanism to let users add this knowledge to their documents. We propose a compositional relevance network to automatically acquire the context in which previous information was found relevant. The model records information on the relevance of references based on user feedback for specific queries and contexts. It also generalizes such information to derive relevant references for similar queries and contexts. This model lets users filter information by context of relevance, build personalized views of documents over time, and share their views with other users. It also applies to any type of multimedia information. Compared to other approaches, it is less costly and doesn't require any a priori statistical computation, nor an extended training period. It is currently being implemented into the Computer Integrated Documentation system which enables integration of various technical documents in a hypertext framework.

  9. Inferring relevance in a changing world

    Directory of Open Access Journals (Sweden)

    Robert C Wilson

    2012-01-01

    Full Text Available Reinforcement learning models of human and animal learning usually concentrate on how we learn the relationship between different stimuli or actions and rewards. However, in real world situations stimuli are ill-defined. On the one hand, our immediate environment is extremely multi-dimensional. On the other hand, in every decision-making scenario only a few aspects of the environment are relevant for obtaining reward, while most are irrelevant. Thus a key question is how do we learn these relevant dimensions, that is, how do we learn what to learn about? We investigated this process of representation learning experimentally, using a task in which one stimulus dimension was relevant for determining reward at each point in time. As in real life situations, in our task the relevant dimension can change without warning, adding ever-present uncertainty engendered by a constantly changing environment. We show that human performance on this task is better described by a suboptimal strategy based on selective attention and serial hypothesis testing rather than a normative strategy based on probabilistic inference. From this, we conjecture that the problem of inferring relevance in general scenarios is too computationally demanding for the brain to solve optimally. As a result the brain utilizes approximations, employing these even in simplified scenarios in which optimal representation learning is tractable, such as the one in our experiment.

  10. Analysis on Dynamic Transmission Accuracy for RV Reducer

    Directory of Open Access Journals (Sweden)

    Zhang Fengshou

    2017-01-01

    Full Text Available Taking the rotate vector (RV) reducer as the research object, the factors affecting the transmission accuracy are studied, including the machining errors of the main parts, assembly errors, clearance, micro-displacement, gear mesh stiffness and damping, and bearing stiffness. Based on Newton's second law, the transmission error mathematical model of the RV reducer is set up. Then, the RV reducer transmission error curve is obtained by solving the mathematical model with the Runge-Kutta method under the combined action of various error factors. Analysis of the RV reducer transmission test shows similar variation trends and frequency components in the theoretical and experimental results. The presented method is useful for research on the dynamic transmission accuracy of RV reducers, and also applies to research on the transmission accuracy of other cycloid drive systems.

  11. Estimate of cryoscopic calculations accuracy from fusibility diagrams

    International Nuclear Information System (INIS)

    Viting, L.M.; Gorbovskaya, G.P.

    1975-01-01

    The melting points of some lead and zinc salts that can be used as solvents for ferrites in the systems PbMoO4-MgFe2O4, Zn2V2O7-NiFe2O4 and Pb3(VO4)2-MgFe2O4 have been calculated in accordance with the hypothetical mechanism of solvent dissociation. The accuracy of cryoscopic calculations based on melting point curves is evaluated. Cryoscopic calculations permit determination of the solvent activity with an accuracy of ±0.3% and of its heat of fusion with an accuracy of ±3%. The comparison of the calculated and experimental values of the entropy of melting, as well as of the calculated and experimental values of the cryoscopic constant, elucidates the mechanism of dissociation of both the dissolved compound and the solvent.
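As a generic illustration of the cryoscopic relations underlying such calculations (not the specific lead and zinc salt systems of the paper), the sketch below computes a molal cryoscopic constant from a solvent's melting temperature and enthalpy of fusion and shows how a ±3% uncertainty in the heat of fusion propagates into it. All numerical values are hypothetical.

```python
R = 8.314  # gas constant, J/(mol K)

def cryoscopic_constant(T_melt, molar_mass_kg, dH_fus):
    """Molal cryoscopic constant K_f = R * T_melt^2 * M / dH_fus  [K kg/mol]."""
    return R * T_melt ** 2 * molar_mass_kg / dH_fus

# Hypothetical high-temperature solvent: melting point 1340 K,
# molar mass 0.367 kg/mol, enthalpy of fusion 90 kJ/mol known to +-3%.
T_melt, M, dH = 1340.0, 0.367, 90e3
Kf = cryoscopic_constant(T_melt, M, dH)

# K_f is inversely proportional to dH_fus, so a +-3% error in dH_fus
# gives roughly a -/+3% error in K_f (and in the predicted depression).
Kf_low = cryoscopic_constant(T_melt, M, dH * 1.03)
Kf_high = cryoscopic_constant(T_melt, M, dH * 0.97)

molality = 0.05  # mol solute per kg solvent
print(f"Kf = {Kf:.1f} K kg/mol  (range {Kf_low:.1f} .. {Kf_high:.1f})")
print(f"predicted freezing-point depression: {Kf * molality:.2f} K")
```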

  12. Error and Uncertainty in the Accuracy Assessment of Land Cover Maps

    Science.gov (United States)

    Sarmento, Pedro Alexandre Reis

    Traditionally, the accuracy assessment of land cover maps is performed through the comparison of these maps with a reference database, which is intended to represent the "real" land cover, and this comparison is reported through thematic accuracy measures derived from confusion matrices. However, these reference databases are also a representation of reality, containing errors due to the human uncertainty in assigning the land cover class that best characterizes a certain area, which biases the thematic accuracy measures reported to the end users of these maps. The main goal of this dissertation is to develop a methodology that allows the integration of the human uncertainty present in reference databases into the accuracy assessment of land cover maps, and to analyse the impacts that uncertainty may have on the thematic accuracy measures reported to the end users of land cover maps. The utility of including human uncertainty in the accuracy assessment of land cover maps is investigated. Specifically, we studied the utility of fuzzy set theory, more precisely of fuzzy arithmetic, for a better understanding of the human uncertainty associated with the elaboration of reference databases and of its impacts on the thematic accuracy measures derived from confusion matrices. For this purpose, linguistic values transformed into fuzzy intervals that address the uncertainty in the elaboration of reference databases were used to compute fuzzy confusion matrices. The proposed methodology is illustrated using a case study in which the accuracy assessment of a land cover map for Continental Portugal derived from Medium Resolution Imaging Spectrometer (MERIS) imagery is made. The obtained results demonstrate that the inclusion of human uncertainty in reference databases provides much more information about the quality of land cover maps when compared with the traditional approach to the accuracy assessment of land cover maps.
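The traditional thematic accuracy measures that this dissertation starts from are derived from a confusion matrix; the crisp versions are sketched below with hypothetical counts. The fuzzy extension described in the abstract would replace the integer counts with fuzzy intervals and propagate them with fuzzy arithmetic, which is not reproduced here.

```python
import numpy as np

# Hypothetical confusion matrix: rows = map classes, columns = reference classes.
cm = np.array([
    [50,  3,  2],
    [ 4, 60,  6],
    [ 1,  5, 40],
], dtype=float)

total = cm.sum()
overall_accuracy = np.trace(cm) / total
users_accuracy = np.diag(cm) / cm.sum(axis=1)      # per map class (commission view)
producers_accuracy = np.diag(cm) / cm.sum(axis=0)  # per reference class (omission view)

print(f"overall accuracy:    {overall_accuracy:.3f}")
print(f"user's accuracy:     {np.round(users_accuracy, 3)}")
print(f"producer's accuracy: {np.round(producers_accuracy, 3)}")
```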

  13. Trading speed and accuracy by coding time: a coupled-circuit cortical model.

    Directory of Open Access Journals (Sweden)

    Dominic Standage

    2013-04-01

    Full Text Available Our actions take place in space and time, but despite the role of time in decision theory and the growing acknowledgement that the encoding of time is crucial to behaviour, few studies have considered the interactions between neural codes for objects in space and for elapsed time during perceptual decisions. The speed-accuracy trade-off (SAT) provides a window into spatiotemporal interactions. Our hypothesis is that temporal coding determines the rate at which spatial evidence is integrated, controlling the SAT by gain modulation. Here, we propose that local cortical circuits are inherently suited to the relevant spatial and temporal coding. In simulations of an interval estimation task, we use a generic local-circuit model to encode time by 'climbing' activity, seen in cortex during tasks with a timing requirement. The model is a network of simulated pyramidal cells and inhibitory interneurons, connected by conductance synapses. A simple learning rule enables the network to quickly produce new interval estimates, which show signature characteristics of estimates by experimental subjects. Analysis of network dynamics formally characterizes this generic, local-circuit timing mechanism. In simulations of a perceptual decision task, we couple two such networks. Network function is determined only by spatial selectivity and NMDA receptor conductance strength; all other parameters are identical. To trade speed and accuracy, the timing network simply learns longer or shorter intervals, driving the rate of downstream decision processing by spatially non-selective input, an established form of gain modulation. Like the timing network's interval estimates, decision times show signature characteristics of those by experimental subjects. Overall, we propose, demonstrate and analyse a generic mechanism for timing, a generic mechanism for modulation of decision processing by temporal codes, and we make predictions for experimental verification.

  14. DIRECT GEOREFERENCING : A NEW STANDARD IN PHOTOGRAMMETRY FOR HIGH ACCURACY MAPPING

    Directory of Open Access Journals (Sweden)

    A. Rizaldy

    2012-07-01

    Full Text Available Direct georeferencing is a new method in photogrammetry, especially in the digital camera era. Theoretically, this method does not require ground control points (GCP) and aerial triangulation (AT) to process aerial photography into ground coordinates. Compared with the old method, this method has three main advantages: faster data processing, a simpler workflow and a less expensive project, at the same accuracy. Direct georeferencing uses two devices, GPS and IMU: the GPS records the camera coordinates (X, Y, Z), and the IMU records the camera orientation (omega, phi, kappa). Both sets of parameters are merged into the exterior orientation (EO) parameters. These parameters are required for the next steps in photogrammetric projects, such as stereocompilation, DSM generation, orthorectification and mosaicking. The accuracy of this method was tested on a topographic map project in Medan, Indonesia. The large-format digital camera Ultracam X from Vexcel was used, while the GPS/IMU was the IGI AeroControl. Nineteen independent check points (ICP) were used to determine the accuracy. The horizontal accuracy is 0.356 meters and the vertical accuracy is 0.483 meters. Data with this accuracy can be used for a 1:2,500 map scale project.
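The horizontal and vertical accuracies quoted (0.356 m and 0.483 m) are typically computed as root-mean-square errors over the independent check points; a minimal sketch of that computation with hypothetical coordinate differences:

```python
import numpy as np

# Hypothetical differences (metres) between mapped coordinates and the
# 19 independent check points: columns are dx, dy, dz.
rng = np.random.default_rng(3)
d = rng.normal(scale=[0.25, 0.25, 0.45], size=(19, 3))

rmse_x, rmse_y, rmse_z = np.sqrt((d ** 2).mean(axis=0))
rmse_horizontal = np.hypot(rmse_x, rmse_y)
print(f"horizontal RMSE: {rmse_horizontal:.3f} m, vertical RMSE: {rmse_z:.3f} m")
```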

  15. Distinguishing Fast and Slow Processes in Accuracy - Response Time Data.

    Directory of Open Access Journals (Sweden)

    Frederik Coomans

    Full Text Available We investigate the relation between speed and accuracy within problem solving in its simplest non-trivial form. We consider tests with only two items and code the item responses in two binary variables: one indicating the response accuracy, and one indicating the response speed. Despite being a very basic setup, it enables us to study item pairs stemming from a broad range of domains such as basic arithmetic, first language learning, intelligence-related problems, and chess, with large numbers of observations for every pair of problems under consideration. We carry out a survey over a large number of such item pairs and compare three types of psychometric accuracy-response time models present in the literature: two 'one-process' models, the first of which models accuracy and response time as conditionally independent and the second of which models accuracy and response time as conditionally dependent, and a 'two-process' model which models accuracy contingent on response time. We find that the data clearly violates the restrictions imposed by both one-process models and requires additional complexity which is parsimoniously provided by the two-process model. We supplement our survey with an analysis of the erroneous responses for an example item pair and demonstrate that there are very significant differences between the types of errors in fast and slow responses.

  16. POC CD4 Testing Improves Linkage to HIV Care and Timeliness of ART Initiation in a Public Health Approach: A Systematic Review and Meta-Analysis.

    Directory of Open Access Journals (Sweden)

    Lara Vojnov

    Full Text Available CD4 cell count is an important test in HIV programs for baseline risk assessment, monitoring of ART where viral load is not available, and, in many settings, antiretroviral therapy (ART) initiation decisions. However, access to CD4 testing is limited, in part due to the centralized conventional laboratory network. Point of care (POC) CD4 testing has the potential to address some of the challenges of centralized CD4 testing and delays in delivery of timely testing and ART initiation. We conducted a systematic review and meta-analysis to identify the extent to which POC improves linkages to HIV care and timeliness of ART initiation. We searched two databases and four conference sites between January 2005 and April 2015 for studies reporting test turnaround times, proportion of results returned, and retention associated with the use of point-of-care CD4. Random effects models were used to estimate pooled risk ratios, pooled proportions, and 95% confidence intervals. We identified 30 eligible studies, most of which were completed in Africa. Test turnaround times were reduced with the use of POC CD4. The time from HIV diagnosis to CD4 test was reduced from 10.5 days with conventional laboratory-based testing to 0.1 days with POC CD4 testing. Retention along several steps of the treatment initiation cascade was significantly higher with POC CD4 testing, notably from HIV testing to CD4 testing, receipt of results, and pre-CD4 test retention (all p<0.001). Furthermore, retention between CD4 testing and ART initiation increased with POC CD4 testing compared to conventional laboratory-based testing (p = 0.01). We also carried out a non-systematic review of the literature observing that POC CD4 increased the projected life expectancy, was cost-effective, and acceptable. POC CD4 technologies reduce the time and increase patient retention along the testing and treatment cascade compared to conventional laboratory-based testing. POC CD4 is, therefore, a useful tool
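
    The pooling step mentioned ("random effects models were used to estimate pooled risk ratios ... and 95% confidence intervals") can be sketched with the standard DerSimonian-Laird estimator. The study-level numbers below are placeholders, not values from the review:

      import numpy as np

      def pooled_risk_ratio(rr, ci_lower, ci_upper):
          """DerSimonian-Laird random-effects pooling of study risk ratios.

          Each study contributes a risk ratio and its 95% CI; the analysis is
          done on the log scale, where the CI width gives the standard error.
          """
          y = np.log(rr)                                   # log risk ratios
          se = (np.log(ci_upper) - np.log(ci_lower)) / (2 * 1.96)
          v = se ** 2
          w = 1 / v                                        # fixed-effect weights
          y_fe = np.sum(w * y) / np.sum(w)
          q = np.sum(w * (y - y_fe) ** 2)                  # heterogeneity statistic
          c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
          tau2 = max(0.0, (q - (len(y) - 1)) / c)          # between-study variance
          w_re = 1 / (v + tau2)                            # random-effects weights
          y_re = np.sum(w_re * y) / np.sum(w_re)
          se_re = np.sqrt(1 / np.sum(w_re))
          return np.exp(y_re), np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re)

      # Hypothetical study-level risk ratios (retention with POC vs conventional CD4).
      rr, lo, hi = pooled_risk_ratio(rr=np.array([1.25, 1.40, 1.10]),
                                     ci_lower=np.array([1.05, 1.15, 0.95]),
                                     ci_upper=np.array([1.49, 1.70, 1.27]))
      print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")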

  17. Does an inter-flaw length control the accuracy of rupture forecasting in geological materials?

    Science.gov (United States)

    Vasseur, Jérémie; Wadsworth, Fabian B.; Heap, Michael J.; Main, Ian G.; Lavallée, Yan; Dingwell, Donald B.

    2017-10-01

    Multi-scale failure of porous materials is an important phenomenon in nature and in material physics - from controlled laboratory tests to rockbursts, landslides, volcanic eruptions and earthquakes. A key unsolved research question is how to accurately forecast the time of system-sized catastrophic failure, based on observations of precursory events such as acoustic emissions (AE) in laboratory samples, or, on a larger scale, small earthquakes. Until now, the length scale associated with precursory events has not been well quantified, resulting in forecasting tools that are often unreliable. Here we test the hypothesis that the accuracy of the forecast failure time depends on the inter-flaw distance in the starting material. We use new experimental datasets for the deformation of porous materials to infer the critical crack length at failure from a static damage mechanics model. The style of acceleration of AE rate prior to failure, and the accuracy of forecast failure time, both depend on whether the cracks can span the inter-flaw length or not. A smooth inverse power-law acceleration of AE rate to failure, and an accurate forecast, occurs when the cracks are sufficiently long to bridge pore spaces. When this is not the case, the predicted failure time is much less accurate and failure is preceded by an exponential AE rate trend. Finally, we provide a quantitative and pragmatic correction for the systematic error in the forecast failure time, valid for structurally isotropic porous materials, which could be tested against larger-scale natural failure events, with suitable scaling for the relevant inter-flaw distances.
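
    A common, concrete form of such forecasting is the inverse-rate version of the failure forecast method: if the precursory event rate accelerates as an inverse power law of the time to failure, the reciprocal of the rate decays roughly linearly and its extrapolated zero-crossing estimates the failure time. The sketch below uses synthetic data and assumes the simplest (exponent 1) rate law; it illustrates the general approach, not the damage-mechanics model of this paper:

      import numpy as np

      rng = np.random.default_rng(1)

      # Synthetic precursory (AE) event rates accelerating towards failure at
      # t_f = 100 s, following rate ~ k / (t_f - t).
      t_f_true, k = 100.0, 50.0
      t = np.arange(10.0, 90.0, 2.0)
      rate = k / (t_f_true - t) * rng.lognormal(0.0, 0.05, t.size)  # noisy observations

      # Inverse-rate method: 1/rate decays linearly and reaches zero at the failure
      # time, so the forecast is the x-intercept of a straight-line fit.
      slope, intercept = np.polyfit(t, 1.0 / rate, 1)
      t_f_forecast = -intercept / slope
      print(f"true failure time: {t_f_true:.1f} s, forecast: {t_f_forecast:.1f} s")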

  18. A systematic review of the precision and accuracy of dose measurements in photon radiotherapy using polymer and Fricke MRI gel dosimetry

    International Nuclear Information System (INIS)

    MacDougall, N.D.; Pitchford, W.G.; Smith, M.A.

    2002-01-01

    1998 Radiother. Oncol. 48 283-91, Farajollahi et al 2000 Br. J. Radiol. 72 1085-92, McJury et al 1999b Phys. Med. Biol. 44 2431-44, Murphy et al 2000b Phys. Med. Biol. 45 835-45, Oldham et al 2001 Med. Phys. 28 1436-45) and 5% for Fricke gel (Chan and Ayyangar 1995b Med. Phys. 22 1171-5). Evidence also points to accuracy worsening at lower dose levels for both gels. The precision data should be viewed with caution as repeated MR measurements were not performed with the same samples. The only precision data for Fricke gels was 1.5% (Johansson Back et al 1998 Phys. Med. Biol. 43 261-76), but for zero dose. In conclusion, despite the amount of published data, sparse research has been undertaken which provides clear evidence of the accuracy and precision for both gels. That which has been published has used higher doses than would be routine in radiotherapy. The basic radiation dosimeter qualities of accuracy and precision have yet to be fully quantified for polymer and Fricke gels at clinically relevant dose levels. (author)

  19. Multispectral imaging burn wound tissue classification system: a comparison of test accuracies between several common machine learning algorithms

    Science.gov (United States)

    Squiers, John J.; Li, Weizhi; King, Darlene R.; Mo, Weirong; Zhang, Xu; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.

    2016-03-01

    The clinical judgment of expert burn surgeons is currently the standard on which diagnostic and therapeutic decision-making regarding burn injuries is based. Multispectral imaging (MSI) has the potential to increase the accuracy of burn depth assessment and the intraoperative identification of viable wound bed during surgical debridement of burn injuries. A highly accurate classification model must be developed using machine-learning techniques in order to translate MSI data into clinically-relevant information. An animal burn model was developed to build an MSI training database and to study the burn tissue classification ability of several models trained via common machine-learning algorithms. The algorithms tested, from least to most complex, were: K-nearest neighbors (KNN), decision tree (DT), linear discriminant analysis (LDA), weighted linear discriminant analysis (W-LDA), quadratic discriminant analysis (QDA), ensemble linear discriminant analysis (EN-LDA), ensemble K-nearest neighbors (EN-KNN), and ensemble decision tree (EN-DT). After the ground-truth database of six tissue types (healthy skin, wound bed, blood, hyperemia, partial injury, full injury) was generated by histopathological analysis, we used 10-fold cross validation to compare the algorithms' performances based on their accuracies in classifying data against the ground truth, and each algorithm was tested 100 times. The mean test accuracies of the algorithms were KNN 68.3%, DT 61.5%, LDA 70.5%, W-LDA 68.1%, QDA 68.9%, EN-LDA 56.8%, EN-KNN 49.7%, and EN-DT 36.5%. LDA had the highest test accuracy, reflecting the bias-variance tradeoff over the range of complexities inherent to the algorithms tested. Several algorithms were able to match the current standard in burn tissue classification, the clinical judgment of expert burn surgeons. These results will guide further development of an MSI burn tissue classification system. Given that there are few surgeons and facilities specializing in burn care
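
    The evaluation protocol described, repeated 10-fold cross-validation of several classifiers against a labeled ground-truth set, is straightforward to reproduce in outline. A minimal sketch with scikit-learn on synthetic stand-in data (not the MSI database), comparing three of the simpler algorithms listed:

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.tree import DecisionTreeClassifier

      # Synthetic stand-in for the MSI reflectance database: 6 tissue classes,
      # 8 spectral features per pixel sample.
      X, y = make_classification(n_samples=1200, n_features=8, n_informative=6,
                                 n_classes=6, n_clusters_per_class=1, random_state=0)

      models = {
          "KNN": KNeighborsClassifier(n_neighbors=5),
          "DT": DecisionTreeClassifier(random_state=0),
          "LDA": LinearDiscriminantAnalysis(),
      }
      for name, model in models.items():
          scores = cross_val_score(model, X, y, cv=10)  # 10-fold cross-validation
          print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")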

  20. Accuracy of prehospital transport time estimation.

    Science.gov (United States)

    Wallace, David J; Kahn, Jeremy M; Angus, Derek C; Martin-Gill, Christian; Callaway, Clifton W; Rea, Thomas D; Chhatwal, Jagpreet; Kurland, Kristen; Seymour, Christopher W

    2014-01-01

    Estimates of prehospital transport times are an important part of emergency care system research and planning; however, the accuracy of these estimates is unknown. The authors examined the accuracy of three estimation methods against observed transport times in a large cohort of prehospital patient transports. This was a validation study using prehospital records in King County, Washington, and southwestern Pennsylvania from 2002 to 2006 and 2005 to 2011, respectively. Transport time estimates were generated using three methods: linear arc distance, Google Maps, and ArcGIS Network Analyst. Estimation error, defined as the absolute difference between observed and estimated transport time, was assessed, as well as the proportion of estimated times that were within specified error thresholds. Based on the primary results, a regression estimate was used that incorporated population density, time of day, and season to assess improved accuracy. Finally, hospital catchment areas were compared using each method with a fixed drive time. The authors analyzed 29,935 prehospital transports to 44 hospitals. The mean (± standard deviation [±SD]) absolute error was 4.8 (±7.3) minutes using linear arc, 3.5 (±5.4) minutes using Google Maps, and 4.4 (±5.7) minutes using ArcGIS. All pairwise comparisons were statistically significant (p Google Maps, and 11.6 [±10.9] minutes for ArcGIS). Estimates were within 5 minutes of observed transport time for 79% of linear arc estimates, 86.6% of Google Maps estimates, and 81.3% of ArcGIS estimates. The regression-based approach did not substantially improve estimation. There were large differences in hospital catchment areas estimated by each method. Route-based transport time estimates demonstrate moderate accuracy. These methods can be valuable for informing a host of decisions related to the system organization and patient access to emergency medical care; however, they should be employed with sensitivity to their limitations.
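
    The error metrics used above, the mean absolute estimation error and the share of estimates falling within a fixed threshold of the observed transport time, reduce to a few lines. A sketch with invented times:

      import numpy as np

      # Hypothetical observed prehospital transport times and route-based
      # estimates, in minutes.
      observed  = np.array([12.0,  8.5, 23.0, 15.0, 31.0,  9.5])
      estimated = np.array([10.0, 11.0, 20.5, 14.0, 38.0,  9.0])

      abs_error = np.abs(observed - estimated)
      mae = abs_error.mean()                       # mean absolute estimation error
      within_5 = np.mean(abs_error <= 5.0) * 100   # share of estimates within 5 minutes
      print(f"mean absolute error: {mae:.1f} min; within 5 min: {within_5:.0f}%")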

  1. The policy relevance of global environmental change research

    International Nuclear Information System (INIS)

    Yarnal, Brent

    1996-01-01

    Many scientists are striving to identify and promote the policy implications of their global change research. Much basic research on global environmental change cannot advance policy directly, but new projects can determine the relevance of their research to decision makers and build policy-relevant products into the work. Similarly, many ongoing projects can alter or add to the present science design to make the research policy relevant. Thus, this paper shows scientists working on global change how to make their research policy relevant. It demonstrates how research on physical global change relates to human dimensions studies and integrated assessments. It also presents an example of how policy relevance can be fit retroactively into a global change project (in this case, SRBEX, the Susquehanna River Basin Experiment) and how that addition can enhance the project's status and science. The paper concludes that policy relevance is desirable from social and scientific perspectives.

  2. Does relevance matter in academic policy research?

    DEFF Research Database (Denmark)

    Dredge, Dianne

    2015-01-01

    A reflection on whether relevance matters in tourism policy research. A debate among tourism scholars.

  3. Customer satisfaction in anatomic pathology. A College of American Pathologists Q-Probes study of 3065 physician surveys from 94 laboratories.

    Science.gov (United States)

    Zarbo, Richard J; Nakhleh, Raouf E; Walsh, Molly

    2003-01-01

    relevant information, teaching conferences and courses, notification of significant abnormal results, and timeliness of reporting). The database of 3065 physician surveys was derived from 94 laboratories. An average of 32.6 surveys (median 30) was returned per institution, with a range of 5 to 50 surveys per institution. The mean response rate was 35.6% (median 32.5%). The median (50th percentile) laboratory had an overall median satisfaction score of 4.4. The lowest satisfaction scores that were obtained all related to poor communication, which included timeliness of reporting, communication of relevant information, and notification of significant abnormal results. Statistically significant associations of customer satisfaction with certain institutional characteristics and laboratory performance improvement activities were identified. The importance of this satisfaction survey lies not in its requirement as an exercise for accrediting agencies but in understanding the needs of the customer (in this case the physician) to direct performance improvement in the delivery of quality anatomic pathology laboratory services.

  4. Accuracy Analysis of a Box-wing Theoretical SRP Model

    Science.gov (United States)

    Wang, Xiaoya; Hu, Xiaogong; Zhao, Qunhe; Guo, Rui

    2016-07-01

    For the Beidou satellite navigation system (BDS), a high-accuracy SRP model is necessary for high-precision applications, especially as the global BDS is established in the future, and the accuracy of the BDS broadcast ephemeris needs to be improved. We therefore established a theoretical box-wing SRP model with fine structure, adding conical shadow factors for the Earth and Moon. We verified this SRP model with the GPS Block IIF satellites, using data from the PRN 1, 24, 25 and 27 satellites. The results show that the physical SRP model has higher accuracy for POD and orbit prediction of GPS IIF satellites than the Bern empirical model, with a 3D RMS of orbit of about 20 centimeters. The POD accuracy of the two models is similar, but the prediction accuracy with the physical SRP model is more than doubled. We tested 1-day, 3-day and 7-day orbit predictions; the longer the prediction arc length, the more significant the improvement. The orbit prediction accuracies with the physical SRP model for 1-day, 3-day and 7-day arcs are 0.4 m, 2.0 m and 10.0 m respectively, compared with 0.9 m, 5.5 m and 30 m for the Bern empirical model. We applied this approach to the BDS and derived an SRP model for the Beidou satellites, which we then tested and verified with one month of Beidou data. Initial results show that the model is good but needs more data for verification and improvement. The orbit residual RMS is similar to that obtained with our empirical force model, which only estimates forces in the along-track and across-track directions and a y-bias, but the orbit overlap and SLR observation evaluations show some improvement. The remaining empirical force is reduced significantly for the present Beidou constellation.

  5. An accuracy measurement method for star trackers based on direct astronomic observation.

    Science.gov (United States)

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-03-07

    The star tracker is one of the most promising optical attitude measurement devices and is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy has remained a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker will eventually determine the satellite performance. A new and robust accuracy measurement method for star trackers based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted, taking into account the precise movements of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed that can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. The measurement environment is close to in-orbit conditions and can satisfy the stringent requirements for high-accuracy star trackers.
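
    The basic quantity behind such an evaluation is the angle between a measured star (or boresight) direction and its catalogue reference once both are expressed in the same frame. A minimal sketch with invented vectors; the real procedure additionally requires the coordinate transformations for the Earth's motion described above:

      import numpy as np

      def angular_error_arcsec(v_measured, v_reference):
          """Angle between measured and catalogue unit vectors, in arcseconds."""
          v1 = v_measured / np.linalg.norm(v_measured)
          v2 = v_reference / np.linalg.norm(v_reference)
          cos_theta = np.clip(np.dot(v1, v2), -1.0, 1.0)
          return np.degrees(np.arccos(cos_theta)) * 3600.0

      # Hypothetical directions after transforming both into the same inertial frame.
      measured  = np.array([0.57736, 0.57734, 0.57737])
      reference = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
      print(f"pointing error: {angular_error_arcsec(measured, reference):.2f} arcsec")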

  6. A function accounting for training set size and marker density to model the average accuracy of genomic prediction.

    Science.gov (United States)

    Erbe, Malena; Gredler, Birgit; Seefried, Franz Reinhold; Bapst, Beat; Simianer, Henner

    2013-01-01

    Prediction of genomic breeding values is of major practical relevance in dairy cattle breeding. Deterministic equations have been suggested to predict the accuracy of genomic breeding values in a given design which are based on training set size, reliability of phenotypes, and the number of independent chromosome segments ([Formula: see text]). The aim of our study was to find a general deterministic equation for the average accuracy of genomic breeding values that also accounts for marker density and can be fitted empirically. Two data sets of 5'698 Holstein Friesian bulls genotyped with 50 K SNPs and 1'332 Brown Swiss bulls genotyped with 50 K SNPs and imputed to ∼600 K SNPs were available. Different k-fold (k = 2-10, 15, 20) cross-validation scenarios (50 replicates, random assignment) were performed using a genomic BLUP approach. A maximum likelihood approach was used to estimate the parameters of different prediction equations. The highest likelihood was obtained when using a modified form of the deterministic equation of Daetwyler et al. (2010), augmented by a weighting factor (w) based on the assumption that the maximum achievable accuracy is [Formula: see text]. The proportion of genetic variance captured by the complete SNP sets ([Formula: see text]) was 0.76 to 0.82 for Holstein Friesian and 0.72 to 0.75 for Brown Swiss. When modifying the number of SNPs, w was found to be proportional to the log of the marker density up to a limit which is population and trait specific and was found to be reached with ∼20'000 SNPs in the Brown Swiss population studied.
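
    The record's formula placeholders are not reproduced here, but the widely cited starting point, the Daetwyler et al. (2010) expression, relates the expected accuracy to training set size, the reliability of the phenotypes and the number of independent chromosome segments. The sketch below implements that baseline form; the study's marker-density weighting factor w is shown only as a generic multiplicative factor, which is an assumption for illustration rather than the paper's exact equation:

      import numpy as np

      def expected_accuracy(n_train, h2, m_e, w=1.0):
          """Daetwyler et al. (2010)-style expected accuracy of genomic prediction.

          n_train : training set size
          h2      : reliability/heritability of the phenotypes
          m_e     : number of independent chromosome segments
          w       : marker-density weighting factor (1.0 gives the unmodified
                    equation; the study's own form of w is not reproduced here)
          """
          return w * np.sqrt(n_train * h2 / (n_train * h2 + m_e))

      for n in (1000, 3000, 5000):
          print(n, round(expected_accuracy(n, h2=0.8, m_e=1000), 3))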

  7. A function accounting for training set size and marker density to model the average accuracy of genomic prediction.

    Directory of Open Access Journals (Sweden)

    Malena Erbe

    Full Text Available Prediction of genomic breeding values is of major practical relevance in dairy cattle breeding. Deterministic equations have been suggested to predict the accuracy of genomic breeding values in a given design which are based on training set size, reliability of phenotypes, and the number of independent chromosome segments ([Formula: see text]). The aim of our study was to find a general deterministic equation for the average accuracy of genomic breeding values that also accounts for marker density and can be fitted empirically. Two data sets of 5'698 Holstein Friesian bulls genotyped with 50 K SNPs and 1'332 Brown Swiss bulls genotyped with 50 K SNPs and imputed to ∼600 K SNPs were available. Different k-fold (k = 2-10, 15, 20) cross-validation scenarios (50 replicates, random assignment) were performed using a genomic BLUP approach. A maximum likelihood approach was used to estimate the parameters of different prediction equations. The highest likelihood was obtained when using a modified form of the deterministic equation of Daetwyler et al. (2010), augmented by a weighting factor (w) based on the assumption that the maximum achievable accuracy is [Formula: see text]. The proportion of genetic variance captured by the complete SNP sets ([Formula: see text]) was 0.76 to 0.82 for Holstein Friesian and 0.72 to 0.75 for Brown Swiss. When modifying the number of SNPs, w was found to be proportional to the log of the marker density up to a limit which is population and trait specific and was found to be reached with ∼20'000 SNPs in the Brown Swiss population studied.

  8. Determination of fuel irradiation parameters. Required accuracies and available methods

    International Nuclear Information System (INIS)

    Mas, P.

    1977-01-01

    This paper reports on the present status of the main methods used to determine the nuclear parameters of fuel irradiation in testing reactors (nuclear power, burn-up, ...). The different methods (theoretical or experimental) are reviewed: neutron measurements and calculations, gamma scanning, heat balance, etc. The required accuracies are reviewed: they are 3-5% for flux, fluence, nuclear power, burn-up and conversion factor. These required accuracies are compared with the accuracies actually available, which are at present of the order of 5-20% for these parameters.

  9. Determination of material irradiation parameters. Required accuracies and available methods

    International Nuclear Information System (INIS)

    Cerles, J.M.; Mas, P.

    1978-01-01

    In this paper, the authors report the main methods used to determine the nuclear parameters of material irradiation in testing reactors (nuclear power, burn-up, fluxes, fluences, ...). The different methods (theoretical or experimental) are reviewed: neutronics measurements and calculations, gamma scanning, thermal balance, etc. The required accuracies are reviewed: they are 3-5% for flux, fluence, nuclear power, burn-up, conversion factor, etc. These required accuracies are compared with the accuracies actually available, which are at present of the order of 5-20% for these parameters.

  10. The Effect of Firm Size, Profitability, and Debt-to-Equity Ratio on the Timeliness of Financial Reporting (A Study of Food and Beverages Sector and Textile Sector Companies Listed on the Indonesia Stock Exchange)

    Directory of Open Access Journals (Sweden)

    Ine Aprianti

    2017-07-01

    Full Text Available The research objective was to show how firm size, profitability and the debt-to-equity ratio affect the timeliness of financial reporting. This study uses descriptive and verificative research, with parametric statistics as the data analysis technique. The population comprised 43 companies in the food and beverages and textile industry sectors over the study period 2006 to 2008, from which a sample of 15 companies was obtained using purposive sampling, a sample collection technique based on particular considerations. The data used in this research are secondary data derived from the ITB Stock Exchange Corner and the official website www.idx.co.id, namely the financial statements of food and beverages and textile sector companies listed on the Stock Exchange. The t tests of the multiple linear regression showed that profitability and the debt-to-equity ratio affect the timeliness of financial reporting, while company size does not. The F test shows that, taken together, company size, profitability and the debt-to-equity ratio affect the timeliness of financial reporting. Keywords: debt to equity ratio, timeliness, firm size, profitability, financial reporting.

  11. Hardware accuracy counters for application precision and quality feedback

    Science.gov (United States)

    de Paula Rosa Piga, Leonardo; Majumdar, Abhinandan; Paul, Indrani; Huang, Wei; Arora, Manish; Greathouse, Joseph L.

    2018-06-05

    Methods, devices, and systems for capturing an accuracy of an instruction executing on a processor. An instruction may be executed on the processor, and the accuracy of the instruction may be captured using a hardware counter circuit. The accuracy of the instruction may be captured by analyzing bits of at least one value of the instruction to determine a minimum or maximum precision datatype for representing the field, and determining whether to adjust a value of the hardware counter circuit accordingly. The representation may be output to a debugger or logfile for use by a developer, or may be output to a runtime or virtual machine to automatically adjust instruction precision or gating of portions of the processor datapath.
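
    A software analogue of the counting idea, tallying for each operand value the narrowest floating-point type that would have represented it exactly, can be sketched as follows; the function name and the 16/32/64-bit buckets are illustrative choices, not the patented hardware design:

      import numpy as np

      def min_float_precision(value):
          """Smallest IEEE float width that represents `value` exactly (16, 32 or 64)."""
          if float(np.float16(value)) == value:
              return 16
          if float(np.float32(value)) == value:
              return 32
          return 64

      # Software analogue of hardware accuracy counters: tally, per operand value,
      # the narrowest datatype that would have sufficed.
      counters = {16: 0, 32: 0, 64: 0}
      for v in [0.5, 1.0 / 3.0, 1024.0, 3.140625, 1e-40]:
          counters[min_float_precision(v)] += 1
      print(counters)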

  12. Accuracy Constraint Determination in Fixed-Point System Design

    Directory of Open Access Journals (Sweden)

    Serizel R

    2008-01-01

    Full Text Available Most digital signal processing applications are specified and designed with floating-point arithmetic but are finally implemented using fixed-point architectures. Thus, the design flow requires a floating-point to fixed-point conversion stage which optimizes the implementation cost under execution time and accuracy constraints. This accuracy constraint is linked to the application performance, and its determination is one of the key issues of the conversion process. In this paper, a method is proposed to determine the accuracy constraint from the application performance. The fixed-point system is modeled with an infinite-precision version of the system and a single noise source located at the system output. Then, an iterative approach for optimizing the fixed-point specification under the application performance constraint is defined and detailed. Finally, the efficiency of our approach is demonstrated by experiments on an MP3 encoder.
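
    The single-output-noise-source model mentioned is usually built on the standard rounding-noise assumption that a quantization step q contributes noise power q²/12. A sketch of how an accuracy (SNR) constraint then translates into a minimum number of fractional bits; the 60 dB target and the signal power are assumptions for illustration:

      import math

      def quantization_snr_db(signal_power, n_frac_bits):
          """SNR (dB) of the infinite-precision signal against a single additive
          quantization noise source of variance q^2 / 12, with q = 2**-n_frac_bits
          (rounding model for a fixed-point implementation)."""
          q = 2.0 ** (-n_frac_bits)
          noise_power = q * q / 12.0
          return 10.0 * math.log10(signal_power / noise_power)

      # Choosing the word length that first meets a hypothetical 60 dB accuracy constraint.
      signal_power = 0.1  # assumed output signal power
      for bits in range(6, 16):
          snr = quantization_snr_db(signal_power, bits)
          if snr >= 60.0:
              print(f"{bits} fractional bits give {snr:.1f} dB >= 60 dB constraint")
              break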

  13. Phenomenological reports diagnose accuracy of eyewitness identification decisions.

    Science.gov (United States)

    Palmer, Matthew A; Brewer, Neil; McKinnon, Anna C; Weber, Nathan

    2010-02-01

    This study investigated whether measuring the phenomenology of eyewitness identification decisions aids evaluation of their accuracy. Witnesses (N=502) viewed a simulated crime and attempted to identify two targets from lineups. A divided attention manipulation during encoding reduced the rate of remember (R) correct identifications, but not the rates of R foil identifications or know (K) judgments in the absence of recollection (i.e., K/[1-R]). Both RK judgments and recollection ratings (a novel measure of graded recollection) distinguished correct from incorrect positive identifications. However, only recollection ratings improved accuracy evaluation after identification confidence was taken into account. These results provide evidence that RK judgments for identification decisions function in a similar way as for recognition decisions; are consistent with the notion of graded recollection; and indicate that measures of phenomenology can enhance the evaluation of identification accuracy. Copyright 2009 Elsevier B.V. All rights reserved.

  14. Value Driven Outcomes (VDO): a pragmatic, modular, and extensible software framework for understanding and improving health care costs and outcomes.

    Science.gov (United States)

    Kawamoto, Kensaku; Martin, Cary J; Williams, Kip; Tu, Ming-Chieh; Park, Charlton G; Hunter, Cheri; Staes, Catherine J; Bray, Bruce E; Deshmukh, Vikrant G; Holbrook, Reid A; Morris, Scott J; Fedderson, Matthew B; Sletta, Amy; Turnbull, James; Mulvihill, Sean J; Crabtree, Gordon L; Entwistle, David E; McKenna, Quinn L; Strong, Michael B; Pendleton, Robert C; Lee, Vivian S

    2015-01-01

    To develop expeditiously a pragmatic, modular, and extensible software framework for understanding and improving healthcare value (costs relative to outcomes). In 2012, a multidisciplinary team was assembled by the leadership of the University of Utah Health Sciences Center and charged with rapidly developing a pragmatic and actionable analytics framework for understanding and enhancing healthcare value. Based on an analysis of relevant prior work, a value analytics framework known as Value Driven Outcomes (VDO) was developed using an agile methodology. Evaluation consisted of measurement against project objectives, including implementation timeliness, system performance, completeness, accuracy, extensibility, adoption, satisfaction, and the ability to support value improvement. A modular, extensible framework was developed to allocate clinical care costs to individual patient encounters. For example, labor costs in a hospital unit are allocated to patients based on the hours they spent in the unit; actual medication acquisition costs are allocated to patients based on utilization; and radiology costs are allocated based on the minutes required for study performance. Relevant process and outcome measures are also available. A visualization layer facilitates the identification of value improvement opportunities, such as high-volume, high-cost case types with high variability in costs across providers. Initial implementation was completed within 6 months, and all project objectives were fulfilled. The framework has been improved iteratively and is now a foundational tool for delivering high-value care. The framework described can be expeditiously implemented to provide a pragmatic, modular, and extensible approach to understanding and improving healthcare value. © The Author 2014. Published by Oxford University Press on behalf of the American Medical Informatics Association.
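
    The allocation rule described, spreading a unit's cost over encounters in proportion to a utilization driver such as hours spent in the unit, is simple to state concretely. A toy sketch with invented figures (not the VDO implementation):

      # Toy illustration of activity-based cost allocation: a unit's labor cost
      # for a period is spread over patient encounters in proportion to the
      # hours each encounter spent in the unit. All figures are invented.
      unit_labor_cost = 50_000.0  # total unit labor cost for the period (USD)
      hours_in_unit = {"enc-001": 36.0, "enc-002": 12.0, "enc-003": 72.0}

      total_hours = sum(hours_in_unit.values())
      allocated = {enc: unit_labor_cost * h / total_hours for enc, h in hours_in_unit.items()}
      for enc, cost in allocated.items():
          print(f"{enc}: ${cost:,.2f}")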

  15. Post-encoding control of working memory enhances processing of relevant information in rhesus monkeys (Macaca mulatta).

    Science.gov (United States)

    Brady, Ryan J; Hampton, Robert R

    2018-06-01

    Working memory is a system by which a limited amount of information can be kept available for processing after the cessation of sensory input. Because working memory resources are limited, it is adaptive to focus processing on the most relevant information. We used a retro-cue paradigm to determine the extent to which monkey working memory possesses control mechanisms that focus processing on the most relevant representations. Monkeys saw a sample array of images, and shortly after the array disappeared, they were visually cued to a location that had been occupied by one of the sample images. The cue indicated which image should be remembered for the upcoming recognition test. By determining whether the monkeys were more accurate and quicker to respond to cued images compared to un-cued images, we tested the hypothesis that monkey working memory focuses processing on relevant information. We found a memory benefit for the cued image in terms of accuracy and retrieval speed with a memory load of two images. With a memory load of three images, we found a benefit in retrieval speed but only after shortening the onset latency of the retro-cue. Our results demonstrate previously unknown flexibility in the cognitive control of memory in monkeys, suggesting that control mechanisms in working memory likely evolved in a common ancestor of humans and monkeys more than 32 million years ago. Future work should be aimed at understanding the interaction between memory load and the ability to control memory resources, and the role of working memory control in generating differences in cognitive capacity among primates. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Hybrid Radar Emitter Recognition Based on Rough k-Means Classifier and Relevance Vector Machine

    Science.gov (United States)

    Yang, Zhutian; Wu, Zhilu; Yin, Zhendong; Quan, Taifan; Sun, Hongjian

    2013-01-01

    Due to the increasing complexity of electromagnetic signals, there exists a significant challenge for recognizing radar emitter signals. In this paper, a hybrid recognition approach is presented that classifies radar emitter signals by exploiting the different separability of samples. The proposed approach comprises two steps, namely the primary signal recognition and the advanced signal recognition. In the former step, a novel rough k-means classifier, which comprises three regions, i.e., certain area, rough area and uncertain area, is proposed to cluster the samples of radar emitter signals. In the latter step, the samples within the rough boundary are used to train the relevance vector machine (RVM). Then RVM is used to recognize the samples in the uncertain area; therefore, the classification accuracy is improved. Simulation results show that, for recognizing radar emitter signals, the proposed hybrid recognition approach is more accurate, and presents lower computational complexity than traditional approaches. PMID:23344380
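
    The two-step structure, a coarse clustering that resolves clearly separable samples and a kernel classifier that refines the ambiguous ones, can be sketched as below. This is a simplified stand-in: scikit-learn's SVC substitutes for the relevance vector machine, and a margin-based certainty rule approximates the certain/rough/uncertain regions of the paper:

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.datasets import make_blobs
      from sklearn.svm import SVC

      # Synthetic stand-in for radar emitter feature vectors.
      X, y = make_blobs(n_samples=600, centers=3, cluster_std=2.0, random_state=0)

      # Stage 1: k-means as the coarse (rough) partition. A sample is treated as
      # "certain" if its closest centroid is much closer than the second closest.
      km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
      d = np.sort(km.transform(X), axis=1)          # distances to centroids, sorted
      certain = d[:, 0] < 0.6 * d[:, 1]             # margin-based certainty rule

      # Stage 2: ambiguous samples are handed to a kernel classifier trained on the
      # labeled certain samples (SVC stands in here for the relevance vector machine).
      clf = SVC(kernel="rbf", gamma="scale").fit(X[certain], y[certain])
      refined = clf.predict(X[~certain])
      print(f"{certain.sum()} samples resolved by clustering, "
            f"{refined.size} refined by the second-stage classifier")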

  17. Accuracy of magnetic resonance in identifying traumatic intraarticular knee lesions

    International Nuclear Information System (INIS)

    Vaz, Carlos Eduardo Sanches; Camargo, Olavo Pires de; Santana, Paulo Jose de; Valezi, Antonio Carlos

    2005-01-01

    Purpose: To evaluate the diagnostic accuracy of magnetic resonance imaging of the knee in identifying traumatic intraarticular knee lesions. Method: 300 patients with a clinical diagnosis of traumatic intraarticular knee lesions underwent prearthoscopic magnetic resonance imaging. The sensitivity, specificity, positive predictive value, negative predictive value, likelihood ratio for a positive test, likelihood ratio for a negative test, and accuracy of magnetic resonance imaging were calculated relative to the findings during arthroscopy in the studied structures of the knee (medial meniscus, lateral meniscus, anterior cruciate ligament, posterior cruciate ligament, and articular cartilage). Results: Magnetic resonance imaging produced the following results regarding detection of lesions: medial meniscus: sensitivity 97.5%, specificity 92.9%, positive predictive value 93.9%, negative predictive value 97%, likelihood positive ratio 13.7, likelihood negative ratio 0.02, and accuracy 95.3%; lateral meniscus: sensitivity 91.9%, specificity 93.6%, positive predictive value 92.7%, negative predictive value 92.9%, likelihood positive ratio 14.3, likelihood negative ratio 0.08, and accuracy 93.6%; anterior cruciate ligament: sensitivity 99.0%, specificity 95.9%, positive predictive value 91.9%, negative predictive value 99.5%, likelihood positive ratio 21.5, likelihood negative ratio 0.01, and accuracy 96.6%; posterior cruciate ligament: sensitivity 100%, specificity 99%, positive predictive value 80.0%, negative predictive value 100%, likelihood positive ratio 100, likelihood negative ratio 0.01, and accuracy 99.6%; articular cartilage: sensitivity 76.1%, specificity 94.9%, positive predictive value 94.7%, negative predictive value 76.9%, likelihood positive ratio 14.9, likelihood negative ratio 0.25, and accuracy 84.6%. Conclusion: Magnetic resonance imaging is a satisfactory diagnostic tool for evaluating meniscal and ligamentous lesions of the knee, but it is unable to clearly
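
    Figures such as those above follow from a 2x2 table of imaging findings against the arthroscopic reference standard. A sketch with invented counts (not the study's data):

      def diagnostic_metrics(tp, fp, fn, tn):
          """Standard accuracy statistics from a 2x2 table (index test vs reference)."""
          sens = tp / (tp + fn)
          spec = tn / (tn + fp)
          ppv = tp / (tp + fp)
          npv = tn / (tn + fn)
          lr_pos = sens / (1 - spec)
          lr_neg = (1 - sens) / spec
          acc = (tp + tn) / (tp + fp + fn + tn)
          return sens, spec, ppv, npv, lr_pos, lr_neg, acc

      # Hypothetical counts for one structure (MRI vs arthroscopy as reference standard).
      sens, spec, ppv, npv, lrp, lrn, acc = diagnostic_metrics(tp=118, fp=8, fn=3, tn=171)
      print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, PPV {ppv:.1%}, NPV {npv:.1%}")
      print(f"LR+ {lrp:.1f}, LR- {lrn:.2f}, accuracy {acc:.1%}")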

  18. Accuracy of magnetic resonance in identifying traumatic intraarticular knee lesions

    Directory of Open Access Journals (Sweden)

    Vaz Carlos Eduardo Sanches

    2005-01-01

    Full Text Available PURPOSE: To evaluate the diagnostic accuracy of magnetic resonance imaging of the knee in identifying traumatic intraarticular knee lesions. METHOD: 300 patients with a clinical diagnosis of traumatic intraarticular knee lesions underwent prearthoscopic magnetic resonance imaging. The sensitivity, specificity, positive predictive value, negative predictive value, likelihood ratio for a positive test, likelihood ratio for a negative test, and accuracy of magnetic resonance imaging were calculated relative to the findings during arthroscopy in the studied structures of the knee (medial meniscus, lateral meniscus, anterior cruciate ligament, posterior cruciate ligament, and articular cartilage). RESULTS: Magnetic resonance imaging produced the following results regarding detection of lesions: medial meniscus: sensitivity 97.5%, specificity 92.9%, positive predictive value 93.9%, negative predictive value 97%, likelihood positive ratio 13.7, likelihood negative ratio 0.02, and accuracy 95.3%; lateral meniscus: sensitivity 91.9%, specificity 93.6%, positive predictive value 92.7%, negative predictive value 92.9%, likelihood positive ratio 14.3, likelihood negative ratio 0.08, and accuracy 93.6%; anterior cruciate ligament: sensitivity 99.0%, specificity 95.9%, positive predictive value 91.9%, negative predictive value 99.5%, likelihood positive ratio 21.5, likelihood negative ratio 0.01, and accuracy 96.6%; posterior cruciate ligament: sensitivity 100%, specificity 99%, positive predictive value 80.0%, negative predictive value 100%, likelihood positive ratio 100, likelihood negative ratio 0.01, and accuracy 99.6%; articular cartilage: sensitivity 76.1%, specificity 94.9%, positive predictive value 94.7%, negative predictive value 76.9%, likelihood positive ratio 14.9, likelihood negative ratio 0.25, and accuracy 84.6%. CONCLUSION: Magnetic resonance imaging is a satisfactory diagnostic tool for evaluating meniscal and ligamentous lesions of the knee, but it is

  19. Radioactivity analysis of food and accuracy control

    International Nuclear Information System (INIS)

    Ota, Tomoko

    2013-01-01

    Following the detection of radioactive substances in foods such as agricultural, livestock and marine products after the accident at Tokyo Electric Power Company's Fukushima Daiichi Nuclear Power Station, the Ministry of Health, Labour and Welfare stipulated new standards for radioactive cesium in general foods, replacing the previous interim standards. Various institutions began to measure radioactivity on the basis of this instruction, but the reliability of the resulting data emerged as a new challenge. Accuracy control, which objectively demonstrates that data quality is maintained at an appropriate level, is therefore important. To implement quality management activities on an ongoing basis, each inspection agency needs to build an accuracy control system. This paper introduces, as a new attempt, a support service for establishing such an accuracy control system. The service is offered jointly by three organizations: TUV Rheinland Japan Ltd., Japan Frozen Foods Inspection Corporation, and the Japan Chemical Analysis Center. It consists of training for radioactivity measurement practitioners, proficiency testing for radioactive substance measurement, and personal certification. (O.A.)

  20. Impacts of land use/cover classification accuracy on regional climate simulations

    Science.gov (United States)

    Ge, Jianjun; Qi, Jiaguo; Lofgren, Brent M.; Moore, Nathan; Torbick, Nathan; Olson, Jennifer M.

    2007-03-01

    Land use/cover change has been recognized as a key component in global change. Various land cover data sets, including historically reconstructed, recently observed, and future projected, have been used in numerous climate modeling studies at regional to global scales. However, little attention has been paid to the effect of land cover classification accuracy on climate simulations, though accuracy assessment has become a routine procedure in the land cover production community. In this study, we analyzed the behavior of simulated precipitation in the Regional Atmospheric Modeling System (RAMS) over a range of simulated classification accuracies over a 3 month period. This study found that land cover accuracy under 80% had a strong effect on precipitation, especially when the land surface exerted greater control over the atmosphere. This effect became stronger as the accuracy decreased. As shown in three follow-on experiments, the effect was further influenced by model parameterizations such as convection schemes and interior nudging, which can mitigate the strength of surface boundary forcings. In reality, land cover accuracy rarely attains the commonly recommended 85% target. Its effect on climate simulations should therefore be considered, especially when historically reconstructed and future projected land covers are employed.
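
    One simple way to obtain land-cover maps with controlled classification accuracy, in the spirit of the "range of simulated classification accuracies" used here (though not necessarily the authors' exact procedure), is to reassign a matching fraction of pixels to a different class:

      import numpy as np

      rng = np.random.default_rng(0)

      def degrade_land_cover(lc_map, target_accuracy, n_classes):
          """Return a copy of a land-cover map whose agreement with the original
          equals `target_accuracy`, by reassigning random pixels to another class."""
          degraded = lc_map.copy()
          n_pix = lc_map.size
          n_wrong = int(round((1.0 - target_accuracy) * n_pix))
          idx = rng.choice(n_pix, size=n_wrong, replace=False)
          flat = degraded.ravel()
          # shift each chosen pixel to a different class (mod n_classes)
          flat[idx] = (flat[idx] + rng.integers(1, n_classes, size=n_wrong)) % n_classes
          return degraded

      original = rng.integers(0, 5, size=(100, 100))   # 5 hypothetical cover classes
      for acc in (0.9, 0.8, 0.7):
          degraded = degrade_land_cover(original, acc, n_classes=5)
          realised = np.mean(degraded == original)
          print(f"target accuracy {acc:.2f} -> realised {realised:.3f}")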