WorldWideScience

Sample records for total scoring algorithms

  1. Algorithm improvement program nuclide identification algorithm scoring criteria and scoring application.

    Energy Technology Data Exchange (ETDEWEB)

    Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
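
    The report's specific equations, weighting factors, and equivalency tables are not reproduced in this record; the following Python sketch only illustrates the general shape of a weighted identification score, with every weight and equivalency purely hypothetical:

        # Hypothetical weighted-scoring sketch; the real AIP equations, nuclide
        # weights, equivalencies, and configuration weights live in the report.
        NUCLIDE_WEIGHTS = {"Cs-137": 1.0, "Co-60": 1.0, "HEU": 2.0}  # hypothetical
        EQUIVALENT = {"U-235": "HEU"}                                # hypothetical

        def score_identification(truth, reported, config_weight=1.0):
            """Fraction of weighted truth nuclides correctly reported."""
            reported = {EQUIVALENT.get(n, n) for n in reported}
            hits = sum(NUCLIDE_WEIGHTS[n] for n in truth if n in reported)
            possible = sum(NUCLIDE_WEIGHTS[n] for n in truth)
            return config_weight * hits / possible

        print(score_identification({"Cs-137", "HEU"}, {"Cs-137", "U-235"}))  # 1.0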

  2. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  3. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  4. BitPAl: a bit-parallel, general integer-scoring sequence alignment algorithm.

    Science.gov (United States)

    Loving, Joshua; Hernandez, Yozen; Benson, Gary

    2014-11-15

    Mapping of high-throughput sequencing data and other bulk sequence comparison applications have motivated a search for high-efficiency sequence alignment algorithms. The bit-parallel approach represents individual cells in an alignment scoring matrix as bits in computer words and emulates the calculation of scores by a series of logic operations composed of AND, OR, XOR, complement, shift and addition. Bit-parallelism has been successfully applied to the longest common subsequence (LCS) and edit-distance problems, producing fast algorithms in practice. We have developed BitPAl, a bit-parallel algorithm for general, integer-scoring global alignment. Integer-scoring schemes assign integer weights for match, mismatch and insertion/deletion. The BitPAl method uses structural properties in the relationship between adjacent scores in the scoring matrix to construct classes of efficient algorithms, each designed for a particular set of weights. In timed tests, we show that BitPAl runs 7-25 times faster than a standard iterative algorithm. Source code is freely available for download at http://lobstah.bu.edu/BitPAl/BitPAl.html. BitPAl is implemented in C and runs on all major operating systems. jloving@bu.edu or yhernand@bu.edu or gbenson@bu.edu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
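
    The abstract does not reproduce BitPAl's integer-scoring recurrences, so as a flavor of the bit-parallel style it generalizes, here is a sketch of the classic Myers (1999) bit-vector algorithm for unit-cost edit distance, which encodes each column of the scoring matrix as vertical +1/-1 delta bit vectors:

        def myers_edit_distance(pattern, text):
            """Unit-cost edit distance via Myers' bit-vector recurrences."""
            m = len(pattern)
            peq = {}                                 # match bitmask per symbol
            for i, c in enumerate(pattern):
                peq[c] = peq.get(c, 0) | (1 << i)
            pv, mv, score = (1 << m) - 1, 0, m       # vertical +1/-1 vectors
            mask, all_ones = 1 << (m - 1), (1 << m) - 1
            for c in text:
                eq = peq.get(c, 0)
                xv = eq | mv
                xh = (((eq & pv) + pv) ^ pv) | eq
                ph = mv | ~(xh | pv)                 # horizontal +1 deltas
                mh = pv & xh                         # horizontal -1 deltas
                if ph & mask:
                    score += 1
                elif mh & mask:
                    score -= 1
                ph = (ph << 1) | 1                   # boundary: first row is 0,1,2,...
                pv = ((mh << 1) | ~(xv | ph)) & all_ones
                mv = ph & xv
            return score

        print(myers_edit_distance("kitten", "sitting"))  # 3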

  5. The novel EuroSCORE II algorithm predicts the hospital mortality of thoracic aortic surgery in 461 consecutive Japanese patients better than both the original additive and logistic EuroSCORE algorithms.

    Science.gov (United States)

    Nishida, Takahiro; Sonoda, Hiromichi; Oishi, Yasuhisa; Tanoue, Yoshihisa; Nakashima, Atsuhiro; Shiokawa, Yuichi; Tominaga, Ryuji

    2014-04-01

    The European System for Cardiac Operative Risk Evaluation (EuroSCORE) II was developed to improve the overestimation of surgical risk associated with the original (additive and logistic) EuroSCOREs. The purpose of this study was to evaluate the significance of the EuroSCORE II by comparing its performance with that of the original EuroSCOREs in Japanese patients undergoing surgery on the thoracic aorta. We have calculated the predicted mortalities according to the additive EuroSCORE, logistic EuroSCORE and EuroSCORE II algorithms in 461 patients who underwent surgery on the thoracic aorta during a period of 20 years (1993-2013). The actual in-hospital mortality rates in the low- (additive EuroSCORE of 3-6), moderate- (7-11) and high-risk (≥11) groups (followed by overall mortality) were 1.3, 6.2 and 14.4% (7.2% overall), respectively. Among the three different risk groups, the expected mortality rates were 5.5 ± 0.6, 9.1 ± 0.7 and 13.5 ± 0.2% (9.5 ± 0.1% overall) by the additive EuroSCORE algorithm, 5.3 ± 0.1, 16 ± 0.4 and 42.4 ± 1.3% (19.9 ± 0.7% overall) by the logistic EuroSCORE algorithm and 1.6 ± 0.1, 5.2 ± 0.2 and 18.5 ± 1.3% (7.4 ± 0.4% overall) by the EuroSCORE II algorithm, indicating poor prediction (P < …). The areas under the receiver operating characteristic curves of the three algorithms were 0.6937, 0.7169 and 0.7697, respectively. Thus, the mortality expected by the EuroSCORE II more closely matched the actual mortality in all three risk groups. In contrast, the mortality expected by the logistic EuroSCORE overestimated the risks in the moderate- (P = 0.0002) and high-risk (P < 0.0001) patient groups. Although all of the original EuroSCOREs and EuroSCORE II appreciably predicted the surgical mortality for thoracic aortic surgery in Japanese patients, the EuroSCORE II best predicted the mortalities in all risk groups.

  6. Computerized scoring algorithms for the Autobiographical Memory Test.

    Science.gov (United States)

    Takano, Keisuke; Gutenbrunner, Charlotte; Martens, Kris; Salmon, Karen; Raes, Filip

    2018-02-01

    Reduced specificity of autobiographical memories is a hallmark of depressive cognition. Autobiographical memory (AM) specificity is typically measured by the Autobiographical Memory Test (AMT), in which respondents are asked to describe personal memories in response to emotional cue words. Due to this free descriptive responding format, the AMT relies on experts' hand scoring for subsequent statistical analyses. This manual coding potentially impedes research activities in big data analytics such as large epidemiological studies. Here, we propose computerized algorithms to automatically score AM specificity for the Dutch (adult participants) and English (youth participants) versions of the AMT by using natural language processing and machine learning techniques. The algorithms showed reliable performance in discriminating specific and nonspecific (e.g., overgeneralized) autobiographical memories in independent testing data sets (area under the receiver operating characteristic curve > .90). Furthermore, outcome values of the algorithms (i.e., decision values of support vector machines) showed a gradient across similar (e.g., specific and extended memories) and different (e.g., specific memory and semantic associates) categories of AMT responses, suggesting that, for both adults and youth, the algorithms well capture the extent to which a memory has features of specific memories. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
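
    The trained models and feature sets are described in the paper rather than in this abstract; a minimal sketch of the general recipe (bag-of-words features feeding a linear support vector machine, with toy strings standing in for expert-coded AMT responses) might look like this in scikit-learn:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # Toy stand-ins for coded AMT responses: 1 = specific, 0 = nonspecific.
        texts = ["the day I graduated last June",
                 "whenever I visit my parents",
                 "my first driving lesson on a rainy Tuesday",
                 "holidays in general"]
        labels = [1, 0, 1, 0]

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
        clf.fit(texts, labels)
        # The signed distance to the decision boundary plays the role of the
        # graded "decision value" mentioned in the abstract.
        print(clf.decision_function(["the morning my daughter was born"]))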

  7. Generalization of the Lord-Wingersky Algorithm to Computing the Distribution of Summed Test Scores Based on Real-Number Item Scores

    Science.gov (United States)

    Kim, Seonghoon

    2013-01-01

    With known item response theory (IRT) item parameters, Lord and Wingersky provided a recursive algorithm for computing the conditional frequency distribution of number-correct test scores, given proficiency. This article presents a generalized algorithm for computing the conditional distribution of summed test scores involving real-number item…
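
    For dichotomous items the original recursion is compact enough to state directly; a sketch (the article's extension to real-number item scores generalizes this bookkeeping):

        def lord_wingersky(p):
            """Conditional distribution of the number-correct score at a fixed
            proficiency, given each item's correct-response probability p[j]."""
            dist = [1.0]                      # P(score = 0) before any items
            for pj in p:
                new = [0.0] * (len(dist) + 1)
                for s, prob in enumerate(dist):
                    new[s] += prob * (1 - pj)      # item answered incorrectly
                    new[s + 1] += prob * pj        # item answered correctly
                dist = new
            return dist

        print(lord_wingersky([0.8, 0.6, 0.4]))  # four probabilities summing to 1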

  8. Highlights of TOMS Version 9 Total Ozone Algorithm

    Science.gov (United States)

    Bhartia, Pawan; Haffner, David

    2012-01-01

    The fundamental basis of the TOMS total ozone algorithm was developed some 45 years ago by Dave and Mateer. It was designed to estimate total ozone from satellite measurements of the backscattered UV radiances at a few discrete wavelengths in the Huggins ozone absorption band (310-340 nm). Over the years, as the need for higher accuracy in measuring total ozone from space has increased, several improvements to the basic algorithms have been made. They include: better correction for the effects of aerosols and clouds, an improved method to account for the variation in shape of ozone profiles with season, latitude, and total ozone, and a multi-wavelength correction for remaining profile shape errors. These improvements have made it possible to retrieve total ozone with just 3 spectral channels of moderate spectral resolution (approx. 1 nm) with accuracy comparable to state-of-the-art spectral fitting algorithms like DOAS that require high spectral resolution measurements at a large number of wavelengths. One of the deficiencies of the TOMS algorithm has been that it doesn't provide an error estimate. This is a particular problem in high latitudes when the profile shape errors become significant and vary with latitude, season, total ozone, and instrument viewing geometry. The primary objective of the TOMS V9 algorithm is to account for these effects in estimating the error bars. This is done by a straightforward implementation of the Rodgers optimum estimation method using a priori ozone profiles and their error covariance matrices constructed using Aura MLS and ozonesonde data. The algorithm produces a vertical ozone profile that contains 1-2.5 pieces of information (degrees of freedom of signal) depending upon solar zenith angle (SZA). The profile is integrated to obtain the total column. We provide information that shows the altitude range in which the profile is best determined by the measurements. One can use this information in data assimilation and analysis. A side
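
    The abstract stops short of the estimator itself; in Rodgers' standard linear formulation (a sketch, with $K$ the Jacobian and $S_a$, $S_{\varepsilon}$ the a priori and measurement error covariances) the retrieval and its averaging kernel are

        \hat{x} = x_a + \left(K^{T} S_{\varepsilon}^{-1} K + S_a^{-1}\right)^{-1} K^{T} S_{\varepsilon}^{-1} \left(y - K x_a\right),
        \qquad
        A = \left(K^{T} S_{\varepsilon}^{-1} K + S_a^{-1}\right)^{-1} K^{T} S_{\varepsilon}^{-1} K,

    where $\operatorname{trace}(A)$ gives the degrees of freedom for signal (the 1-2.5 pieces of information quoted above).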

  9. A locally adapted functional outcome measurement score for total

    African Journals Online (AJOL)

    Results and success of total hip arthroplasty are often measured using a functional outcome scoring system. Most current scores were developed in Europe and North America (1-3). During the evaluation of a Total Hip Replacement (THR) project in Ouagadougou, Burkina Faso (4) it was felt that these scores were not.

  10. A Newton Algorithm for Multivariate Total Least Squares Problems

    Directory of Open Access Journals (Sweden)

    WANG Leyang

    2016-04-01

    Full Text Available In order to improve the calculation efficiency of parameter estimation, an algorithm for multivariate weighted total least squares adjustment based on the Newton method is derived. The relationship between the solution of this algorithm and that of multivariate weighted total least squares adjustment based on the Lagrange multipliers method is analyzed. According to the propagation of cofactors, 16 computational formulae for the cofactor matrices of multivariate total least squares adjustment are also listed. The new algorithm can solve adjustment problems containing correlation between the observation matrix and the coefficient matrix, and it can also handle their stochastic and deterministic elements with only one cofactor matrix. The results illustrate that the Newton algorithm for multivariate total least squares problems is practical and has a higher convergence rate.
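
    The Newton iteration itself is not given in the abstract; for orientation, the basic unweighted total least squares problem already has a closed-form SVD solution (Golub-Van Loan), which the paper's weighted, correlated, multivariate setting generalizes:

        import numpy as np

        def tls(A, y):
            """Basic total least squares for A x ~ y with errors in both A and
            y: take the right singular vector of [A | y] with smallest singular
            value and renormalize by its last component."""
            n = A.shape[1]
            _, _, Vt = np.linalg.svd(np.hstack([A, y.reshape(-1, 1)]))
            v = Vt[-1]
            return -v[:n] / v[n]

        A = np.array([[1.0, 1.0], [2.0, 1.1], [3.0, 0.9], [4.2, 1.0]])
        y = np.array([2.1, 3.0, 3.9, 5.2])
        print(tls(A, y))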

  11. Can We Train Machine Learning Methods to Outperform the High-dimensional Propensity Score Algorithm?

    Science.gov (United States)

    Karim, Mohammad Ehsanul; Pang, Menglan; Platt, Robert W

    2018-03-01

    The use of retrospective health care claims datasets is frequently criticized for the lack of complete information on potential confounders. Utilizing patients' health status-related information from claims datasets as surrogates or proxies for mismeasured and unobserved confounders, the high-dimensional propensity score algorithm enables us to reduce bias. Using a previously published cohort study of postmyocardial infarction statin use (1998-2012), we compare the performance of the algorithm with a number of popular machine learning approaches for confounder selection in high-dimensional covariate spaces: random forest, least absolute shrinkage and selection operator, and elastic net. Our results suggest that, when the data analysis is done with epidemiologic principles in mind, machine learning methods perform as well as the high-dimensional propensity score algorithm. Using a plasmode framework that mimicked the empirical data, we also showed that a hybrid of machine learning and high-dimensional propensity score algorithms generally performs slightly better than either in terms of mean squared error, when a bias-based analysis is used.
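
    As an illustration of one of the compared approaches, a lasso-type propensity model can be fitted with scikit-learn as below; the data here are synthetic placeholders, and the hdPS step of generating and ranking empirical covariates from claims codes is omitted:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 200))        # high-dimensional proxy covariates
        treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # toy assignment

        # The L1 penalty performs confounder selection while fitting the score.
        ps_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
        ps_model.fit(X, treat)
        propensity = ps_model.predict_proba(X)[:, 1]
        print((ps_model.coef_ != 0).sum(), "covariates retained")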

  12. CERAD Neuropsychological Total Scores Reflect Cortical Thinning in Prodromal Alzheimer's Disease

    Directory of Open Access Journals (Sweden)

    T. Paajanen

    2013-11-01

    Full Text Available Background: Sensitive cognitive global scores are beneficial in screening and monitoring for prodromal Alzheimer's disease (AD). Early cortical changes provide a novel opportunity for validating established cognitive total scores against the biological disease markers. Methods: We examined how two different total scores of the Consortium to Establish a Registry for Alzheimer's Disease (CERAD) battery and the Mini-Mental State Examination (MMSE) are associated with cortical thickness (CTH) in mild cognitive impairment (MCI) and prodromal AD. Cognitive and magnetic resonance imaging (MRI) data of 22 progressive MCI, 78 stable MCI, and 98 control subjects, and MRI data of 103 AD patients of the prospective multicenter study were analyzed. Results: CERAD total scores correlated with mean CTH more strongly (r = 0.34-0.38, p < …) than the MMSE. Conclusion: CERAD total scores are sensitive to the CTH signature of prodromal AD, which supports their biological validity in detecting early disease-related cognitive changes.

  13. Two-step calibration method for multi-algorithm score-based face recognition systems by minimizing discrimination loss

    NARCIS (Netherlands)

    Susyanto, N.; Veldhuis, R.N.J.; Spreeuwers, L.J.; Klaassen, C.A.J.; Fierrez, J.; Li, S.Z.; Ross, A.; Veldhuis, R.; Alonso-Fernandez, F.; Bigun, J.

    2016-01-01

    We propose a new method for combining multi-algorithm score-based face recognition systems, which we call the two-step calibration method. Typically, algorithms for face recognition systems produce dependent scores. The two-step method is based on parametric copulas to handle this dependence. Its

  14. Specific algorithm method of scoring the Clock Drawing Test applied in cognitively normal elderly

    Directory of Open Access Journals (Sweden)

    Liana Chaves Mendes-Santos

    Full Text Available The Clock Drawing Test (CDT) is an inexpensive, fast and easily administered measure of cognitive function, especially in the elderly. This instrument is a popular clinical tool widely used in screening for cognitive disorders and dementia. The CDT can be applied in different ways and scoring procedures also vary. OBJECTIVE: The aims of this study were to analyze the performance of elderly on the CDT and evaluate inter-rater reliability of the CDT scored by using a specific algorithm method adapted from Sunderland et al. (1989). METHODS: We analyzed the CDT of 100 cognitively normal elderly aged 60 years or older. The CDT ("free-drawn") and Mini-Mental State Examination (MMSE) were administered to all participants. Six independent examiners scored the CDT of 30 participants to evaluate inter-rater reliability. RESULTS AND CONCLUSION: A score of 5 on the proposed algorithm ("Numbers in reverse order or concentrated"), equivalent to 5 points on the original Sunderland scale, was the most frequent (53.5%). The CDT specific algorithm method used had high inter-rater reliability (p<0.01), and the mean score ranged from 5.06 to 5.96. The high frequency of an overall score of 5 points may suggest the need to create more nuanced evaluation criteria, which are sensitive to differences in levels of impairment in visuoconstructive and executive abilities during aging.

  15. Chambolle's Projection Algorithm for Total Variation Denoising

    Directory of Open Access Journals (Sweden)

    Joan Duran

    2013-12-01

    Full Text Available Denoising is the problem of removing the inherent noise from an image. The standard noise model is additive white Gaussian noise, where the observed image f is related to the underlying true image u by the degradation model f=u+n, and n is assumed to be independently and identically distributed at each pixel as a zero-mean Gaussian random variable. Since this is an ill-posed problem, Rudin, Osher and Fatemi introduced the total variation as a regularizing term. It has proved to be quite efficient for regularizing images without smoothing the boundaries of the objects. This paper focuses on the simple description of the theory and on the implementation of Chambolle's projection algorithm for minimizing the total variation of a grayscale image. Furthermore, we adapt the algorithm to the vectorial total variation for color images. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation.
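
    A compact NumPy rendering of the grayscale case may be helpful; this sketch implements Chambolle's fixed-point projection iteration for min_u TV(u) + (1/2λ)||u - f||², with periodic boundaries via np.roll standing in for the usual boundary handling:

        import numpy as np

        def grad(u):
            return np.roll(u, -1, axis=1) - u, np.roll(u, -1, axis=0) - u

        def div(px, py):
            return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

        def chambolle_tv_denoise(f, lam=10.0, tau=0.125, n_iter=100):
            """Iterate the dual field p; the denoised image is f - lam*div(p).
            tau <= 1/8 is the step-size condition from Chambolle (2004)."""
            px, py = np.zeros_like(f), np.zeros_like(f)
            for _ in range(n_iter):
                gx, gy = grad(div(px, py) - f / lam)
                norm = np.sqrt(gx ** 2 + gy ** 2)
                px = (px + tau * gx) / (1.0 + tau * norm)
                py = (py + tau * gy) / (1.0 + tau * norm)
            return f - lam * div(px, py)

        noisy = 100.0 + np.random.default_rng(0).normal(0, 20, (64, 64))
        print(chambolle_tv_denoise(noisy).std())  # smoother than the input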

  16. An Overview of the Total Lightning Jump Algorithm: Past, Present and Future Work

    Science.gov (United States)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.; Deierling, Wiebke; Kessinger, Cathy

    2011-01-01

    Rapid increases in total lightning prior to the onset of severe and hazardous weather have been observed for several decades. These rapid increases are known as lightning jumps and can precede the occurrence of severe weather by tens of minutes. Over the past decade, a significant effort has been made to quantify lightning jump behavior in relation to its utility as a predictor of severe and hazardous weather. Based on a study of 34 thunderstorms that occurred in the Tennessee Valley, early work conducted in our group at Huntsville determined that it was indeed possible to create a reasonable operational lightning jump algorithm (LJA) based on a statistical framework relying on the variance behavior of the lightning trending signal. We then expanded this framework and tested several variance-related LJA configurations on a much larger sample of 87 severe and non-severe thunderstorms. This study determined that a configuration named the "2σ" algorithm had the most promise for development of the operational LJA, with a probability of detection (POD) of 87%, a false alarm rate (FAR) of 33%, and a Heidke Skill Score (HSS) of 0.75. The 2σ algorithm was then tested on an even larger sample of 711 thunderstorms of all types from four regions of the country where total lightning measurement capability existed. The result was very encouraging. Despite the larger number of storms and the inclusion of different regions of the country, the POD remained high (79%), the FAR was low (36%) and the HSS was solid (0.71). Average lead time from jump to severe weather occurrence was 20.65 minutes, with a standard deviation of +/- 15 minutes. Also, trends in total lightning were compared to cloud-to-ground (CG) lightning trends, and it was determined that total lightning trends had a higher POD (79% vs 66%), lower FAR (36% vs 54%) and a better HSS (0.71 vs 0.55). From the 711-storm case study it was determined that a majority of missed events were due to severe weather producing
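
    In outline, the 2σ test flags a jump when the latest rate of change of the total flash rate exceeds twice the standard deviation of the recent rate-change history; a sketch, with the bin length and history window as hypothetical parameter choices:

        import numpy as np

        def lightning_jump_2sigma(flash_rates, n_history=5, sigma_factor=2.0):
            """flash_rates: total flash rate per time bin for one storm cell.
            Returns a boolean array marking bins where a jump is declared."""
            dfrdt = np.diff(flash_rates)            # tendency of the flash rate
            jumps = np.zeros(len(dfrdt), dtype=bool)
            for t in range(n_history, len(dfrdt)):
                sigma = dfrdt[t - n_history:t].std()
                jumps[t] = sigma > 0 and dfrdt[t] > sigma_factor * sigma
            return jumps

        rates = np.array([2, 3, 3, 4, 3, 4, 5, 14, 16], dtype=float)
        print(lightning_jump_2sigma(rates))         # flags the sharp increase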

  17. Optimizing multiple sequence alignments using a genetic algorithm based on three objectives: structural information, non-gaps percentage and totally conserved columns.

    Science.gov (United States)

    Ortuño, Francisco M; Valenzuela, Olga; Rojas, Fernando; Pomares, Hector; Florido, Javier P; Urquiza, Jose M; Rojas, Ignacio

    2013-09-01

    Multiple sequence alignments (MSAs) are widely used approaches in bioinformatics to carry out other tasks such as structure predictions, biological function analyses or phylogenetic modeling. However, current tools usually provide partially optimal alignments, as each one is focused on specific biological features. Thus, the same set of sequences can produce different alignments, above all when sequences are less similar. Consequently, researchers and biologists do not agree about which is the most suitable way to evaluate MSAs. Recent evaluations tend to use more complex scores including further biological features. Among them, 3D structures are increasingly being used to evaluate alignments. Because structures are more conserved in proteins than sequences, scores with structural information are better suited to evaluate more distant relationships between sequences. The proposed multiobjective algorithm, based on the non-dominated sorting genetic algorithm, aims to jointly optimize three objectives: STRIKE score, non-gaps percentage and totally conserved columns. It was significantly assessed on the BAliBASE benchmark according to the Kruskal-Wallis test (P < …). The algorithm also outperforms other aligners, such as ClustalW, Multiple Sequence Alignment Genetic Algorithm (MSA-GA), PRRP, DIALIGN, Hidden Markov Model Training (HMMT), Pattern-Induced Multi-sequence Alignment (PIMA), MULTIALIGN, Sequence Alignment Genetic Algorithm (SAGA), PILEUP, Rubber Band Technique Genetic Algorithm (RBT-GA) and Vertical Decomposition Genetic Algorithm (VDGA), according to the Wilcoxon signed-rank test (P < 0.05), with the advantage of being able to use fewer structures. Structural information is included within the objective function to evaluate the obtained alignments more accurately. The source code is available at http://www.ugr.es/~fortuno/MOSAStrE/MO-SAStrE.zip.
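
    Two of the three objectives are simple enough to state in code (the STRIKE score requires 3D structural data and is omitted here); a sketch over an alignment given as equal-length gapped strings:

        def non_gaps_percentage(alignment):
            """Percentage of alignment cells that are residues rather than gaps."""
            total = sum(len(row) for row in alignment)
            gaps = sum(row.count("-") for row in alignment)
            return 100.0 * (total - gaps) / total

        def totally_conserved_columns(alignment):
            """Number of columns where every sequence has the same residue."""
            return sum(1 for col in zip(*alignment)
                       if "-" not in col and len(set(col)) == 1)

        msa = ["MKT-LV", "MKSALV", "MKT-LV"]
        print(non_gaps_percentage(msa), totally_conserved_columns(msa))  # 88.9 4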

  18. Actigraphy-based sleep estimation in adolescents and adults: a comparison with polysomnography using two scoring algorithms

    Directory of Open Access Journals (Sweden)

    Quante M

    2018-01-01

    sensitive (0.88–0.96) to detect sleep, but less specific (0.35–0.64) to detect wake than the Sadeh algorithm (sensitivity: 0.82–0.91, specificity: 0.47–0.68). Total sleep time measured using the GT3X+ with both algorithms was similar to that obtained by PSG (ICC=0.64–0.88). In contrast, agreement between the GT3X+ and PSG wake after sleep onset was poor (ICC=0.00–0.10). In adults, the GT3X+ using the Cole–Kripke algorithm provided data comparable to the AWS (mean bias=3.7±19.7 minutes for total sleep time and 8.0±14.2 minutes for wake after sleep onset). Conclusion: The two actigraphs provided comparable and accurate data compared to PSG, although both poorly identified wake episodes (i.e., had low specificity). Use of the actigraphy scoring algorithm influenced the mean bias and level of agreement in sleep–wake time estimates. The GT3X+, when analyzed by the Cole–Kripke, but not the Sadeh algorithm, provided comparable data to the AWS. Keywords: validation, actigraphy, polysomnography, scoring algorithm

  19. A constrained optimization algorithm for total energy minimization in electronic structure calculations

    International Nuclear Information System (INIS)

    Yang Chao; Meza, Juan C.; Wang Linwang

    2006-01-01

    A new direct constrained optimization algorithm for minimizing the Kohn-Sham (KS) total energy functional is presented in this paper. The key ingredients of this algorithm involve projecting the total energy functional into a sequence of subspaces of small dimensions and seeking the minimizer of total energy functional within each subspace. The minimizer of a subspace energy functional not only provides a search direction along which the KS total energy functional decreases but also gives an optimal 'step-length' to move along this search direction. Numerical examples are provided to demonstrate that this new direct constrained optimization algorithm can be more efficient than the self-consistent field (SCF) iteration

  20. Acute Radiation Syndrome Severity Score System in Mouse Total-Body Irradiation Model.

    Science.gov (United States)

    Ossetrova, Natalia I; Ney, Patrick H; Condliffe, Donald P; Krasnopolsky, Katya; Hieber, Kevin P

    2016-08-01

    Radiation accidents or terrorist attacks can result in serious consequences for the civilian population and for military personnel responding to such emergencies. The early medical management situation requires quantitative indications for early initiation of cytokine therapy in individuals exposed to life-threatening radiation doses and effective triage tools for first responders in mass-casualty radiological incidents. Previously established animal (Mus musculus, Macaca mulatta) total-body irradiation (γ-exposure) models have evaluated a panel of radiation-responsive proteins that, together with peripheral blood cell counts, create a multiparametric dose-predictive algorithm with a threshold for detection of ~1 Gy from 1 to 7 d after exposure, as well as acute radiation syndrome severity score systems similar to the Medical Treatment Protocols for Radiation Accident Victims developed by Fliedner and colleagues. The authors present a further demonstration of the acute radiation sickness severity score system in a mouse (CD2F1, males) TBI model (1-14 Gy, Co γ-rays at 0.6 Gy/min) based on multiple biodosimetric endpoints. This includes the acute radiation sickness severity Observational Grading System, survival rate, weight changes, temperature, peripheral blood cell counts and a radiation-responsive protein expression profile: Flt-3 ligand, interleukin 6, granulocyte colony-stimulating factor, thrombopoietin, erythropoietin, and serum amyloid A. Results show that use of the multiple-parameter severity score system facilitates identification of animals requiring enhanced monitoring after irradiation and that proteomics are a complementary approach to conventional biodosimetry for early assessment of radiation exposure, enhancing accuracy and discrimination index for acute radiation sickness response categories and early prediction of outcome.

  1. Empirical validation of the S-Score algorithm in the analysis of gene expression data

    Directory of Open Access Journals (Sweden)

    Archer Kellie J

    2006-03-01

    Full Text Available Abstract Background: Current methods of analyzing Affymetrix GeneChip® microarray data require the estimation of probe set expression summaries, followed by application of statistical tests to determine which genes are differentially expressed. The S-Score algorithm described by Zhang and colleagues is an alternative method that allows tests of hypotheses directly from probe level data. It is based on an error model in which the detected signal is proportional to the probe pair signal for highly expressed genes, but approaches a background level (rather than 0) for genes with low levels of expression. This model is used to calculate relative change in probe pair intensities that converts probe signals into multiple measurements with equalized errors, which are summed over a probe set to form the S-Score. Assuming no expression differences between chips, the S-Score follows a standard normal distribution, allowing direct tests of hypotheses to be made. Using spike-in and dilution datasets, we validated the S-Score method against comparisons of gene expression utilizing the more recently developed methods RMA, dChip, and MAS5. Results: The S-Score showed excellent sensitivity and specificity in detecting low-level gene expression changes. Rank ordering of S-Score values more accurately reflected known fold-change values compared to other algorithms. Conclusion: The S-Score method, utilizing probe level data directly, offers significant advantages over comparisons using only probe set expression summaries.

  2. An application of locally linear model tree algorithm with combination of feature selection in credit scoring

    Science.gov (United States)

    Siami, Mohammad; Gholamian, Mohammad Reza; Basiri, Javad

    2014-10-01

    Nowadays, credit scoring is one of the most important topics in the banking sector. Credit scoring models have been widely used to facilitate the process of credit assessment. In this paper, an application of the locally linear model tree algorithm (LOLIMOT) was tested to evaluate the superiority of its performance in predicting customers' credit status. The algorithm is improved with the aim of adapting it to the credit scoring domain by means of data fusion and feature selection techniques. Two real-world credit data sets - Australian and German - from the UCI machine learning database were selected to demonstrate the performance of our new classifier. The analytical results indicate that the improved LOLIMOT significantly increases the prediction accuracy.

  3. Research on compressive sensing reconstruction algorithm based on total variation model

    Science.gov (United States)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing, which breaks through the Nyquist sampling theorem, provides a strong theoretical foundation that makes it possible to carry out sampling and compression of image signals simultaneously. In imaging procedures using compressed sensing theory, not only can the storage space be reduced, but the demand for detector resolution is also greatly reduced. Using the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressive sensing and to a large extent determines the accuracy of the reconstructed image. The reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images, and better edge information can be obtained. In order to verify the performance and stability of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analyzed under different coding modes. This paper also compares and analyzes typical reconstruction algorithms under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with traditional classical TV-based algorithms, the proposed reconstruction algorithm has great advantages: even at low measurement rates it can quickly and accurately recover the target image.

  4. Automatic Algorithm for the Determination of the Anderson-Wilkins Acuteness Score in Patients with ST Elevation Myocardial Infarction

    DEFF Research Database (Denmark)

    Fakhri, Yama; Sejersten, Maria; Schoos, Mikkel Malby

    2016-01-01

    using 50 ECGs. Each ECG lead (except aVR) was manually scored according to AW-score by two independent experts (Exp1 and Exp2) and automatically by our designed algorithm (auto-score). An adjudicated manual score (Adj-score) was determined between Exp1 and Exp2. The inter-rater reliabilities (IRRs...

  5. Scoring the full extent of periodontal disease in the dog: development of a total mouth periodontal score (TMPS) system.

    Science.gov (United States)

    Harvey, Colin E; Laster, Larry; Shofer, Frances; Miller, Bonnie

    2008-09-01

    The development of a total mouth periodontal scoring system is described. This system uses methods to score the full extent of gingivitis and periodontitis of all tooth surfaces, weighted by size of teeth, and adjusted by size of dog.

  6. Fractional-Order Total Variation Image Restoration Based on Primal-Dual Algorithm

    OpenAIRE

    Chen, Dali; Chen, YangQuan; Xue, Dingyu

    2013-01-01

    This paper proposes a fractional-order total variation image denoising algorithm based on the primal-dual method, which provides a much more elegant and effective way of treating problems of the algorithm implementation, ill-posed inverse, convergence rate, and blocky effect. The fractional-order total variation model is introduced by generalizing the first-order model, and the corresponding saddle-point and dual formulation are constructed in theory. In order to guarantee $O(1/N^2)$ convergence...

  7. Solar Backscatter UV (SBUV) total ozone and profile algorithm

    Directory of Open Access Journals (Sweden)

    P. K. Bhartia

    2013-10-01

    Full Text Available We describe the algorithm that has been applied to develop a 42 yr record of total ozone and ozone profiles from eight Solar Backscatter UV (SBUV) instruments launched on NASA and NOAA satellites since April 1970. The Version 8 (V8) algorithm was released more than a decade ago and has been in use since then at NOAA to produce their operational ozone products. The current algorithm (V8.6) is basically the same as V8, except for updates to instrument calibration, incorporation of new ozone absorption cross-sections, and new ozone and cloud height climatologies. Since the V8 algorithm has been optimized for deriving monthly zonal mean (MZM) anomalies for ozone assessment and model comparisons, our emphasis in this paper is primarily on characterizing the sources of errors that are relevant for such studies. When data are analyzed this way the effect of some errors, such as vertical smoothing of short-term variability, and noise due to clouds and aerosols diminish in importance, while the importance of others, such as errors due to vertical smoothing of the quasi-biennial oscillation (QBO) and other periodic and aperiodic variations, become more important. With V8.6 zonal mean data we now provide smoothing kernels that can be used to compare anomalies in SBUV profile and partial ozone columns with models. In this paper we show how to use these kernels to compare SBUV data with Microwave Limb Sounder (MLS) ozone profiles. These kernels are particularly useful for comparisons in the lower stratosphere where SBUV profiles have poor vertical resolution but partial column ozone values have high accuracy. We also provide our best estimate of the smoothing errors associated with SBUV MZM profiles. Since smoothing errors are the largest source of uncertainty in these profiles, they can be treated as error bars in deriving interannual variability and trends using SBUV data and for comparing with other measurements. In the V8 and V8.6 algorithms we derive total
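
    The standard way to apply such kernels (a sketch in Rodgers' notation, with $A$ the averaging kernel matrix and $x_a$ the SBUV a priori) is to smooth the higher-resolution profile before differencing,

        x_{\mathrm{MLS}}^{\mathrm{smoothed}} = x_a + A\,\left(x_{\mathrm{MLS}} - x_a\right),

    so that the smoothed MLS profile carries the same vertical resolution limits as the SBUV retrieval it is compared against.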

  8. Cross-cultural adaptation and validation of Persian Achilles tendon Total Rupture Score.

    Science.gov (United States)

    Ansari, Noureddin Nakhostin; Naghdi, Soofia; Hasanvand, Sahar; Fakhari, Zahra; Kordi, Ramin; Nilsson-Helander, Katarina

    2016-04-01

    To cross-culturally adapt the Achilles tendon Total Rupture Score (ATRS) to the Persian language and to preliminarily evaluate the reliability and validity of a Persian ATRS. A cross-sectional and prospective cohort study was conducted to translate and cross-culturally adapt the ATRS to Persian (ATRS-Persian) following steps described in guidelines. Thirty patients with total Achilles tendon rupture and 30 healthy subjects participated in this study. Psychometric properties of floor/ceiling effects (responsiveness), internal consistency reliability, test-retest reliability, standard error of measurement (SEM), smallest detectable change (SDC), construct validity, and discriminant validity were tested. Factor analysis was performed to determine the ATRS-Persian structure. There were no floor or ceiling effects, supporting the content validity and responsiveness of the ATRS-Persian. Internal consistency was high (Cronbach's α 0.95). Item-total correlations exceeded the acceptable standard of 0.3 for all items (0.58-0.95). The test-retest reliability was excellent [ICC(agreement) 0.98]. SEM and SDC were 3.57 and 9.9, respectively. Construct validity was supported by a significant correlation between the ATRS-Persian total score and the Persian Foot and Ankle Outcome Score (PFAOS) total score and PFAOS subscales (r = 0.55-0.83). The ATRS-Persian significantly discriminated between patients and healthy subjects. Exploratory factor analysis revealed 1 component. The ATRS was cross-culturally adapted to Persian and demonstrated to be a reliable and valid instrument to measure functional outcomes in Persian patients with Achilles tendon rupture. Level of evidence: II.
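
    The reported reliability figures are mutually consistent under the usual definitions (a worked check, assuming the standard formulas were used):

        \mathrm{SDC} = 1.96 \times \sqrt{2} \times \mathrm{SEM} = 1.96 \times 1.414 \times 3.57 \approx 9.9,

    where the SEM itself is typically computed as $s_x \sqrt{1 - \mathrm{ICC}}$ from the sample standard deviation $s_x$.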

  9. Developing Scoring Algorithms

    Science.gov (United States)

    We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.

  10. Adaptive Proximal Point Algorithms for Total Variation Image Restoration

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2015-02-01

    Full Text Available Image restoration is a fundamental problem in various areas of imaging sciences. This paper presents a class of adaptive proximal point algorithms (APPA) with a contraction strategy for total variation image restoration. In each iteration, the proposed methods choose an adaptive proximal parameter matrix which is not necessarily symmetric. In fact, there is an inner extrapolation in the prediction step, which is followed by a correction step for contraction, and the inner extrapolation is implemented by an adaptive scheme. By using the framework of the contraction method, a global convergence result and a convergence rate of O(1/N) can be established for the proposed methods. Numerical results are reported to illustrate the efficiency of the APPA methods for solving total variation image restoration problems. Comparisons with state-of-the-art algorithms demonstrate that the proposed methods are comparable and promising.

  11. Sorting variables for each case: a new algorithm to calculate injury severity score (ISS) using SPSS-PC.

    Science.gov (United States)

    Linn, S

    One of the most often used measures of multiple injuries is the injury severity score (ISS). Determination of the ISS is based on the abbreviated injury scale (AIS). This paper suggests a new algorithm to sort the AISs for each case and calculate the ISS. The program uses unsorted abbreviated injury scale (AIS) levels for each case and rearranges them in descending order. The first three sorted AISs, representing the three most severe injuries of a person, are then used to calculate the injury severity score (ISS). This algorithm should be useful for analyses of clusters of injuries, especially when many patients have multiple injuries.
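
    Following this description (sort the AIS values, square the top three), a sketch of the computation; note that the standard ISS definition additionally requires the three values to come from different body regions and sets the ISS to 75 when any AIS is 6:

        def injury_severity_score(ais_by_region):
            """ISS from the worst AIS per body region: sum of squares of the
            three highest region maxima; any unsurvivable injury (AIS 6) -> 75."""
            worst = sorted((max(v) for v in ais_by_region.values()), reverse=True)
            if worst and worst[0] == 6:
                return 75
            return sum(a * a for a in worst[:3])

        # head AIS 4, chest AIS 3, limbs AIS 2 -> 16 + 9 + 4 = 29
        print(injury_severity_score({"head": [4], "chest": [3, 1], "limbs": [2]}))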

  12. Total Mini-Mental State Examination score and regional cerebral blood flow using Z score imaging and automated ROI analysis software in subjects with memory impairment

    International Nuclear Information System (INIS)

    Ikeda, Eiji; Shiozaki, Kazumasa; Takahashi, Nobukazu; Togo, Takashi; Odawara, Toshinari; Oka, Takashi; Inoue, Tomio; Hirayasu, Yoshio

    2008-01-01

    The Mini-Mental State Examination (MMSE) is considered a useful supplementary method to diagnose dementia and evaluate the severity of cognitive disturbance. However, the region of the cerebrum that correlates with the MMSE score is not clear. Recently, a new method was developed to analyze regional cerebral blood flow (rCBF) using a Z score imaging system (eZIS). This system shows changes of rCBF when compared with a normal database. In addition, a three-dimensional stereotaxic region of interest (ROI) template (3DSRT), fully automated ROI analysis software was developed. The objective of this study was to investigate the correlation between rCBF changes and total MMSE score using these new methods. The association between total MMSE score and rCBF changes was investigated in 24 patients (mean age±standard deviation (SD) 71.5±9.2 years; 6 men and 18 women) with memory impairment using eZIS and 3DSRT. Step-wise multiple regression analysis was used for multivariate analysis, with the total MMSE score as the dependent variable and rCBF change in 24 areas as the independent variable. Total MMSE score was significantly correlated only with the reduction of left hippocampal perfusion but not with right (P<0.01). Total MMSE score is an important indicator of left hippocampal function. (author)

  13. Sensitivity and Specificity of the Coma Recovery Scale--Revised Total Score in Detection of Conscious Awareness.

    Science.gov (United States)

    Bodien, Yelena G; Carlowicz, Cecilia A; Chatelle, Camille; Giacino, Joseph T

    2016-03-01

    To describe the sensitivity and specificity of Coma Recovery Scale-Revised (CRS-R) total scores in detecting conscious awareness. Data were retrospectively extracted from the medical records of patients enrolled in a specialized disorders of consciousness (DOC) program. Sensitivity and specificity analyses were completed using CRS-R-derived diagnoses of minimally conscious state (MCS) or emerged from minimally conscious state (EMCS) as the reference standard for conscious awareness and the total CRS-R score as the test criterion. A receiver operating characteristic curve was constructed to demonstrate the optimal CRS-R total cutoff score for maximizing sensitivity and specificity. Specialized DOC program. Patients enrolled in the DOC program (N=252, 157 men; mean age, 49y; mean time from injury, 48d; traumatic etiology, n=127; nontraumatic etiology, n=125; diagnosis of coma or vegetative state, n=70; diagnosis of MCS or EMCS, n=182). Not applicable. Sensitivity and specificity of CRS-R total scores in detecting conscious awareness. A CRS-R total score of 10 or higher yielded a sensitivity of .78 for correct identification of patients in MCS or EMCS, and a specificity of 1.00 for correct identification of patients who did not meet criteria for either of these diagnoses (ie, were diagnosed with vegetative state or coma). The area under the curve in the receiver operating characteristic curve analysis is .98. A total CRS-R score of 10 or higher provides strong evidence of conscious awareness but resulted in a false-negative diagnostic error in 22% of patients who demonstrated conscious awareness based on CRS-R diagnostic criteria. A cutoff score of 8 provides the best balance between sensitivity and specificity, accurately classifying 93% of cases. The optimal total score cutoff will vary depending on the user's objective. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
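
    The reported operating characteristics follow from simple counting against the CRS-R-based reference diagnoses; a sketch of evaluating one cutoff (toy arrays stand in for the patient data):

        import numpy as np

        def sens_spec(total_scores, conscious, cutoff=10):
            """Sensitivity/specificity of 'CRS-R total >= cutoff' against a
            reference standard of conscious awareness (MCS or EMCS)."""
            scores = np.asarray(total_scores)
            conscious = np.asarray(conscious, dtype=bool)
            positive = scores >= cutoff
            sensitivity = positive[conscious].mean()
            specificity = (~positive[~conscious]).mean()
            return sensitivity, specificity

        scores = [4, 7, 9, 10, 12, 15, 6, 23]
        conscious = [False, False, True, True, True, True, False, True]
        print(sens_spec(scores, conscious, cutoff=10))  # (0.8, 1.0)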

  14. Interobserver Reliability of the Total Body Score System for Quantifying Human Decomposition.

    Science.gov (United States)

    Dabbs, Gretchen R; Connor, Melissa; Bytheway, Joan A

    2016-03-01

    Several authors have tested the accuracy of the Total Body Score (TBS) method for quantifying decomposition, but none have examined the reliability of the method as a scoring system by testing interobserver error rates. Sixteen participants used the TBS system to score 59 observation packets including photographs and written descriptions of 13 human cadavers in different stages of decomposition (postmortem interval: 2-186 days). Data analysis used a two-way random model intraclass correlation in SPSS (v. 17.0). The TBS method showed "almost perfect" agreement between observers, with average absolute correlation coefficients of 0.990 and average consistency correlation coefficients of 0.991. While the TBS method may have sources of error, scoring reliability is not one of them. Individual component scores were examined, and the influences of education and experience levels were investigated. Overall, the trunk component scores were the least concordant. Suggestions are made to improve the reliability of the TBS method. © 2016 American Academy of Forensic Sciences.

  15. Coping strategies related to total stress score among post graduate medical students and residents

    Directory of Open Access Journals (Sweden)

    R. Irawati Ismail

    2013-05-01

    several dominant coping strategies related to total stress score levels. Methods: A cross-sectional purposive sampling study among postgraduate medical students of the Faculty of Medicine, Universitas Indonesia was done in April-July 2011. We used a coping strategies questionnaire and the WHO SRQ-20. Linear regression was used to identify dominant coping strategies related to stress levels. Results: This study had 272 subjects, aged 23-47 years. Four items decreased the total stress score (accepting the reality of the fact, talking to someone who could do something, seeking God's help, and laughing about the situation). However, three factors increased the total stress score (taking one step at a time, talking to someone to find out more about the situation, and admitting being unable to deal with the situation). One point of accepting the reality of the situation reduced the total stress score by 0.493 points [regression coefficient (β) = -0.493; P=0.002], while one point of seeking God's help reduced the total stress score by 0.307 points (β = -0.307; P=0.056). However, one point of taking one step at a time increased the total stress score by 0.540 points (β = 0.540; P=0.005). Conclusions: Accepting the reality of the situation, talking to someone who could do something, seeking God's help, and laughing about the situation decreased the stress level, whereas taking one step at a time, talking to someone to find out more about the situation, and admitting being unable to deal with the situation increased the total stress score. Keywords: stress level, coping strategies, age, seeking God's help

  16. SF-36 total score as a single measure of health-related quality of life: Scoping review.

    Science.gov (United States)

    Lins, Liliane; Carvalho, Fernando Martins

    2016-01-01

    According to the 36-Item Short Form Health Survey questionnaire developers, a global measure of health-related quality of life such as the "SF-36 Total/Global/Overall Score" cannot be generated from the questionnaire. However, studies keep on reporting such a measure. This study aimed to evaluate the frequency and to describe some characteristics of articles reporting the SF-36 Total/Global/Overall Score in the scientific literature. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses method was adapted to a scoping review. We performed searches in PubMed, Web of Science, SCOPUS, BVS, and Cochrane Library databases for articles using such scores. We found 172 articles published between 1997 and 2015; 110 (64.0%) of them were published from 2010 onwards; 30.0% appeared in journals with an Impact Factor of 3.00 or greater. Overall, 129 (75.0%) of the 172 studies did not specify the method for calculating the "SF-36 Total Score"; 13 studies did not specify their methods but referred to the SF-36 developers' studies or others; and 30 articles used different strategies for calculating such a score, the most frequent being arithmetic averaging of the eight SF-36 domain scores. We concluded that the "SF-36 Total/Global/Overall Score" has been increasingly reported in the scientific literature. Researchers should be aware of this procedure and of its possible impacts upon human health.

  17. A polynomial time algorithm for checking regularity of totally normed process algebra

    NARCIS (Netherlands)

    Yang, F.; Huang, H.

    2015-01-01

    A polynomial algorithm for the regularity problem of weak and branching bisimilarity on totally normed process algebra (PA) processes is given. Its time complexity is O(n^3 + mn), where n is the number of transition rules and m is the maximal length of the rules. The algorithm works for

  18. GPU Based N-Gram String Matching Algorithm with Score Table Approach for String Searching in Many Documents

    Science.gov (United States)

    Srinivasa, K. G.; Shree Devi, B. N.

    2017-10-01

    String searching in documents has become a tedious task with the evolution of Big Data. The generation of large data sets demands a high-performance search algorithm in areas such as text mining, information retrieval and many others. The popularity of GPUs for general-purpose computing has been increasing for various applications. Therefore it is of great interest to exploit the thread feature of a GPU to provide a high-performance search algorithm. This paper proposes an optimized new approach to the N-gram model for string search in a number of lengthy documents and its GPU implementation. The algorithm exploits GPGPUs for searching strings in many documents, employing character-level N-gram matching with a parallel Score Table approach and search using the CUDA API. The new approach of the Score Table, used for frequency storage of N-grams in a document, makes the search independent of the document's length and allows faster access to the frequency values, thus decreasing the search complexity. The extensive thread feature in a GPU has been exploited to enable parallel pre-processing of trigrams in a document for Score Table creation and parallel search in a huge number of documents, thus speeding up the whole search process even for a large pattern size. Experiments were carried out for many documents of varied length and search strings from the standard Lorem Ipsum text on NVIDIA's GeForce GT 540M GPU with 96 cores. Results prove that the parallel approach for Score Table creation and searching gives a good speedup over the same approach executed serially.
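
    The score-table idea is easy to prototype on the CPU; a sketch in Python (the paper's contribution is doing the table construction and the per-document lookups in parallel CUDA threads):

        from collections import Counter

        def score_table(document, n=3):
            """Frequency table of all n-grams (trigrams by default) in a document."""
            return Counter(document[i:i + n] for i in range(len(document) - n + 1))

        def match_score(pattern, table, n=3):
            """Score a pattern against a precomputed table; the lookup cost
            depends on the pattern length, not on the document length."""
            return sum(table[pattern[i:i + n]]
                       for i in range(len(pattern) - n + 1))

        docs = ["lorem ipsum dolor sit amet", "ipsum lorem"]
        tables = [score_table(d) for d in docs]
        print([match_score("ipsum", t) for t in tables])  # [3, 3]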

  19. Total hip arthroplasty outcomes assessment using functional and radiographic scores to compare canine systems.

    Science.gov (United States)

    Iwata, D; Broun, H C; Black, A P; Preston, C A; Anderson, G I

    2008-01-01

    A retrospective multi-centre study was carried out in order to compare outcomes between cemented and uncemented total hip arthroplasties (THA). A quantitative orthopaedic outcome assessment scoring system was devised in order to relate functional outcome to a numerical score, to allow comparison between treatments and amongst centres. The system combined a radiographic score and a clinical score. Lower scores reflect better outcomes than higher scores. Consecutive cases of THA were included from two specialist practices between July 2002 and December 2005. The study included 46 THA patients (22 uncemented THA followed for 8.3 +/- 4.7 months and 24 cemented THA for 26.0 +/- 15.7 months) with a mean age of 4.4 +/- 3.3 years at surgery. Multi-variable linear and logistic regression analyses were performed with adjustments for age at surgery, surgeon, follow-up time, uni- versus bilateral disease, gender and body weight. The differences between treatment groups in terms of functional scores or total scores were not significant (p > 0.05). Radiographic scores were different between treatment groups. However, these scores were usually assessed within two months of surgery and proved unreliable predictors of functional outcome (p > 0.05). The findings reflect relatively short-term follow-up, especially for the uncemented group, and do not include clinician-derived measures, such as goniometry and thigh circumference. Longer-term follow-up for the radiographic assessments is essential. A prospective study including the clinician-derived outcomes needs to be performed in order to validate the outcome instrument in its modified form.

  20. Observations on muscle activity in REM sleep behavior disorder assessed with a semi-automated scoring algorithm

    DEFF Research Database (Denmark)

    Jeppesen, Jesper; Otto, Marit; Frederiksen, Yoon

    2018-01-01

    OBJECTIVES: Rapid eye movement (REM) sleep behavior disorder (RBD) is defined by dream enactment due to a failure of normal muscle atonia. Visual assessment of this muscle activity is time consuming and rater-dependent. METHODS: An EMG computer algorithm for scoring 'tonic', 'phasic' and 'any' submental muscle activity during REM sleep was evaluated compared with human visual ratings. Subsequently, 52 subjects were analyzed with the algorithm. Duration and maximal amplitude of muscle activity, and self-awareness of RBD symptoms were assessed. RESULTS: The computer algorithm showed high congruency … sleep without atonia. CONCLUSIONS: Our proposed algorithm was able to detect and rate REM sleep without atonia, allowing identification of RBD. Increased duration and amplitude of muscle activity bouts were characteristics of RBD. Quantification of REM sleep without atonia represents a marker of RBD…

  1. Lord-Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models with Applications in Test Scoring, Scale Alignment, and Model Fit Testing.

    Science.gov (United States)

    Cai, Li

    2015-06-01

    Lord and Wingersky's (Appl Psychol Meas 8:453-461, 1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined on a grid formed by direct products of quadrature points. However, the increase in computational burden remains exponential in the number of dimensions, making the implementation of the recursive algorithm cumbersome for truly high-dimensional models. In this paper, a dimension reduction method that is specific to the Lord-Wingersky recursions is developed. This method can take advantage of the restrictions implied by hierarchical item factor models, e.g., the bifactor model, the testlet model, or the two-tier model, such that a version of the Lord-Wingersky recursive algorithm can operate on a dramatically reduced set of quadrature points. For instance, in a bifactor model, the dimension of integration is always equal to 2, regardless of the number of factors. The new algorithm not only provides an effective mechanism to produce summed score to IRT scaled score translation tables properly adjusted for residual dependence, but leads to new applications in test scoring, linking, and model fit checking as well. Simulated and empirical examples are used to illustrate the new applications.

  2. WebScore: An Effective Page Scoring Approach for Uncertain Web Social Networks

    Directory of Open Access Journals (Sweden)

    Shaojie Qiao

    2011-10-01

    Full Text Available To effectively score pages with uncertainty in web social networks, we first proposed a new concept called the transition probability matrix and formally defined the uncertainty in web social networks. Second, we proposed a hybrid page scoring algorithm, called WebScore, based on the PageRank algorithm and three centrality measures including degree, betweenness, and closeness. In particular, WebScore takes full consideration of the uncertainty of web social networks by computing the transition probability from one page to another. The basic idea of WebScore is to: (1) integrate uncertainty into PageRank in order to accurately rank pages, and (2) apply the centrality measures to calculate the importance of pages in web social networks. In order to verify the performance of WebScore, we developed a web social network analysis system which can partition web pages into distinct groups and score them in an effective fashion. Finally, we conducted extensive experiments on real data and the results show that WebScore is effective at scoring uncertain pages with less time deficiency than PageRank and centrality measures based page scoring algorithms.
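
    The abstract does not give the scoring formulas, but the PageRank half of the idea is standard; a sketch of power iteration over a row-stochastic transition probability matrix (values hypothetical), into which uncertainty-aware transition probabilities could be plugged:

        import numpy as np

        def pagerank(P, d=0.85, tol=1e-9):
            """Damped PageRank by power iteration; P[i, j] is the transition
            probability from page i to page j, each row summing to 1."""
            n = P.shape[0]
            r = np.full(n, 1.0 / n)
            while True:
                r_next = (1 - d) / n + d * (r @ P)
                if np.abs(r_next - r).sum() < tol:
                    return r_next
                r = r_next

        P = np.array([[0.0, 0.5, 0.5],      # hypothetical 3-page network
                      [0.3, 0.0, 0.7],
                      [1.0, 0.0, 0.0]])
        print(pagerank(P))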

  3. A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video

    Directory of Open Access Journals (Sweden)

    Zhang Liangpei

    2007-01-01

    Full Text Available Super-resolution (SR) reconstruction technique is capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To effectively deal with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, the total variation (TV) regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem in SR. The proposed algorithm has been tested in several cases of motion and degradation. It is also compared with the Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results are presented to illustrate the effectiveness of the proposed algorithm.

  4. Algorithm for the automatic computation of the modified Anderson-Wilkins acuteness score of ischemia from the pre-hospital ECG in ST-segment elevation myocardial infarction

    DEFF Research Database (Denmark)

    Fakhri, Yama; Sejersten-Ripa, Maria; Schoos, Mikkel Malby

    2017-01-01

    BACKGROUND: The acuteness score (based on the modified Anderson-Wilkins score) estimates the acuteness of ischemia from ST-segment, Q-wave and T-wave measurements obtained from the electrocardiogram (ECG) in patients with ST Elevation Myocardial Infarction (STEMI). The score ranges from 1 (least acute) to 4 (most acute); this study developed and evaluated an automated algorithm for computing the acuteness score. METHODS: We scored 50 pre-hospital ECGs from STEMI patients, both manually and with the automated algorithm, and assessed the reliability between the manual and automated scores by the intraclass correlation coefficient (ICC) and a Bland-Altman plot. RESULTS: The ICC was 0.84 (95% CI 0.72-0.91), and the differences between manual and automated scores for the 50 ECGs all fell within the upper (1.46) and lower (-1.12) limits of agreement...

  5. Prognostic validation of a 17-segment score derived from a 20-segment score for myocardial perfusion SPECT interpretation.

    Science.gov (United States)

    Berman, Daniel S; Abidov, Aiden; Kang, Xingping; Hayes, Sean W; Friedman, John D; Sciammarella, Maria G; Cohen, Ishac; Gerlach, James; Waechter, Parker B; Germano, Guido; Hachamovitch, Rory

    2004-01-01

    Recently, a 17-segment model of the left ventricle has been recommended as an optimally weighted approach for interpreting myocardial perfusion single photon emission computed tomography (SPECT). Methods to convert databases from previous 20- to new 17-segment data and criteria for abnormality for the 17-segment scores are needed. Initially, for derivation of the conversion algorithm, 65 patients were studied (algorithm population: pilot group, n = 28; validation group, n = 37). Three conversion algorithms were derived: algorithm 1, which used mid, distal, and apical scores; algorithm 2, which used distal and apical scores alone; and algorithm 3, which used maximal scores of the distal septal, lateral, and apical segments in the 20-segment model for 3 corresponding segments of the 17-segment model. The prognosis population comprised 16,020 consecutive patients (mean age, 65 ± 12 years; 41% women) who had exercise or vasodilator stress technetium 99m sestamibi myocardial perfusion SPECT and were followed up for 2.1 ± 0.8 years. In this population, 17-segment scores were derived from 20-segment scores by use of algorithm 2, which demonstrated the best agreement with expert 17-segment reading in the algorithm population. The prognostic value of the 20- and 17-segment scores was compared by converting the respective summed scores into percent myocardium abnormal. Conversion algorithm 2 was found to be highly concordant with expert visual analysis by the 17-segment model (r = 0.982; kappa = 0.866) in the algorithm population. In the prognosis population, 456 cardiac deaths occurred during follow-up. When the conversion algorithm was applied, extent and severity of perfusion defects were nearly identical by 20- and derived 17-segment scores. The receiver operating characteristic curve areas by 20- and 17-segment perfusion scores were identical for predicting cardiac death (both 0.77 ± 0.02, P = not significant). The optimal prognostic cutoff value for either the 20- or the derived 17-segment scores...
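
    The summed-score-to-percent conversion mentioned above is conventionally the summed score divided by the worst possible score, with each segment scored 0-4; that exact normalization is an assumption here, not a detail given in the abstract.

    ```python
    def percent_myocardium_abnormal(summed_score, n_segments):
        """Percent myocardium abnormal, assuming each segment is scored 0-4."""
        return 100.0 * summed_score / (4 * n_segments)

    print(percent_myocardium_abnormal(8, 20))  # 10.0 in the 20-segment model
    print(percent_myocardium_abnormal(8, 17))  # about 11.8 in the 17-segment model
    ```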

  6. Distribution of Total Depressive Symptoms Scores and Each Depressive Symptom Item in a Sample of Japanese Employees.

    Science.gov (United States)

    Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Yamada, Hiroshi; Miyake, Hirotsugu; Furukawa, Toshiaki A

    2016-01-01

    In a previous study, we reported that the distribution of total depressive symptoms scores according to the Center for Epidemiologic Studies Depression Scale (CES-D) in a general population is stable throughout middle adulthood and follows an exponential pattern except at the lowest end of the symptom score. Furthermore, the individual distributions of 16 negative symptom items of the CES-D exhibit a common mathematical pattern. To confirm the reproducibility of these findings, we investigated the distribution of total depressive symptoms scores and 16 negative symptom items in a sample of Japanese employees. We analyzed 7624 employees aged 20-59 years who had participated in the Northern Japan Occupational Health Promotion Centers Collaboration Study for Mental Health. Depressive symptoms were assessed using the CES-D. The CES-D contains 20 items, each of which is scored in four grades: "rarely," "some," "much," and "most of the time." The descriptive statistics and frequency curves of the distributions were then compared according to age group. The distribution of total depressive symptoms scores appeared to be stable across ages 30-59 years. The right tail of the distribution for ages 30-59 years exhibited a linear pattern on a log-normal scale. The distributions of the 16 individual negative symptom items of the CES-D exhibited a common mathematical pattern which displayed different distributions with a boundary at "some." The distributions of the 16 negative symptom items from "some" to "most" followed a linear pattern on a log-normal scale. The distributions of the total depressive symptoms scores and individual negative symptom items in a Japanese occupational setting show the same patterns as those observed in a general population. These results show that the specific mathematical patterns of the distributions of total depressive symptoms scores and individual negative symptom items can be reproduced in an occupational population.
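
    The exponential claim is easy to probe numerically: exponential frequencies are linear on a log scale, so the decay rate falls out of a linear fit to log counts over the right tail. A sketch on simulated scores (not the study's data):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Simulated totals standing in for CES-D scores (range 0-60)
    scores = np.clip(rng.exponential(scale=8.0, size=7624).astype(int), 0, 60)

    counts = np.bincount(scores, minlength=61)
    tail = np.arange(5, 40)                   # right tail, away from the floor
    ok = counts[tail] > 0
    slope, intercept = np.polyfit(tail[ok], np.log(counts[tail][ok]), 1)
    print(f"estimated decay rate per score point: {-slope:.3f}")
    ```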

  7. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms

    International Nuclear Information System (INIS)

    Tang Jie; Nett, Brian E; Chen Guanghong

    2009-01-01

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms at a constant undersampling factor across several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
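
    For reference, the statistical framework in which the comparison is set minimizes a penalized objective, and with a TV penalty the CS method fits the same template; the notation below is generic rather than the paper's:

    ```latex
    \hat{x} \;=\; \arg\min_{x}\; (y - Ax)^{T} W (y - Ax) \;+\; \lambda\,\mathrm{TV}(x),
    \qquad
    \mathrm{TV}(x) \;=\; \sum_{i,j} \sqrt{(\nabla_h x)_{i,j}^{2} + (\nabla_v x)_{i,j}^{2}}
    ```

    Here $A$ is the forward projection operator, $W$ the statistical weighting encoding the noise model, and $\lambda$ the regularization strength.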

  8. Heart valve surgery: EuroSCORE vs. EuroSCORE II vs. Society of Thoracic Surgeons score

    Directory of Open Access Journals (Sweden)

    Muhammad Sharoz Rabbani

    2014-12-01

    Background: This is a validation study comparing the European System for Cardiac Operative Risk Evaluation (EuroSCORE) II with the previous additive (AES) and logistic EuroSCORE (LES) and with the Society of Thoracic Surgeons' (STS) risk prediction algorithm, for patients undergoing valve replacement with or without bypass in Pakistan. Patients and Methods: Clinical data of 576 patients undergoing valve replacement surgery between 2006 and 2013 were retrospectively collected, and individual expected risks of death were calculated with all four risk prediction algorithms. Performance of these risk algorithms was evaluated in terms of discrimination and calibration. Results: There were 28 deaths (4.8%) among 576 patients, which was lower than the predicted mortality of 5.16%, 6.96% and 4.94% by AES, LES and EuroSCORE II, but higher than the 2.13% predicted by the STS scoring system. For single and double valve replacement procedures, EuroSCORE II was the best predictor of mortality, with the highest Hosmer-Lemeshow test (H-L) p value (0.346 to 0.689) and area under the receiver operating characteristic (ROC) curve (0.637 to 0.898). For valve plus concomitant coronary artery bypass grafting (CABG) patients, actual mortality was 1.88%. The STS calculator proved the best predictor of mortality for this subgroup, with H-L p value (0.480 to 0.884) and ROC (0.657 to 0.775). Conclusions: For the Pakistani population, EuroSCORE II is an accurate predictor of individual operative risk in patients undergoing isolated valve surgery, whereas STS performs better in the valve plus CABG group.
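
    Discrimination and calibration of this kind are conventionally measured by the ROC area and the Hosmer-Lemeshow statistic; a sketch on synthetic data (not the study's), using scikit-learn and SciPy:

    ```python
    import numpy as np
    from scipy.stats import chi2
    from sklearn.metrics import roc_auc_score

    def hosmer_lemeshow(y, p, groups=10):
        """Hosmer-Lemeshow chi-square over groups of increasing predicted risk."""
        chi_sq = 0.0
        for idx in np.array_split(np.argsort(p), groups):
            obs, exp, n = y[idx].sum(), p[idx].sum(), len(idx)
            chi_sq += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
        return chi_sq, chi2.sf(chi_sq, groups - 2)

    rng = np.random.default_rng(1)
    p = rng.uniform(0.01, 0.30, 576)     # predicted operative risk per patient
    y = rng.binomial(1, p)               # simulated observed deaths
    print("AUC:", roc_auc_score(y, p))
    print("H-L chi2 and p value:", hosmer_lemeshow(y, p))
    ```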

  9. Propensity scores-potential outcomes framework to incorporate severity probabilities in the highway safety manual crash prediction algorithm.

    Science.gov (United States)

    Sasidharan, Lekshmi; Donnell, Eric T

    2014-10-01

    Accurate estimation of the expected number of crashes at different severity levels for entities with and without countermeasures plays a vital role in selecting countermeasures in the framework of the safety management process. The current practice is to use the American Association of State Highway and Transportation Officials' Highway Safety Manual crash prediction algorithms, which combine safety performance functions and crash modification factors, to estimate the effects of safety countermeasures on different highway and street facility types. Many of these crash prediction algorithms are based solely on crash frequency, or assume that severity outcomes are unchanged when planning for, or implementing, safety countermeasures. Failing to account for the uncertainty associated with crash severity outcomes, and assuming crash severity distributions remain unchanged in safety performance evaluations, limits the utility of the Highway Safety Manual crash prediction algorithms in assessing the effect of safety countermeasures on crash severity. This study demonstrates the application of a propensity scores-potential outcomes framework to estimate the probability distribution for the occurrence of different crash severity levels by accounting for the uncertainties associated with them. The probability of fatal and severe injury crash occurrence at lighted and unlighted intersections is estimated in this paper using data from Minnesota. The results show that the expected probability of occurrence of fatal and severe injury crashes at a lighted intersection was 1 in 35 crashes, and the estimated risk ratio indicates that the corresponding probability at an unlighted intersection was 1.14 times higher than at lighted intersections. The results from the potential outcomes-propensity scores framework are compared to results obtained from traditional binary logit models, without application of propensity scores matching. Traditional binary logit analysis suggests that...
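
    A minimal propensity-score workflow of the kind described, on synthetic data, with greedy one-to-one matching with replacement (the paper's exact estimator is not reproduced):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    X = rng.normal(size=(2000, 3))                            # intersection covariates
    lighted = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))     # "treatment" assignment
    severe = rng.binomial(1, 0.05 + 0.02 * (1 - lighted))     # fatal/severe crash

    # 1. Propensity model: P(lighted | covariates)
    ps = LogisticRegression().fit(X, lighted).predict_proba(X)[:, 1]

    # 2. For each lighted site, find the unlighted site nearest in propensity
    t_idx = np.where(lighted == 1)[0]
    c_idx = np.where(lighted == 0)[0]
    matched = [c_idx[np.argmin(np.abs(ps[c_idx] - ps[i]))] for i in t_idx]

    print("risk ratio (unlighted / lighted):",
          severe[matched].mean() / severe[t_idx].mean())
    ```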

  10. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population.

    Science.gov (United States)

    Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka

    2016-01-01

    Previously, we proposed a model for ordinal scale scoring in which the individual thresholds for each item constitute a distribution by item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability given by the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.
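
    The hypothesized model can be stated compactly. In the notation below (mine, not the authors'), $b_{i,k}(t)$ is the boundary curve between response categories $k$ and $k+1$ of item $i$ at total score $t$, $f(t)$ the frequency of total score $t$, and $F_{i,k}$ the cumulative distribution function of the corresponding item threshold:

    ```latex
    b_{i,k}(t) \;=\; f(t)\,F_{i,k}(t),
    \qquad
    f(t) \;\propto\; e^{-\lambda t}\ \text{in the right tail}
    ```

    Wherever $F_{i,k}(t)$ has saturated near 1, $\log b_{i,k}(t)$ is then approximately linear in $t$, which is the exponential pattern the study reports.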

  11. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population

    Directory of Open Access Journals (Sweden)

    Shinichiro Tomitaka

    2016-10-01

    Background: Previously, we proposed a model for ordinal scale scoring in which the individual thresholds for each item constitute a distribution by item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability given by the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Methods: Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. Results: The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. Discussion: The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.

  12. Towards a contemporary, comprehensive scoring system for determining technical outcomes of hybrid percutaneous chronic total occlusion treatment: The RECHARGE score.

    Science.gov (United States)

    Maeremans, Joren; Spratt, James C; Knaapen, Paul; Walsh, Simon; Agostoni, Pierfrancesco; Wilson, William; Avran, Alexandre; Faurie, Benjamin; Bressollette, Erwan; Kayaert, Peter; Bagnall, Alan J; Smith, Dave; McEntegart, Margaret B; Smith, William H T; Kelly, Paul; Irving, John; Smith, Elliot J; Strange, Julian W; Dens, Jo

    2018-02-01

    This study sought to create a contemporary scoring tool to predict technical outcomes of chronic total occlusion (CTO) percutaneous coronary intervention (PCI) from patients treated by hybrid operators with differing experience levels. Current scoring systems need regular updating to cope with the positive evolutions regarding materials, techniques, and outcomes, while at the same time being applicable to a broad range of operators. Clinical and angiographic characteristics from 880 CTO-PCIs included in the REgistry of CrossBoss and Hybrid procedures in FrAnce, the NetheRlands, BelGium and UnitEd Kingdom (RECHARGE) were analyzed by using a derivation and validation set (2:1 ratio). Variables significantly associated with technical failure in the multivariable analysis were incorporated in the score. Subsequently, the discriminatory capacity was assessed, and the validation set was used to compare with the J-CTO and PROGRESS scores. Technical success in the derivation and validation sets was 83% and 85%, respectively. Multivariate analysis identified six parameters associated with technical failure: blunt stump (beta coefficient (b) = 1.014); calcification (b = 0.908); tortuosity ≥45° (b = 0.964); lesion length ≥20 mm (b = 0.556); diseased distal landing zone (b = 0.794); and previous bypass graft on the CTO vessel (b = 0.833). Score variables remained significant after bootstrapping. The RECHARGE score showed better discriminatory capacity in both sets (area under the curve (AUC) = 0.783 and 0.711) compared to the J-CTO (AUC = 0.676) and PROGRESS (AUC = 0.608) scores. The RECHARGE score is a novel, easy-to-use tool for assessing the risk of technical failure in hybrid CTO-PCI and has the potential to perform well for a broad community of operators. © 2017 Wiley Periodicals, Inc.
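
    A scoring function of this kind is a one-liner once point values are fixed. The weighting below, one point per predictor (range 0-6), is an assumption for illustration: the abstract reports only the beta coefficients, not the final point values.

    ```python
    def recharge_style_score(blunt_stump, calcification, tortuosity_ge_45deg,
                             length_ge_20mm, diseased_distal_landing_zone,
                             prior_cabg_on_cto_vessel):
        """One point per adverse feature (assumed weighting; range 0-6)."""
        return sum([blunt_stump, calcification, tortuosity_ge_45deg,
                    length_ge_20mm, diseased_distal_landing_zone,
                    prior_cabg_on_cto_vessel])

    print(recharge_style_score(True, True, False, True, False, False))  # 3
    ```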

  13. A Fast Alternating Minimization Algorithm for Nonlocal Vectorial Total Variational Multichannel Image Denoising

    Directory of Open Access Journals (Sweden)

    Rubing Xi

    2014-01-01

    Variational models with nonlocal regularization offer superior image restoration quality over traditional methods, but processing speed remains a bottleneck because of the computational load of recent iterative algorithms. In this paper, a fast algorithm is proposed to restore multichannel images in the presence of additive Gaussian noise by minimizing an energy function consisting of an l2-norm fidelity term and a nonlocal vectorial total variation regularization term. The algorithm is based on variable splitting and penalty techniques in optimization. Following our previous work on the proof of the existence and uniqueness of the solution of the model, we establish and prove the convergence properties of this algorithm: finite convergence for some variables and q-linear convergence for the rest. Experiments show that this model has an excellent texture-preserving property in restoring color images. Both the theoretical computational complexity analysis and the experimental results show that the proposed algorithm performs favorably in comparison to the widely used fixed-point algorithm.

  14. Developing Scoring Algorithms (Earlier Methods)

    Science.gov (United States)

    We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.

  15. Total Cerebral Small Vessel Disease MRI Score Is Associated With Cognitive Decline In Executive Function In Patients With Hypertension

    Directory of Open Access Journals (Sweden)

    Renske Uiterwijk

    2016-12-01

    Objectives: Hypertension is a major risk factor for white matter hyperintensities, lacunes, cerebral microbleeds and perivascular spaces, which are MRI markers of cerebral small vessel disease (SVD). Studies have shown associations between these individual MRI markers and cognitive functioning and decline. Recently, a total SVD score was proposed in which the different MRI markers were combined into one measure of SVD, to capture total SVD-related brain damage. We investigated whether this SVD score was associated with cognitive decline over 4 years in patients with hypertension. Methods: In this longitudinal cohort study, 130 hypertensive patients (91 patients with uncomplicated hypertension and 39 hypertensive patients with a lacunar stroke) were included. They underwent a neuropsychological assessment at baseline and after 4 years. The presence of white matter hyperintensities, lacunes, cerebral microbleeds, and perivascular spaces was rated on baseline MRI. Presence of each individual marker was added to calculate the total SVD score (range 0-4) in each patient. Results: Uncorrected linear regression analyses showed associations between SVD score and decline in overall cognition (p=0.017), executive functioning (p<0.001) and information processing speed (p=0.037), but not memory (p=0.911). The association between SVD score and decline in overall cognition and executive function remained significant after adjustment for age, sex, education, anxiety and depression score, potential vascular risk factors, patient group and baseline cognitive performance. Conclusions: Our study shows that a total SVD score can predict cognitive decline, specifically in executive function, over 4 years in hypertensive patients. This emphasizes the importance of considering total brain damage due to SVD.

  16. Research on prediction of agricultural machinery total power based on grey model optimized by genetic algorithm

    Science.gov (United States)

    Xie, Yan; Li, Mu; Zhou, Jin; Zheng, Chang-zheng

    2009-07-01

    Agricultural machinery total power is an important index for reflecting and evaluating the level of agricultural mechanization. It is the power source of agricultural production and one of the main factors in enhancing comprehensive agricultural production capacity, expanding production scale, and increasing farmers' income. Its demand is affected by natural, economic, technological, social, and other "grey" factors. Therefore, grey system theory can be used to analyze the development of agricultural machinery total power. A method based on a genetic algorithm optimizing the grey modeling process is introduced in this paper. This method makes full use of the advantages of the grey prediction model and of the genetic algorithm's capacity to find global optima, so the resulting prediction model is more accurate. Using data from one province, a GM(1,1) model for predicting agricultural machinery total power was built based on grey system theory and the genetic algorithm. The result indicates that the model can be used as an effective tool for predicting agricultural machinery total power.
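
    The GM(1,1) core is standard and short. The sketch below is the plain least-squares GM(1,1) forecast; the paper's genetic-algorithm optimization of the modeling process is not included.

    ```python
    import numpy as np

    def gm11_forecast(x, steps=3):
        """Classic GM(1,1) grey forecast for a short positive time series."""
        x = np.asarray(x, dtype=float)
        x1 = np.cumsum(x)                            # accumulated generating series
        z = 0.5 * (x1[1:] + x1[:-1])                 # background values
        B = np.column_stack([-z, np.ones(len(z))])
        a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]   # grey parameters
        k = np.arange(len(x) + steps)
        x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a  # fitted accumulated series
        x_hat = np.empty(len(k))
        x_hat[0] = x[0]
        x_hat[1:] = np.diff(x1_hat)                  # restore the original series
        return x_hat

    # Toy series standing in for yearly total machinery power (10^4 kW)
    print(gm11_forecast([203.0, 221.5, 240.3, 258.1, 276.9], steps=2))
    ```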

  17. Reliability and validation of the Dutch Achilles tendon Total Rupture Score.

    Science.gov (United States)

    Opdam, K T M; Zwiers, R; Wiegerinck, J I; Kleipool, A E B; Haverlag, R; Goslings, J C; van Dijk, C N

    2018-03-01

    Patient-reported outcome measures (PROMs) have become a cornerstone for the evaluation of the effectiveness of treatment. The Achilles tendon Total Rupture Score (ATRS) is a PROM for outcome and assessment of an Achilles tendon rupture. The aim of this study was to translate the ATRS to Dutch and evaluate its reliability and validity in the Dutch population. A forward-backward translation procedure was performed according to the guidelines of the cross-cultural adaptation process. The Dutch ATRS was evaluated for reliability and validity in patients treated for a total Achilles tendon rupture from 1 January 2012 to 31 December 2014 in one teaching hospital and one academic hospital. Reliability was assessed by the intraclass correlation coefficient (ICC), Cronbach's alpha and minimal detectable change (MDC). We assessed construct validity by calculation of Spearman's rho correlation coefficient with domains of the Foot and Ankle Outcome Score (FAOS), Victorian Institute of Sports Assessment-Achilles questionnaire (VISA-A) and Numeric Rating Scale (NRS) for pain in rest and during running. The Dutch ATRS had a good test-retest reliability (ICC = 0.852) and a high internal consistency (Cronbach's alpha = 0.96). MDC was 30.2 at individual level and 3.5 at group level. Construct validity was supported by 75 % of the hypothesized correlations. The Dutch ATRS had a strong correlation with NRS for pain during running (r = -0.746) and all the five subscales of the Dutch FAOS (r = 0.724-0.867). There was a moderate correlation with the VISA-A-NL (r = 0.691) and NRS for pain in rest (r = -0.580). The Dutch ATRS shows an adequate reliability and validity and can be used in the Dutch population for measuring the outcome of treatment of a total Achilles tendon rupture and for research purposes. Diagnostic study, Level I.
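
    Internal-consistency figures like the Cronbach's alpha above come from a simple formula over the item-score matrix; a minimal sketch on simulated responses:

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_variances / total_variance)

    rng = np.random.default_rng(3)
    latent = rng.normal(size=(50, 1))                        # common trait
    responses = latent + rng.normal(scale=0.5, size=(50, 10))
    print(cronbach_alpha(responses))   # high alpha for strongly related items
    ```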

  18. Optical Algorithms at Satellite Wavelengths for Total Suspended Matter in Tropical Coastal Waters

    Directory of Open Access Journals (Sweden)

    Alain Muñoz-Caravaca

    2008-07-01

    Is it possible to derive accurately Total Suspended Matter concentration or its proxy, turbidity, from remote sensing data in tropical coastal lagoon waters? To investigate this question, hyperspectral remote sensing reflectance, turbidity and chlorophyll pigment concentration were measured in three coral reef lagoons. The three sites enabled us to get data over very diverse environments: oligotrophic and sediment-poor waters in the southwest lagoon of New Caledonia, eutrophic waters in the Cienfuegos Bay (Cuba), and sediment-rich waters in the Laucala Bay (Fiji). In this paper, optical algorithms for turbidity are presented per site, based on 113 stations in New Caledonia, 24 stations in Cuba and 56 stations in Fiji. Empirical algorithms are tested at satellite wavebands useful to coastal applications. Global algorithms are also derived for the merged data set (193 stations). The performances of global and local regression algorithms are compared. The best one-band algorithms on all the measurements are obtained at 681 nm using either a polynomial or a power model. The best two-band algorithms are obtained with R412/R620, R443/R670 and R510/R681. Two three-band algorithms based on Rrs620.Rrs681/Rrs412 and Rrs620.Rrs681/Rrs510 also give fair regression statistics. Finally, we propose a global algorithm based on one or three bands: turbidity is first calculated from Rrs681 and then, if < 1 FTU, it is recalculated using an algorithm based on Rrs620.Rrs681/Rrs412. On our data set, this algorithm is suitable for the 0.2-25 FTU turbidity range and for the three sites sampled (mean bias: 3.6%, rms: 35%, mean quadratic error: 1.4 FTU). This shows that defining global empirical turbidity algorithms in tropical coastal waters is within reach.
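
    The proposed two-step global algorithm can be sketched directly from the description; the regression fits are site work, so the callables below are placeholders with invented coefficients, not the paper's:

    ```python
    def turbidity_global(rrs681, rrs620, rrs412, one_band_fit, three_band_fit):
        """Two-step scheme: estimate turbidity from Rrs(681); if the estimate
        is below 1 FTU, re-estimate from the ratio Rrs620*Rrs681/Rrs412."""
        t = one_band_fit(rrs681)
        if t < 1.0:
            t = three_band_fit(rrs620 * rrs681 / rrs412)
        return t

    # Placeholder fits for illustration only (coefficients are invented)
    one_band_fit = lambda r: 2.5e4 * r
    three_band_fit = lambda ratio: 8.0e3 * ratio
    print(turbidity_global(2e-5, 1.5e-5, 1.0e-5, one_band_fit, three_band_fit))
    ```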

  19. Optical Algorithms at Satellite Wavelengths for Total Suspended Matter in Tropical Coastal Waters

    Science.gov (United States)

    Ouillon, Sylvain; Douillet, Pascal; Petrenko, Anne; Neveux, Jacques; Dupouy, Cécile; Froidefond, Jean-Marie; Andréfouët, Serge; Muñoz-Caravaca, Alain

    2008-01-01

    Is it possible to derive accurately Total Suspended Matter concentration or its proxy, turbidity, from remote sensing data in tropical coastal lagoon waters? To investigate this question, hyperspectral remote sensing reflectance, turbidity and chlorophyll pigment concentration were measured in three coral reef lagoons. The three sites enabled us to get data over very diverse environments: oligotrophic and sediment-poor waters in the southwest lagoon of New Caledonia, eutrophic waters in the Cienfuegos Bay (Cuba), and sediment-rich waters in the Laucala Bay (Fiji). In this paper, optical algorithms for turbidity are presented per site, based on 113 stations in New Caledonia, 24 stations in Cuba and 56 stations in Fiji. Empirical algorithms are tested at satellite wavebands useful to coastal applications. Global algorithms are also derived for the merged data set (193 stations). The performances of global and local regression algorithms are compared. The best one-band algorithms on all the measurements are obtained at 681 nm using either a polynomial or a power model. The best two-band algorithms are obtained with R412/R620, R443/R670 and R510/R681. Two three-band algorithms based on Rrs620.Rrs681/Rrs412 and Rrs620.Rrs681/Rrs510 also give fair regression statistics. Finally, we propose a global algorithm based on one or three bands: turbidity is first calculated from Rrs681 and then, if < 1 FTU, it is recalculated using an algorithm based on Rrs620.Rrs681/Rrs412. On our data set, this algorithm is suitable for the 0.2-25 FTU turbidity range and for the three sites sampled (mean bias: 3.6%, rms: 35%, mean quadratic error: 1.4 FTU). This shows that defining global empirical turbidity algorithms in tropical coastal waters is within reach. PMID:27879929

  20. A Novel Risk Score in Predicting Failure or Success for Antegrade Approach to Percutaneous Coronary Intervention of Chronic Total Occlusion: Antegrade CTO Score.

    Science.gov (United States)

    Namazi, Mohammad Hasan; Serati, Ali Reza; Vakili, Hosein; Safi, Morteza; Parsa, Saeed Ali Pour; Saadat, Habibollah; Taherkhani, Maryam; Emami, Sepideh; Pedari, Shamseddin; Vatanparast, Masoomeh; Movahed, Mohammad Reza

    2017-06-01

    Total occlusion of a coronary artery for more than 3 months is defined as chronic total occlusion (CTO). The goal of this study was to develop a risk score in predicting failure or success during attempted percutaneous coronary intervention (PCI) of CTO lesions using antegrade approach. This study was based on retrospective analyses of clinical and angiographic characteristics of CTO lesions that were assessed between February 2012 and February 2014. Success rate was defined as passing through occlusion with successful stent deployment using an antegrade approach. A total of 188 patients were studied. Mean ± SD age was 59 ± 9 years. Failure rate was 33%. In a stepwise multivariate regression analysis, bridging collaterals (OR = 6.7, CI = 1.97-23.17, score = 2), absence of stump (OR = 5.8, CI = 1.95-17.9, score = 2), presence of calcification (OR = 3.21, CI = 1.46-7.07, score = 1), presence of bending (OR = 2.8, CI = 1.28-6.10, score = 1), presence of near side branch (OR = 2.7, CI = 1.08-6.57, score = 1), and absence of retrograde filling (OR = 2.5, CI = 1.03-6.17, score = 1) were independent predictors of PCI failure. A score of 7 or more was associated with 100% failure rate whereas a score of 2 or less was associated with over 80% success rate. Most factors associated with failure of CTO-PCI are related to lesion characteristics. A new risk score (range 0-8) is developed to predict CTO-PCI success or failure rate during antegrade approach as a guide before attempting PCI of CTO lesions.
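
    Because the abstract reports the point values explicitly (2, 2, 1, 1, 1, 1; range 0-8), the score can be written down directly:

    ```python
    def antegrade_cto_score(bridging_collaterals, absent_stump, calcification,
                            bending, near_side_branch, absent_retrograde_filling):
        """Antegrade CTO score: 2 points each for the first two features, 1 for the rest."""
        return (2 * bridging_collaterals + 2 * absent_stump + calcification
                + bending + near_side_branch + absent_retrograde_filling)

    s = antegrade_cto_score(True, False, True, False, True, True)
    print(s)  # 5; >= 7 predicted universal failure, <= 2 over 80% success
    ```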

  1. Comparison between the Harris- and Oxford Hip Score to evaluate outcomes one-year after total hip arthroplasty

    NARCIS (Netherlands)

    Weel, Hanneke; Lindeboom, Robert; Kuipers, Sander E.; Vervest, Ton M. J. S.

    2017-01-01

    The Harris Hip Score (HHS) is a surgeon-administered measurement for assessing hip function before and after total hip arthroplasty (THA). Patient-reported outcome measures (PROMs) such as the Oxford Hip Score (OHS) are increasingly used. The HHS was compared to the OHS to assess whether the HHS can...

  2. Reverse-total shoulder arthroplasty cost-effectiveness: A quality-adjusted life years comparison with total hip arthroplasty.

    Science.gov (United States)

    Bachman, Daniel; Nyland, John; Krupp, Ryan

    2016-02-18

    To compare reverse-total shoulder arthroplasty (RSA) cost-effectiveness with total hip arthroplasty cost-effectiveness, this study used a stochastic model and decision-making algorithm. Fifteen patients underwent pre-operative and 3-, 6-, and 12-month post-operative clinical examinations and completed the Short Form-36 Health Survey. Short Form-36 Health Survey subscale scores were converted to EuroQual Group Five Dimension Health Outcome scores and compared with historical data from age-matched patients who had undergone total hip arthroplasty. Quality-adjusted life year (QALY) improvements based on life expectancies were calculated. The cost/QALY was $3900 for total hip arthroplasty and $11100 for RSA. After adjusting the model to include only shoulder-specific physical function subscale items, the RSA QALY improved to 2.8 years, and its cost/QALY decreased to $8100. Based on industry-accepted standards, cost/QALY estimates supported the cost-effectiveness of both RSA and total hip arthroplasty. Although total hip arthroplasty remains the quality of life improvement "gold standard" among arthroplasty procedures, the cost/QALY estimates identified in this study support the growing use of RSA to improve patient quality of life.
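
    The cost/QALY arithmetic is simple: QALYs gained are the utility improvement times remaining life expectancy, and cost-effectiveness is cost divided by that product. The inputs below are illustrative, chosen only to land near the reported RSA figure:

    ```python
    def cost_per_qaly(procedure_cost, utility_gain, life_expectancy_years):
        """Cost divided by QALYs gained (utility gain x remaining life years)."""
        return procedure_cost / (utility_gain * life_expectancy_years)

    # Hypothetical inputs: $22,200 procedure, 0.10 utility gain, 20 years
    print(cost_per_qaly(22_200, 0.10, 20))   # 11100.0 dollars per QALY
    ```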

  3. Automatic Sleep Scoring in Normals and in Individuals with Neurodegenerative Disorders According to New International Sleep Scoring Criteria

    DEFF Research Database (Denmark)

    Jensen, Peter S.; Sørensen, Helge Bjarup Dissing; Leonthin, Helle

    2010-01-01

    The aim of this study was to develop a fully automatic sleep scoring algorithm on the basis of a reproduction of new international sleep scoring criteria from the American Academy of Sleep Medicine. A biomedical signal processing algorithm was developed, allowing for automatic sleep depth quantification of routine polysomnographic (PSG) recordings through feature extraction, supervised probabilistic Bayesian classification, and heuristic rule-based smoothing. Based on an observed reliability of the manual scorer of 92.5% (Cohen's Kappa: 0.87) in the normal group and 85.3% (Cohen's Kappa: 0.73) in the abnormal group, this study concluded that although the developed algorithm was capable of scoring normal sleep with an accuracy around the manual interscorer reliability, it failed to accurately score abnormal sleep as encountered in the Parkinson disease/multiple system atrophy patients.

  4. Automatic sleep scoring in normals and in individuals with neurodegenerative disorders according to new international sleep scoring criteria

    DEFF Research Database (Denmark)

    Jensen, Peter S; Sorensen, Helge B D; Jennum, Poul

    2010-01-01

    The aim of this study was to develop a fully automatic sleep scoring algorithm on the basis of a reproduction of new international sleep scoring criteria from the American Academy of Sleep Medicine. A biomedical signal processing algorithm was developed, allowing for automatic sleep depth quantification of routine polysomnographic (PSG) recordings through feature extraction, supervised probabilistic Bayesian classification, and heuristic rule-based smoothing. Based on an observed reliability of the manual scorer of 92.5% (Cohen's Kappa: 0.87) in the normal group and 85.3% (Cohen's Kappa: 0.73) in the abnormal group, this study concluded that although the developed algorithm was capable of scoring normal sleep with an accuracy around the manual interscorer reliability, it failed to accurately score abnormal sleep as encountered in the Parkinson disease/multiple system atrophy patients.

  5. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population

    OpenAIRE

    Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A.; Ono, Yutaka

    2016-01-01

    [Background]Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This lead us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item th...

  6. Optical Algorithms at Satellite Wavelengths for Total Suspended Matter in Tropical Coastal Waters.

    Science.gov (United States)

    Ouillon, Sylvain; Douillet, Pascal; Petrenko, Anne; Neveux, Jacques; Dupouy, Cécile; Froidefond, Jean-Marie; Andréfouët, Serge; Muñoz-Caravaca, Alain

    2008-07-10

    Is it possible to derive accurately Total Suspended Matter concentration or its proxy, turbidity, from remote sensing data in tropical coastal lagoon waters? To investigate this question, hyperspectral remote sensing reflectance, turbidity and chlorophyll pigment concentration were measured in three coral reef lagoons. The three sites enabled us to get data over very diverse environments: oligotrophic and sediment-poor waters in the southwest lagoon of New Caledonia, eutrophic waters in the Cienfuegos Bay (Cuba), and sediment-rich waters in the Laucala Bay (Fiji). In this paper, optical algorithms for turbidity are presented per site, based on 113 stations in New Caledonia, 24 stations in Cuba and 56 stations in Fiji. Empirical algorithms are tested at satellite wavebands useful to coastal applications. Global algorithms are also derived for the merged data set (193 stations). The performances of global and local regression algorithms are compared. The best one-band algorithms on all the measurements are obtained at 681 nm using either a polynomial or a power model. The best two-band algorithms are obtained with R412/R620, R443/R670 and R510/R681. Two three-band algorithms based on Rrs620.Rrs681/Rrs412 and Rrs620.Rrs681/Rrs510 also give fair regression statistics. Finally, we propose a global algorithm based on one or three bands: turbidity is first calculated from Rrs681 and then, if < 1 FTU, it is recalculated using an algorithm based on Rrs620.Rrs681/Rrs412. On our data set, this algorithm is suitable for the 0.2-25 FTU turbidity range and for the three sites sampled (mean bias: 3.6%, rms: 35%, mean quadratic error: 1.4 FTU). This shows that defining global empirical turbidity algorithms in tropical coastal waters is within reach.

  7. Negative emotions affect postoperative scores for evaluating functional knee recovery and quality of life after total knee replacement

    Directory of Open Access Journals (Sweden)

    A. Qi

    2016-01-01

    This study aimed to determine whether psychological factors affect health-related quality of life (HRQL) and recovery of knee function in total knee replacement (TKR) patients. A total of 119 TKR patients (male: 38; female: 81) completed the Beck Anxiety Inventory (BAI), Beck Depression Inventory (BDI), State Trait Anxiety Inventory (STAI), Eysenck Personality Questionnaire-revised (EPQR-S), Knee Society Score (KSS), and HRQL (SF-36). At 1 and 6 months after surgery, anxiety, depression, and KSS scores in TKR patients were significantly better compared with those preoperatively (P<0.05). SF-36 scores at the sixth month after surgery were significantly improved compared with preoperative scores (P<0.001). Preoperative Physical Component Summary Scale (PCS) and Mental Component Summary Scale (MCS) scores were negatively associated with the extraversion (E) score (B=-0.986 and -0.967, respectively, both P<0.05). Postoperative PCS and State Anxiety Inventory (SAI) scores were negatively associated with the neuroticism (N) score (B=-0.137 and -0.991, respectively, both P<0.05). Postoperative MCS, SAI, Trait Anxiety Inventory (TAI), and BAI scores were also negatively associated with the N score (B=-0.367, -0.107, -0.281, and -0.851, respectively, all P<0.05). The KSS function score at the sixth month after surgery was negatively associated with TAI and N scores (B=-0.315 and -0.532, respectively, both P<0.05), but positively associated with the E score (B=0.215, P<0.05). The postoperative KSS joint score was positively associated with postoperative PCS (B=0.356, P<0.05). In conclusion, for TKR patients, the scores used for evaluating recovery of knee function and HRQL after 6 months are inversely associated with the presence of negative emotions.

  8. Automatic sleep scoring in normals and in individuals with neurodegenerative disorders according to new international sleep scoring criteria

    DEFF Research Database (Denmark)

    Jensen, Peter S.; Sørensen, Helge Bjarup Dissing; Jennum, P. J.

    2010-01-01

    Introduction: Reliable polysomnographic classification is the basis for evaluation of sleep disorders in neurological diseases. Aim: To develop a fully automatic sleep scoring algorithm on the basis of a reproduction of new international sleep scoring criteria from the American Academy of Sleep Medicine (AASM). Methods: A biomedical signal processing algorithm was developed, allowing for automatic sleep depth quantification of routine polysomnographic (PSG) recordings through feature extraction, supervised probabilistic Bayesian classification, and heuristic rule-based smoothing. The performance... Conclusion: The developed algorithm was capable of scoring normal sleep with an accuracy around the manual inter-scorer reliability; it failed to accurately score abnormal sleep as encountered for the PD/MSA patients, which is due to the abnormal micro- and macrostructure pattern in these patients.

  9. Parameter selection in limited data cone-beam CT reconstruction using edge-preserving total variation algorithms

    Science.gov (United States)

    Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr

    2017-12-01

    There are a number of powerful total variation (TV) regularization methods that have great promise in limited data cone-beam CT reconstruction with an enhancement of image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms. An appropriate way of selecting the values for each individual parameter has been suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements the edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm is able to preserve the edges of the reconstructed images better with fewer sensitive parameters to tune.

  10. SENSITIVITY AND SPECIFICITY OF INDIVIDUAL BERG BALANCE ITEMS COMPARED WITH THE TOTAL SCORE TO PREDICT FALLS IN COMMUNITY DWELLING ELDERLY INDIVIDUALS

    Directory of Open Access Journals (Sweden)

    Hazel Denzil Dias

    2014-09-01

    Background: Falls are a major problem in the elderly, leading to increased morbidity and mortality in this population. Scores from objective clinical measures of balance have frequently been associated with falls in older adults. The Berg Balance Score (BBS), a frequently used scale to test balance impairments in the elderly, takes time to perform and has been found to have scoring inconsistencies. The purpose was to determine if individual items or a group of BBS items would have better accuracy than the total BBS in classifying community dwelling elderly individuals according to fall history. Method: 60 community dwelling elderly individuals were chosen based on a history of falls in this cross sectional study. Each BBS item was dichotomized at three points along the scoring scale of 0-4: between scores of 1 and 2, 2 and 3, and 3 and 4. Sensitivity (Sn), specificity (Sp), and positive (+LR) and negative (-LR) likelihood ratios were calculated for all items for each scoring dichotomy, based on their accuracy in classifying subjects with a history of multiple falls. These findings were compared with the total BBS score, where the cut-off score was derived from receiver operating characteristic (ROC) curve analysis. Results: On analysing combinations of BBS items, B9 and B11 were found to have the best sensitivity and specificity when considered together. However, the area under the curve (AUC) of these items was 0.799, which did not match that of the total score (AUC = 0.837). A combination of 4 BBS items (B9, B11, B12 and B13) also had good Sn and Sp, but the AUC was 0.815. The combination with the AUC closest to that of the total score was the combination of items B11 and B13 (AUC = 0.824); hence these two items can be used as the best predictor of falls, with a cut-off of 6.5. The ROC curve of the total Berg Balance Scale scores revealed a cut-off score of 48.5. Conclusion: This study showed that the combination of items B11 and B13 may be the best predictor of falls in community dwelling elderly individuals.
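
    The per-item statistics used here follow from a 2x2 table at each dichotomy; a sketch on simulated fall data (thresholds and effect sizes are invented):

    ```python
    import numpy as np

    def sn_sp_lr(test_positive, faller):
        """Sensitivity, specificity, +LR and -LR for a dichotomized item."""
        tp = np.sum(test_positive & faller)
        fn = np.sum(~test_positive & faller)
        tn = np.sum(~test_positive & ~faller)
        fp = np.sum(test_positive & ~faller)
        sn, sp = tp / (tp + fn), tn / (tn + fp)
        return sn, sp, sn / (1 - sp), (1 - sn) / sp

    rng = np.random.default_rng(4)
    faller = rng.random(60) < 0.4
    item = rng.integers(0, 5, 60) - faller       # fallers tend to score lower
    print(sn_sp_lr(item <= 2, faller))           # dichotomy between scores 2 and 3
    ```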


  12. Adaptive testing with equated number-correct scoring

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1999-01-01

    A constrained CAT algorithm is presented that automatically equates the number-correct scores on adaptive tests. The algorithm can be used to equate number-correct scores across different administrations of the same adaptive test as well as to an external reference test. The constraints are derived

  13. Newton-Gauss Algorithm of Robust Weighted Total Least Squares Model

    Directory of Open Access Journals (Sweden)

    WANG Bin

    2015-06-01

    Based on the Newton-Gauss iterative algorithm for weighted total least squares (WTLS), a robust WTLS (RWTLS) model is presented. The model utilizes standardized residuals to construct the weight factor function, and a robust estimate of the square root of the variance component is obtained by introducing the median method. Therefore, robustness in both the observation and structure spaces can be achieved simultaneously. To obtain standardized residuals, the linearly approximate cofactor propagation law is employed to derive the expression for the cofactor matrix of the WTLS residuals. The iterative calculation steps for RWTLS are also described. The experiment indicates that the model proposed in this paper exhibits satisfactory robustness for the gross-error handling problem of WTLS; the obtained parameters show no significant difference from the results of WTLS without gross errors. Therefore, it is superior to a robust weighted total least squares model constructed directly with residuals.

  14. Fast index based algorithms and software for matching position specific scoring matrices

    Directory of Open Access Journals (Sweden)

    Homann Robert

    2006-08-01

    Background: In biological sequence analysis, position specific scoring matrices (PSSMs) are widely used to represent sequence motifs in nucleotide as well as amino acid sequences. Searching with PSSMs in complete genomes or large sequence databases is a common, but computationally expensive task. Results: We present a new non-heuristic algorithm, called ESAsearch, to efficiently find matches of PSSMs in large databases. Our approach preprocesses the search space, e.g., a complete genome or a set of protein sequences, and builds an enhanced suffix array that is stored on file. This allows the searching of a database with a PSSM in sublinear expected time. Since ESAsearch benefits from small alphabets, we present a variant operating on sequences recoded according to a reduced alphabet. We also address the problem of non-comparable PSSM scores by developing a method which allows the efficient computation of a matrix similarity threshold for a PSSM, given an E-value or a p-value. Our method is based on dynamic programming and, in contrast to other methods, it employs lazy evaluation of the dynamic programming matrix. We evaluated algorithm ESAsearch with nucleotide PSSMs and with amino acid PSSMs. Compared to the best previous methods, ESAsearch shows speedups of a factor between 17 and 275 for nucleotide PSSMs, and speedups up to factor 1.8 for amino acid PSSMs. Comparisons with the most widely used programs even show speedups by a factor of at least 3.8. Alphabet reduction yields an additional speedup factor of 2 on amino acid sequences compared to results achieved with the 20 symbol standard alphabet. The lazy evaluation method is also much faster than previous methods, with speedups of a factor between 3 and 330. Conclusion: Our analysis of ESAsearch reveals sublinear runtime in the expected case, and linear runtime in the worst case for sequences not shorter than |A|^m, where m is the length of the PSSM and A the underlying alphabet.
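
    For orientation, the brute-force PSSM scan that ESAsearch accelerates looks as follows; the enhanced suffix array traversal with lookahead score bounds, which is the actual contribution, is not shown:

    ```python
    import numpy as np

    ALPHABET = "ACGT"

    def pssm_scan(seq, pssm, threshold):
        """Naive O(n*m) scan: report windows whose PSSM score >= threshold."""
        idx = {c: i for i, c in enumerate(ALPHABET)}
        m = pssm.shape[0]
        hits = []
        for pos in range(len(seq) - m + 1):
            score = sum(pssm[j, idx[seq[pos + j]]] for j in range(m))
            if score >= threshold:
                hits.append((pos, score))
        return hits

    pssm = np.array([[2, -1, -1, -1],    # position 1 strongly prefers A
                     [-1, 2, -1, -1],    # position 2 prefers C
                     [-1, -1, 2, 2]])    # position 3 accepts G or T
    print(pssm_scan("AACGTTACG", pssm, threshold=5))   # hits at positions 1 and 6
    ```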

  15. Waste Load Allocation Based on Total Maximum Daily Load Approach Using the Charged System Search (CSS Algorithm

    Directory of Open Access Journals (Sweden)

    Elham Faraji

    2016-03-01

    In this research, the capability of the charged system search (CSS) algorithm in handling water management optimization problems is investigated. First, two complex mathematical problems are solved by CSS and the results are compared with those obtained from other metaheuristic algorithms. Then, the optimization model developed with the CSS algorithm is applied to waste load allocation in rivers based on the total maximum daily load (TMDL) concept. The results are presented in tables and figures for easy comparison. The study indicates the superiority of the CSS algorithm in terms of speed and performance over the other metaheuristic algorithms, while its precision in water management optimization problems is verified.

  16. Hospital Value-Based Purchasing (HVBP) – Total Performance Score

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of hospitals participating in the Hospital VBP Program and their Clinical Process of Care domain scores, Patient Experience of Care dimension scores, and...

  17. Analysing Music with Point-Set Compression Algorithms

    DEFF Research Database (Denmark)

    Meredith, David

    2016-01-01

    Several point-set pattern-discovery and compression algorithms designed for analysing music are reviewed and evaluated. Each algorithm takes as input a point-set representation of a score in which each note is represented as a point in pitch-time space. Each algorithm computes the maximal translatable patterns in this point set and uses them to build a compressed encoding. The algorithms were evaluated on two tasks: classifying folk song melodies into tune families, and discovering repeated themes and sections in pieces of classical music. On the first task, the best-performing algorithms achieved success rates of around 84%. In the second task, the best algorithms achieved mean F1 scores of around 0.49, with scores for individual pieces rising as high as 0.71.

  18. Validity and Reliability of the Achilles Tendon Total Rupture Score

    DEFF Research Database (Denmark)

    Ganestam, Ann; Barfod, Kristoffer; Klit, Jakob

    2013-01-01

    The best treatment of acute Achilles tendon rupture remains debated. Patient-reported outcome measures have become cornerstones in treatment evaluations. The Achilles tendon total rupture score (ATRS) has been developed for this purpose but requires additional validation. The purpose of the present study was to validate a Danish translation of the ATRS. The ATRS was translated into Danish according to internationally adopted standards. Of 142 patients, 90 with previous rupture of the Achilles tendon participated in the validity study and 52 in the reliability study. The ATRS showed moderately... (= .07). The limits of agreement were ±18.53. A strong correlation was found between test and retest (intraclass correlation coefficient 0.908); the standard error of measurement was 6.7, and the minimal detectable change was 18.5. The Danish version of the ATRS showed moderately strong criterion validity.

  19. A locally adapted functional outcome measurement score for total ...

    African Journals Online (AJOL)

    ... in Europe or North America and seem not optimally suited for a general West African population. We introduce a cross-cultural adaptation of the Lequesne index as a new score. ... Keywords: THR, Hip, Africa, Functional score, Hip replacement, Arthroscopy ...

  20. Android Malware Classification Using K-Means Clustering Algorithm

    Science.gov (United States)

    Hamid, Isredza Rahmi A.; Syafiqah Khalid, Nur; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Chai Wen, Chuah

    2017-08-01

    Malware is designed to gain access to, or damage, a computer system without the user's knowledge, and attackers exploit malware to commit crime or fraud. This paper proposes an Android malware classification approach based on the K-Means clustering algorithm. We evaluate the proposed model in terms of accuracy using machine learning algorithms. Two datasets were selected to demonstrate the practical application of the K-Means clustering algorithm: the VirusTotal and Malgenome datasets. We classify the Android malware into three clusters: ransomware, scareware and goodware. Nine features were considered for each type of dataset: Lock Detected, Text Detected, Text Score, Encryption Detected, Threat, Porn, Law, Copyright and Moneypak. We used IBM SPSS Statistics software for data classification and the WEKA tool to evaluate the built clusters. The proposed K-Means clustering approach shows promising results with high accuracy when tested using the Random Forest algorithm.
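
    A minimal version of the clustering step, using scikit-learn rather than SPSS/WEKA, with a random stand-in for the nine-feature matrix:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(5)
    X = rng.random((300, 9))   # rows: apps; columns: the nine named features

    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    labels = kmeans.labels_    # cluster ids, to be interpreted afterwards as
                               # ransomware / scareware / goodware
    print(np.bincount(labels))
    ```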

  1. Can the pre-operative Western Ontario and McMaster score predict patient satisfaction following total hip arthroplasty?

    Science.gov (United States)

    Rogers, B A; Alolabi, B; Carrothers, A D; Kreder, H J; Jenkinson, R J

    2015-02-01

    In this study we evaluated whether pre-operative Western Ontario and McMaster Universities (WOMAC) osteoarthritis scores can predict satisfaction following total hip arthroplasty (THA). Prospective data for a cohort of patients undergoing THA from two large academic centres were collected, and pre-operative and one-year post-operative WOMAC scores and a 25-point satisfaction questionnaire were obtained for 446 patients. Satisfaction scores were dichotomised into either improvement or deterioration. Scatter plots and Spearman's rank correlation coefficient were used to describe the association between pre-operative WOMAC and one-year post-operative WOMAC scores and patient satisfaction. Satisfaction was compared using receiver operating characteristic (ROC) analysis against pre-operative, post-operative and δ WOMAC scores. We found no relationship between pre-operative WOMAC scores and one-year post-operative WOMAC or satisfaction scores, with Spearman's rank correlation coefficients of 0.16 and -0.05, respectively. The ROC analysis showed areas under the curve (AUC) of 0.54 (pre-operative WOMAC), 0.67 (post-operative WOMAC) and 0.43 (δ WOMAC), respectively, for an improvement in satisfaction. We conclude that the pre-operative WOMAC score does not predict the post-operative WOMAC score or patient satisfaction after THA, and that WOMAC scores can therefore not be used to prioritise patient care. ©2015 The British Editorial Society of Bone & Joint Surgery.

  2. External validation of the DHAKA score and comparison with the current IMCI algorithm for the assessment of dehydration in children with diarrhoea: a prospective cohort study.

    Science.gov (United States)

    Levine, Adam C; Glavis-Bloom, Justin; Modi, Payal; Nasrin, Sabiha; Atika, Bita; Rege, Soham; Robertson, Sarah; Schmid, Christopher H; Alam, Nur H

    2016-10-01

    Dehydration due to diarrhoea is a leading cause of child death worldwide, yet no clinical tools for assessing dehydration have been validated in resource-limited settings. The Dehydration: Assessing Kids Accurately (DHAKA) score was derived for assessing dehydration in children with diarrhoea in a low-income country setting. In this study, we aimed to externally validate the DHAKA score in a new population of children and compare its accuracy and reliability to the current Integrated Management of Childhood Illness (IMCI) algorithm. DHAKA was a prospective cohort study done in children younger than 60 months presenting to the International Centre for Diarrhoeal Disease Research, Bangladesh, with acute diarrhoea (defined by WHO as three or more loose stools per day for less than 14 days). Local nurses assessed children and classified their dehydration status using both the DHAKA score and the IMCI algorithm. Serial weights were obtained and dehydration status was established by percentage weight change with rehydration. We did regression analyses to validate the DHAKA score and compared the accuracy and reliability of the DHAKA score and IMCI algorithm with receiver operator characteristic (ROC) curves and the weighted κ statistic. This study was registered with ClinicalTrials.gov, number NCT02007733. Between March 22, 2015, and May 15, 2015, 496 patients were included in our primary analyses. On the basis of our criterion standard, 242 (49%) of 496 children had no dehydration, 184 (37%) of 496 had some dehydration, and 70 (14%) of 496 had severe dehydration. In multivariable regression analyses, each 1-point increase in the DHAKA score predicted an increase of 0·6% in the percentage dehydration of the child and increased the odds of both some and severe dehydration by a factor of 1·4. Both the accuracy and reliability of the DHAKA score were significantly greater than those of the IMCI algorithm. The DHAKA score is the first clinical tool for assessing

  3. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    Science.gov (United States)

    Zand, Toktam; Gholami, Ali

    2018-04-01

    Many problems in applied geophysics can be formulated as linear inverse problems. The associated problems, however, are large-scale and ill-conditioned, so regularization techniques need to be employed to solve them and to generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem, we use blockiness as prior information about the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the arranged problem. Two main advantages of this algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: 1) velocity inversion from (synthetic) seismic data based on the Born approximation, and 2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
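
    The BOS scheme described here combines a gradient (forward) step on the data-fidelity term, a TV proximal (backward) step, and a Bregman update of the data, stopped by a discrepancy criterion. A toy 1-D sketch of that pattern, with skimage's TV denoiser standing in for the TV proximal step (a sketch under simplifying assumptions, not the paper's solver):

    ```python
    # Toy Bregmanized operator-splitting loop for TV-regularized inversion.
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle  # stands in for the TV prox

    rng = np.random.default_rng(2)
    n = 64
    A = rng.standard_normal((80, n))                   # hypothetical forward operator
    x_true = np.repeat([0.0, 1.0, 0.5, 2.0], n // 4)   # blocky "velocity" model
    b = A @ x_true + 0.01 * rng.standard_normal(80)

    x, bk = np.zeros(n), b.copy()
    step = 1.0 / np.linalg.norm(A, 2) ** 2             # forward-backward step size
    for _ in range(200):
        grad = A.T @ (A @ x - bk)                                 # gradient (fidelity) step
        x = denoise_tv_chambolle(x - step * grad, weight=0.05)    # TV proximal step
        bk += b - A @ x                                           # Bregman update of the data
        if np.linalg.norm(A @ x - b) < 0.02 * np.linalg.norm(b):  # discrepancy stopping rule
            break
    print(np.round(x[:8], 2))
    ```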

  4. Matching score based face recognition

    NARCIS (Netherlands)

    Boom, B.J.; Beumer, G.M.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2006-01-01

    Accurate face registration is of vital importance to the performance of a face recognition algorithm. We propose a new method: matching score based face registration, which searches for optimal alignment by maximizing the matching score output of a classifier as a function of the different

  5. Validity and reliability of the Achilles tendon total rupture score.

    Science.gov (United States)

    Ganestam, Ann; Barfod, Kristoffer; Klit, Jakob; Troelsen, Anders

    2013-01-01

    The best treatment of acute Achilles tendon rupture remains debated. Patient-reported outcome measures have become cornerstones in treatment evaluations. The Achilles tendon total rupture score (ATRS) has been developed for this purpose but requires additional validation. The purpose of the present study was to validate a Danish translation of the ATRS. The ATRS was translated into Danish according to internationally adopted standards. Of 142 patients, 90 with previous rupture of the Achilles tendon participated in the validity study and 52 in the reliability study. The ATRS showed moderately strong correlations with the physical subscores of the Medical Outcomes Study 36-item Short-Form Health Survey (r = .70 to .75) and with another questionnaire (r = .71), indicating criterion validity. For study and follow-up purposes, the ATRS seems reliable for comparisons of groups of patients. Its usability is limited for repeated assessment of individual patients. The development of analysis guidelines would be desirable. Copyright © 2013 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
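
    The reliability statistics in this record (and in the snippet of the same study earlier in this listing) are tied together by the standard relations SEM = SD·sqrt(1 − ICC) and MDC = 1.96·sqrt(2)·SEM; a quick numerical check (the between-subject SD below is back-computed, not reported in the record):

    ```python
    # Consistency check of the reported reliability statistics.
    import math

    icc = 0.908                       # test-retest intraclass correlation (from the record)
    sem = 6.7                         # standard error of measurement (from the record)
    mdc = 1.96 * math.sqrt(2) * sem   # minimal detectable change
    sd = sem / math.sqrt(1 - icc)     # implied between-subject SD (back-computed)
    print(f"MDC = {mdc:.1f}")         # 18.6, i.e. the reported 18.5 up to rounding of the SEM
    print(f"implied SD = {sd:.1f}")
    ```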

  6. The TOMS V9 Algorithm for OMPS Nadir Mapper Total Ozone: An Enhanced Design That Ensures Data Continuity

    Science.gov (United States)

    Haffner, D. P.; McPeters, R. D.; Bhartia, P. K.; Labow, G. J.

    2015-12-01

    The TOMS V9 total ozone algorithm will be applied to the OMPS Nadir Mapper instrument to supersede the existing V8.6 data product in operational processing and re-processing for public release. Because the quality of the V8.6 data is already quite high, enhancements in V9 mainly concern information provided by the retrieval and simplifications to the algorithm. The design of the V9 algorithm has been influenced by improvements in our knowledge of atmospheric effects, such as those of clouds made possible by studies with OMI, and also by limitations in the V8 algorithms applied to both OMI and OMPS. But the namesake instruments of the TOMS algorithm are substantially more limited in their spectral and noise characteristics, and a requirement of our algorithm is that it also apply to these discrete-band spectrometers, which date back to 1978. To achieve continuity for all these instruments, the TOMS V9 algorithm continues to use radiances in discrete bands, but now uses Rodgers optimal estimation to retrieve a coarse profile and provide uncertainties for each retrieval. The algorithm remains capable of achieving high-accuracy results with a small number of discrete wavelengths, and in extreme cases, such as unusual profile shapes and high solar zenith angles, the quality of the retrievals is improved. Despite the intended design to use limited wavelengths, the algorithm can also utilize additional wavelengths from hyperspectral sensors like OMPS to augment the retrieval's error detection and information content, for example SO2 detection and correction of the Ring effect on atmospheric radiances. We discuss these and other aspects of the V9 algorithm as it will be applied to OMPS, and mention potential improvements which aim to take advantage of a synergy between the OMPS Limb Profiler and Nadir Mapper to further improve the quality of total ozone from the OMPS instrument.
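
    For context, the linear optimal-estimation update underlying a Rodgers-style retrieval is x_hat = x_a + (K' Se^-1 K + Sa^-1)^-1 K' Se^-1 (y - K x_a). A generic numerical sketch (hypothetical matrices, not the V9 implementation):

    ```python
    # Generic linear Rodgers optimal-estimation step with hypothetical matrices.
    import numpy as np

    rng = np.random.default_rng(3)
    m, n = 6, 3                             # 6 discrete bands, 3 coarse profile layers
    K = rng.standard_normal((m, n))         # weighting-function (Jacobian) matrix
    Sa = 4.0 * np.eye(n)                    # a priori covariance
    Se = 0.01 * np.eye(m)                   # measurement-noise covariance
    xa = np.zeros(n)                        # a priori state
    y = K @ np.array([1.0, -0.5, 0.3]) + 0.1 * rng.standard_normal(m)

    Se_inv = np.linalg.inv(Se)
    S_hat = np.linalg.inv(K.T @ Se_inv @ K + np.linalg.inv(Sa))  # posterior covariance
    x_hat = xa + S_hat @ K.T @ Se_inv @ (y - K @ xa)             # retrieved state
    print(np.round(x_hat, 2), np.round(np.sqrt(np.diag(S_hat)), 3))  # state + 1-sigma
    ```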

  7. Comparison of five actigraphy scoring methods with bipolar disorder.

    Science.gov (United States)

    Boudebesse, Carole; Leboyer, Marion; Begley, Amy; Wood, Annette; Miewald, Jean; Hall, Martica; Frank, Ellen; Kupfer, David; Germain, Anne

    2013-01-01

    The goal of this study was to compare five actigraphy scoring methods in a sample of 18 remitted patients with bipolar disorder. Actigraphy records were processed using five different scoring methods: the sleep diary; the event-marker; the software-provided automatic algorithm; the automatic algorithm supplemented by the event-marker; and visual inspection (VI) only. The algorithm and the VI methods differed from the other methods on many actigraphy parameters of interest. In particular, the algorithm method yielded longer sleep duration, and the VI method yielded shorter sleep latency, compared to the other methods. The present findings provide guidance for the selection of a signal-processing method based on the sleep parameters of interest, the time-cue sources available, and the related scoring time costs for the study.

  8. Fall Risk Score at the Time of Discharge Predicts Readmission Following Total Joint Arthroplasty.

    Science.gov (United States)

    Ravi, Bheeshma; Nan, Zhang; Schwartz, Adam J; Clarke, Henry D

    2017-07-01

    Readmission among Medicare recipients is a leading driver of healthcare expenditure. To date, most predictive tools are too coarse for direct clinical application. Our objective in this study was to determine whether a pre-existing tool for identifying patients at increased risk of inpatient falls, the Hendrich Fall Risk Score, could be used to accurately identify Medicare patients at increased risk for readmission following arthroplasty, regardless of whether the readmission was due to a fall. This is a retrospective cohort study. We identified 2437 Medicare patients who underwent a primary elective total joint arthroplasty (TJA) of the hip or knee for osteoarthritis between 2011 and 2014. The Hendrich Fall Risk score was recorded for each patient preoperatively and postoperatively. Our main outcome measure was hospital readmission within 30 days of discharge. Of 2437 eligible TJA recipients, 226 (9.3%) patients had a score ≥6. These patients were more likely to have an unplanned readmission (unadjusted odds ratio 2.84, 95% confidence interval 1.70-4.76), more likely to have a length of stay longer than 3 days (49.6% vs 36.6%, P = .0001), and less likely to be sent home after discharge (20.8% vs 35.8%). An elevated fall risk score after TJA is strongly associated with unplanned readmission. Application of this tool will allow hospitals to identify these patients and plan their discharge. Copyright © 2017 Elsevier Inc. All rights reserved.
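
    The unadjusted odds ratio quoted above is the cross-product ratio of a 2×2 exposure-outcome table. A minimal illustration with hypothetical cell counts (the record reports the OR and confidence interval, not the underlying table):

    ```python
    # Unadjusted odds ratio and Wald 95% CI from a 2x2 table (hypothetical counts).
    import math

    a, b = 40, 186      # score >= 6: readmitted / not readmitted
    c, d = 150, 2061    # score < 6:  readmitted / not readmitted
    odds_ratio = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)       # SE of log(OR)
    lo = math.exp(math.log(odds_ratio) - 1.96 * se_log)
    hi = math.exp(math.log(odds_ratio) + 1.96 * se_log)
    print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
    ```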

  9. Semiparametric score level fusion: Gaussian copula approach

    NARCIS (Netherlands)

    Susyanyo, N.; Klaassen, C.A.J.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2015-01-01

    Score level fusion is an appealing method for combining multi-algorithm, multi-representation, and multi-modality biometrics due to its simplicity. Often, scores are assumed to be independent, but even for dependent scores, according to the Neyman-Pearson lemma, the likelihood ratio is the

  10. Validation of use of subsets of teeth when applying the total mouth periodontal score (TMPS) system in dogs.

    Science.gov (United States)

    Harvey, Colin E; Laster, Larry; Shofer, Frances S

    2012-01-01

    A total mouth periodontal score (TMPS) system in dogs has been described previously. Use of the buccal and palatal/lingual surfaces of all teeth requires observation and recording of 120 gingivitis scores and 120 periodontitis scores. Although the result is a reliable, repeatable assessment of the extent of periodontal disease in the mouth, observing and recording 240 data points is time-consuming. Using data from a previously reported study of periodontal disease in dogs, correlation analysis was used to determine whether any of seven different subsets of teeth can generate TMPS subset gingivitis and periodontitis scores that are highly correlated with TMPS all-site, all-teeth scores. Overall, gingivitis scores were less highly correlated than periodontitis scores. The minimal tooth set with a significant intra-class correlation (≥0.9 for means of right and left sides) for both gingivitis scores and attachment loss measurements consisted of the buccal surfaces of the maxillary third incisor, canine, third premolar, fourth premolar, and first molar teeth, and the mandibular canine, third premolar, fourth premolar, and first molar teeth on one side (9 teeth, 15 root sites). Use of this subset of teeth, which reduces the number of data points per dog from 240 to 30 for gingivitis and periodontitis at each scoring episode, is recommended when calculating gingivitis and periodontitis scores using the TMPS system.

  11. Do Press Ganey Scores Correlate With Total Knee Arthroplasty-Specific Outcome Questionnaires in Postsurgical Patients?

    Science.gov (United States)

    Chughtai, Morad; Patel, Nirav K; Gwam, Chukwuweike U; Khlopas, Anton; Bonutti, Peter M; Delanois, Ronald E; Mont, Michael A

    2017-09-01

    The purpose of this study was to assess whether Centers for Medicare and Medicaid Services-implemented satisfaction (Press Ganey [PG]) survey results correlate with established total knee arthroplasty (TKA) assessment tools. Data from 736 patients who underwent TKA and received a PG survey between November 2009 and January 2015 were analyzed. The PG survey overall hospital rating scores were correlated with standardized, validated outcome assessment tools for TKA (Short Form-12 and -36 Health Surveys; Knee Society Score; Western Ontario and McMaster Universities Arthritis Index; University of California, Los Angeles activity score; and visual analog scale) at a mean follow-up of 1154 days post-TKA. There was no correlation between the PG survey overall hospital rating score and the above-mentioned outcome assessment tools. Our study shows that there is no statistically significant relationship between established arthroplasty assessment tools and the PG overall hospital rating. Therefore, PG surveys may not be an appropriate tool to determine reimbursement for orthopedists performing TKAs. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Using Electromagnetic Algorithm for Total Costs of Sub-contractor Optimization in the Cellular Manufacturing Problem

    Directory of Open Access Journals (Sweden)

    Mohammad Reza Shahriari

    2016-12-01

    In this paper, we present a non-linear binary program for optimizing a specific cost in a cellular manufacturing system under controlled production conditions. The system parameters are determined by continuous distribution functions. The aim of the presented model is to optimize the total cost imposed by sub-contractors on the manufacturing system by determining how to allocate machines and parts to each seller. In this system, the decision maker can control the occupation level of each machine. For solving the presented model, we used the electromagnetism meta-heuristic algorithm, with the Taguchi method used to determine the optimal algorithm parameters.

  13. Empirical scoring functions for advanced protein-ligand docking with PLANTS.

    Science.gov (United States)

    Korb, Oliver; Stützle, Thomas; Exner, Thomas E

    2009-01-01

    In this paper we present two empirical scoring functions, PLANTS(CHEMPLP) and PLANTS(PLP), designed for our docking algorithm PLANTS (Protein-Ligand ANT System), which is based on ant colony optimization (ACO). They are related, in their functional form, to parts of already published scoring functions and force fields. The parametrization procedure described here was able to identify several parameter settings showing excellent performance for the task of pose prediction on two test sets comprising 298 complexes in total. Up to 87% of the complexes of the Astex diverse set and 77% of the CCDC/Astex clean listnc (noncovalently bound complexes of the clean list) could be reproduced with root-mean-square deviations of less than 2 Å with respect to the experimentally determined structures. A comparison with the state-of-the-art docking tool GOLD clearly shows that this is, especially for the druglike Astex diverse set, an improvement in pose prediction performance. Additionally, optimized parameter settings for the search algorithm were identified, which can be used to balance pose prediction reliability and search speed.

  14. Use of the Liverpool Elbow Score as a postal questionnaire for the assessment of outcome after total elbow arthroplasty.

    Science.gov (United States)

    Ashmore, Alexander M; Gozzard, Charles; Blewitt, Neil

    2007-01-01

    The Liverpool Elbow Score (LES) is a newly developed, validated elbow-specific score. It consists of a patient-answered questionnaire (PAQ) and a clinical assessment. The purpose of this study was to determine whether the PAQ portion of the LES could be used independently as a postal questionnaire for the assessment of outcome after total elbow arthroplasty, and to correlate the LES with the Mayo Elbow Performance Score (MEPS). A series of 51 total elbow replacements were reviewed by postal questionnaire. Patients then attended the clinic for assessment by use of both the LES and the MEPS. There was an excellent response rate to the postal questionnaire (98%), and 44 elbows were available for clinical review. Good correlation was shown between the LES and the MEPS (Spearman correlation coefficient, 0.84) and between the PAQ portion of the LES and the MEPS (Spearman correlation coefficient, 0.76), suggesting that outcome assessment is possible by postal questionnaire.

  15. Recent Advancements in Lightning Jump Algorithm Work

    Science.gov (United States)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2010-01-01

    In the past year, the primary objectives were to show the usefulness of total lightning as compared to traditional cloud-to-ground (CG) networks, test the lightning jump algorithm configurations in other regions of the country, increase the number of thunderstorms within our thunderstorm database, and pinpoint environments that could prove difficult for any lightning jump configuration. A total of 561 thunderstorms have been examined in the past year (409 non-severe, 152 severe) from four regions of the country (North Alabama, Washington D.C., High Plains of CO/KS, and Oklahoma). Results continue to indicate that the 2σ lightning jump algorithm configuration holds the most promise among prospective operational lightning jump algorithms, with a probability of detection (POD) of 81%, a false alarm rate (FAR) of 45%, a critical success index (CSI) of 49% and a Heidke Skill Score (HSS) of 0.66. The second best performing configuration was the Threshold 4 algorithm, with a POD of 72%, FAR of 51%, CSI of 41% and HSS of 0.58. Because a more complex algorithm configuration shows the most promise for operational use, accurate thunderstorm cell tracking must be undertaken to track lightning trends on an individual thunderstorm basis over time. While these numbers for the 2σ configuration are impressive, the algorithm does have its weaknesses. Specifically, low-topped and tropical cyclone thunderstorm environments present issues for the 2σ lightning jump algorithm because of the impact of suppressed vertical depth on overall flash counts (i.e., a relative dearth in lightning). For example, in a sample of 120 thunderstorms from northern Alabama that contained 72 events missed by the 2σ algorithm, 36% of the misses were associated with these two environments (17 storms).
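
    The verification metrics quoted in this record all derive from a 2×2 contingency table of algorithm warnings versus observed severe weather; a small sketch of the formulas with hypothetical counts:

    ```python
    # POD, FAR, CSI and HSS from a 2x2 contingency table (hypothetical counts).
    hits, misses = 50, 10             # severe storms with / without a preceding jump
    false_alarms, correct_nulls = 20, 120

    pod = hits / (hits + misses)                  # probability of detection
    far = false_alarms / (hits + false_alarms)    # false alarm ratio
    csi = hits / (hits + misses + false_alarms)   # critical success index
    hss = (2 * (hits * correct_nulls - misses * false_alarms)
           / ((hits + misses) * (misses + correct_nulls)
              + (hits + false_alarms) * (false_alarms + correct_nulls)))
    print(f"POD={pod:.2f} FAR={far:.2f} CSI={csi:.2f} HSS={hss:.2f}")
    ```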

  16. Association between Diet-Quality Scores, Adiposity, Total Cholesterol and Markers of Nutritional Status in European Adults: Findings from the Food4Me Study

    Directory of Open Access Journals (Sweden)

    Rosalind Fallaize

    2018-01-01

    Diet-quality scores (DQS), which are developed across the globe, are used to define adherence to specific eating patterns and have been associated with risk of coronary heart disease and type-II diabetes. We explored the association between five diet-quality scores (Healthy Eating Index, HEI; Alternate Healthy Eating Index, AHEI; MedDietScore, MDS; PREDIMED Mediterranean Diet Score, P-MDS; Dutch Healthy Diet-Index, DHDI) and markers of metabolic health (anthropometry, objective physical activity levels (PAL), and dried blood spot total cholesterol (TC), total carotenoids, and omega-3 index) in the Food4Me cohort, using regression analysis. Dietary intake was assessed using a validated Food Frequency Questionnaire. Participants (n = 1480) were adults recruited from seven European Union (EU) countries. Overall, women had higher HEI and AHEI than men (p < 0.05), and scores varied significantly between countries. For all DQS, higher scores were associated with lower body mass index, lower waist-to-height ratio and waist circumference, and higher total carotenoids and omega-3 index (p trends < 0.05). Higher HEI, AHEI, DHDI, and P-MDS scores were associated with increased daily PAL, moderate and vigorous activity, and reduced sedentary behaviour (p trend < 0.05). We observed no association between DQS and TC. To conclude, higher DQS, which reflect better dietary patterns, were associated with markers of better nutritional status and metabolic health.

  17. A robust background regression based score estimation algorithm for hyperspectral anomaly detection

    Science.gov (United States)

    Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei

    2016-12-01

    Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is the background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to formulate the foundation of the regression. Furthermore, a manifold regularization term which explores the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is undertaken on the robust background and potential anomaly datasets, to implement the detection output. The experimental results show that RBRSE achieves superior ROC curves, AUC values, and background-anomaly separation than some of the other state-of-the-art anomaly detection methods, and is easy to implement

  18. Scoring best-worst data in unbalanced many-item designs, with applications to crowdsourcing semantic judgments.

    Science.gov (United States)

    Hollis, Geoff

    2018-04-01

    Best-worst scaling is a judgment format in which participants are presented with a set of items and have to choose the superior and inferior items in the set. Best-worst scaling generates a large quantity of information per judgment because each judgment allows for inferences about the rank value of all unjudged items. This property of best-worst scaling makes it a promising judgment format for research in psychology and natural language processing concerned with estimating the semantic properties of tens of thousands of words. A variety of different scoring algorithms have been devised in the previous literature on best-worst scaling. However, due to problems of computational efficiency, these scoring algorithms cannot be applied efficiently to cases in which thousands of items need to be scored. New algorithms are presented here for converting responses from best-worst scaling into item scores for thousands of items (many-item scoring problems). These scoring algorithms are validated through simulation and empirical experiments, and considerations related to noise, the underlying distribution of true values, and trial design are identified that can affect the relative quality of the derived item scores. The newly introduced scoring algorithms consistently outperformed scoring algorithms used in the previous literature on scoring many-item best-worst data.
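
    The simplest scoring rule for best-worst data counts, for each item, how often it was chosen best minus how often it was chosen worst, normalized by its appearances; the algorithms in the paper refine this idea. A minimal sketch with hypothetical trials:

    ```python
    # Value scoring for best-worst data: (#best - #worst) / #appearances.
    from collections import defaultdict

    trials = [                                       # (items shown, best pick, worst pick)
        (("cat", "dog", "axe", "war"), "dog", "war"),
        (("dog", "sun", "war", "mud"), "sun", "mud"),
        (("cat", "sun", "mud", "axe"), "sun", "axe"),
    ]

    best = defaultdict(int)
    worst = defaultdict(int)
    seen = defaultdict(int)
    for items, b, w in trials:
        for it in items:
            seen[it] += 1
        best[b] += 1
        worst[w] += 1

    scores = {it: (best[it] - worst[it]) / seen[it] for it in seen}
    print(sorted(scores.items(), key=lambda kv: -kv[1]))  # ranked items
    ```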

  19. A Hybrid Genetic Algorithm to Minimize Total Tardiness for Unrelated Parallel Machine Scheduling with Precedence Constraints

    Directory of Open Access Journals (Sweden)

    Chunfeng Liu

    2013-01-01

    The paper presents a novel hybrid genetic algorithm (HGA) for a deterministic scheduling problem in which multiple jobs with arbitrary precedence constraints are processed on multiple unrelated parallel machines. The objective is to minimize total tardiness, since delays of the jobs may lead to penalty costs or cancellation of orders by clients in many situations. A priority rule-based heuristic algorithm, which at each iteration schedules a prior job on a prior machine according to the priority rule, is suggested and embedded in the HGA to produce initial feasible schedules that can be improved in further stages. Computational experiments show that the proposed HGA performs well with respect to accuracy and efficiency of solution for small-sized problems, and obtains better results than the conventional genetic algorithm within the same runtime for large-sized problems.

  20. A new algorithm for histopathological diagnosis of periprosthetic infection using CD15 focus score and computer program CD15 Quantifier

    Directory of Open Access Journals (Sweden)

    V. Krenn

    2015-01-01

    Introduction. A simple microscopic diagnostic quantification system for neutrophil granulocytes (NG) was developed, evaluating a single focal point (CD15 focus score), which enables the detection of bacterial infection in SLIM (synovial-like interface membrane). Additionally, a diagnostic algorithm is proposed for how to use the CD15 focus score and the quantification software (CD15 Quantifier). Methods. 91 SLIM specimens removed during revision surgery for histopathological diagnosis (hip, n=59; knee, n=32) underwent histopathological classification according to the SLIM-consensus classification. NG were identified immunohistochemically by means of a CD15-specific monoclonal antibody exhibiting an intense granular cytoplasmic staining pattern. This pattern is different from CD15 expression in macrophages, which show pale and homogeneous expression in mononuclear cells. The quantitative evaluation of CD15-positive neutrophil granulocytes (CD15NG) used the principle of maximum focal infiltration (focus) together with an assessment of a single focal point (approximately 0.3 mm2). These immunohistochemical data made it possible to develop the CD15 Quantifier software, which automatically quantifies CD15NG. Results. SLIM cases with a positive microbiological diagnosis (n=47) have significantly (p<0.001, Mann-Whitney U test) more CD15NG/focal point than cases with a negative microbiological diagnosis (n=44). 50 CD15NG/focal point was identified as the optimum threshold for diagnosing infection of periprosthetic joints using the CD15 focus score. If the microbiological findings are used as a 'gold standard', the diagnostic sensitivity is 0.83 and the specificity is 0.864 (PPV: 0.87; NPV: 0.83; accuracy: 0.846; AUC: 0.878). The evaluation findings for the preparations using the CD15 Quantifier (n=31) deviated on average by 12 cells from the histopathological evaluation findings (CD15 focus score). From a cell count greater than 62, the CD15 Quantifier needs on average 32 seconds less than the
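
    The diagnostic statistics above follow from a 2×2 confusion table against the microbiological gold standard. A quick sketch, with cell counts back-computed here from the reported rates and group sizes (47 infected, 44 not infected), not taken from the paper:

    ```python
    # Diagnostic metrics from a 2x2 table vs. the microbiological gold standard.
    tp, fn = 39, 8    # of 47 microbiologically positive cases (back-computed)
    tn, fp = 38, 6    # of 44 microbiologically negative cases (back-computed)

    sens = tp / (tp + fn)                   # 0.830
    spec = tn / (tn + fp)                   # 0.864
    ppv = tp / (tp + fp)                    # 0.867
    npv = tn / (tn + fn)                    # 0.826
    acc = (tp + tn) / (tp + tn + fp + fn)   # 0.846
    print(f"sens={sens:.3f} spec={spec:.3f} PPV={ppv:.3f} NPV={npv:.3f} acc={acc:.3f}")
    ```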

  1. Calcium scoring with dual-energy CT in men and women: an anthropomorphic phantom study

    Science.gov (United States)

    Li, Qin; Liu, Songtao; Myers, Kyle; Gavrielides, Marios A.; Zeng, Rongping; Sahiner, Berkman; Petrick, Nicholas

    2016-03-01

    This work aimed to quantify and compare the potential impact of gender differences on coronary artery calcium scoring with dual-energy CT. An anthropomorphic thorax phantom with four synthetic heart vessels (diameter 3-4.5 mm: female/male left main and left circumflex artery) was scanned with and without female breast plates. Ten repeat scans were acquired in both single- and dual-energy modes and reconstructed at six reconstruction settings: two slice thicknesses (3 mm, 0.6 mm) and three reconstruction algorithms (FBP, IR3, IR5). Agatston and calcium volume scores were estimated from the reconstructed data using a segmentation-based approach. The total calcium score (summation of four vessels) and male/female calcium scores (summation of male/female vessels scanned in the phantom without/with breast plates) were calculated accordingly. Both Agatston and calcium volume scores were found comparable between single- and dual-energy scans (Pearson r = 0.99). The results support quantifying differences between women and men in calcium scoring, and standardizing imaging protocols for improved gender-specific calcium scoring.
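
    For reference, the Agatston score used throughout these records weights each calcified lesion's area by a factor derived from its peak attenuation. A simplified single-slice sketch using the standard 130-HU threshold and 1-4 density weights (lesion data hypothetical):

    ```python
    # Schematic Agatston scoring: lesion area (mm^2) at >= 130 HU, weighted
    # by peak attenuation (1: 130-199 HU, 2: 200-299, 3: 300-399, 4: >= 400).
    def density_weight(peak_hu: float) -> int:
        if peak_hu >= 400: return 4
        if peak_hu >= 300: return 3
        if peak_hu >= 200: return 2
        if peak_hu >= 130: return 1
        return 0

    def agatston(lesions):
        """lesions: iterable of (area_mm2, peak_hu), one entry per lesion per slice."""
        return sum(area * density_weight(hu) for area, hu in lesions if hu >= 130)

    # Hypothetical lesions: 4*2 + 2.5*4 + 1.2*1 = 19.2
    print(agatston([(4.0, 210), (2.5, 450), (1.2, 150)]))
    ```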

  2. A scoring system for ascertainment of incident stroke; the Risk Index Score (RISc).

    Science.gov (United States)

    Kass-Hout, T A; Moyé, L A; Smith, M A; Morgenstern, L B

    2006-01-01

    The main objective of this study was to develop and validate a computer-based statistical algorithm that could be translated into a simple scoring system in order to ascertain incident stroke cases using hospital admission medical records data. The Risk Index Score (RISc) algorithm was developed using data collected prospectively by the Brain Attack Surveillance in Corpus Christi (BASIC) project, 2000. The validity of RISc was evaluated by estimating the concordance of scoring system stroke ascertainment to stroke ascertainment by physician and/or abstractor review of hospital admission records. RISc was developed on 1718 randomly selected patients (training set) and then statistically validated on an independent sample of 858 patients (validation set). A multivariable logistic model was used to develop RISc and subsequently evaluated by goodness-of-fit and receiver operating characteristic (ROC) analyses. The higher the value of RISc, the higher the patient's risk of potential stroke. The study showed RISc was well calibrated and discriminated those who had potential stroke from those that did not on initial screening. In this study we developed and validated a rapid, easy, efficient, and accurate method to ascertain incident stroke cases from routine hospital admission records for epidemiologic investigations. Validation of this scoring system was achieved statistically; however, clinical validation in a community hospital setting is warranted.

  3. The impact of CT radiation dose reduction and iterative reconstruction algorithms from four different vendors on coronary calcium scoring

    Energy Technology Data Exchange (ETDEWEB)

    Willemink, Martin J.; Takx, Richard A.P.; Jong, Pim A. de; Budde, Ricardo P.J.; Schilham, Arnold M.R.; Leiner, Tim [Utrecht University Medical Center, Department of Radiology, Utrecht (Netherlands); Bleys, Ronald L.A.W. [Utrecht University Medical Center, Department of Anatomy, Utrecht (Netherlands); Das, Marco; Wildberger, Joachim E. [Maastricht University Medical Center, Department of Radiology, Maastricht (Netherlands); Prokop, Mathias [Radboud University Nijmegen Medical Center, Department of Radiology, Nijmegen (Netherlands); Buls, Nico; Mey, Johan de [UZ Brussel, Department of Radiology, Brussels (Belgium)

    2014-09-15

    To analyse the effects of radiation dose reduction and iterative reconstruction (IR) algorithms on coronary calcium scoring (CCS). Fifteen ex vivo human hearts were examined in an anthropomorphic chest phantom using computed tomography (CT) systems from four vendors and examined at four dose levels using unenhanced prospectively ECG-triggered protocols. Tube voltage was 120 kV and tube current differed between protocols. CT data were reconstructed with filtered back projection (FBP) and reduced-dose CT data with IR. CCS was quantified with Agatston scores, calcification mass and calcification volume. Differences were analysed with the Friedman test. Fourteen hearts showed coronary calcifications. Dose reduction with FBP did not significantly change Agatston scores, calcification volumes and calcification masses (P > 0.05). Maximum differences in Agatston scores were 76, 26, 51 and 161 units; in calcification volume 97, 27, 42 and 162 mm³; and in calcification mass 23, 23, 20 and 48 mg, respectively. IR resulted in a trend towards lower Agatston scores and calcification volumes, with significant differences for one vendor (P < 0.05). Median relative differences between reference FBP and reduced-dose IR remained within 2.0-4.6 %, 1.0-5.3 %, 1.2-7.7 % and 2.6-4.5 % for Agatston scores; within 2.4-3.9 %, 1.0-5.6 %, 1.1-6.4 % and 3.7-4.7 % for calcification volumes; and within 1.9-4.1 %, 0.9-7.8 %, 2.9-4.7 % and 2.5-3.9 % for calcification masses, respectively. IR resulted in increased, decreased or similar calcification masses. CCS derived from standard FBP acquisitions was not affected by radiation dose reductions of up to 80 %. IR resulted in a trend towards lower Agatston scores and calcification volumes. (orig.)

  4. Answer Extraction Based on Merging Score Strategy of Hot Terms

    Institute of Scientific and Technical Information of China (English)

    LE Juan; ZHANG Chunxia; NIU Zhendong

    2016-01-01

    Answer extraction (AE) is one of the key technologies in developing an open-domain Question & Answer (Q&A) system. Its task is to yield the highest score for the expected answer based on an effective answer score strategy. We introduce an answer extraction method using a Merging score strategy (MSS) based on hot terms. The hot terms are defined according to their lexical and syntactic features to highlight the role of the question terms. To cope with the syntactic diversity of the corpus, we propose four improved candidate answer score algorithms. Each of them is based on the lexical function of hot terms and their syntactic relationships with the candidate answers. Two independent corpus score algorithms are proposed to tap the role of the corpus in ranking the candidate answers. Six algorithms are adopted in MSS to exploit the complementary action among the corpus, the candidate answers and the questions. Experiments demonstrate the effectiveness of the proposed strategy.

  5. Low-dose dual-energy cone-beam CT using a total-variation minimization algorithm

    International Nuclear Information System (INIS)

    Min, Jong Hwan

    2011-02-01

    Dual-energy cone-beam CT is an important imaging modality in diagnostic applications, and may also find use in other applications such as therapeutic image guidance. Despite its clinical value, the relatively high radiation dose of a dual-energy scan may pose a challenge to its wide use. In this work, we investigated a low-dose, pre-reconstruction type of dual-energy cone-beam CT (CBCT) using a total-variation minimization algorithm for image reconstruction. An empirical dual-energy calibration method was used to prepare material-specific projection data. Raw data at high and low tube voltages are converted into a set of basis functions that can be linearly combined to produce material-specific data using the coefficients obtained through the calibration process. From far fewer views than are conventionally used, material-specific images are reconstructed by use of the total-variation minimization algorithm. An experimental study was performed to demonstrate the feasibility of the proposed method using a micro-CT system. We reconstructed images of the phantoms from only 90 projections acquired at tube voltages of 40 kVp and 90 kVp each. Aluminum-only and acryl-only images were successfully decomposed. We evaluated the quality of the reconstructed images by use of contrast-to-noise ratio and detectability. A low-dose dual-energy CBCT can thus be realized via the proposed method by greatly reducing the number of projections.

  6. Effect of Antihypertensive Therapy on SCORE-Estimated Total Cardiovascular Risk: Results from an Open-Label, Multinational Investigation—The POWER Survey

    Directory of Open Access Journals (Sweden)

    Guy De Backer

    2013-01-01

    Background. High blood pressure is a substantial risk factor for cardiovascular disease. Design & Methods. The Physicians' Observational Work on patient Education according to their vascular Risk (POWER) survey was an open-label investigation of eprosartan-based therapy (EBT) for control of high blood pressure in primary care centers in 16 countries. A prespecified element of this research was appraisal of the impact of EBT on estimated 10-year risk of a fatal cardiovascular event as determined by the Systematic Coronary Risk Evaluation (SCORE) model. Results. SCORE estimates of CVD risk were obtained at baseline from 12,718 patients in 15 countries (6504 men) and from 9577 patients at 6 months. During EBT, mean (±SD) systolic/diastolic blood pressures declined from 160.2 ± 13.7/94.1 ± 9.1 mmHg to 134.5 ± 11.2/81.4 ± 7.4 mmHg. This was accompanied by a 38% reduction in mean SCORE-estimated CVD risk and an improvement in SCORE risk classification of one category or more in 3506 patients (36.6%). Conclusion. Experience in POWER affirms that (a) effective pharmacological control of blood pressure is feasible in the primary care setting and is accompanied by a reduction in total CVD risk and (b) the SCORE instrument is effective in this setting for the monitoring of total CVD risk.

  7. Effect of Antihypertensive Therapy on SCORE-Estimated Total Cardiovascular Risk: Results from an Open-Label, Multinational Investigation—The POWER Survey

    Science.gov (United States)

    De Backer, Guy; Petrella, Robert J.; Goudev, Assen R.; Radaideh, Ghazi Ahmad; Rynkiewicz, Andrzej; Pathak, Atul

    2013-01-01

    Background. High blood pressure is a substantial risk factor for cardiovascular disease. Design & Methods. The Physicians' Observational Work on patient Education according to their vascular Risk (POWER) survey was an open-label investigation of eprosartan-based therapy (EBT) for control of high blood pressure in primary care centers in 16 countries. A prespecified element of this research was appraisal of the impact of EBT on estimated 10-year risk of a fatal cardiovascular event as determined by the Systematic Coronary Risk Evaluation (SCORE) model. Results. SCORE estimates of CVD risk were obtained at baseline from 12,718 patients in 15 countries (6504 men) and from 9577 patients at 6 months. During EBT mean (±SD) systolic/diastolic blood pressures declined from 160.2 ± 13.7/94.1 ± 9.1 mmHg to 134.5 ± 11.2/81.4 ± 7.4 mmHg. This was accompanied by a 38% reduction in mean SCORE-estimated CVD risk and an improvement in SCORE risk classification of one category or more in 3506 patients (36.6%). Conclusion. Experience in POWER affirms that (a) effective pharmacological control of blood pressure is feasible in the primary care setting and is accompanied by a reduction in total CVD risk and (b) the SCORE instrument is effective in this setting for the monitoring of total CVD risk. PMID:23997946

  8. Testing statistical significance scores of sequence comparison methods with structure similarity

    Directory of Open Access Journals (Sweden)

    Leunissen Jack AM

    2006-10-01

    Background: In the past years the Smith-Waterman sequence comparison algorithm has gained popularity due to improved implementations and rapidly increasing computing power. However, the quality and sensitivity of a database search are determined not only by the algorithm but also by the statistical significance testing for an alignment. The e-value is the most commonly used statistical validation method for sequence database searching. The CluSTr database and the Protein World database have been created using an alternative statistical significance test: a Z-score based on Monte-Carlo statistics. Several papers have described the superiority of the Z-score as compared to the e-value, using simulated data. We were interested in whether this could be validated when applied to existing, evolutionarily related protein sequences. Results: All experiments were performed on the ASTRAL SCOP database. The Smith-Waterman sequence comparison algorithm with both e-value and Z-score statistics was evaluated, using ROC, CVE and AP measures, with the BLAST and FASTA algorithms as references. We find that two out of three Smith-Waterman implementations with e-value are better at predicting structural similarities between proteins than the Smith-Waterman implementation with Z-score. SSEARCH especially has very high scores. Conclusion: The compute-intensive Z-score does not have a clear advantage over the e-value. The Smith-Waterman implementations give generally better results than their heuristic counterparts. We recommend using the SSEARCH algorithm combined with e-values for pairwise sequence comparisons.

  9. Impact of the Occlusion Duration on the Performance of J-CTO Score in Predicting Failure of Percutaneous Coronary Intervention for Chronic Total Occlusion.

    Science.gov (United States)

    de Castro-Filho, Antonio; Lamas, Edgar Stroppa; Meneguz-Moreno, Rafael A; Staico, Rodolfo; Siqueira, Dimytri; Costa, Ricardo A; Braga, Sergio N; Costa, J Ribamar; Chamié, Daniel; Abizaid, Alexandre

    2017-06-01

    The present study examined the performance of the Multicenter CTO Registry in Japan (J-CTO) score in predicting failure of percutaneous coronary intervention (PCI) in relation to the estimated duration of chronic total occlusion (CTO), since the J-CTO score does not incorporate the estimated duration of the occlusion. This was an observational retrospective study involving all consecutive procedures performed at a single tertiary-care cardiology center between January 2009 and December 2014. A total of 174 patients, median age 59.5 years (interquartile range [IQR], 53-65 years), undergoing CTO-PCI were included. The median estimated occlusion duration was 7.5 months (IQR, 4.0-12.0 months). The lesions were classified as easy (score = 0), intermediate (score = 1), difficult (score = 2), and very difficult (score ≥3) in 51.1%, 33.9%, 9.2%, and 5.7% of the patients, respectively. Failure rate increased significantly with higher J-CTO score (7.9%, 20.3%, 50.0%, and 70.0% in groups with J-CTO scores of 0, 1, 2, and ≥3, respectively). The J-CTO score predicted failure of CTO-PCI independently of the estimated occlusion duration (P=.24). Areas under receiver-operating characteristic curves were computed, and for each occlusion time period the discriminatory capacity of the J-CTO score in predicting CTO-PCI failure was good, with a C-statistic >0.70. The estimated duration of occlusion had no influence on the performance of the J-CTO score in predicting failure of PCI in CTO lesions. The probability of failure was mainly determined by the grade of lesion complexity.

  10. An Enhanced Discrete Artificial Bee Colony Algorithm to Minimize the Total Flow Time in Permutation Flow Shop Scheduling with Limited Buffers

    Directory of Open Access Journals (Sweden)

    Guanlong Deng

    2016-01-01

    This paper presents an enhanced discrete artificial bee colony algorithm for minimizing the total flow time in the flow shop scheduling problem with limited buffer capacity. First, the solution is represented as a discrete job permutation that converts directly to an active schedule. Then, we present a simple and effective scheme called best insertion for the employed bees and onlooker bees, and introduce a combined local search exploring both the insertion and swap neighborhoods. To validate the performance of the presented algorithm, a computational campaign is carried out on the Taillard benchmark instances; computations and comparisons show that the proposed algorithm not only solves the benchmark set better than the existing discrete differential evolution algorithm and iterated greedy algorithm, but also performs better than two recently proposed discrete artificial bee colony algorithms.

  11. Knee injury and Osteoarthritis Outcome Score (KOOS) – validation and comparison to the WOMAC in total knee replacement

    Directory of Open Access Journals (Sweden)

    Roos Ewa M

    2003-05-01

    Background: The Knee injury and Osteoarthritis Outcome Score (KOOS) is an extension of the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), the most commonly used outcome instrument for assessment of patient-relevant treatment effects in osteoarthritis. KOOS was developed for younger and/or more active patients with knee injury and knee osteoarthritis and, in previous studies on these groups, has been the more responsive instrument compared to the WOMAC. Some patients eligible for total knee replacement have expectations of more demanding physical function than required for daily living. This encouraged us to study the use of the KOOS to assess the outcome of total knee replacement. Methods: We studied the test-retest reliability, validity and responsiveness of the Swedish version LK 1.0 of the KOOS when used to prospectively evaluate the outcome of 105 patients (mean age 71.3, 66 women) after total knee replacement. The follow-up rates at 6 and 12 months were 92% and 86%, respectively. Results: The intraclass correlation coefficients were over 0.75 for all subscales, indicating sufficient test-retest reliability. Bland-Altman plots confirmed this finding. Over 90% of the patients regarded improvement in the subscales Pain, Symptoms, Activities of Daily Living, and knee-related Quality of Life to be extremely or very important when deciding to have their knee operated on, indicating good content validity. The correlations found in comparison to the SF-36 indicated that the KOOS measured the expected constructs. The most responsive subscale was knee-related Quality of Life. The effect sizes of the five KOOS subscales at 12 months ranged from 1.08 to 3.54 and for the WOMAC from 1.65 to 2.56. Conclusion: The KOOS is a valid, reliable, and responsive outcome measure in total joint replacement. In comparison to the WOMAC, the KOOS improved validity

  12. Opportunistic splitting for scheduling using a score-based approach

    KAUST Repository

    Rashid, Faraan

    2012-06-01

    We consider the problem of scheduling a user in a multi-user wireless environment in a distributed manner. The opportunistic splitting algorithm is applied to find the best group of users without reporting the channel state information to the centralized scheduler. The users find the best among themselves while requiring just a ternary feedback from the common receiver at the end of each mini-slot. The original splitting algorithm is modified to handle users with asymmetric channel conditions. We use a score-based approach with the splitting algorithm to introduce time and throughput fairness while exploiting the multi-user diversity of the network. Analytical and simulation results are given to show that the modified score-based splitting algorithm works well as a fair scheduling scheme with good spectral efficiency and reduced feedback. © 2012 IEEE.

  13. Parallel algorithm of real-time infrared image restoration based on total variation theory

    Science.gov (United States)

    Zhu, Ran; Li, Miao; Long, Yunli; Zeng, Yaoyuan; An, Wei

    2015-10-01

    Image restoration is a necessary preprocessing step for infrared remote sensing applications. Traditional methods remove noise but penalize the gradients corresponding to edges too heavily. Image restoration techniques based on variational approaches can solve this over-smoothing problem owing to their well-defined mathematical modeling of the restoration procedure. The total variation (TV) of the infrared image is introduced as an L1 regularization term added to the objective energy functional, converting the restoration process into an optimization problem over a functional with a fidelity term to the image data plus a regularization term. Infrared image restoration with the TV-L1 model makes full use of the acquired remote sensing data and preserves information at edges caused by clouds. The numerical implementation algorithm is presented in detail. Analysis indicates that the structure of this algorithm can easily be parallelized. Therefore, a parallel implementation of the TV-L1 filter based on a multicore architecture with shared memory is proposed for real-time infrared remote sensing systems. The massive computation over image data is performed in parallel by cooperating threads running simultaneously on multiple cores. Several groups of synthetic infrared image data are used to validate the feasibility and effectiveness of the proposed parallel algorithm. A quantitative analysis measuring restored image quality against the input image is presented. Experimental results show that the TV-L1 filter can restore the varying background image reasonably, and that its performance can meet the requirements of real-time image processing.

  14. On the multivariate total least-squares approach to empirical coordinate transformations. Three algorithms

    Science.gov (United States)

    Schaffrin, Burkhard; Felus, Yaron A.

    2008-06-01

    The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model (Y - E_Y = (X - E_X) · Ξ) that includes an observation matrix Y, another observation matrix X, and matrices of randomly distributed errors E_Y and E_X. Two special cases of the MTLS approach are the standard multivariate least-squares approach, where only the observation matrix Y is perturbed by random errors, and the data least-squares approach, where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler-Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new 'closed form' solution that is based on the singular-value decomposition. For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing columns in the coefficient matrix, are investigated. This case study illuminates the issue of "symmetry" in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335-342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
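
    A minimal sketch of the SVD-based closed-form idea for the classical single-column case Ax ≈ b with errors in both A and b (a toy illustration of the principle, not the authors' multivariate algorithm):

    ```python
    # Closed-form TLS for A x ~ b: SVD the augmented matrix [A | b] and read
    # the estimate off the right singular vector of the smallest singular value.
    import numpy as np

    rng = np.random.default_rng(4)
    x_true = np.array([1.0, -2.0, 0.5])
    A_clean = rng.standard_normal((50, 3))
    A = A_clean + 0.05 * rng.standard_normal((50, 3))      # errors in the coefficient matrix
    b = A_clean @ x_true + 0.05 * rng.standard_normal(50)  # errors in the observations

    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]                  # right singular vector of the smallest singular value
    x_tls = -v[:-1] / v[-1]     # TLS parameter estimate
    print(np.round(x_tls, 3))
    ```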

  15. Differences in the accuracy of the Wells, Caprini and Padua scores in deep vein thrombosis diagnosis

    Science.gov (United States)

    Gatot, D.; Mardia, A. I.

    2018-03-01

    Deep vein thrombosis (DVT) is venous thrombosis in the lower limbs. Diagnosis is by venography or compression ultrasound; however, these examinations are not yet available in some health facilities, so many scoring systems have been developed for the diagnosis of DVT. Scoring methods are practical and safe to use, as well as efficacious and cost-effective in terms of treatment. The existing scoring systems are the Wells, Caprini and Padua scores. There have been many studies comparing the accuracy of these scores, but not in Medan; therefore, we were interested in a comparative study of the Wells, Caprini and Padua scores in Medan. An observational, analytical, case-control study was conducted to perform diagnostic tests on the Wells, Caprini and Padua scores to predict the risk of DVT. The study was carried out at H. Adam Malik Hospital in Medan. Of a total of 72 subjects, 39 (54.2%) were men and the mean age was 53.14 years. The Wells, Caprini and Padua scores had sensitivities of 80.6%, 61.1% and 50%, respectively; specificities of 80.6%, 66.7% and 75%, respectively; and accuracies of 87.5%, 64.3% and 65.7%, respectively. The Wells score has better sensitivity, specificity and accuracy than the Caprini and Padua scores in diagnosing DVT.

  16. Total ozone column derived from GOME and SCIAMACHY using KNMI retrieval algorithms: Validation against Brewer measurements at the Iberian Peninsula

    Science.gov (United States)

    Antón, M.; Kroon, M.; López, M.; Vilaplana, J. M.; Bañón, M.; van der A, R.; Veefkind, J. P.; Stammes, P.; Alados-Arboledas, L.

    2011-11-01

    This article focuses on the validation of the total ozone column (TOC) data set acquired by the Global Ozone Monitoring Experiment (GOME) and the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) satellite remote sensing instruments using the Total Ozone Retrieval Scheme for the GOME Instrument Based on the Ozone Monitoring Instrument (TOGOMI) and Total Ozone Retrieval Scheme for the SCIAMACHY Instrument Based on the Ozone Monitoring Instrument (TOSOMI) retrieval algorithms developed by the Royal Netherlands Meteorological Institute. In this analysis, spatially colocated, daily averaged ground-based observations performed by five well-calibrated Brewer spectrophotometers at the Iberian Peninsula are used. The period of study runs from January 2004 to December 2009. The agreement between satellite and ground-based TOC data is excellent (R² higher than 0.94). Nevertheless, the TOC data derived from both satellite instruments underestimate the ground-based data: on average, this underestimation is 1.1% for GOME and 1.3% for SCIAMACHY. The SCIAMACHY-Brewer TOC differences show a significant solar zenith angle (SZA) dependence, which causes a systematic seasonal dependence. By contrast, GOME-Brewer TOC differences show no significant SZA dependence, and hence no seasonality, although processed with exactly the same algorithm. The satellite-Brewer TOC differences for the two satellite instruments show a clear and similar dependence on the viewing zenith angle under cloudy conditions. In addition, both the GOME-Brewer and SCIAMACHY-Brewer TOC differences behave very similarly with respect to the satellite cloud properties, namely cloud fraction and cloud-top pressure, which originate from the same cloud algorithm (Fast Retrieval Scheme for Clouds from the Oxygen A-Band, FRESCO+) in both the TOSOMI and TOGOMI retrieval algorithms.

  17. Pattern analysis of total item score and item response of the Kessler Screening Scale for Psychological Distress (K6) in a nationally representative sample of US adults

    Directory of Open Access Journals (Sweden)

    Shinichiro Tomitaka

    2017-02-01

    Background: Several recent studies have shown that total scores on depressive symptom measures in a general population approximate an exponential pattern except for the lower end of the distribution. Furthermore, we confirmed that the exponential pattern is present for the individual item responses on the Center for Epidemiologic Studies Depression Scale (CES-D). To confirm the reproducibility of such findings, we investigated the total score distribution and item responses of the Kessler Screening Scale for Psychological Distress (K6) in a nationally representative study. Methods: Data were drawn from the National Survey of Midlife Development in the United States (MIDUS), which comprises four subsamples: (1) a national random digit dialing (RDD) sample, (2) oversamples from five metropolitan areas, (3) siblings of individuals from the RDD sample, and (4) a national RDD sample of twin pairs. K6 items are scored using a 5-point scale: "none of the time," "a little of the time," "some of the time," "most of the time," and "all of the time." The patterns of the total score distribution and item responses were analyzed using graphical analysis and an exponential regression model. Results: The total score distributions of the four subsamples exhibited an exponential pattern with similar rate parameters. The item responses of the K6 approximated a linear pattern from "a little of the time" to "all of the time" on log-normal scales, while the "none of the time" response was not related to this exponential pattern. Discussion: The total score distribution and item responses of the K6 showed exponential patterns, consistent with other depressive symptom scales.
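
    A sketch of the kind of exponential-pattern check described above: on a log scale, the frequency of each total score should fall on a roughly straight line whose slope estimates the rate parameter (synthetic scores, not the MIDUS data):

    ```python
    # Fit an exponential decay to a score distribution by linear regression
    # of log-frequencies on score values. Scores here are synthetic.
    import numpy as np

    rng = np.random.default_rng(5)
    scores = np.minimum(rng.geometric(p=0.25, size=5000) - 1, 24)  # synthetic K6 totals (0-24)

    values, counts = np.unique(scores, return_counts=True)
    slope, intercept = np.polyfit(values, np.log(counts), 1)
    print(f"estimated rate parameter ~ {-slope:.2f}")  # decay rate of the tail
    ```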

  18. Computer-Assisted Automated Scoring of Polysomnograms Using the Somnolyzer System.

    Science.gov (United States)

    Punjabi, Naresh M; Shifa, Naima; Dorffner, Georg; Patil, Susheel; Pien, Grace; Aurora, Rashmi N

    2015-10-01

    Manual scoring of polysomnograms is a time-consuming and tedious process. To expedite the scoring of polysomnograms, several computerized algorithms for automated scoring have been developed. The overarching goal of this study was to determine the validity of the Somnolyzer system, an automated system for scoring polysomnograms. The analysis sample comprised 97 sleep studies. Each polysomnogram was manually scored by certified technologists from four sleep laboratories and concurrently subjected to automated scoring by the Somnolyzer system. Agreement between manual and automated scoring was examined. Sleep staging and scoring of disordered breathing events were conducted using the 2007 American Academy of Sleep Medicine criteria. Clinical sleep laboratories. A high degree of agreement was noted between manual and automated scoring of the apnea-hypopnea index (AHI). The average correlation between the manually scored AHI across the four clinical sites was 0.92 (95% confidence interval: 0.90-0.93). Similarly, the average correlation between the manual and Somnolyzer-scored AHI values was 0.93 (95% confidence interval: 0.91-0.96). Thus, interscorer correlation between the manually scored results was no different from that derived from manual and automated scoring. Substantial concordance in the arousal index, total sleep time, and sleep efficiency between manual and automated scoring was also observed. In contrast, differences were noted between manually and automated scored percentages of sleep stages N1, N2, and N3. Automated analysis of polysomnograms using the Somnolyzer system provides results that are comparable to manual scoring for commonly used metrics in sleep medicine. Although differences exist between manual versus automated scoring for specific sleep stages, the level of agreement between manual and automated scoring is not significantly different than that between any two human scorers. In light of the burden associated with manual scoring, automated scoring offers a practical alternative.

  19. Systematic Analysis of Painful Total Knee Prosthesis, a Diagnostic Algorithm

    Directory of Open Access Journals (Sweden)

    Oliver Djahani

    2013-12-01

    Full Text Available   Remaining pain after total knee arthroplasty (TKA) is a common observation in about 20% of postoperative patients; about 60% of these knees require early revision surgery within five years. Obvious causes of this pain can be identified simply with clinical examinations and standard radiographs. However, unexplained painful TKA still remains a challenge for the surgeon. The management should include a multidisciplinary approach to the patient's pain as well as addressing the underlying etiology. There are a number of extrinsic (tendinopathy, hip, ankle, spine, CRPS and so on) and intrinsic (infection, instability, malalignment, wear and so on) causes of painful knee replacement. On average, diagnosis takes more than 12 months, and patients become very dissatisfied and some of them even acquire psychological problems. Hence, a systematic diagnostic algorithm might be helpful. This review article aims to act as a guide to the evaluation of patients with painful TKA, described in 10 different steps. Furthermore, the preliminary results of a series of 100 consecutive cases will be discussed. Revision surgery was performed only in those cases with a clear failure mechanism.

  20. A Novel Ant Colony Algorithm for the Single-Machine Total Weighted Tardiness Problem with Sequence Dependent Setup Times

    Directory of Open Access Journals (Sweden)

    Fardin Ahmadizar

    2011-08-01

    Full Text Available This paper deals with the NP-hard single-machine total weighted tardiness problem with sequence-dependent setup times. Incorporating fuzzy sets and genetic operators, a novel ant colony optimization algorithm is developed for the problem. In the proposed algorithm, artificial ants construct solutions as orders of jobs based on the heuristic information as well as pheromone trails. To calculate the heuristic information, three well-known priority rules are adopted as fuzzy sets and then aggregated. When all artificial ants have terminated their constructions, genetic operators such as crossover and mutation are applied to generate new regions of the solution space. A local search is then performed to improve the performance quality of some of the solutions found. Moreover, at run-time the pheromone trails are locally as well as globally updated, and limited between lower and upper bounds. The proposed algorithm is evaluated on a set of benchmark problems from the literature and compared with other metaheuristics.
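
    For readers unfamiliar with the objective the ants optimize, the sketch below evaluates one candidate job order under total weighted tardiness with sequence-dependent setups. The data, and the convention that the first job pays its own diagonal setup, are illustrative assumptions, not the paper's benchmark instances:

```python
# Minimal sketch of the single-machine total weighted tardiness objective
# with sequence-dependent setup times (hypothetical data).
def total_weighted_tardiness(order, proc, due, weight, setup):
    """order: job sequence; proc/due/weight: per-job data;
    setup[i][j]: setup time when job j follows job i
    (setup[j][j] is assumed for the first job in the sequence)."""
    t, total, prev = 0, 0, None
    for j in order:
        t += (setup[prev][j] if prev is not None else setup[j][j]) + proc[j]
        total += weight[j] * max(0, t - due[j])  # tardiness only if past due date
        prev = j
    return total

proc   = [4, 3, 6]
due    = [5, 9, 11]
weight = [2, 1, 3]
setup  = [[1, 2, 2], [2, 1, 3], [2, 3, 1]]
print(total_weighted_tardiness([0, 1, 2], proc, due, weight, setup))  # -> 25
```

    A metaheuristic such as the proposed ant colony algorithm searches over permutations of jobs for the order minimizing this quantity.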

  1. Avascular Necrosis Is Associated With Increased Transfusions and Readmission Following Primary Total Hip Arthroplasty.

    Science.gov (United States)

    Lovecchio, Francis C; Manalo, John Paul; Demzik, Alysen; Sahota, Shawn; Beal, Matthew; Manning, David

    2017-05-01

    Avascular necrosis (AVN) may confer an increased risk of complications and readmission following total hip arthroplasty (THA). However, current risk-adjustment models do not account for AVN. A total of 1706 patients who underwent THA for AVN from 2011 to 2013 were selected from the American College of Surgeon's National Surgical Quality Improvement Program database and matched 1:1 to controls using a predetermined propensity score algorithm. Rates of 30-day medical and surgical complications, readmissions, and reoperations were compared between cohorts. Propensity-score logistic regression was used to determine independent associations between AVN and outcomes of interest. Patients with AVN had a higher rate of medical complications than those without AVN (20.3% vs 15.3%, respectively). Avascular necrosis of the femoral head is an independent risk factor for transfusion up to 72 hours postoperatively and readmission up to 30 days following total hip replacement. [Orthopedics. 2017; 40(3):171-176.]. Copyright 2017, SLACK Incorporated.
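
    As a rough illustration of the matching step, the sketch below pairs each treated case with the nearest-propensity control, 1:1 without replacement. The logistic propensity model and the greedy nearest-neighbor rule are assumptions made for illustration; the study's predetermined algorithm is not specified in the abstract:

```python
# Minimal sketch of 1:1 propensity-score matching (hypothetical data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))                         # hypothetical covariates
treated = rng.integers(0, 2, size=400).astype(bool)   # hypothetical AVN indicator

# Propensity score: probability of treatment given covariates.
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

controls = list(np.flatnonzero(~treated))
pairs = []
for i in np.flatnonzero(treated):
    j = min(controls, key=lambda c: abs(ps[c] - ps[i]))  # closest control by score
    pairs.append((i, j))
    controls.remove(j)                                   # match without replacement
    if not controls:
        break
print(f"{len(pairs)} matched pairs")
```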

  2. Optimizing Scoring and Sampling Methods for Assessing Built Neighborhood Environment Quality in Residential Areas

    Science.gov (United States)

    Adu-Brimpong, Joel; Coffey, Nathan; Ayers, Colby; Berrigan, David; Yingling, Leah R.; Thomas, Samantha; Mitchell, Valerie; Ahuja, Chaarushi; Rivers, Joshua; Hartz, Jacob; Powell-Wiley, Tiffany M.

    2017-01-01

    Optimization of existing measurement tools is necessary to explore links between aspects of the neighborhood built environment and health behaviors or outcomes. We evaluate a scoring method for virtual neighborhood audits utilizing the Active Neighborhood Checklist (the Checklist), a neighborhood audit measure, and assess street segment representativeness in low-income neighborhoods. Eighty-two home neighborhoods of Washington, D.C. Cardiovascular Health/Needs Assessment (NCT01927783) participants were audited using Google Street View imagery and the Checklist (five sections with 89 total questions). Twelve street segments per home address were assessed for (1) Land-Use Type; (2) Public Transportation Availability; (3) Street Characteristics; (4) Environment Quality and (5) Sidewalks/Walking/Biking features. Checklist items were scored 0–2 points/question. A combinations algorithm was developed to assess street segments’ representativeness. Spearman correlations were calculated between built environment quality scores and Walk Score®, a validated neighborhood walkability measure. Street segment quality scores ranged 10–47 (Mean = 29.4 ± 6.9) and overall neighborhood quality scores, 172–475 (Mean = 352.3 ± 63.6). Walk scores® ranged 0–91 (Mean = 46.7 ± 26.3). Street segment combinations’ correlation coefficients ranged 0.75–1.0. Significant positive correlations were found between overall neighborhood quality scores, four of the five Checklist subsection scores, and Walk Scores® (r = 0.62, p < 0.001). This scoring method adequately captures neighborhood features in low-income, residential areas and may aid in delineating the impact of specific built environment features on health behaviors and outcomes. PMID:28282878

  3. Prediction of antigenic epitopes on protein surfaces by consensus scoring

    Directory of Open Access Journals (Sweden)

    Zhang Chi

    2009-09-01

    Full Text Available Abstract Background Prediction of antigenic epitopes on protein surfaces is important for vaccine design. Most existing epitope prediction methods focus on protein sequences to predict continuous epitopes linear in sequence. Only a few structure-based epitope prediction algorithms are available and they have not yet shown satisfying performance. Results We present a new antigen Epitope Prediction method, which uses ConsEnsus Scoring (EPCES) from six different scoring functions - residue epitope propensity, conservation score, side-chain energy score, contact number, surface planarity score, and secondary structure composition. Applied to unbound antigen structures from an independent test set, EPCES was able to predict antigenic epitopes with 47.8% sensitivity, 69.5% specificity and an AUC value of 0.632. The performance of the method is statistically similar to other published methods. The AUC value of EPCES is slightly higher, by about 0.034, than the best results of existing algorithms. Conclusion Our work shows that consensus scoring of multiple features gives better performance than any single term. The successful prediction is also due to the new score of residue epitope propensity based on atomic solvent accessibility.

  4. A new algorithm to determine the total radiated power at ASDEX upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Gloeggler, Stephan; Bernert, Matthias; Eich, Thomas [Max Planck Institute for Plasma Physics, Boltzmannstr. 2, 85748 Garching (Germany); Collaboration: The ASDEX Upgrade Team

    2016-07-01

    Radiation is an essential part of the power balance in a fusion plasma. In future fusion devices about 90% of the power will have to be dissipated, mainly by radiation. For the development of an appropriate operational scenario, information about the absolute level of plasma radiation (P{sub rad,tot}) is crucial. Bolometers are used to measure the radiated power; however, an algorithm is required to derive the absolute power from many line-integrated measurements. The currently used algorithm (BPD) was developed for the main chamber radiation. It underestimates the divertor radiation, as its basic assumptions are not satisfied in this region. Therefore, a new P{sub rad,tot} algorithm is presented. It applies an Abel inversion in the main chamber and uses empirically based assumptions for poloidal asymmetries and the divertor radiation. To benchmark the new algorithm, synthetic emissivity profiles are used. On average, the new Abel-inversion-based algorithm deviates by only 10% from the nominal synthetic value, while BPD is about 25% too low. Time traces of ASDEX Upgrade discharges are calculated with both codes. The analysis of these time traces shows that the underestimation of the divertor radiation can have significant consequences for the accuracy of BPD, while the new algorithm is shown to be stable.
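
    The inversion step can be illustrated with the simplest discrete Abel inversion, "onion peeling": assume circular symmetry, split the cross-section into concentric shells, and invert the triangular system linking chord integrals to shell emissivities. This is a toy sketch under those assumptions, not the ASDEX Upgrade code:

```python
# Minimal onion-peeling Abel inversion on synthetic data.
import numpy as np

n = 8
b = np.linspace(0.0, 1.0, n + 1)      # shell boundaries; shell j spans b[j]..b[j+1]

# Geometry matrix: chord i is a straight line with impact radius b[i];
# L[i, j] is its path length through shell j (zero when it misses the shell).
L = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):
        L[i, j] = 2 * (np.sqrt(b[j+1]**2 - b[i]**2) - np.sqrt(b[j]**2 - b[i]**2))

mid = 0.5 * (b[:-1] + b[1:])
true_emissivity = np.exp(-3 * mid)    # hypothetical radial profile
signals = L @ true_emissivity         # synthetic line-integrated bolometer data

recovered = np.linalg.solve(L, signals)  # upper-triangular system, solved exactly
print(np.allclose(recovered, true_emissivity))  # -> True
```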

  5. Pharmacokinetic-pharmacodynamic modeling of antipsychotic drugs in patients with schizophrenia Part I : The use of PANSS total score and clinical utility

    NARCIS (Netherlands)

    Reddy, Venkatesh Pilla; Kozielska, Magdalena; Suleiman, Ahmed Abbas; Johnson, Martin; Vermeulen, An; Liu, Jing; de Greef, Rik; Groothuis, Geny M. M.; Danhof, Meindert; Proost, Johannes H.

    Background: To develop a pharmacokinetic-pharmacodynamic (PK-PD) model using individual-level data of Positive and Negative Syndrome Scale (PANSS) total score to characterize the antipsychotic drug effect, taking into account the placebo effect and dropout rate. In addition, a clinical utility (CU) …

  6. Analysing relations between specific and total liking scores

    DEFF Research Database (Denmark)

    Menichelli, Elena; Kraggerud, Hilde; Olsen, Nina Veflen

    2013-01-01

    The objective of this article is to present a new statistical approach for the study of consumer liking. Total liking data are extended by incorporating liking for specific sensory properties. The approach combines different analyses for the purpose of investigating the most important aspects of liking and indicating which products are similarly or differently perceived by which consumers. A method based on the differences between total liking and the specific liking variables is proposed for studying both relative differences among products and individual consumer differences. Segmentation is also tested out in order to distinguish consumers with the strongest differences in their liking values. The approach is illustrated by a case study, based on cheese data. In the consumer test, consumers were asked to evaluate their total liking, the liking for texture and the liking for odour/taste.

  7. Comparison of scoring approaches for the NEI VFQ-25 in low vision.

    Science.gov (United States)

    Dougherty, Bradley E; Bullimore, Mark A

    2010-08-01

    The aim of this study was to evaluate different approaches to scoring the National Eye Institute Visual Functioning Questionnaire-25 (NEI VFQ-25) in patients with low vision, including scoring by the standard method, by Rasch analysis, and by use of an algorithm created by Massof to approximate the Rasch person measure. Subscale validity and use of a 7-item short form instrument proposed by Ryan et al. were also investigated. NEI VFQ-25 data from 50 patients with low vision were analyzed using the standard method of summing Likert-type scores and calculating an overall average, Rasch analysis using Winsteps software, and the Massof algorithm in Excel. Correlations between scores were calculated. Rasch person separation reliability and other indicators were calculated to determine the validity of the subscales and of the 7-item instrument. Scores calculated using all three methods were highly correlated, but evidence of floor and ceiling effects was found with the standard scoring method. None of the subscales investigated proved valid. The 7-item instrument showed acceptable person separation reliability and good targeting and item performance. Although standard scores and Rasch scores are highly correlated, Rasch analysis has the advantages of eliminating floor and ceiling effects and producing interval-scaled data. The Massof algorithm for approximation of the Rasch person measure performed well in this group of low-vision patients. The validity of the VFQ-25 subscales should be reconsidered.

  8. Algorithms for Academic Search and Recommendation Systems

    DEFF Research Database (Denmark)

    Amolochitis, Emmanouil

    2014-01-01

    In this work we present novel algorithms for academic search, recommendation and association rules mining. In the first part of the work we introduce a novel hierarchical heuristic scheme for re-ranking academic publications. The scheme is based on the hierarchical combination of a custom implementation of the term frequency heuristic, a time-depreciated citation score and a graph-theoretic computed score that relates the paper’s index terms with each other. In the second part we describe the design of a hybrid recommender ensemble (user, item and content based). The newly introduced algorithms are part of a developed Movie Recommendation system, the first such system to be commercially deployed in Greece by a major Triple Play services provider. In the third part of the work we present the design of a quantitative association rule mining algorithm. The introduced mining algorithm processes …

  9. Cross-modal face recognition using multi-matcher face scores

    Science.gov (United States)

    Zheng, Yufeng; Blasch, Erik

    2015-05-01

    The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face may be a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face onto the visible gallery faces requires cross-modal matching approaches. A few such studies have been implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach, where multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at the image, feature and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal images) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed with three cross-matched face scores from the aforementioned three algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logistic regression [BLR]) is trained and then tested on the score vectors using 10-fold cross-validation. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84%, FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces by using three face scores and the BLR classifier.
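
    A hypothetical sketch of the score-level fusion step: each probe comparison is represented by a vector of three matcher scores, and a classifier is cross-validated on such vectors. The score distributions below are simulated stand-ins for genuine and impostor comparisons:

```python
# Minimal sketch of classifying three-algorithm score vectors
# (simulated data; not the paper's multispectral dataset).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
genuine  = rng.normal(0.7, 0.1, size=(200, 3))   # scores from 3 matchers
impostor = rng.normal(0.4, 0.1, size=(200, 3))
X = np.vstack([genuine, impostor])
y = np.array([1] * 200 + [0] * 200)              # 1 = genuine, 0 = impostor

# 10-fold cross-validation, as in the paper's evaluation protocol.
acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=10)
print(f"10-fold accuracy: {acc.mean():.3f}")
```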

  10. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  11. The Automated Assessment of Postural Stability: Balance Detection Algorithm.

    Science.gov (United States)

    Napoli, Alessandro; Glass, Stephen M; Tucker, Carole; Obeid, Iyad

    2017-12-01

    Impaired balance is a common indicator of mild traumatic brain injury, concussion and musculoskeletal injury. Given the clinical relevance of such injuries, especially in military settings, it is paramount to develop more accurate and reliable on-field evaluation tools. This work presents the design and implementation of the automated assessment of postural stability (AAPS) system, for on-field evaluations following concussion. The AAPS is a computer system, based on inexpensive off-the-shelf components and custom software, that aims to automatically and reliably evaluate balance deficits by replicating a known on-field clinical test, namely, the Balance Error Scoring System (BESS). The AAPS main innovation is its balance error detection algorithm, which has been designed to acquire data from a Microsoft Kinect® sensor and convert them into clinically relevant BESS scores, using the same detection criteria defined by the original BESS test. In order to assess the AAPS balance evaluation capability, a total of 15 healthy subjects (7 male, 8 female) were required to perform the BESS test, while simultaneously being tracked by a Kinect 2.0 sensor and a professional-grade motion capture system (Qualisys AB, Gothenburg, Sweden). High definition videos with BESS trials were scored off-line by three experienced observers for reference scores. AAPS performance was assessed by comparing the AAPS automated scores to those derived by the three experienced observers. Our results show that the AAPS error detection algorithm presented here can accurately and precisely detect balance deficits with performance levels that are comparable to those of experienced medical personnel. Specifically, agreement levels between the AAPS algorithm and the human average BESS scores ranging between 87.9% (single-leg on foam) and 99.8% (double-leg on firm ground) were detected. Moreover, statistically significant differences in balance scores were not detected by an ANOVA test with alpha equal to 0.05.

  12. Sample size calculation to externally validate scoring systems based on logistic regression models.

    Directory of Open Access Journals (Sweden)

    Antonio Palazón-Bru

    Full Text Available A sample size containing at least 100 events and 100 non-events has been suggested for validating a predictive model, regardless of the model being validated and despite the fact that certain factors (discrimination, parameterization and incidence) can influence calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure to determine the lack of calibration (estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, to determine mortality in intensive care units. In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.
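
    The bootstrap machinery behind such an algorithm can be sketched as follows: repeatedly resample a candidate validation set and watch how stably the AUC of a fixed score is estimated. Data, model and the AUC-only criterion are illustrative simplifications; the published algorithm also tracks calibration via smooth curves and the estimated calibration index:

```python
# Minimal sketch of a bootstrap check of validation-sample adequacy.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
score = rng.normal(size=2000)                         # fixed scoring-system output
event = rng.random(2000) < 1 / (1 + np.exp(-score))   # outcomes consistent with it

n_sample, aucs = 500, []
for _ in range(200):                                  # bootstrap replicates
    idx = rng.integers(0, 2000, size=n_sample)
    if event[idx].min() == event[idx].max():          # AUC needs both classes
        continue
    aucs.append(roc_auc_score(event[idx], score[idx]))

# A small spread suggests n_sample is large enough to estimate the AUC stably.
print(f"AUC spread at n={n_sample}: {np.std(aucs):.3f}")
```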

  13. Pediatric chest HRCT using the iDose4 Hybrid Iterative Reconstruction Algorithm: Which iDose level to choose?

    International Nuclear Information System (INIS)

    Smarda, M; Alexopoulou, E; Mazioti, A; Kordolaimi, S; Ploussi, A; Efstathopoulos, E; Priftis, K

    2015-01-01

    The purpose of this study was to determine the appropriate iterative reconstruction (IR) algorithm level that combines image quality and diagnostic confidence for pediatric patients undergoing high-resolution computed tomography (HRCT). During the last 2 years, a total of 20 children up to 10 years old with a clinical presentation of chronic bronchitis underwent HRCT in our department's 64-detector row CT scanner using the iDose IR algorithm, with similar image settings (80 kVp, 40-50 mAs). CT images were reconstructed with all iDose levels (levels 1 to 7) as well as with the filtered-back projection (FBP) algorithm. Subjective image quality was evaluated by 2 experienced radiologists in terms of image noise, sharpness, contrast and diagnostic acceptability using a 5-point scale (1=excellent image, 5=non-acceptable image). The presence of artifacts was also noted. All mean scores from both radiologists corresponded to satisfactory image quality (score ≤3), even with the FBP algorithm. Almost excellent (score <2) overall image quality was achieved with iDose levels 5 to 7, but oversmoothing artifacts appearing with iDose levels 6 and 7 affected the diagnostic confidence. In conclusion, the use of iDose level 5 enables almost excellent image quality without considerable artifacts affecting the diagnosis. Further evaluation is needed in order to draw more precise conclusions. (paper)

  14. Robust total energy demand estimation with a hybrid Variable Neighborhood Search – Extreme Learning Machine algorithm

    International Nuclear Information System (INIS)

    Sánchez-Oro, J.; Duarte, A.; Salcedo-Sanz, S.

    2016-01-01

    Highlights: • The total energy demand in Spain is estimated with a Variable Neighborhood Search algorithm. • Socio-economic variables are used, and a one-year-ahead prediction horizon is considered. • Improvement of the prediction with an Extreme Learning Machine network is considered. • Experiments are carried out on real data for the case of Spain. - Abstract: Energy demand prediction is an important problem whose solution is evaluated by policy makers in order to take key decisions affecting the economy of a country. A number of approaches to improve the quality of this estimation have been proposed in the last decade, the majority of them applying different machine learning techniques. In this paper, the performance of a robust hybrid approach, composed of a Variable Neighborhood Search algorithm and a new class of neural network called the Extreme Learning Machine, is discussed. The Variable Neighborhood Search algorithm is focused on selecting the most relevant features from the initial set, using an exponential prediction model. While previous approaches treat the number of macroeconomic variables used for prediction as a parameter of the algorithm (i.e., fixed a priori), the proposed Variable Neighborhood Search method optimizes both the number of variables and which variables to use. After this first step of feature selection, an Extreme Learning Machine network is applied to obtain the final energy demand prediction. Experiments on a real case of energy demand estimation in Spain show the excellent performance of the proposed approach. In particular, the whole method obtains an estimation of the energy demand with an error lower than 2%, even when considering the crisis years, which are a real challenge.
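
    The Extreme Learning Machine half of the hybrid is compact enough to sketch: hidden-layer weights are drawn at random and kept fixed, and only the linear output weights are fitted, by (regularized) least squares. The data below are synthetic, and the feature-selection half (Variable Neighborhood Search) is omitted:

```python
# Minimal Extreme Learning Machine sketch on synthetic regression data.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 6))                             # stand-in input features
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=300)   # stand-in demand target

n_hidden = 50
W = rng.normal(size=(6, n_hidden))        # random input weights, never trained
bias = rng.normal(size=n_hidden)
H = np.tanh(X @ W + bias)                 # hidden-layer activations

# Only the output weights are fitted: ridge-regularized least squares.
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_hidden), H.T @ y)
pred = H @ beta
print(f"training RMSE: {np.sqrt(np.mean((pred - y) ** 2)):.4f}")
```

    Training reduces to one linear solve, which is why ELMs are attractive inside an outer search loop such as Variable Neighborhood Search.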

  15. Using general-purpose compression algorithms for music analysis

    DEFF Research Database (Denmark)

    Louboutin, Corentin; Meredith, David

    2016-01-01

    General-purpose compression algorithms encode files as dictionaries of substrings with the positions of these strings’ occurrences. We hypothesized that such algorithms could be used for pattern discovery in music. We compared LZ77, LZ78, Burrows–Wheeler and COSIATEC on classifying folk songs… In the input data, COSIATEC outperformed LZ77 with a mean F1 score of 0.123, compared with 0.053 for LZ77. However, when the music was processed a voice at a time, the F1 score for LZ77 more than doubled to 0.124. We also discovered a significant correlation between compression factor and F1 score for all…

  16. A collaborative filtering approach for protein-protein docking scoring functions.

    Science.gov (United States)

    Bourquard, Thomas; Bernauer, Julie; Azé, Jérôme; Poupon, Anne

    2011-04-22

    A protein-protein docking procedure traditionally consists of two successive tasks: a search algorithm generates a large number of candidate conformations mimicking the complex existing in vivo between two proteins, and a scoring function is used to rank them in order to extract a native-like one. We have already shown that, using Voronoi constructions and a well-chosen set of parameters, an accurate scoring function can be designed and optimized. However, to be able to perform large-scale in silico exploration of the interactome, a near-native solution has to be found among the ten best-ranked solutions. This cannot yet be guaranteed by any of the existing scoring functions. In this work, we introduce a new procedure for conformation ranking. We previously developed a set of scoring functions where learning was performed using a genetic algorithm. These functions were used to assign a rank to each possible conformation. We now refine this rank using different classifiers (decision trees, rules and support vector machines) in a collaborative filtering scheme. The newly obtained scoring function is evaluated using 10-fold cross-validation and compared to the functions obtained using either genetic algorithms or collaborative filtering taken separately. This new approach was successfully applied to the CAPRI scoring ensembles. We show that for 10 targets out of 12, we are able to find a near-native conformation among the 10 best-ranked solutions. Moreover, for 6 of them, the near-native conformation selected is of high accuracy. Finally, we show that this function dramatically enriches the 100 best-ranking conformations in near-native structures.

  17. Coronary collateral circulation in patients with chronic coronary total occlusion; its relationship with cardiac risk markers and SYNTAX score.

    Science.gov (United States)

    Börekçi, A; Gür, M; Şeker, T; Baykan, A O; Özaltun, B; Karakoyun, S; Karakurt, A; Türkoğlu, C; Makça, I; Çaylı, M

    2015-09-01

    Compared to patients without a collateral supply, long-term cardiac mortality is reduced in patients with well-developed coronary collateral circulation (CCC). Cardiovascular risk markers, such as N-terminal pro-brain natriuretic peptide (NT-proBNP), high-sensitive C-reactive protein (hs-CRP) and high-sensitive cardiac troponin T (hs-cTnT), are independent predictors of cardiovascular mortality. The main goal of this study was to examine the relationship between CCC and cardiovascular risk markers. We prospectively enrolled 427 stable coronary artery disease patients with chronic total occlusion (mean age: 57.5±11.1 years). The patients were divided into two groups, according to their Rentrop scores: (a) poorly developed CCC group (Rentrop 0 and 1) and (b) well-developed CCC group (Rentrop 2 and 3). NT-proBNP, hs-CRP, hs-cTnT, uric acid and other biochemical markers were also measured. The SYNTAX score was calculated for all patients. The patients in the poorly developed CCC group had higher frequencies of diabetes and hypertension. Risk markers such as NT-proBNP, hs-cTnT and hs-CRP are independently associated with CCC in stable coronary artery disease with chronic total occlusion. © The Author(s) 2014.

  18. Evaluation of an automated single-channel sleep staging algorithm

    Directory of Open Access Journals (Sweden)

    Wang Y

    2015-09-01

    … Total Deep Sleep, and Total REM. Results: Sensitivities of Z-PLUS compared to the PSG Consensus were 0.84 for Light Sleep, 0.74 for Deep Sleep, and 0.72 for REM. Similarly, positive predictive values were 0.85 for Light Sleep, 0.78 for Deep Sleep, and 0.73 for REM. Overall, kappa agreement of 0.72 is indicative of substantial agreement. Conclusion: This study demonstrates that Z-PLUS can automatically assess sleep stage using a single A1–A2 EEG channel when compared to the sleep stage scoring by a consensus of polysomnographic technologists. Our findings suggest that Z-PLUS may be used in conjunction with Z-ALG for single-channel EEG-based sleep staging. Keywords: EEG, sleep staging, algorithm, Zmachine, automatic sleep scoring, sleep detection, single channel

  19. Evolving attractive faces using morphing technology and a genetic algorithm: a new approach to determining ideal facial aesthetics.

    Science.gov (United States)

    Wong, Brian J F; Karimi, Koohyar; Devcic, Zlatko; McLaren, Christine E; Chen, Wen-Pin

    2008-06-01

    The objectives of this study were to: 1) determine if a genetic algorithm in combination with morphing software can be used to evolve more attractive faces; and 2) evaluate whether this approach can be used as a tool to define or identify the attributes of the ideal attractive face. Basic research study incorporating focus group evaluations. Digital images were acquired of 250 female volunteers (18-25 y). Randomly selected images were used to produce a parent generation (P) of 30 synthetic faces using morphing software. Then, a focus group of 17 trained volunteers (18-25 y) scored each face on an attractiveness scale ranging from 1 (unattractive) to 10 (attractive). A genetic algorithm was used to select 30 new pairs from the parent generation, and these were morphed using software to produce a new first generation (F1) of faces. The F1 faces were scored by the focus group, and the process was repeated for a total of four iterations of the algorithm. The algorithm mimics natural selection by using the attractiveness score as the selection pressure; the more attractive faces are more likely to morph. All five generations (P-F4) were then scored by three focus groups: a) surgeons (n = 12), b) cosmetology students (n = 44), and c) undergraduate students (n = 44). Morphometric measurements were made of 33 specific features on each of the 150 synthetic faces and correlated with attractiveness scores using univariate and multivariate analysis. The average facial attractiveness scores increased with each generation and were 3.66 (±0.60), 4.59 (±0.73), 5.50 (±0.62), 6.23 (±0.31), and 6.39 (±0.24) for the P and F1-F4 generations, respectively. Histograms of attractiveness score distributions show a significant shift in the skew of each curve toward more attractive faces with each generation. Univariate analysis identified nasal width, eyebrow arch height, and lip thickness as being significantly correlated with attractiveness scores. Multivariate analysis identified a …

  20. Alternative Payment Models Should Risk-Adjust for Conversion Total Hip Arthroplasty: A Propensity Score-Matched Study.

    Science.gov (United States)

    McLawhorn, Alexander S; Schairer, William W; Schwarzkopf, Ran; Halsey, David A; Iorio, Richard; Padgett, Douglas E

    2017-12-06

    For Medicare beneficiaries, hospital reimbursement for nonrevision hip arthroplasty is anchored to either diagnosis-related group code 469 or 470. Under alternative payment models, reimbursement for care episodes is not further risk-adjusted. This study's purpose was to compare outcomes of primary total hip arthroplasty (THA) vs conversion THA to explore the rationale for risk adjustment for conversion procedures. All primary and conversion THAs from 2007 to 2014, excluding acute hip fractures and cancer patients, were identified in the National Surgical Quality Improvement Program database. Conversion and primary THA patients were matched 1:1 using propensity scores, based on preoperative covariates. Multivariable logistic regressions evaluated associations between conversion THA and 30-day outcomes. A total of 2018 conversions were matched to 2018 primaries. There were no differences in preoperative covariates. Conversions had longer operative times (148 vs 95 minutes). As reimbursement models shift toward bundled payment paradigms, conversion THA appears to be a procedure for which risk adjustment is appropriate. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Application of independent component analysis for speech-music separation using an efficient score function estimation

    Science.gov (United States)

    Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza

    2012-12-01

    In this paper speech-music separation using Blind Source Separation is discussed. The separating algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. In order to do that, the score function must be estimated from samples of the observed signals (mixtures of speech and music). The accuracy and speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on a Gaussian-mixture-based kernel density estimation method. The experimental results of the presented algorithm on speech-music separation, and comparison with a separating algorithm based on the Minimum Mean Square Error estimator, indicate better performance and a shorter processing time.
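
    A minimal sketch of the natural-gradient update at the heart of such mutual-information-minimizing ICA. The paper estimates the score function with a Gaussian-mixture kernel density estimator; here a fixed tanh score function stands in for that estimator, which is adequate for the super-Gaussian stand-in sources used below:

```python
# Minimal natural-gradient ICA sketch on synthetic mixtures.
import numpy as np

rng = np.random.default_rng(4)
s = rng.laplace(size=(2, 5000))              # stand-ins for speech and music
A = np.array([[1.0, 0.6], [0.4, 1.0]])       # unknown mixing matrix
x = A @ s                                    # observed mixtures

W = np.eye(2)                                # unmixing matrix to learn
lr = 0.01
for _ in range(300):
    y = W @ x
    phi = np.tanh(y)                         # stand-in score function
    # Natural-gradient update: W += lr * (I - E[phi(y) y^T]) W
    W += lr * (np.eye(2) - phi @ y.T / x.shape[1]) @ W

# W @ A approaches a scaled (possibly permuted) identity when separation works.
print(np.round(W @ A, 2))
```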

  2. Evaluation of Scoring Skills and Non Scoring Skills in the Brazilian SuperLeague Women’s Volleyball

    Directory of Open Access Journals (Sweden)

    Aluizio Otávio Gouvêa Ferreira Oliveira

    2016-09-01

    Full Text Available This study analyzed all the games (n=253) from the 2011/2012 and 2012/2013 Seasons of the Brazilian SuperLeague Women’s Volleyball, to identify the game-related factors that discriminate between winning and losing teams. In the 2011/2012 Season, the Total Shares Setting (TAL) and Total Points Attack (TPA) were factors that discriminated in favor of a defeat. The factors that determined victory were the Total Shares Serve (TAS), Total Shares Defense (TAD), Total Shares Reception (TAR) and Total Defense Excellent (TDE). In the 2012/2013 Season, the factor (TAD) most often discriminated in favor of victory, and the factor that led to defeat was the Total Points Made (TPF). The scoring skills (TPA) and (TPF) discriminated the final outcome of the game but, surprisingly, were associated with defeat, while (TAS) was supposedly associated with victory. The non-scoring skills (TAD), (TAR) and (TDE) discriminated the final outcome of the game and may be associated with victory. The non-scoring skill (TAL) determines the outcome of the game and is supposedly associated with defeat.

  3. Fast compact algorithms and software for spline smoothing

    CERN Document Server

    Weinert, Howard L

    2012-01-01

    Fast Compact Algorithms and Software for Spline Smoothing investigates algorithmic alternatives for computing cubic smoothing splines when the amount of smoothing is determined automatically by minimizing the generalized cross-validation score. These algorithms are based on Cholesky factorization, QR factorization, or the fast Fourier transform. All algorithms are implemented in MATLAB and are compared based on speed, memory use, and accuracy. An overall best algorithm is identified, which allows very large data sets to be processed quickly on a personal computer.

  4. Optimizing Scoring and Sampling Methods for Assessing Built Neighborhood Environment Quality in Residential Areas

    Directory of Open Access Journals (Sweden)

    Joel Adu-Brimpong

    2017-03-01

    Full Text Available Optimization of existing measurement tools is necessary to explore links between aspects of the neighborhood built environment and health behaviors or outcomes. We evaluate a scoring method for virtual neighborhood audits utilizing the Active Neighborhood Checklist (the Checklist), a neighborhood audit measure, and assess street segment representativeness in low-income neighborhoods. Eighty-two home neighborhoods of Washington, D.C. Cardiovascular Health/Needs Assessment (NCT01927783) participants were audited using Google Street View imagery and the Checklist (five sections with 89 total questions). Twelve street segments per home address were assessed for (1) Land-Use Type; (2) Public Transportation Availability; (3) Street Characteristics; (4) Environment Quality and (5) Sidewalks/Walking/Biking features. Checklist items were scored 0–2 points/question. A combinations algorithm was developed to assess street segments’ representativeness. Spearman correlations were calculated between built environment quality scores and Walk Score®, a validated neighborhood walkability measure. Street segment quality scores ranged 10–47 (Mean = 29.4 ± 6.9) and overall neighborhood quality scores, 172–475 (Mean = 352.3 ± 63.6). Walk scores® ranged 0–91 (Mean = 46.7 ± 26.3). Street segment combinations’ correlation coefficients ranged 0.75–1.0. Significant positive correlations were found between overall neighborhood quality scores, four of the five Checklist subsection scores, and Walk Scores® (r = 0.62, p < 0.001). This scoring method adequately captures neighborhood features in low-income, residential areas and may aid in delineating the impact of specific built environment features on health behaviors and outcomes.

  5. Methods and statistics for combining motif match scores.

    Science.gov (United States)

    Bailey, T L; Gribskov, M

    1998-01-01

    Position-specific scoring matrices are useful for representing and searching for protein sequence motifs. A sequence family can often be described by a group of one or more motifs, and an effective search must combine the scores for matching a sequence to each of the motifs in the group. We describe three methods for combining match scores and estimating the statistical significance of the combined scores, and evaluate the search quality (classification accuracy) and the accuracy of the estimate of statistical significance of each. The three methods are: 1) sum of scores, 2) sum of reduced variates, 3) product of score p-values. We show that method 3) is superior to the other two methods in both regards, and that combining motif scores indeed gives better search accuracy. The MAST sequence homology search algorithm utilizing the product of p-values scoring method is available for interactive use and downloading at URL http://www.sdsc.edu/MEME.
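
    Method 3 has a convenient closed form: if q is the product of n independent uniform p-values, then P(product ≤ q) = q · Σ_{k=0}^{n-1} (−ln q)^k / k!. A small sketch of that computation, with illustrative values rather than the MAST implementation:

```python
# Minimal sketch of combining p-values by their product.
import math

def combined_p(pvalues):
    """P(product of n independent uniform p-values <= observed product)."""
    q = math.prod(pvalues)
    log_inv = -math.log(q)
    return q * sum(log_inv ** k / math.factorial(k) for k in range(len(pvalues)))

print(combined_p([0.01, 0.2, 0.05]))  # ~0.0053 for these illustrative values
```

    Note that the combined significance is far larger than the raw product (1e-4), since small products arise by chance when many p-values are multiplied.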

  6. Multi-objective ACO algorithms to minimise the makespan and the total rejection cost on BPMs with arbitrary job weights

    Science.gov (United States)

    Jia, Zhao-hong; Pei, Ming-li; Leung, Joseph Y.-T.

    2017-12-01

    In this paper, we investigate the batch-scheduling problem with rejection on parallel machines with non-identical job sizes and arbitrary job-rejected weights. If a job is rejected, the corresponding penalty has to be paid. Our objective is to minimise the makespan of the processed jobs and the total rejection cost of the rejected jobs. Based on the selected multi-objective optimisation approaches, two problems, P1 and P2, are considered. In P1, the two objectives are linearly combined into one single objective. In P2, the two objectives are simultaneously minimised and the Pareto non-dominated solution set is to be found. Based on the ant colony optimisation (ACO), two algorithms, called LACO and PACO, are proposed to address the two problems, respectively. Two different objective-oriented pheromone matrices and heuristic information are designed. Additionally, a local optimisation algorithm is adopted to improve the solution quality. Finally, simulated experiments are conducted, and the comparative results verify the effectiveness and efficiency of the proposed algorithms, especially on large-scale instances.

  7. Sentiment analysis enhancement with target variable in Kumar’s Algorithm

    Science.gov (United States)

    Arman, A. A.; Kawi, A. B.; Hurriyati, R.

    2016-04-01

    Sentiment analysis (also known as opinion mining) refers to the use of text analysis and computational linguistics to identify and extract subjective information in source materials. Sentiment analysis is widely applied to reviews and discussions in social media, for purposes ranging from marketing and customer service to gauging public opinion of public policy. One popular algorithm for implementing sentiment analysis is the Kumar algorithm, developed by Kumar and Sebastian. The Kumar algorithm can compute the sentiment score of a statement, sentence or tweet, but cannot relate that sentiment to a particular object or target. This research proposes a solution to that challenge by adding a component representing the object or target to the existing algorithm. The result is a modified algorithm that can give a sentiment score with respect to a given object or target.
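
    A toy sketch of the general idea (not Kumar's published algorithm): score only lexicon words that fall within a window around the target term, so sentiment is attributed to that object rather than to the whole text. The lexicon and window size here are invented for illustration:

```python
# Minimal target-aware sentiment sketch with a tiny invented lexicon.
LEXICON = {"great": 2, "good": 1, "slow": -1, "terrible": -2}

def target_sentiment(text, target, window=3):
    """Sum lexicon scores of words within `window` positions of the target."""
    words = text.lower().split()
    if target not in words:
        return 0
    i = words.index(target)
    nearby = words[max(0, i - window): i + window + 1]
    return sum(LEXICON.get(w, 0) for w in nearby)

# "great" is outside the window around "battery", so only "terrible" counts.
print(target_sentiment("great phone but terrible battery life", "battery"))  # -2
```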

  8. MRI quantitative assessment of brain maturation and prognosis in premature infants using total maturation score

    International Nuclear Information System (INIS)

    Qi Ying; Wang Xiaoming

    2009-01-01

    Objective: To quantitatively assess brain maturation and prognosis in premature infants on conventional MRI using the total maturation score (TMS). Methods: Nineteen cases of sequelae of white matter damage (WMD group) and 21 matched controls (control group) in premature infants confirmed by MRI examinations were included in the study. All cases underwent conventional MR imaging approximately during the perinatal period after birth. Brain development was quantitatively assessed using Childs AM's validated TMS scoring system by two experienced radiologists, and interobserver agreement and reliability were evaluated using the intraclass correlation coefficient (ICC). Linear regression analysis between TMS and postmenstrual age (PMA) was performed (Y: TMS, X: PMA), and the two groups' TMS values were compared with an independent-sample t test. Results: Sixteen of 19 cases revealed MRI abnormalities. Lesions showing T1 and T2 shortening tended to occur in clusters or a linear pattern in the deep white matter of the centrum semiovale and the periventricular white matter. Diffusion-weighted MR imaging (DWI) showed 3 cases with larger lesions and 4 cases with new lesions in the corpus callosum. There were no abnormalities in the control group on MRI and DWI. The average TMS values from the two observers were 7.13±2.27 and 7.13±2.21, and interobserver agreement was high (ICC=0.990). TMS increased linearly with PMA in both groups (R²=0.6401 and 0.5156, respectively). Conclusion: Conventional MRI is able to quantify the brain maturation and prognosis of premature infants using TMS. (authors)

  9. The clinical performance of an office-based risk scoring system for fatal cardiovascular diseases in North-East of Iran.

    Directory of Open Access Journals (Sweden)

    Sadaf G Sepanlou

    Full Text Available Cardiovascular diseases (CVD) are becoming major causes of death in developing countries. Risk scoring systems for CVD are needed to prioritize allocation of limited resources. Most of these risk score algorithms have been based on a long array of risk factors, including blood markers of lipids. However, risk scoring systems that solely use office-based data, not including laboratory markers, may be advantageous. In the current analysis, we validated the office-based Framingham risk scoring system in Iran. The study used data from the Golestan Cohort in the North-East of Iran. The following risk factors were used in the development of the risk scoring method: sex, age, body mass index, systolic blood pressure, hypertension treatment, current smoking, and diabetes. Cardiovascular risk functions for prediction of 10-year risk of fatal CVDs were developed. A total of 46,674 participants free of CVD at baseline were included. The predictive value of estimated risks was examined. The resulting Area Under the ROC Curve (AUC) was 0.774 (95% CI: 0.762-0.787) in all participants, 0.772 (95% CI: 0.753-0.791) in women, and 0.763 (95% CI: 0.747-0.779) in men. The AUC was higher in urban areas (0.790, 95% CI: 0.766-0.815). The predicted and observed risks of fatal CVD were similar in women. However, in men, predicted probabilities were higher than observed. The AUC in the current study is comparable to results of previous studies, while the lipid profile was replaced by body mass index to develop an office-based scoring system. This scoring algorithm is capable of discriminating individuals at high risk versus low risk of fatal CVD.
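
    As a rough illustration of the office-based idea, the sketch below fits a logistic model on non-laboratory predictors and reports the AUC. Everything here is simulated; the actual study developed survival-type risk functions on the Golestan Cohort:

```python
# Minimal sketch of an office-based risk model judged by AUC (simulated data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 5000
age = rng.uniform(40, 75, n)
sbp = rng.normal(125, 15, n)          # systolic blood pressure
bmi = rng.normal(26, 4, n)
smoker = rng.integers(0, 2, n)
diabetes = rng.integers(0, 2, n)
X = np.column_stack([age, sbp, bmi, smoker, diabetes])  # no laboratory markers

# Simulated fatal-CVD outcomes generated from an assumed logistic relation.
logit = -12 + 0.1 * age + 0.02 * sbp + 0.5 * smoker + 0.7 * diabetes
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
print(f"AUC: {roc_auc_score(y, model.predict_proba(X)[:, 1]):.3f}")
```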

  10. ACCURATUM: improved calcium volume scoring using a mesh-based algorithm - a phantom study

    International Nuclear Information System (INIS)

    Saur, Stefan C.; Szekely, Gabor; Alkadhi, Hatem; Desbiolles, Lotus; Cattin, Philippe C.

    2009-01-01

    To overcome the limitations of the classical volume scoring method for quantifying coronary calcifications, including accuracy, variability between examinations, and dependency on plaque density and acquisition parameters, a mesh-based volume measurement method has been developed. It was evaluated and compared with the classical volume scoring method for accuracy, i.e., the normalized volume (measured volume/ground-truthed volume), and for variability between examinations (standard deviation of accuracy). A cardiac computed-tomography (CT) phantom containing various cylindrical calcifications was scanned using different tube voltages and reconstruction kernels, at various positions and orientations on the CT table and using different slice thicknesses. Mean accuracy for all plaques was significantly higher (p<0.0001) for the proposed method (1.220±0.507) than for the classical volume score (1.896±1.095). In contrast to the classical volume score, plaque density (p=0.84), reconstruction kernel (p=0.19), and tube voltage (p=0.27) had no impact on the accuracy of the developed method. In conclusion, the method presented herein is more accurate than classical calcium scoring and is less dependent on tube voltage, reconstruction kernel, and plaque density. (orig.)

  11. Ripasa score: a new diagnostic score for diagnosis of acute appendicitis

    International Nuclear Information System (INIS)

    Butt, M.Q.

    2014-01-01

    Objective: To determine the usefulness of the RIPASA score for the diagnosis of acute appendicitis, using histopathology as the gold standard. Study Design: Cross-sectional study. Place and Duration of Study: Department of General Surgery, Combined Military Hospital, Kohat, from September 2011 to March 2012. Methodology: A total of 267 patients were included in this study and the RIPASA score was assessed. The diagnosis of appendicitis was made clinically, aided by routine sonography of the abdomen. After appendicectomy, resected appendices were sent for histopathological examination. The 15 parameters and the scores generated were: age (less than 40 years = 1 point; greater than 40 years = 0.5 point), gender (male = 1 point; female = 0.5 point), Right Iliac Fossa (RIF) pain (0.5 point), migration of pain to RIF (0.5 point), nausea and vomiting (1 point), anorexia (1 point), duration of symptoms (less than 48 hours = 1 point; more than 48 hours = 0.5 point), RIF tenderness (1 point), guarding (2 points), rebound tenderness (1 point), Rovsing's sign (2 points), fever (1 point), raised white cell count (1 point), negative urinalysis (1 point) and foreign national registration identity card (1 point). The optimal cut-off threshold score from the ROC was 7.5. Sensitivity analysis was done. Results: Of the 267 patients, 156 (58.4%) were male and the remaining 111 (41.6%) were female, with a mean age of 23.5 ± 9.1 years. The sensitivity of the RIPASA score was 96.7%, specificity 93.0%, diagnostic accuracy 95.1%, positive predictive value 94.8% and negative predictive value 95.54%. Conclusion: The RIPASA score at a cut-off total score of 7.5 was a useful tool to diagnose appendicitis in equivocal cases of pain. (author)
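
    The scoring itself is a straightforward tally of the points quoted above; a sketch follows, with the patient findings invented for illustration:

```python
# Minimal sketch of tallying a RIPASA score from the item points in the abstract.
RIPASA_POINTS = {
    "age_under_40": 1.0, "age_40_plus": 0.5,
    "male": 1.0, "female": 0.5,
    "rif_pain": 0.5, "pain_migration": 0.5,
    "nausea_vomiting": 1.0, "anorexia": 1.0,
    "symptoms_under_48h": 1.0, "symptoms_over_48h": 0.5,
    "rif_tenderness": 1.0, "guarding": 2.0, "rebound_tenderness": 1.0,
    "rovsing_sign": 2.0, "fever": 1.0,
    "raised_wcc": 1.0, "negative_urinalysis": 1.0, "foreign_nric": 1.0,
}

def ripasa_score(findings):
    return sum(RIPASA_POINTS[f] for f in findings)

patient = ["age_under_40", "female", "rif_pain", "pain_migration",
           "nausea_vomiting", "symptoms_under_48h", "rif_tenderness",
           "guarding", "raised_wcc", "negative_urinalysis"]
score = ripasa_score(patient)
# The study's optimal ROC cut-off was 7.5.
print(score, "-> appendicitis likely" if score > 7.5 else "-> equivocal")
```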

  12. Can an arthroplasty risk score predict bundled care events after total joint arthroplasty?

    Directory of Open Access Journals (Sweden)

    Blair S. Ashley, MD

    2018-03-01

    Full Text Available Background: The validated Arthroplasty Risk Score (ARS) predicts the need for postoperative triage to an intensive care setting. We hypothesized that the ARS may also predict hospital length of stay (LOS), discharge disposition, and episode-of-care cost (EOCC). Methods: We retrospectively reviewed a series of 704 patients undergoing primary total hip and knee arthroplasty over 17 months. Patient characteristics, 90-day EOCC, LOS, and readmission rates were compared before and after ARS implementation. Results: ARS implementation was associated with fewer patients going to a skilled nursing or rehabilitation facility after discharge (63% vs 74%, P = .002). There was no difference in LOS, EOCC, readmission rates, or complications. While the adoption of the ARS did not change the mean EOCC, an ARS >3 was predictive of being a high-EOCC outlier (odds ratio 2.65, 95% confidence interval 1.40-5.01, P = .003). Increased ARS correlated with increased EOCC (P = .003). Conclusions: Implementation of the ARS was associated with increased disposition to home. It was predictive of high EOCC and should be considered among risk adjustment variables in alternative payment models. Keywords: Bundled payments, Risk stratification, Arthroplasty

  13. Unsupervised online classifier in sleep scoring for sleep deprivation studies.

    Science.gov (United States)

    Libourel, Paul-Antoine; Corneyllie, Alexandra; Luppi, Pierre-Hervé; Chouvet, Guy; Gervasoni, Damien

    2015-05-01

    This study was designed to evaluate an unsupervised adaptive algorithm for real-time detection of sleep and wake states in rodents. We designed a Bayesian classifier that automatically extracts electroencephalogram (EEG) and electromyogram (EMG) features and categorizes non-overlapping 5-s epochs into one of the three major sleep and wake states without any human supervision. This sleep-scoring algorithm is coupled online with a new device to perform selective paradoxical sleep deprivation (PSD). Controlled laboratory settings for chronic polygraphic sleep recordings and selective PSD. Ten adult Sprague-Dawley rats instrumented for chronic polysomnographic recordings. The performance of the algorithm was evaluated by comparison with the scores obtained by a human expert reader. Online detection of PS was then validated with a PSD protocol with a duration of 72 hours. Our algorithm showed high concordance with human scoring, with an average κ coefficient > 70%. Notably, the specificity to detect PS reached 92%. Selective PSD using real-time detection of PS strongly reduced PS amounts, leaving only brief PS bouts necessary for the detection of PS in EEG and EMG signals (4.7 ± 0.7% over 72 h, versus 8.9 ± 0.5% in baseline), and was followed by a significant PS rebound (23.3 ± 3.3% over 150 minutes). Our fully unsupervised, data-driven algorithm overcomes some limitations of other automated methods, such as the selection of representative descriptors or threshold settings. When used online and coupled with our sleep deprivation device, it represents a better option for selective PSD than methods like tedious gentle handling or the platform method. © 2015 Associated Professional Sleep Societies, LLC.
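
    A rough sketch of unsupervised staging in the same spirit: reduce each 5-s epoch to a couple of EEG/EMG power features and fit a three-component Gaussian mixture, reading the components as wake, slow-wave sleep and paradoxical sleep. The features and their separability below are simulated, and the published classifier is Bayesian and adapts online, which this sketch does not attempt:

```python
# Minimal sketch of unsupervised 3-state staging from per-epoch features.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
# Hypothetical per-epoch features: [log EEG delta power, log EMG power].
wake = rng.normal([1.0, 2.0], 0.3, size=(400, 2))   # high muscle tone
sws  = rng.normal([3.0, 0.5], 0.3, size=(400, 2))   # high delta power
ps   = rng.normal([1.2, 0.1], 0.3, size=(200, 2))   # atonia, wake-like EEG
epochs = np.vstack([wake, sws, ps])

gmm = GaussianMixture(n_components=3, random_state=0).fit(epochs)
states = gmm.predict(epochs)
print(np.bincount(states))   # epochs assigned to each inferred state
```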

  14. From scores to face templates: a model-based approach.

    Science.gov (United States)

    Mohanty, Pranab; Sarkar, Sudeep; Kasturi, Rangachar

    2007-12-01

    Regeneration of templates from match scores has security and privacy implications for any biometric authentication system. We propose a novel paradigm to reconstruct face templates from match scores using a linear approach. It proceeds by first modeling the behavior of the given face recognition algorithm by an affine transformation. The goal of the modeling is to approximate the distances computed by a face recognition algorithm between two faces by distances between points, representing these faces, in an affine space. Given this space, templates from an independent image set (break-in) are matched only once with the enrolled template of the targeted subject and match scores are recorded. These scores are then used to embed the targeted subject in the approximating affine (non-orthogonal) space. Given the coordinates of the targeted subject in the affine space, the original template of the targeted subject is reconstructed using the inverse of the affine transformation. We demonstrate our ideas using three fundamentally different face recognition algorithms: Principal Component Analysis (PCA) with the Mahalanobis cosine distance measure, the Bayesian intra-extrapersonal classifier (BIC), and a feature-based commercial algorithm. To demonstrate the independence of the break-in set from the gallery set, we select face templates from two different databases: the Face Recognition Grand Challenge (FRGC) and the Facial Recognition Technology (FERET) databases. With an operational point set at 1 percent False Acceptance Rate (FAR) and 99 percent True Acceptance Rate (TAR) for 1,196 enrollments (FERET gallery), we show that at most 600 attempts (score computations) are required to achieve a 73 percent chance of breaking in as a randomly chosen target subject for the commercial face recognition system. With a similar operational set up, we achieve a 72 percent and 100 percent chance of breaking in for the Bayesian and PCA based face recognition systems, respectively. With …
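
    The embedding step rests on a classical result: point coordinates can be recovered (up to rotation) from pairwise distances by double-centering the squared-distance matrix and taking its top eigenvectors, i.e., classical multidimensional scaling. A self-contained sketch with hypothetical templates, not the paper's affine modeling pipeline:

```python
# Minimal classical-MDS sketch: recover coordinates from pairwise distances.
import numpy as np

rng = np.random.default_rng(7)
T = rng.normal(size=(20, 4))                          # hypothetical face templates
D2 = ((T[:, None, :] - T[None, :, :]) ** 2).sum(-1)   # squared distance matrix

n = len(T)
J = np.eye(n) - np.ones((n, n)) / n                   # centering matrix
B = -0.5 * J @ D2 @ J                                 # double-centered Gram matrix
vals, vecs = np.linalg.eigh(B)                        # eigenvalues in ascending order
coords = vecs[:, -4:] * np.sqrt(np.maximum(vals[-4:], 0))

# The recovered coordinates reproduce the original distances exactly.
D2_rec = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
print(np.allclose(D2, D2_rec))   # -> True
```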

  15. A Chemical Risk Ranking and Scoring Method for the Selection of Harmful Substances to be Specially Controlled in Occupational Environments

    Science.gov (United States)

    Shin, Saemi; Moon, Hyung-Il; Lee, Kwon Seob; Hong, Mun Ki; Byeon, Sang-Hoon

    2014-01-01

    This study aimed to devise a method for prioritizing hazardous chemicals for further regulatory action. To accomplish this objective, we chose appropriate indicators and algorithms. Nine indicators from the Globally Harmonized System of Classification and Labeling of Chemicals were used to identify categories to which the authors assigned numerical scores. Exposure indicators included handling volume, distribution, and exposure level. To test the method devised by this study, sixty-two harmful substances controlled by the Occupational Safety and Health Act in Korea, including acrylamide, acrylonitrile, and styrene, were ranked using the proposed method. The correlation coefficients between total score and each indicator ranged from 0.160 to 0.641, and those between total score and hazard indicators ranged from 0.603 to 0.641. The latter were higher than the correlation coefficients between total score and exposure indicators, which ranged from 0.160 to 0.421. Correlations between individual indicators were low (−0.240 to 0.376), except for those between handling volume and distribution (0.613). The low correlations between indicators mean that the indicators are independent and were well chosen for prioritizing harmful chemicals. The method proposed by this study can improve the cost efficiency of chemical management as utilized in occupational regulatory systems. PMID:25419874

  16. A Chemical Risk Ranking and Scoring Method for the Selection of Harmful Substances to be Specially Controlled in Occupational Environments

    Directory of Open Access Journals (Sweden)

    Saemi Shin

    2014-11-01

    Full Text Available This study aimed to devise a method for prioritizing hazardous chemicals for further regulatory action. To accomplish this objective, we chose appropriate indicators and algorithms. Nine indicators from the Globally Harmonized System of Classification and Labeling of Chemicals were used to identify categories to which the authors assigned numerical scores. Exposure indicators included handling volume, distribution, and exposure level. To test the method devised by this study, sixty-two harmful substances controlled by the Occupational Safety and Health Act in Korea, including acrylamide, acrylonitrile, and styrene were ranked using this proposed method. The correlation coefficients between total score and each indicator ranged from 0.160 to 0.641, and those between total score and hazard indicators ranged from 0.603 to 0.641. The latter were higher than the correlation coefficients between total score and exposure indicators, which ranged from 0.160 to 0.421. Correlations between individual indicators were low (−0.240 to 0.376), except for those between handling volume and distribution (0.613), suggesting that each indicator was not strongly correlated. The low correlations between each indicator mean that the indicators are independent and were well chosen for prioritizing harmful chemicals. The method proposed by this study can improve the cost efficiency of chemical management as utilized in occupational regulatory systems.

  17. Intra- and inter-rater reliability of the Knee Society Knee Score when used by two physiotherapists in patients post total knee arthroplasty

    Directory of Open Access Journals (Sweden)

    S. Gopal

    2010-01-01

    Full Text Available Background and Purpose: It has yet to be shown whether routine physiotherapy plays a role in the rehabilitation of patients post total knee arthroplasty (Rajan et al 2004). Physiotherapists should be using valid outcome measures to provide evidence of the benefit of their intervention. The aim of this study was to establish the intra- and inter-rater reliability of the Knee Society Knee Score, a scoring system developed by Insall et al (1989). The Knee Society Knee Score can be used to assess the integrity of the knee joint of patients undergoing total knee arthroplasty. Since the score involves clinical testing, the intra-rater reliability of the clinician should be established prior to using the scores as data in clinical research. Where multiple clinicians are involved, inter-rater reliability should also be established. Design: This was a correlation study. Subjects: A sample of thirty patients post total knee arthroplasty attending the arthroplasty clinic at Johannesburg Hospital between six weeks and twelve months postoperatively. Method: Recruited patients were evaluated twice with a time interval of one hour between each assessment. Statistical Analysis: The intra- and inter-rater reliability were estimated using the Intraclass Correlation Coefficient (ICC). Results: The intra-rater reliability was excellent (ICC = 0.95) for Examiner A and good (ICC = 0.71) for Examiner B. The inter-rater reliability was moderate (ICC = 0.67 during test one and ICC = 0.66 during test two). Conclusion: The KSKS has good intra-rater reliability when tested within a period of one hour. The KSKS demonstrated moderate agreement for inter-rater reliability.

  18. Increased discordance between HeartScore and coronary artery calcification score after introduction of the new ESC prevention guidelines

    DEFF Research Database (Denmark)

    Diederichsen, Axel C P; Mahabadi, Amir-Abbas; Gerke, Oke

    2015-01-01

    Non-contrast Cardiac-CT scan was performed to detect coronary artery calcification (CAC). RESULTS: Agreement of HeartScore risk groups with CAC groups was poor, but higher when applying the algorithm for the low-risk compared to the high-risk country model (agreement rate: 77% versus 63%, and weighted Kappa: 0...

  19. Combination of scoring schemes for protein docking

    Directory of Open Access Journals (Sweden)

    Schomburg Dietmar

    2007-08-01

    Full Text Available Abstract Background Docking algorithms are developed to predict in which orientation two proteins are likely to bind under natural conditions. The currently used methods usually consist of a sampling step followed by a scoring step. We developed a weighted geometric correlation based on optimised atom specific weighting factors and combined them with our previously published amino acid specific scoring and with a comprehensive SVM-based scoring function. Results The scoring with the atom specific weighting factors yields better results than the amino acid specific scoring. In combination with SVM-based scoring functions the percentage of complexes for which a near native structure can be predicted within the top 100 ranks increased from 14% with the geometric scoring to 54% with the combination of all scoring functions. Especially for the enzyme-inhibitor complexes the results of the ranking are excellent. For half of these complexes a near-native structure can be predicted within the first 10 proposed structures and for more than 86% of all enzyme-inhibitor complexes within the first 50 predicted structures. Conclusion We were able to develop a combination of different scoring schemes which considers a series of previously described and some new scoring criteria yielding a remarkable improvement of prediction quality.

  20. Improving iris recognition performance using segmentation, quality enhancement, match score fusion, and indexing.

    Science.gov (United States)

    Vatsa, Mayank; Singh, Richa; Noore, Afzel

    2008-08-01

    This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford-Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.

  1. Key performance indicators score (KPIs-score) based on clinical and laboratorial parameters can establish benchmarks for internal quality control in an ART program.

    Science.gov (United States)

    Franco, José G; Petersen, Claudia G; Mauri, Ana L; Vagnini, Laura D; Renzi, Adriana; Petersen, Bruna; Mattila, M C; Comar, Vanessa A; Ricci, Juliana; Dieamant, Felipe; Oliveira, João Batista A; Baruffi, Ricardo L R

    2017-06-01

    KPIs have been employed for internal quality control (IQC) in ART. However, clinical KPIs (C-KPIs) such as age, AMH and number of oocytes collected are never added to laboratory KPIs (L-KPIs), such as fertilization rate and morphological quality of the embryos for analysis, even though the final endpoint is the evaluation of clinical pregnancy rates. This paper analyzed if a KPIs-score strategy with clinical and laboratorial parameters could be used to establish benchmarks for IQC in ART cycles. In this prospective cohort study, 280 patients (36.4±4.3 years) underwent ART. The total KPIs-score was obtained by the analysis of age, AMH (AMH Gen II ELISA/pre-mixing modified, Beckman Coulter Inc.), number of metaphase-II oocytes, fertilization rates and morphological quality of the embryonic lot. The total KPIs-score (C-KPIs+L-KPIs) was correlated with the presence or absence of clinical pregnancy. The relationship between the C-KPIs and L-KPIs scores was analyzed to establish quality standards, to increase the performance of clinical and laboratorial processes in ART. The logistic regression model (LRM), with respect to pregnancy and total KPIs-score (280 patients/102 clinical pregnancies), yielded an odds ratio of 1.24 (95%CI = 1.16-1.32). There was also a significant difference in total KPIs-score between the group with clinical pregnancies (20.4±3.7) and the group without clinical pregnancies (15.9±5). Clinical pregnancy probabilities (CPP) can be obtained using the LRM (prediction key) with the total KPIs-score as a predictor variable. The mean C-KPIs and L-KPIs scores obtained in the pregnancy group were 11.9±2.9 and 8.5±1.7, respectively. Routinely, in all cases where the C-KPIs score was ≥9 but the L-KPIs score obtained after the procedure was ≤6, a revision of the laboratory procedure was performed to assess quality standards. This total KPIs-score could set up benchmarks for clinical pregnancy. Moreover, IQC can use C-KPIs and L-KPIs scores to detect problems

  2. Achilles tendon Total Rupture Score at 3 months can predict patients' ability to return to sport 1 year after injury

    DEFF Research Database (Denmark)

    Hansen, Maria Swennergren; Christensen, Marianne; Budolfsen, Thomas

    2016-01-01

    PURPOSE: To investigate how the Achilles tendon Total Rupture Score (ATRS) at 3 months and 1 year after injury is associated with a patient's ability to return to work and sports as well as to investigate whether sex and age influence ATRS after 3 months and 1 year. METHOD: This is a retrospectiv...

  3. A Greedy Algorithm for Neighborhood Overlap-Based Community Detection

    Directory of Open Access Journals (Sweden)

    Natarajan Meghanathan

    2016-01-01

    Full Text Available The neighborhood overlap (NOVER) of an edge u-v is defined as the ratio of the number of nodes who are neighbors for both u and v to that of the number of nodes who are neighbors of at least u or v. In this paper, we hypothesize that an edge u-v with a lower NOVER score bridges two or more sets of vertices, with very few edges (other than u-v) connecting vertices from one set to another set. Accordingly, we propose a greedy algorithm of iteratively removing the edges of a network in the increasing order of their neighborhood overlap and calculating the modularity score of the resulting network component(s) after the removal of each edge. The network component(s) that have the largest cumulative modularity score are identified as the different communities of the network. We evaluate the performance of the proposed NOVER-based community detection algorithm on nine real-world network graphs and compare the performance against the multi-level aggregation-based Louvain algorithm, as well as the original and time-efficient versions of the edge betweenness-based Girvan-Newman (GN) community detection algorithm.
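
    The greedy procedure described above is straightforward to prototype. Below is a minimal sketch under stated assumptions (it uses the networkx library and its karate-club example graph; the helper names are ours, not the paper's): NOVER is scored on the original graph, edges are removed in increasing NOVER order, and the component split with the highest modularity is kept.

```python
import networkx as nx
from networkx.algorithms.community import modularity

def nover(G, u, v):
    # Shared neighbors of u and v divided by nodes neighboring at least one
    # of them, excluding u and v themselves.
    nu, nv = set(G[u]) - {v}, set(G[v]) - {u}
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

def nover_communities(G):
    # Remove edges in increasing order of NOVER (scored on the original
    # graph); after each removal, score the resulting component split by
    # modularity and keep the best split seen.
    H = G.copy()
    best_q, best_parts = float("-inf"), [set(G.nodes)]
    for u, v in sorted(G.edges, key=lambda e: nover(G, *e)):
        H.remove_edge(u, v)
        parts = [set(c) for c in nx.connected_components(H)]
        q = modularity(G, parts)
        if q > best_q:
            best_q, best_parts = q, parts
    return best_parts

print(nover_communities(nx.karate_club_graph()))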

  4. Screening applicants for risk of poor academic performance: a novel scoring system using preadmission grade point averages and graduate record examination scores.

    Science.gov (United States)

    Luce, David

    2011-01-01

    The purpose of this study was to develop an effective screening tool for identifying physician assistant (PA) program applicants at highest risk for poor academic performance. Prior to reviewing applications for the class of 2009, a retrospective analysis of preadmission data took place for the classes of 2006, 2007, and 2008. A single composite score was calculated for each student who matriculated (number of subjects, N=228) incorporating the total undergraduate grade point average (UGPA), the science GPA (SGPA), and the three component Graduate Record Examination (GRE) scores: verbal (GRE-V), quantitative (GRE-Q), analytical (GRE-A). Individual applicant scores for each of the five parameters were ranked in descending quintiles. Each applicant's five quintile scores were then added, yielding a total quintile score ranging from 25, which indicated an excellent performance, to 5, which indicated poorer performance. Thirteen of the 228 students had academic difficulty (dismissal, suspension, or one-quarter on academic warning or probation). Twelve of the 13 students having academic difficulty had a preadmission total quintile score of 12 or lower (range, 6-14). In response to this descriptive analysis, when selecting applicants for the class of 2009, the admissions committee used the total quintile score for screening applicants for interviews. Analysis of correlations in preadmission, graduate, and postgraduate performance data for the classes of 2009-2013 will continue and may help identify those applicants at risk for academic difficulty. Establishing a threshold total quintile score of applicant GPA and GRE scores may significantly decrease the number of entering PA students at risk for poor academic performance.
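
    The composite score itself is easy to reproduce. The sketch below (pandas-based; the column names and synthetic data are purely illustrative) ranks each of the five admission metrics into quintiles scored 1-5 and sums them into the 5-25 total quintile score described above.

```python
import numpy as np
import pandas as pd

# Hypothetical applicant data; column names and values are illustrative.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "ugpa": rng.uniform(2.5, 4.0, 200),
    "sgpa": rng.uniform(2.3, 4.0, 200),
    "gre_v": rng.integers(140, 171, 200),
    "gre_q": rng.integers(140, 171, 200),
    "gre_a": rng.uniform(2.0, 6.0, 200),
})

def quintile(series):
    # 5 = top quintile, 1 = bottom quintile; rank first to break ties.
    return pd.qcut(series.rank(method="first"), 5, labels=[1, 2, 3, 4, 5]).astype(int)

# Sum the five quintile ranks: 5 (weakest everywhere) to 25 (strongest everywhere).
df["total_quintile_score"] = sum(quintile(df[c]) for c in df.columns)
print(df["total_quintile_score"].describe())
```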

  5. Strong Vertex-distinguishing Total Coloring Algorithm of Complete Graph

    Institute of Scientific and Technical Information of China (English)

    赵焕平; 刘平; 李敬文

    2012-01-01

    According to the definition of strong vertex-distinguishing total coloring, and exploiting the symmetry of the complete graph, this paper proposes a new strong vertex-distinguishing total coloring algorithm. The algorithm divides the colors to be filled into two parts: overcolors and proper colors. Once the number of colors and the number of coloring operations have been obtained, it colors with the overcolors first to strengthen the algorithm's convergence. Experimental results show that this algorithm has a lower time complexity.

  6. Normed kernel function-based fuzzy possibilistic C-means (NKFPCM) algorithm for high-dimensional breast cancer database classification with feature selection is based on Laplacian Score

    Science.gov (United States)

    Lestari, A. W.; Rustam, Z.

    2017-07-01

    In the last decade, breast cancer has become the focus of world attention as this disease is one of the leading causes of death for women. Therefore, it is necessary to have the correct precautions and treatment. In previous studies, the Fuzzy Kernel K-Medoid algorithm has been used for multi-class data. This paper proposes an algorithm to classify the high dimensional data of breast cancer using Fuzzy Possibilistic C-means (FPCM) and a new method based on clustering analysis using Normed Kernel Function-Based Fuzzy Possibilistic C-Means (NKFPCM). The objective of this paper is to obtain the best accuracy in classification of breast cancer data. In order to improve the accuracy of the two methods, the candidate features are evaluated using feature selection, where the Laplacian Score is used. The results show the comparison of accuracy and running time of FPCM and NKFPCM with and without feature selection.

  7. Near infrared system coupled chemometric algorithms for enumeration of total fungi count in cocoa beans neat solution.

    Science.gov (United States)

    Kutsanedzie, Felix Y H; Chen, Quansheng; Hassan, Md Mehedi; Yang, Mingxiu; Sun, Hao; Rahman, Md Hafizur

    2018-02-01

    Total fungi count (TFC) is a quality indicator of cocoa beans that, when unmonitored, leads to quality and safety problems. Fourier transform near infrared spectroscopy (FT-NIRS) combined with chemometric algorithms like partial least squares (PLS); synergy interval-PLS (Si-PLS); synergy interval-genetic algorithm-PLS (Si-GAPLS); ant colony optimization-PLS (ACO-PLS) and competitive-adaptive reweighted sampling-PLS (CARS-PLS) was employed to predict TFC in cocoa beans neat solution. Model results were evaluated using the correlation coefficients of the prediction (Rp) and calibration (Rc); root mean square error of prediction (RMSEP), and the ratio of sample standard deviation to RMSEP (RPD). The developed models' performance yielded 0.951≤Rp≤0.975 and 3.15≤RPD≤4.32. The models' prediction stability improved in the order of PLS
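
    For readers unfamiliar with the metrics, the sketch below shows how a plain PLS calibration and the reported figures of merit (Rp, RMSEP, RPD) can be computed with scikit-learn. It uses synthetic stand-in spectra and omits the interval and variable-selection variants (Si-PLS, CARS-PLS, etc.) that the study compares.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for FT-NIR spectra (X) and a fungal-count response (y).
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 700))          # 120 samples x 700 wavelengths
y = 0.8 * X[:, 100] + 0.5 * X[:, 420] + rng.normal(scale=0.2, size=120)

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=1)
pls = PLSRegression(n_components=8).fit(X_cal, y_cal)

y_pred = pls.predict(X_val).ravel()
rp = np.corrcoef(y_val, y_pred)[0, 1]            # correlation of prediction
rmsep = np.sqrt(np.mean((y_val - y_pred) ** 2))  # root mean square error of prediction
rpd = np.std(y_val, ddof=1) / rmsep              # ratio of sample SD to RMSEP
print(f"Rp={rp:.3f}  RMSEP={rmsep:.3f}  RPD={rpd:.2f}")
```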

  8. Diagnosis of bacterial vaginosis in a rural setup: Comparison of clinical algorithm, smear scoring and culture by semiquantitative technique

    OpenAIRE

    Rao P; Devi S; Shriyan A; Rajaram M; Jagdishchandra K

    2004-01-01

    This study was undertaken to estimate the prevalence of bacterial vaginosis (BV) and other sexually transmitted infections (STIs) in a rural set up and compare the smear scoring system to that of culture by semiquantitative technique. A total of 505 married women, who were in the sexually active age group of 15-44 years, were selected from three different villages. High vaginal swabs, endocervical swabs, vaginal discharge and blood were collected and processed in the laboratory. Overall prevalenc...

  9. The ERICE-score: the new native cardiovascular score for the low-risk and aged Mediterranean population of Spain.

    Science.gov (United States)

    Gabriel, Rafael; Brotons, Carlos; Tormo, M José; Segura, Antonio; Rigo, Fernando; Elosua, Roberto; Carbayo, Julio A; Gavrila, Diana; Moral, Irene; Tuomilehto, Jaakko; Muñiz, Javier

    2015-03-01

    In Spain, data based on large population-based cohorts adequate to provide an accurate prediction of cardiovascular risk have been scarce. Thus, calibration of the EuroSCORE and Framingham scores has been proposed and done for our population. The aim was to develop a native risk prediction score to accurately estimate the individual cardiovascular risk in the Spanish population. Seven Spanish population-based cohorts including middle-aged and elderly participants were assembled. There were 11800 people (6387 women) representing 107915 person-years of follow-up. A total of 1214 cardiovascular events were identified, of which 633 were fatal. Cox regression analyses were conducted to examine the contributions of the different variables to the 10-year total cardiovascular risk. Age was the strongest cardiovascular risk factor. High systolic blood pressure, diabetes mellitus and smoking were strong predictive factors. The contribution of serum total cholesterol was small. Antihypertensive treatment also had a significant impact on cardiovascular risk, greater in men than in women. The model showed a good discriminative power (C-statistic=0.789 in men and C=0.816 in women). Ten-year risk estimations are displayed graphically in risk charts separately for men and women. The ERICE is a new native cardiovascular risk score for the Spanish population derived from the background and contemporaneous risk of several Spanish cohorts. The ERICE score offers the direct and reliable estimation of total cardiovascular risk, taking in consideration the effect of diabetes mellitus and cardiovascular risk factor management. The ERICE score is a practical and useful tool for clinicians to estimate the total individual cardiovascular risk in Spain. Copyright © 2014 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
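
    A risk score of this kind is typically derived by fitting a Cox proportional hazards model to pooled cohort data and converting the fitted survival function into a 10-year risk. The sketch below is a minimal illustration of that workflow, assuming the lifelines library and entirely synthetic data; it is not the ERICE model itself, and the covariate names are illustrative.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic stand-in for a pooled cohort; columns are illustrative.
rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "age": rng.uniform(35, 80, n),
    "sbp": rng.normal(135, 18, n),
    "diabetes": rng.integers(0, 2, n),
    "smoker": rng.integers(0, 2, n),
    "total_chol": rng.normal(5.5, 1.0, n),
})
# Fake follow-up times: hazard loosely driven by age, SBP, diabetes, smoking.
risk = 0.07 * (df.age - 55) + 0.02 * (df.sbp - 135) + 0.6 * df.diabetes + 0.5 * df.smoker
t = rng.exponential(np.exp(-risk / 2) * 15)
df["time"] = np.minimum(t, 10.0)            # administrative censoring at 10 years
df["event"] = (t <= 10.0).astype(int)

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()
# Ten-year risk for one individual = 1 - S(10 | covariates).
ten_year_risk = 1 - cph.predict_survival_function(df.iloc[[0]], times=[10.0]).iloc[0, 0]
print(f"10-year risk estimate: {ten_year_risk:.1%}")
```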

  10. Parallel Evolutionary Optimization Algorithms for Peptide-Protein Docking

    Science.gov (United States)

    Poluyan, Sergey; Ershov, Nikolay

    2018-02-01

    In this study we examine the possibility of using evolutionary optimization algorithms in protein-peptide docking. We present the main assumptions that reduce the docking problem to a continuous global optimization problem and provide a way of using evolutionary optimization algorithms. The Rosetta all-atom force field was used for structural representation and energy scoring. We describe the parallelization scheme and the MPI/OpenMP realization of the considered algorithms. We demonstrate the efficiency and performance of several algorithms applied to a set of benchmark tests.

  11. Quasi-supervised scoring of human sleep in polysomnograms using augmented input variables.

    Science.gov (United States)

    Yaghouby, Farid; Sunderam, Sridhar

    2015-04-01

    The limitations of manual sleep scoring make computerized methods highly desirable. Scoring errors can arise from human rater uncertainty or inter-rater variability. Sleep scoring algorithms either come as supervised classifiers that need scored samples of each state to be trained, or as unsupervised classifiers that use heuristics or structural clues in unscored data to define states. We propose a quasi-supervised classifier that models observations in an unsupervised manner but mimics a human rater wherever training scores are available. EEG, EMG, and EOG features were extracted in 30s epochs from human-scored polysomnograms recorded from 42 healthy human subjects (18-79 years) and archived in an anonymized, publicly accessible database. Hypnograms were modified so that: 1. Some states are scored but not others; 2. Samples of all states are scored but not for transitional epochs; and 3. Two raters with 67% agreement are simulated. A framework for quasi-supervised classification was devised in which unsupervised statistical models (specifically Gaussian mixtures and hidden Markov models) are estimated from unlabeled training data, but the training samples are augmented with variables whose values depend on available scores. Classifiers were fitted to signal features incorporating partial scores, and used to predict scores for complete recordings. Performance was assessed using Cohen's Κ statistic. The quasi-supervised classifier performed significantly better than an unsupervised model and sometimes as well as a completely supervised model despite receiving only partial scores. The quasi-supervised algorithm addresses the need for classifiers that mimic scoring patterns of human raters while compensating for their limitations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Preliminary application of a novel algorithm to monitor changes in pre-flight total peripheral resistance for prediction of post-flight orthostatic intolerance in astronauts

    Science.gov (United States)

    Arai, Tatsuya; Lee, Kichang; Stenger, Michael B.; Platts, Steven H.; Meck, Janice V.; Cohen, Richard J.

    2011-04-01

    Orthostatic intolerance (OI) is a significant challenge for astronauts after long-duration spaceflight. Depending on flight duration, 20-80% of astronauts suffer from post-flight OI, which is associated with reduced vascular resistance. This paper introduces a novel algorithm for continuously monitoring changes in total peripheral resistance (TPR) by processing the peripheral arterial blood pressure (ABP). To validate, we applied our novel mathematical algorithm to the pre-flight ABP data previously recorded from twelve astronauts ten days before launch. The TPR changes were calculated by our algorithm and compared with the TPR value estimated using cardiac output/heart rate before and after phenylephrine administration. The astronauts in the post-flight presyncopal group had lower pre-flight TPR changes (1.66 times) than those in the non-presyncopal group (2.15 times). The trend in TPR changes calculated with our algorithm agreed with the TPR trend calculated using measured cardiac output in the previous study. Further data collection and algorithm refinement are needed for pre-flight detection of OI and monitoring of continuous TPR by analysis of peripheral arterial blood pressure.

  13. Indications for MARS-MRI in Patients Treated With Articular Surface Replacement XL Total Hip Arthroplasty.

    Science.gov (United States)

    Connelly, James W; Galea, Vincent P; Laaksonen, Inari; Matuszak, Sean J; Madanat, Rami; Muratoglu, Orhun; Malchau, Henrik

    2018-04-19

    The purpose of this study was to identify which patient and clinical factors are predictive of adverse local tissue reaction (ALTR) and to use these factors to create a highly sensitive algorithm for indicating metal artifact reduction sequence magnetic resonance imaging (MARS-MRI) in Articular Surface Replacement (ASR) XL total hip arthroplasty patients. Our secondary aim was to compare our algorithm to existing national guidelines on when to take MARS-MRI in metal-on-metal total hip arthroplasty patients. The study consisted of 137 patients treated with unilateral ASR XL implants from a prospective, multicenter study. Patients underwent MARS-MRI regardless of clinical presentation at a mean of 6.2 (range, 3.3-10.4) years from surgery. Univariate and multivariate analyses were conducted to determine which variables were predictive of ALTR. Predictors were used to create an algorithm to indicate MARS-MRI. Finally, we compared our algorithm's ability to detect ALTR to existing guidelines. We found a visual analog scale pain score ≥2 (odds ratio [OR] = 2.53; P = .023), high blood cobalt (OR = 1.05; P = .023), and male gender (OR = 2.37; P = .034) to be significant predictors of ALTR presence in our cohort. The resultant algorithm achieved 86.4% sensitivity and 60.2% specificity in detecting ALTR within our cohort. Our algorithm had the highest area under the curve and was the only guideline that was significantly predictive of ALTR (P = .014). Our algorithm including patient-reported pain and sex-specific cutoffs for blood cobalt levels could predict ALTR and indicate MARS-MRI in our cohort of ASR XL metal-on-metal patients with high sensitivity. Level II, diagnostic study. Copyright © 2018 Elsevier Inc. All rights reserved.

  14. Face Recognition Performance Improvement using a Similarity Score of Feature Vectors based on Probabilistic Histograms

    Directory of Open Access Journals (Sweden)

    SRIKOTE, G.

    2016-08-01

    Full Text Available This paper proposes an improved performance algorithm of face recognition to identify two face mismatch pairs in cases of incorrect decisions. The primary feature of this method is to deploy the similarity score with respect to Gaussian components between two previously unseen faces. Unlike the conventional classical vector distance measurement, our algorithms also consider the plot of summation of the similarity index versus face feature vector distance. A mixture of Gaussian models of labeled faces is also widely applicable to different biometric system parameters. By comparative evaluations, it has been shown that the efficiency of the proposed algorithm is superior to that of the conventional algorithm by an average accuracy of up to 1.15% and 16.87% when compared with 3x3 Multi-Region Histogram (MRH) direct-bag-of-features and Principal Component Analysis (PCA)-based face recognition systems, respectively. The experimental results show that similarity score consideration is more discriminative for face recognition compared to feature distance. Experimental results on the Labeled Faces in the Wild (LFW) data set demonstrate that our algorithms are suitable for real-application probe-to-gallery identification in face recognition systems. Moreover, this proposed method can also be applied to other recognition systems and therefore additionally improves recognition scores.

  15. Prospective validation of a near real-time EHR-integrated automated SOFA score calculator.

    Science.gov (United States)

    Aakre, Christopher; Franco, Pablo Moreno; Ferreyra, Micaela; Kitson, Jaben; Li, Man; Herasevich, Vitaly

    2017-07-01

    We created an algorithm for automated Sequential Organ Failure Assessment (SOFA) score calculation within the Electronic Health Record (EHR) to facilitate detection of sepsis based on the Third International Consensus Definitions for Sepsis and Septic Shock (SEPSIS-3) clinical definition. We evaluated the accuracy of near real-time and daily automated SOFA score calculation compared with manual score calculation. Automated SOFA scoring computer programs were developed using available EHR data sources and integrated into a critical care focused patient care dashboard at Mayo Clinic in Rochester, Minnesota. We prospectively compared the accuracy of automated versus manual calculation for a sample of patients admitted to the medical intensive care unit at Mayo Clinic Hospitals in Rochester, Minnesota and Jacksonville, Florida. Agreement was calculated with Cohen's kappa statistic. Reason for discrepancy was tabulated during manual review. Random spot check comparisons were performed 134 times on 27 unique patients, and daily SOFA score comparisons were performed for 215 patients over a total of 1206 patient days. Agreement between automatically scored and manually scored SOFA components for both random spot checks (696 pairs, κ=0.89) and daily calculation (5972 pairs, κ=0.89) was high. The most common discrepancies were in the respiratory component (inaccurate fraction of inspired oxygen retrieval; 200/1206) and creatinine (normal creatinine in patients with no urine output on dialysis; 128/1094). 147 patients were at risk of developing sepsis after intensive care unit admission, 10 later developed sepsis confirmed by chart review. All were identified before onset of sepsis with the ΔSOFA≥2 point criterion and 46 patients were false-positives. Near real-time automated SOFA scoring was found to have strong agreement with manual score calculation and may be useful for the detection of sepsis utilizing the new SEPSIS-3 definition. Copyright © 2017 Elsevier B.V. All rights reserved.
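
    Agreement in such comparisons is commonly quantified with Cohen's kappa, as in this study. A minimal sketch follows (scikit-learn, with simulated paired component scores; the discrepancy rate is illustrative, not the study's data):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Illustrative paired scores: an automated and a manual SOFA component score
# (each 0-4) per patient-day, with roughly 10% single-point discrepancies.
rng = np.random.default_rng(3)
manual = rng.integers(0, 5, 1000)
automated = np.clip(manual + (rng.random(1000) < 0.10), 0, 4)

print(f"Cohen's kappa = {cohen_kappa_score(manual, automated):.2f}")
```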

  16. Validation of the total dysphagia risk score (TDRS) in head and neck cancer patients in a conventional and a partially accelerated radiotherapy scheme

    NARCIS (Netherlands)

    Nevens, Daan; Deschuymer, Sarah; Langendijk, Johannes A.; Daisne, Jean -Francois; Duprez, Frederic; De Neve, Wilfried; Nuyts, Sandra

    Background and purpose: A risk model, the total dysphagia risk score (TDRS), was developed to predict which patients are most at risk to develop grade >= 2 dysphagia at 6 months following radiotherapy (RT) for head and neck cancer. The purpose of this study was to validate this model at 6 months and

  17. Algorithmic approach to patients presenting with heartburn and epigastric pain refractory to empiric proton pump inhibitor therapy.

    Science.gov (United States)

    Roorda, Andrew K; Marcus, Samuel N; Triadafilopoulos, George

    2011-10-01

    Reflux-like dyspepsia (RLD), where predominant epigastric pain is associated with heartburn and/or regurgitation, is a common clinical syndrome in both primary and specialty care. Because symptom frequency and severity vary, overlap among gastroesophageal reflux disease (GERD), non-erosive reflux disease (NERD), and RLD, is quite common. The chronic and recurrent nature of RLD and its variable response to proton pump inhibitor (PPI) therapy remain problematic. To examine the prevalence of GERD, NERD, and RLD in a community setting using an algorithmic approach and to assess the potential, reproducibility, and validity of a multi-factorial scoring system in discriminating patients with RLD from those with GERD or NERD. Using a novel algorithmic approach, we evaluated an outpatient, community-based cohort referred to a gastroenterologist because of epigastric pain and heartburn that were only partially relieved by PPI. After an initial symptom evaluation (for epigastric pain, heartburn, regurgitation, dysphagia), an endoscopy and distal esophageal biopsies were performed, followed by esophageal motility and 24-h ambulatory pH monitoring to assess esophageal function and pathological acid exposure. A scoring system based on presence of symptoms and severity of findings was devised. Data was collected in two stages: subjects in the first stage were designated as the derivation cohort; subjects in the second stage were labeled the validation cohort. The total cohort comprised 159 patients (59 males, 100 females; mean age 52). On endoscopy, 30 patients (19%) had complicated esophagitis (CE) and 11 (7%) had Barrett's esophagus (BE) and were classified collectively as patients with GERD. One-hundred and eighteen (74%) patients had normal esophagus. Of these, 94 (59%) had one or more of the following: hiatal hernia, positive biopsy, abnormal pH, and/or abnormal motility studies and were classified as patients with NERD. The remaining 24 patients (15%) had normal functional

  18. Sparse Learning of the Disease Severity Score for High-Dimensional Data

    Directory of Open Access Journals (Sweden)

    Ivan Stojkovic

    2017-01-01

    Full Text Available Learning disease severity scores automatically from collected measurements may aid in the quality of both healthcare and scientific understanding. Some steps in that direction have been taken and machine learning algorithms for extracting scoring functions from data have been proposed. Given the rapid increase in both quantity and diversity of data measured and stored, the large amount of information is becoming one of the challenges for learning algorithms. In this work, we investigated the setting where the dimensionality of measured variables is large. Learning the severity score in such cases raises the issue of which measured features are relevant. We have proposed a novel approach by combining desirable properties of existing formulations, which compares favorably to alternatives in accuracy and especially in the robustness of the learned scoring function. The proposed formulation has a nonsmooth penalty that induces sparsity. This problem is solved by addressing a dual formulation which is smooth and allows an efficient optimization. The proposed approach might be used as an effective and reliable tool for both scoring function learning and biomarker discovery, as demonstrated by identifying a stable set of genes related to influenza symptoms' severity, which are enriched in immune-related processes.

  19. Application of a rule extraction algorithm family based on the Re-RX algorithm to financial credit risk assessment from a Pareto optimal perspective

    Directory of Open Access Journals (Sweden)

    Yoichi Hayashi

    2016-01-01

    Full Text Available Historically, the assessment of credit risk has proved to be both highly important and extremely difficult. Currently, financial institutions rely on the use of computer-generated credit scores for risk assessment. However, automated risk evaluations are currently imperfect, and the loss of vast amounts of capital could be prevented by improving the performance of computerized credit assessments. A number of approaches have been developed for the computation of credit scores over the last several decades, but these methods have been considered too complex without good interpretability and have therefore not been widely adopted. Therefore, in this study, we provide the first comprehensive comparison of results regarding the assessment of credit risk obtained using 10 runs of 10-fold cross validation of the Re-RX algorithm family, including the Re-RX algorithm, the Re-RX algorithm with both discrete and continuous attributes (Continuous Re-RX), the Re-RX algorithm with J48graft, the Re-RX algorithm with a trained neural network (Sampling Re-RX), NeuroLinear, NeuroLinear+GRG, and three unique rule extraction techniques involving support vector machines and Minerva from four real-life, two-class mixed credit-risk datasets. We also discuss the roles of various newly-extended types of the Re-RX algorithm and high performance classifiers from a Pareto optimal perspective. Our findings suggest that Continuous Re-RX, Re-RX with J48graft, and Sampling Re-RX comprise a powerful management tool that allows the creation of advanced, accurate, concise and interpretable decision support systems for credit risk evaluation. In addition, from a Pareto optimal perspective, the Re-RX algorithm family has superior features in relation to the comprehensibility of extracted rules and the potential for credit scoring with Big Data.

  20. Comparison of turnaround time and total cost of HIV testing before and after implementation of the 2014 CDC/APHL Laboratory Testing Algorithm for diagnosis of HIV infection.

    Science.gov (United States)

    Chen, Derrick J; Yao, Joseph D

    2017-06-01

    Updated recommendations for HIV diagnostic laboratory testing published by the Centers for Disease Control and Prevention and the Association of Public Health Laboratories incorporate 4th generation HIV immunoassays, which are capable of identifying HIV infection prior to seroconversion. The purpose of this study was to compare turnaround time and cost between 3rd and 4th generation HIV immunoassay-based testing algorithms for initially reactive results. The clinical microbiology laboratory database at Mayo Clinic, Rochester, MN was queried for 3rd generation (from November 2012 to May 2014) and 4th generation (from May 2014 to November 2015) HIV immunoassay results. All results from downstream supplemental testing were recorded. Turnaround time (defined as the time of initial sample receipt in the laboratory to the time the final supplemental test in the algorithm was resulted) and cost (based on 2016 Medicare reimbursement rates) were assessed. A total of 76,454 and 78,998 initial tests were performed during the study period using the 3rd generation and 4th generation HIV immunoassays, respectively. There were 516 (0.7%) and 581 (0.7%) total initially reactive results, respectively. Of these, 304 (58.9%) and 457 (78.7%) were positive by supplemental testing. There were 10 (0.01%) cases of acute HIV infection identified with the 4th generation algorithm. The most frequent tests performed to confirm an HIV-positive case using the 3rd generation algorithm, which were reactive initial immunoassay and positive HIV-1 Western blot, took a median time of 1.1 days to complete at a cost of $45.00. In contrast, the most frequent tests performed to confirm an HIV-positive case using the 4th generation algorithm, which included a reactive initial immunoassay and positive HIV-1/-2 antibody differentiation immunoassay for HIV-1, took a median time of 0.4 days and cost $63.25. Overall median turnaround time was 2.2 and 1.5 days, and overall median cost was $63.90 and $72.50 for

  1. An Objective Fluctuation Score for Parkinson's Disease

    Science.gov (United States)

    Horne, Malcolm K.; McGregor, Sarah; Bergquist, Filip

    2015-01-01

    Introduction Establishing the presence and severity of fluctuations is important in managing Parkinson's Disease, yet there is no reliable, objective means of doing this. In this study we have evaluated a Fluctuation Score derived from variations in dyskinesia and bradykinesia scores produced by an accelerometry based system. Methods The Fluctuation Score was produced by summing the interquartile range of bradykinesia scores and dyskinesia scores produced every 2 minutes between 0900-1800 for at least 6 days by the accelerometry based system and expressing it as an algorithm. Results This Score could distinguish between fluctuating and non-fluctuating patients with high sensitivity and selectivity and was significantly lower following activation of deep brain stimulators. The scores following deep brain stimulation lay in a band just above the score separating fluctuators from non-fluctuators, suggesting a range representing adequate motor control. When compared with control subjects the scores of newly diagnosed patients show a loss of fluctuation with onset of PD. The score was calculated in subjects whose duration of disease was known and this showed that newly diagnosed patients soon develop higher scores which either fall under or within the range representing adequate motor control or instead go on to develop more severe fluctuations. Conclusion The Fluctuation Score described here promises to be a useful tool for identifying patients whose fluctuations are progressing and may require therapeutic changes. It also shows promise as a useful research tool. Further studies are required to more accurately identify therapeutic targets and ranges. PMID:25928634
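
    The construction of the score, as described, reduces to summing two interquartile ranges over several days of 2-minute samples. A minimal numpy sketch follows (the simulated accelerometry scores and their distributions are illustrative, not clinical values):

```python
import numpy as np

def iqr(x):
    # Interquartile range: 75th minus 25th percentile.
    q75, q25 = np.percentile(x, [75, 25])
    return q75 - q25

def fluctuation_score(bradykinesia, dyskinesia):
    # Scores sampled every 2 minutes between 09:00 and 18:00 (270 samples/day),
    # pooled over at least 6 days; the score is the sum of the two IQRs.
    return iqr(bradykinesia) + iqr(dyskinesia)

rng = np.random.default_rng(4)
days, per_day = 6, 270
bk = rng.normal(25, 8, days * per_day)   # illustrative bradykinesia scores
dk = rng.normal(10, 5, days * per_day)   # illustrative dyskinesia scores
print(f"Fluctuation Score = {fluctuation_score(bk, dk):.1f}")
```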

  2. Standardized Total Average Toxicity Score: A Scale- and Grade-Independent Measure of Late Radiotherapy Toxicity to Facilitate Pooling of Data From Different Studies

    Energy Technology Data Exchange (ETDEWEB)

    Barnett, Gillian C., E-mail: gillbarnett@doctors.org.uk [University of Cambridge Department of Oncology, Oncology Centre, Cambridge (United Kingdom); Cancer Research-UK Centre for Genetic Epidemiology and Department of Oncology, Strangeways Research Laboratories, Cambridge (United Kingdom); West, Catharine M.L. [School of Cancer and Enabling Sciences, Manchester Academic Health Science Centre, University of Manchester, Christie Hospital, Manchester (United Kingdom); Coles, Charlotte E. [University of Cambridge Department of Oncology, Oncology Centre, Cambridge (United Kingdom); Pharoah, Paul D.P. [Cancer Research-UK Centre for Genetic Epidemiology and Department of Oncology, Strangeways Research Laboratories, Cambridge (United Kingdom); Talbot, Christopher J. [Department of Genetics, University of Leicester, Leicester (United Kingdom); Elliott, Rebecca M. [School of Cancer and Enabling Sciences, Manchester Academic Health Science Centre, University of Manchester, Christie Hospital, Manchester (United Kingdom); Tanteles, George A. [Department of Clinical Genetics, University Hospitals of Leicester, Leicester (United Kingdom); Symonds, R. Paul [Department of Cancer Studies and Molecular Medicine, University Hospitals of Leicester, Leicester (United Kingdom); Wilkinson, Jennifer S. [University of Cambridge Department of Oncology, Oncology Centre, Cambridge (United Kingdom); Dunning, Alison M. [Cancer Research-UK Centre for Genetic Epidemiology and Department of Oncology, Strangeways Research Laboratories, Cambridge (United Kingdom); Burnet, Neil G. [University of Cambridge Department of Oncology, Oncology Centre, Cambridge (United Kingdom); Bentzen, Soren M. [University of Wisconsin, School of Medicine and Public Health, Department of Human Oncology, Madison, WI (United States)

    2012-03-01

    Purpose: The search for clinical and biologic biomarkers associated with late radiotherapy toxicity is hindered by the use of multiple and different endpoints from a variety of scoring systems, hampering comparisons across studies and pooling of data. We propose a novel metric, the Standardized Total Average Toxicity (STAT) score, to try to overcome these difficulties. Methods and Materials: STAT scores were derived for 1010 patients from the Cambridge breast intensity-modulated radiotherapy trial and 493 women from University Hospitals of Leicester. The sensitivity of the STAT score to detect differences between patient groups, stratified by factors known to influence late toxicity, was compared with that of individual endpoints. Analysis of residuals was used to quantify the effect of these covariates. Results: In the Cambridge cohort, STAT scores detected differences (p < 0.00005) between patients attributable to breast volume, surgical specimen weight, dosimetry, acute toxicity, radiation boost to tumor bed, postoperative infection, and smoking (p < 0.0002), with no loss of sensitivity over individual toxicity endpoints. Diabetes (p = 0.017), poor postoperative surgical cosmesis (p = 0.0036), use of chemotherapy (p = 0.0054), and increasing age (p = 0.041) were also associated with increased STAT score. When the Cambridge and Leicester datasets were combined, STAT was associated with smoking status (p < 0.00005), diabetes (p = 0.041), chemotherapy (p = 0.0008), and radiotherapy boost (p = 0.0001). STAT was independent of the toxicity scale used and was able to deal with missing data. There were correlations between residuals of the STAT score obtained using different toxicity scales (r > 0.86, p < 0.00005 for both datasets). Conclusions: The STAT score may be used to facilitate the analysis of overall late radiation toxicity, from multiple trials or centers, in studies of possible genetic and nongenetic determinants of radiotherapy toxicity.
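
    Although the paper defines STAT precisely, the core idea (standardize each endpoint against the cohort so that scale and grade drop out, then average the available endpoints per patient) can be sketched in a few lines. The snippet below is one way to realize that idea, not the authors' exact formula; the endpoint names and values are invented.

```python
import numpy as np
import pandas as pd

def stat_like_score(toxicity):
    # toxicity: patients x endpoints, possibly with missing values (NaN).
    # Standardize each endpoint over the cohort, then average the available
    # z-scores per patient, yielding a scale- and grade-independent summary.
    z = (toxicity - toxicity.mean()) / toxicity.std(ddof=1)
    return z.mean(axis=1, skipna=True)

df = pd.DataFrame({
    "telangiectasia_grade": [0, 1, 2, np.nan, 3],
    "fibrosis_grade":       [1, 1, 2, 0, np.nan],
    "breast_shrinkage":     [0, 2, 2, 1, 3],
})
print(stat_like_score(df))
```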

  3. Automatic ECG quality scoring methodology: mimicking human annotators

    International Nuclear Information System (INIS)

    Johannesen, Lars; Galeotti, Loriano

    2012-01-01

    An algorithm to determine the quality of electrocardiograms (ECGs) can enable inexperienced nurses and paramedics to record ECGs of sufficient diagnostic quality. Previously, we proposed an algorithm for determining if ECG recordings are of acceptable quality, which was entered in the PhysioNet Challenge 2011. In the present work, we propose an improved two-step algorithm, which first rejects ECGs with macroscopic errors (signal absent, large voltage shifts or saturation) and subsequently quantifies the noise (baseline, powerline or muscular noise) on a continuous scale. The performance of the improved algorithm was evaluated using the PhysioNet Challenge database (1500 ECGs rated by humans for signal quality). We achieved a classification accuracy of 92.3% on the training set and 90.0% on the test set. The improved algorithm is capable of detecting ECGs with macroscopic errors and giving the user a score of the overall quality. This allows the user to assess the degree of noise and decide if it is acceptable depending on the purpose of the recording. (paper)

  4. Total variation-based neutron computed tomography

    Science.gov (United States)

    Barnard, Richard C.; Bilheux, Hassina; Toops, Todd; Nafziger, Eric; Finney, Charles; Splitter, Derek; Archibald, Rick

    2018-05-01

    We perform the neutron computed tomography reconstruction problem via an inverse problem formulation with a total variation penalty. In the case of highly under-resolved angular measurements, the total variation penalty suppresses high-frequency artifacts which appear in filtered back projections. In order to efficiently compute solutions for this problem, we implement a variation of the split Bregman algorithm; due to the error-forgetting nature of the algorithm, the computational cost of updating can be significantly reduced via very inexact approximate linear solvers. We present the effectiveness of the algorithm in the significantly low-angular sampling case using synthetic test problems as well as data obtained from a high flux neutron source. The algorithm removes artifacts and can even roughly capture small features when an extremely low number of angles are used.
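
    The effect of a total variation penalty in the low-angle regime can be illustrated compactly: scikit-image ships a split Bregman TV denoiser, which the sketch below applies to a filtered back projection computed from only 18 angles of a Shepp-Logan phantom. This is an illustration of the TV penalty's suppression of streak artifacts, not the authors' full reconstruction solver; the weight is an arbitrary illustrative choice.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize
from skimage.restoration import denoise_tv_bregman

# Severely under-sampled angles to mimic low-angular-sampling tomography.
image = resize(shepp_logan_phantom(), (128, 128))
theta = np.linspace(0.0, 180.0, 18, endpoint=False)   # only 18 projection angles
sinogram = radon(image, theta=theta)

fbp = iradon(sinogram, theta=theta)         # streak-artifact-laden back projection
tv = denoise_tv_bregman(fbp, weight=4.0)    # split Bregman TV step suppresses streaks

err_fbp = np.linalg.norm(fbp - image) / np.linalg.norm(image)
err_tv = np.linalg.norm(tv - image) / np.linalg.norm(image)
print(f"relative error: FBP={err_fbp:.3f}  FBP+TV={err_tv:.3f}")
```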

  5. A comparison and evaluation of five biclustering algorithms by quantifying goodness of biclusters for gene expression data

    Directory of Open Access Journals (Sweden)

    Li Li

    2012-07-01

    Full Text Available Abstract Background Several biclustering algorithms have been proposed to identify biclusters, in which genes share similar expression patterns across a number of conditions. However, different algorithms would yield different biclusters and further lead to distinct conclusions. Therefore, some testing and comparisons between these algorithms are strongly required. Methods In this study, five biclustering algorithms (i.e. BIMAX, FABIA, ISA, QUBIC and SAMBA were compared with each other in the cases where they were used to handle two expression datasets (GDS1620 and pathway with different dimensions in Arabidopsis thaliana (A. thaliana GO (gene ontology annotation and PPI (protein-protein interaction network were used to verify the corresponding biological significance of biclusters from the five algorithms. To compare the algorithms’ performance and evaluate quality of identified biclusters, two scoring methods, namely weighted enrichment (WE scoring and PPI scoring, were proposed in our study. For each dataset, after combining the scores of all biclusters into one unified ranking, we could evaluate the performance and behavior of the five biclustering algorithms in a better way. Results Both WE and PPI scoring methods has been proved effective to validate biological significance of the biclusters, and a significantly positive correlation between the two sets of scores has been tested to demonstrate the consistence of these two methods. A comparative study of the above five algorithms has revealed that: (1 ISA is the most effective one among the five algorithms on the dataset of GDS1620 and BIMAX outperforms the other algorithms on the dataset of pathway. (2 Both ISA and BIMAX are data-dependent. The former one does not work well on the datasets with few genes, while the latter one holds well for the datasets with more conditions. (3 FABIA and QUBIC perform poorly in this study and they may be suitable to large datasets with more genes and

  6. An efficient grid layout algorithm for biological networks utilizing various biological attributes

    Directory of Open Access Journals (Sweden)

    Kato Mitsuru

    2007-03-01

    Full Text Available Abstract Background Clearly visualized biopathways provide a great help in understanding biological systems. However, manual drawing of large-scale biopathways is time consuming. We proposed a grid layout algorithm that can handle gene-regulatory networks and signal transduction pathways by considering edge-edge crossing, node-edge crossing, distance measure between nodes, and subcellular localization information from Gene Ontology. Consequently, the layout algorithm succeeded in drastically reducing these crossings in the apoptosis model. However, for larger-scale networks, we encountered three problems: (i the initial layout is often very far from any local optimum because nodes are initially placed at random, (ii from a biological viewpoint, human layouts still exceed automatic layouts in understanding because except subcellular localization, it does not fully utilize biological information of pathways, and (iii it employs a local search strategy in which the neighborhood is obtained by moving one node at each step, and automatic layouts suggest that simultaneous movements of multiple nodes are necessary for better layouts, while such extension may face worsening the time complexity. Results We propose a new grid layout algorithm. To address problem (i, we devised a new force-directed algorithm whose output is suitable as the initial layout. For (ii, we considered that an appropriate alignment of nodes having the same biological attribute is one of the most important factors of the comprehension, and we defined a new score function that gives an advantage to such configurations. For solving problem (iii, we developed a search strategy that considers swapping nodes as well as moving a node, while keeping the order of the time complexity. Though a naïve implementation increases by one order, the time complexity, we solved this difficulty by devising a method that caches differences between scores of a layout and its possible updates

  7. Lower bounds to the reliabilities of factor score estimators

    NARCIS (Netherlands)

    Hessen, D.J.

    2017-01-01

    Under the general common factor model, the reliabilities of factor score estimators might be of more interest than the reliability of the total score (the unweighted sum of item scores). In this paper, lower bounds to the reliabilities of Thurstone’s factor score estimators, Bartlett’s factor score

  8. Automation of a high risk medication regime algorithm in a home health care population.

    Science.gov (United States)

    Olson, Catherine H; Dierich, Mary; Westra, Bonnie L

    2014-10-01

    Create an automated algorithm for predicting elderly patients' medication-related risks for readmission and validate it by comparing results with a manual analysis of the same patient population. Outcome and Assessment Information Set (OASIS) and medication data were reused from a previous, manual study of 911 patients from 15 Medicare-certified home health care agencies. The medication data was converted into standardized drug codes using APIs managed by the National Library of Medicine (NLM), and then integrated into an automated algorithm that calculates patients' high risk medication regime scores (HRMRs). A comparison of the results between algorithm and manual process was conducted to determine how frequently the algorithm derived HRMR scores that are predictive of readmission. HRMR scores are composed of polypharmacy (number of drugs), Potentially Inappropriate Medications (PIM) (drugs risky to the elderly), and Medication Regimen Complexity Index (MRCI) (complex dose forms, instructions or administration). The algorithm produced polypharmacy, PIM, and MRCI scores that matched with 99%, 87% and 99% of the scores, respectively, from the manual analysis. Imperfect match rates resulted from discrepancies in how drugs were classified and coded by the manual analysis vs. the automated algorithm. HRMR rules lack clarity, resulting in clinical judgments for manual coding that were difficult to replicate in the automated analysis. The high comparison rates for the three measures suggest that an automated clinical tool could use patients' medication records to predict their risks of avoidable readmissions. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. A matched-filter algorithm to detect amperometric spikes resulting from quantal secretion.

    Science.gov (United States)

    Balaji Ramachandran, Supriya; Gillis, Kevin D

    2018-01-01

    Electrochemical microelectrodes located immediately adjacent to the cell surface can detect spikes of amperometric current during exocytosis as the transmitter released from a single vesicle is oxidized on the electrode surface. Automated techniques to detect spikes are needed in order to quantify the spike rate as a measure of the rate of exocytosis. We have developed a Matched Filter (MF) detection algorithm that scans the data set with a library of prototype spike templates while performing a least-squares fit to determine the amplitude and standard error. The ratio of the fit amplitude to the standard error constitutes a criterion score that is assigned for each time point and for each template. A spike is detected when the criterion score exceeds a threshold, and the highest-scoring template and the time of peak score are identified. The search for the next spike commences only after the score falls below a second, lower threshold to reduce false positives. The approach was extended to detect spikes with double-exponential decays using the sum of two templates. Receiver Operating Characteristic plots (ROCs) demonstrate that the algorithm detects >95% of manually identified spikes with a false-positive rate of ∼2%. ROCs demonstrate that the MF algorithm performs better than algorithms that detect spikes based on a derivative-threshold approach. The MF approach performs well and leads naturally to approaches for identifying spike parameters. Copyright © 2017 Elsevier B.V. All rights reserved.
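
    The scoring step is a per-offset least-squares fit whose criterion is the fitted amplitude divided by its standard error, with a high threshold to fire and a lower threshold to re-arm. The numpy sketch below implements that core under simplifying assumptions: a single illustrative template rather than a template library, and the first crossing time reported rather than the time of peak score.

```python
import numpy as np

def matched_filter_scores(y, template):
    # Least-squares amplitude of `template` at each offset, plus its standard
    # error; the criterion score is amplitude / standard error.
    n = len(template)
    tt = template @ template
    amps = np.zeros(len(y) - n + 1)
    scores = np.zeros_like(amps)
    for i in range(len(scores)):
        w = y[i:i + n]
        a = (w @ template) / tt                       # least-squares amplitude
        resid = w - a * template
        se = np.sqrt((resid @ resid) / (n - 1) / tt)  # standard error of a
        amps[i], scores[i] = a, a / se
    return amps, scores

def detect_spikes(scores, hi=6.0, lo=2.0):
    # Fire when the score crosses `hi`; re-arm only after it falls below `lo`
    # to suppress duplicate detections of the same spike.
    events, armed = [], True
    for i, s in enumerate(scores):
        if armed and s > hi:
            events.append(i)
            armed = False
        elif not armed and s < lo:
            armed = True
    return events

# Illustrative template: fast rise, exponential decay.
t = np.arange(40)
template = (t / 3.0) * np.exp(1 - t / 3.0)
rng = np.random.default_rng(5)
y = rng.normal(0, 0.1, 5000)
for pos in (800, 2300, 4100):
    y[pos:pos + 40] += 0.8 * template
amps, scores = matched_filter_scores(y, template)
print(detect_spikes(scores))   # indices near the inserted spikes
```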

  10. LOCAL ALGORITHM FOR MONITORING TOTAL SUSPENDED SEDIMENTS IN MICRO-WATERSHEDS USING DRONES AND REMOTE SENSING APPLICATIONS. CASE STUDY: TEUSACÁ RIVER, LA CALERA, COLOMBIA

    Directory of Open Access Journals (Sweden)

    N. A. Sáenz

    2015-08-01

    Full Text Available An empirical relationship between Total Suspended Sediments (TSS) concentrations and reflectance values obtained from drones' aerial photos, processed using remote sensing tools, was set up as the main objective of this research. A local mathematical algorithm for the micro-watershed of the Teusacá River at La Calera, Colombia, was developed based on computing four band components from consumer-grade cameras, obtaining from each the corresponding reflectance values through procedures for correcting digital camera imagery, and using statistical analysis to study the fit and RMSE of 25 regressions. The assessment was characterized by the comparison of reflectance values and 34 in-situ concentration measurements between 1.6 and 33 mg L−1, taken from the superficial layer of the river in two campaigns. A large set of empirical and referenced algorithms from the literature was used to evaluate the accuracy and precision of the relationship. For estimation of TSS, a higher accuracy was achieved using Tassan's algorithm with the BAND X/ BANDX ratio. The correlation coefficient (R2 = X) demonstrates the feasibility of using remotely sensed data from consumer-grade cameras as an effective tool for frequent monitoring and control of water quality parameters such as Total Suspended Solids in watersheds, these being the most vulnerable to and least compliant with environmental regulations.

  11. Performance evaluation of an automated single-channel sleep–wake detection algorithm

    Directory of Open Access Journals (Sweden)

    Kaplan RF

    2014-10-01

    Full Text Available Richard F Kaplan,1 Ying Wang,1 Kenneth A Loparo,1,2 Monica R Kelly,3 Richard R Bootzin3 1General Sleep Corporation, Euclid, OH, USA; 2Department of Electrical Engineering and Computer Science, Case Western Reserve University, Cleveland, OH, USA; 3Department of Psychology, University of Arizona, Tucson, AZ, USA Background: A need exists, from both a clinical and a research standpoint, for objective sleep measurement systems that are both easy to use and can accurately assess sleep and wake. This study evaluates the output of an automated sleep–wake detection algorithm (Z-ALG) used in the Zmachine (a portable, single-channel, electroencephalographic [EEG] acquisition and analysis system) against laboratory polysomnography (PSG) using a consensus of expert visual scorers. Methods: Overnight laboratory PSG studies from 99 subjects (52 females/47 males, 18–60 years, median age 32.7 years), including both normal sleepers and those with a variety of sleep disorders, were assessed. PSG data obtained from the differential mastoids (A1–A2) were assessed by Z-ALG, which determines sleep versus wake every 30 seconds using low-frequency, intermediate-frequency, and high-frequency and time domain EEG features. PSG data were independently scored by two to four certified PSG technologists, using standard Rechtschaffen and Kales guidelines, and these score files were combined on an epoch-by-epoch basis, using a majority voting rule, to generate a single score file per subject to compare against the Z-ALG output. Both epoch-by-epoch and standard sleep indices (eg, total sleep time, sleep efficiency, latency to persistent sleep, and wake after sleep onset) were compared between the Z-ALG output and the technologist consensus score files. Results: Overall, the sensitivity and specificity for detecting sleep using the Z-ALG as compared to the technologist consensus are 95.5% and 92.5%, respectively, across all subjects, and the positive predictive value and the
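    A schematic of the epoch-by-epoch consensus and agreement statistics described above (1 = sleep, 0 = wake; ties with an even number of scorers fall to wake here); a generic illustration, not General Sleep's code.

```python
import numpy as np

def consensus(score_files):
    """Majority vote across scorers; rows are scorers, columns are epochs."""
    votes = np.asarray(score_files)
    return (votes.mean(axis=0) > 0.5).astype(int)

def sensitivity_specificity(algo, ref):
    """Epoch-by-epoch agreement of algorithm output against the consensus."""
    algo, ref = np.asarray(algo), np.asarray(ref)
    tp = np.sum((algo == 1) & (ref == 1))  # sleep called sleep
    tn = np.sum((algo == 0) & (ref == 0))  # wake called wake
    fn = np.sum((algo == 0) & (ref == 1))
    fp = np.sum((algo == 1) & (ref == 0))
    return tp / (tp + fn), tn / (tn + fp)
```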

  12. "Score the Core" Web-based pathologist training tool improves the accuracy of breast cancer IHC4 scoring.

    Science.gov (United States)

    Engelberg, Jesse A; Retallack, Hanna; Balassanian, Ronald; Dowsett, Mitchell; Zabaglo, Lila; Ram, Arishneel A; Apple, Sophia K; Bishop, John W; Borowsky, Alexander D; Carpenter, Philip M; Chen, Yunn-Yi; Datnow, Brian; Elson, Sarah; Hasteh, Farnaz; Lin, Fritz; Moatamed, Neda A; Zhang, Yanhong; Cardiff, Robert D

    2015-11-01

    Hormone receptor status is an integral component of decision-making in breast cancer management. IHC4 score is an algorithm that combines hormone receptor, HER2, and Ki-67 status to provide a semiquantitative prognostic score for breast cancer. High accuracy and low interobserver variance are important to ensure the score is accurately calculated; however, few previous efforts have been made to measure or decrease interobserver variance. We developed a Web-based training tool, called "Score the Core" (STC) using tissue microarrays to train pathologists to visually score estrogen receptor (using the 300-point H score), progesterone receptor (percent positive), and Ki-67 (percent positive). STC used a reference score calculated from a reproducible manual counting method. Pathologists in the Athena Breast Health Network and pathology residents at associated institutions completed the exercise. By using STC, pathologists improved their estrogen receptor H score and progesterone receptor and Ki-67 proportion assessment and demonstrated a good correlation between pathologist and reference scores. In addition, we collected information about pathologist performance that allowed us to compare individual pathologists and measures of agreement. Pathologists' assessment of the proportion of positive cells was closer to the reference than their assessment of the relative intensity of positive cells. Careful training and assessment should be used to ensure the accuracy of breast biomarkers. This is particularly important as breast cancer diagnostics become increasingly quantitative and reproducible. Our training tool is a novel approach for pathologist training that can serve as an important component of ongoing quality assessment and can improve the accuracy of breast cancer prognostic biomarkers. Copyright © 2015 Elsevier Inc. All rights reserved.
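    For reference, the conventional 300-point H score mentioned above combines the percentages of cells staining at weak, moderate, and strong intensity; this is the generic textbook formula, not the STC tool's internals.

```python
def h_score(pct_weak, pct_moderate, pct_strong):
    """Conventional H score: intensity-weighted percentages, range 0-300."""
    assert 0 <= pct_weak + pct_moderate + pct_strong <= 100
    return 1 * pct_weak + 2 * pct_moderate + 3 * pct_strong
```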

  13. Preliminary Development and Evaluation of Lightning Jump Algorithms for the Real-Time Detection of Severe Weather

    Science.gov (United States)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2009-01-01

    Previous studies have demonstrated that rapid increases in total lightning activity (intracloud + cloud-to-ground) are often observed tens of minutes in advance of the occurrence of severe weather at the ground. These rapid increases in lightning activity have been termed "lightning jumps." Herein, we document a positive correlation between lightning jumps and the manifestation of severe weather in thunderstorms occurring across the Tennessee Valley and Washington, D.C. A total of 107 thunderstorms were examined in this study, with 69 of the 107 thunderstorms falling into the category of non-severe, and 38 into the category of severe. From the dataset of 69 isolated non-severe thunderstorms, an average peak 1 minute flash rate of 10 flashes/min was determined. A variety of severe thunderstorm types were examined for this study including an MCS, MCV, tornadic outer rainbands of tropical remnants, supercells, and pulse severe thunderstorms. Of the 107 thunderstorms, 85 (47 non-severe, 38 severe) from the Tennessee Valley and Washington, D.C. were used to test six lightning jump algorithm configurations (Gatlin, Gatlin 45, 2σ, 3σ, Threshold 10, and Threshold 8). Performance metrics for each algorithm were then calculated, yielding encouraging results from the limited sample of 85 thunderstorms. The 2σ lightning jump algorithm had a high probability of detection (POD; 87%), a modest false alarm rate (FAR; 33%), and a solid Heidke Skill Score (HSS; 0.75). A second and more simplistic lightning jump algorithm named the Threshold 8 lightning jump algorithm also shows promise, with a POD of 81% and a FAR of 41%. Average lead times to severe weather occurrence for these two algorithms were 23 minutes and 20 minutes, respectively. The overall goal of this study is to advance the development of an operationally-applicable jump algorithm that can be used with either total lightning observations made from the ground, or in the near future from space using the GOES-R Geostationary Lightning Mapper (GLM).
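    The verification metrics quoted above (POD, FAR, HSS) follow the standard 2x2 contingency-table definitions; the sketch below uses those textbook formulas, with the actual hit/miss tallies left unspecified.

```python
def skill_scores(hits, false_alarms, misses, correct_nulls):
    """Standard contingency-table metrics for warning verification."""
    a, b, c, d = hits, false_alarms, misses, correct_nulls
    pod = a / (a + c)                 # probability of detection
    far = b / (a + b)                 # false alarm ratio
    csi = a / (a + b + c)             # critical success index
    hss = 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    return pod, far, csi, hss
```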

  14. Benchmarking monthly homogenization algorithms

    Science.gov (United States)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data

  15. TotalReCaller: improved accuracy and performance via integrated alignment and base-calling.

    Science.gov (United States)

    Menges, Fabian; Narzisi, Giuseppe; Mishra, Bud

    2011-09-01

    Currently, re-sequencing approaches use multiple modules serially to interpret raw sequencing data from next-generation sequencing platforms, while remaining oblivious to the genomic information until the final alignment step. Such approaches fail to exploit the full information from both raw sequencing data and the reference genome that can yield better quality sequence reads, SNP-calls, variant detection, as well as an alignment at the best possible location in the reference genome. Thus, there is a need for novel reference-guided bioinformatics algorithms for interpreting analog signals representing sequences of the bases ({A, C, G, T}), while simultaneously aligning possible sequence reads to a source reference genome whenever available. Here, we propose a new base-calling algorithm, TotalReCaller, to achieve improved performance. A linear error model for the raw intensity data and Burrows-Wheeler transform (BWT) based alignment are combined utilizing a Bayesian score function, which is then globally optimized over all possible genomic locations using an efficient branch-and-bound approach. The algorithm has been implemented in software and hardware [field-programmable gate array (FPGA)] to achieve real-time performance. Empirical results on real high-throughput Illumina data were used to evaluate TotalReCaller's performance relative to its peers (Bustard, BayesCall, Ibis and Rolexa) based on several criteria, particularly those important in clinical and scientific applications. Namely, it was evaluated for (i) its base-calling speed and throughput, (ii) its read accuracy and (iii) its specificity and sensitivity in variant calling. A software implementation of TotalReCaller, as well as additional information, is available at: http://bioinformatics.nyu.edu/wordpress/projects/totalrecaller/ (contact: fabian.menges@nyu.edu).

  16. A computerized algorithm for arousal detection in healthy adults and patients with Parkinson disease

    DEFF Research Database (Denmark)

    Sørensen, Gertrud Laura; Jennum, Poul; Kempfner, Jacob

    2012-01-01

    Arousals occur from all sleep stages and can be identified as abrupt electroencephalogram (EEG) and electromyogram (EMG) changes. Manual scoring of arousals is time consuming with low inter-scorer agreement. The aim of this study was to design an arousal detection algorithm capable of detecting arousals from non-rapid eye movement (NREM) and REM sleep, independent of the subject's age and disease. The proposed algorithm uses features from EEG, EMG, and the manual sleep stage scoring as input to a feed-forward artificial neural network (ANN). The performance of the algorithm has been assessed using...

  17. Anesthesia Technique and Mortality after Total Hip or Knee Arthroplasty: A Retrospective, Propensity Score-matched Cohort Study.

    Science.gov (United States)

    Perlas, Anahi; Chan, Vincent W S; Beattie, Scott

    2016-10-01

    This propensity score-matched cohort study evaluates the effect of anesthetic technique on 30-day mortality after total hip or knee arthroplasty. All patients who had hip or knee arthroplasty between January 1, 2003, and December 31, 2014, were evaluated. The principal exposure was spinal versus general anesthesia. The primary outcome was 30-day mortality. Secondary outcomes were (1) perioperative myocardial infarction; (2) a composite of major adverse cardiac events that includes cardiac arrest, myocardial infarction, or newly diagnosed arrhythmia; (3) pulmonary embolism; (4) major blood loss; (5) hospital length of stay; and (6) operating room procedure time. A propensity score-matched-pair analysis was performed using a nonparsimonious logistic regression model of regional anesthetic use. We identified 10,868 patients, of whom 8,553 had spinal anesthesia and 2,315 had general anesthesia. Ninety-two percent (n = 2,135) of the patients who had general anesthesia were matched to similar patients who did not have general anesthesia. In the matched cohort, the 30-day mortality rate was 0.19% (n = 4) in the spinal anesthesia group and 0.8% (n = 17) in the general anesthesia group (risk ratio, 0.42; 95% CI, 0.21 to 0.83; P = 0.0045). Spinal anesthesia was also associated with a shorter hospital length of stay (5.7 vs. 6.6 days). These findings suggest an association between spinal anesthesia and lower 30-day mortality, as well as a shorter hospital length of stay, after elective joint replacement surgery.

  18. Complex versus Simple Modeling for DIF Detection: When the Intraclass Correlation Coefficient (ρ) of the Studied Item Is Less Than the ρ of the Total Score

    Science.gov (United States)

    Jin, Ying; Myers, Nicholas D.; Ahn, Soyeon

    2014-01-01

    Previous research has demonstrated that differential item functioning (DIF) methods that do not account for multilevel data structure could result in too frequent rejection of the null hypothesis (i.e., no DIF) when the intraclass correlation coefficient (ρ) of the studied item was the same as the ρ of the total score. The current study extended…

  19. Advancements in the Development of an Operational Lightning Jump Algorithm for GOES-R GLM

    Science.gov (United States)

    Shultz, Chris; Petersen, Walter; Carey, Lawrence

    2011-01-01

    Rapid increases in total lightning have been shown to precede the manifestation of severe weather at the surface. These rapid increases have been termed lightning jumps, and are the current focus of algorithm development for the GOES-R Geostationary Lightning Mapper (GLM). Recent lightning jump algorithm work has focused on evaluation of algorithms in three additional regions of the country, as well as markedly increasing the number of thunderstorms in order to evaluate each algorithm's performance on a larger population of storms. Lightning characteristics of just over 600 thunderstorms have been studied over the past four years. The 2σ lightning jump algorithm continues to show the most promise for an operational lightning jump algorithm, with a probability of detection of 82%, a false alarm rate of 35%, a critical success index of 57%, and a Heidke Skill Score of 0.73 on the entire population of thunderstorms. Average lead time for the 2σ algorithm on all severe weather is 21.15 minutes, with a standard deviation of +/- 14.68 minutes. Looking at tornadoes alone, the average lead time is 18.71 minutes, with a standard deviation of +/- 14.88 minutes. Moreover, removing the 2σ lightning jumps that occur after a jump has been detected, and before severe weather is detected at the ground, the 2σ lightning jump algorithm's false alarm rate drops from 35% to 21%. Cold season, low-topped, and tropical environments cause problems for the 2σ lightning jump algorithm, due to their relative dearth of lightning as compared to a supercellular or summertime airmass thunderstorm environment.

  20. A study of association of Oncotype DX recurrence score with DCE-MRI characteristics using multivariate machine learning models.

    Science.gov (United States)

    Saha, Ashirbani; Harowicz, Michael R; Wang, Weiyao; Mazurowski, Maciej A

    2018-05-01

    To determine whether multivariate machine learning models of algorithmically assessed magnetic resonance imaging (MRI) features from breast cancer patients are associated with Oncotype DX (ODX) test recurrence scores. A set of 261 female patients with invasive breast cancer, pre-operative dynamic contrast enhanced magnetic resonance (DCE-MR) images and available ODX score at our institution was identified. A computer algorithm extracted a comprehensive set of 529 features from the DCE-MR images of these patients. The set of patients was divided into a training set and a test set. Using the training set we developed two machine learning-based models to discriminate (1) high ODX scores from intermediate and low ODX scores, and (2) high and intermediate ODX scores from low ODX scores. The performance of these models was evaluated on the independent test set. High against low and intermediate ODX scores were predicted by the multivariate model with AUC 0.77 (95% CI 0.56-0.98). These findings suggest an association between algorithmically assessed DCE-MRI features and ODX recurrence scores, but they do not support replacement of ODX with imaging alone.

  1. Assessment of calcium scoring performance in cardiac computed tomography

    International Nuclear Information System (INIS)

    Ulzheimer, Stefan; Kalender, Willi A.

    2003-01-01

    Electron beam tomography (EBT) has been used for cardiac diagnosis and the quantitative assessment of coronary calcium since the late 1980s. The introduction of mechanical multi-slice spiral CT (MSCT) scanners with shorter rotation times opened new possibilities of cardiac imaging with conventional CT scanners. The purpose of this work was to qualitatively and quantitatively evaluate the performance of EBT and MSCT for the task of coronary artery calcium imaging as a function of acquisition protocol, heart rate, spiral reconstruction algorithm (where applicable) and calcium scoring method. A cardiac CT semi-anthropomorphic phantom was designed and manufactured for the investigation of all relevant image quality parameters in cardiac CT. This phantom includes various test objects, some of which can be moved within the anthropomorphic phantom in a manner that mimics realistic heart motion. These tools were used to qualitatively and quantitatively demonstrate the accuracy of coronary calcium imaging using typical protocols for an electron beam (Evolution C-150XP, Imatron, South San Francisco, Calif.) and a 0.5-s four-slice spiral CT scanner (Sensation 4, Siemens, Erlangen, Germany). A special focus was put on the method of quantifying coronary calcium, and three scoring systems were evaluated (Agatston, volume, and mass scoring). Good reproducibility in coronary calcium scoring is always the result of a combination of high temporal and spatial resolution; consequently, thin-slice protocols in combination with retrospective gating on MSCT scanners yielded the best results. The Agatston score was found to be the least reproducible scoring method. The hydroxyapatite mass, being better reproducible and comparable on different scanners and being a physical quantitative measure, appears to be the method of choice for future clinical studies. The hydroxyapatite mass is highly correlated to the Agatston score. The introduced phantoms can be used to quantitatively assess the performance of coronary calcium scoring in cardiac CT.
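    For context, the per-lesion weighting behind the Agatston score discussed above can be sketched as follows; the thresholds are the standard published ones, while implementation details vary across scanner software.

```python
def agatston_lesion_score(area_mm2, peak_hu):
    """Textbook Agatston weighting: lesion area times a density factor
    derived from the lesion's peak attenuation (Hounsfield units)."""
    if peak_hu < 130:
        return 0.0  # below the conventional calcium detection threshold
    weight = 1 if peak_hu < 200 else 2 if peak_hu < 300 else 3 if peak_hu < 400 else 4
    return area_mm2 * weight

# The total score is the sum of lesion scores over all slices.
```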

  2. Finger Tapping Clinimetric Score Prediction in Parkinson's Disease Using Low-Cost Accelerometers

    Directory of Open Access Journals (Sweden)

    Julien Stamatakis

    2013-01-01

    algorithm were used to identify the most relevant features in the prediction of MDS-UPDRS FT scores, given by 3 specialists in movement disorders (SMDs). The Goodman-Kruskal Gamma index obtained (0.961), depicting the predictive performance of the model, is similar to those obtained between the individual scores given by the SMDs (0.870 to 0.970). The automatic prediction of MDS-UPDRS scores using the proposed system may be valuable in clinical trials designed to evaluate and modify motor disability in PD patients.

  3. A Bayesian Scoring Technique for Mining Predictive and Non-Spurious Rules.

    Science.gov (United States)

    Batal, Iyad; Cooper, Gregory; Hauskrecht, Milos

    Rule mining is an important class of data mining methods for discovering interesting patterns in data. The success of a rule mining method heavily depends on the evaluation function that is used to assess the quality of the rules. In this work, we propose a new rule evaluation score - the Predictive and Non-Spurious Rules (PNSR) score. This score relies on Bayesian inference to evaluate the quality of the rules and considers the structure of the rules to filter out spurious rules. We present an efficient algorithm for finding rules with high PNSR scores. The experiments demonstrate that our method is able to cover and explain the data with a much smaller rule set than existing methods.

  4. Observer Evaluation of a Metal Artifact Reduction Algorithm Applied to Head and Neck Cone Beam Computed Tomographic Images

    Energy Technology Data Exchange (ETDEWEB)

    Korpics, Mark; Surucu, Murat; Mescioglu, Ibrahim; Alite, Fiori; Block, Alec M.; Choi, Mehee; Emami, Bahman; Harkenrider, Matthew M.; Solanki, Abhishek A.; Roeske, John C., E-mail: jroeske@lumc.edu

    2016-11-15

    Purpose and Objectives: To quantify, through an observer study, the reduction in metal artifacts on cone beam computed tomographic (CBCT) images using a projection-interpolation algorithm, on images containing metal artifacts from dental fillings and implants in patients treated for head and neck (H&N) cancer. Methods and Materials: An interpolation-substitution algorithm was applied to H&N CBCT images containing metal artifacts from dental fillings and implants. Image quality with respect to metal artifacts was evaluated subjectively and objectively. First, 6 independent radiation oncologists were asked to rank randomly sorted blinded images (before and after metal artifact reduction) using a 5-point rating scale (1 = severe artifacts; 5 = no artifacts). Second, the standard deviation of different regions of interest (ROI) within each image was calculated and compared with the mean rating scores. Results: The interpolation-substitution technique successfully reduced metal artifacts in 70% of the cases. From a total of 60 images from 15 H&N cancer patients undergoing image guided radiation therapy, the mean rating score on the uncorrected images was 2.3 ± 1.1, versus 3.3 ± 1.0 for the corrected images. The mean difference in ranking score between uncorrected and corrected images was 1.0 (95% confidence interval: 0.9-1.2, P<.05). The standard deviation of each ROI significantly decreased after artifact reduction (P<.01). Moreover, a negative correlation between the mean rating score for each image and the standard deviation of the oral cavity and bilateral cheeks was observed. Conclusion: The interpolation-substitution algorithm is efficient and effective for reducing metal artifacts caused by dental fillings and implants on CBCT images, as demonstrated by the statistically significant increase in observer image quality ranking and by the decrease in ROI standard deviation between uncorrected and corrected images.
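    A generic sketch of interpolation-substitution in projection space, assuming the metal traces have already been segmented in each view; illustrative only, not the authors' implementation.

```python
import numpy as np

def interpolate_metal_projections(sinogram, metal_mask):
    """Replace metal-affected detector bins in each projection view with
    linear interpolation from the surrounding unaffected bins."""
    out = sinogram.copy()
    bins = np.arange(sinogram.shape[1])
    for v in range(sinogram.shape[0]):
        bad = metal_mask[v]
        if bad.any() and (~bad).any():
            out[v, bad] = np.interp(bins[bad], bins[~bad], sinogram[v, ~bad])
    return out
```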

  5. Risk Factors for the Failure of Spinal Burst Fractures Treated Conservatively According to the Thoracolumbar Injury Classification and Severity Score (TLICS: A Retrospective Cohort Trial.

    Directory of Open Access Journals (Sweden)

    Jieliang Shen

    Full Text Available The management of thoracolumbar (TL) burst fractures is still controversial. The thoracolumbar injury classification and severity score (TLICS) algorithm is now widely used to guide clinical decision making; however, in clinical practice, we have come to realize that TLICS also has its limitations for treating patients with total scores less than 4, for whom conservative treatment may not be optimal in all cases. The aim of this study is to identify several risk factors for the failure of conservative treatment of TL burst fractures according to the TLICS algorithm. From June 2008 to December 2013, a cohort of 129 patients with T10-L2 TL burst fractures and a TLICS score ≤3 treated non-operatively was identified and included in this retrospective study. Age, sex, pain intensity, interpedicular distance (IPD), canal compromise, loss of vertebral body height and kyphotic angle (KA) were selected as potential risk factors and compared between the non-operative success group and the non-operative failure group. One hundred and four patients successfully completed non-operative treatment; the other 25 patients were converted to surgical treatment because of persistent local back pain or progressive neurological deficits during follow-up. Our results showed that age, visual analogue scale (VAS) score, IPD and KA were significantly different between the two groups. Furthermore, regression analysis indicated that VAS score and IPD could be considered significant predictors of the failure of conservative treatment. The recommendation of non-operative treatment for TLICS scores ≤3 has limitations in some patients, and VAS score and IPD could be considered risk factors for the failure of conservative treatment. Thus, conservative treatment should be decided with caution in patients with greater VAS scores or IPD. If non-operative management is decided, close follow-up is necessary.

  6. Can computer assistance improve the clinical and functional scores in total knee arthroplasty?

    Science.gov (United States)

    Hernández-Vaquero, Daniel; Suarez-Vazquez, Abelardo; Iglesias-Fernandez, Susana

    2011-12-01

    Surgical navigation in TKA facilitates better alignment; however, it is unclear whether improved alignment alters clinical evolution and midterm and long-term complication rates. We determined the alignment differences between patients with standard, manual, jig-based TKAs and patients with navigation-based TKAs, and whether any differences would modify function, implant survival, and/or complications. We retrospectively reviewed 97 patients (100 TKAs) undergoing TKAs for minimal preoperative deformities. Fifty TKAs were performed with an image-free surgical navigation system and the other 50 with a standard technique. We compared femoral angle (FA), tibial angle (TA), and femorotibial angle (FTA) and determined whether any differences altered clinical or functional scores, as measured by the Knee Society Score (KSS), or complications. Seventy-three patients (75 TKAs) had a minimum followup of 8 years (mean, 8.3 years; range, 8-9.1 years). All patients included in the surgical navigation group had an FTA between 177° and 182°. We found no differences in the KSS or implant survival between the two groups and no differences in complication rates, although more complications occurred in the standard technique group (seven compared with two in the surgical navigation group). In the midterm, we found no difference in functional and clinical scores or implant survival between TKAs performed with and without the assistance of a navigation system. Level II, therapeutic study. See the Guidelines online for a complete description of levels of evidence.

  7. Performance of an open-source heart sound segmentation algorithm on eight independent databases.

    Science.gov (United States)

    Liu, Chengyu; Springer, David; Clifford, Gari D

    2017-08-01

    Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, by using a wider variety of independently acquired data of varying quality. Firstly, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. Then, the HSMM-based segmentation method was evaluated using the assembled eight databases. The common evaluation metrics of sensitivity, specificity, accuracy, as well as the F1 measure were used. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprised of 102 306 heart sounds. An average F1 score of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals was observed. The F1 score was shown to increase with an increase in the tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness. The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for evaluating heart sound segmentation algorithms.

  8. Assessment of the innovative quality of agomelatine through the Innovation Assessment Algorithm

    Directory of Open Access Journals (Sweden)

    Liliana Civalleri

    2012-09-01

    Full Text Available Aim: the aim of this study was to assess the innovative quality of a medicine based on agomelatine, authorized by the European Commission through a centralized procedure on 19th February 2009 and distributed in Italy under the brands Valdoxan® and Thymanax®. Methodology: the degree of innovation of agomelatine was determined through the Innovation Assessment Algorithm (IAA), which considers the innovative quality of a medicine as a combination of multiple properties. The algorithm may be represented as a decision tree, with each branch corresponding to a property connected with innovation and having a fixed numerical value. The sum of these values establishes the degree of innovation of the medicine. The IAA is articulated in two phases: the first assesses the efficacy of the drug based on the clinical trials presented in support of the registration application (IAA-efficacy); the second reconsiders the degree of innovation on the basis of the efficacy and safety data resulting from clinical practice once the drug has been placed on the market (IAA-effectiveness). Results and conclusions: the score obtained for agomelatine was 592.73 in the efficacy phase and 291.3 in the effectiveness phase. The total score for the two phases was 884, which is equivalent to a good degree of innovation for the molecule.

  9. The effect of level of feeding, genetic merit, body condition score and age on biological parameters of a mammary gland model.

    Science.gov (United States)

    Bryant, J R; Lopez-Villalobos, N; Holmes, C W; Pryce, J E; Pitman, G D; Davis, S R

    2007-03-01

    An evolutionary algorithm was applied to a mechanistic model of the mammary gland to find the parameter values that minimised the difference between predicted and actual lactation curves of milk yields in New Zealand Jersey cattle managed at different feeding levels. The effect of feeding level, genetic merit, body condition score at parturition and age on total lactation yields of milk, fat and protein, days in milk, live weight and evolutionary algorithm derived mammary gland parameters was then determined using a multiple regression model. The mechanistic model of the mammary gland was able to fit lactation curves that corresponded to actual lactation curves with a high degree of accuracy. The senescence rate of quiescent (inactive) alveoli was highest at the very low feeding level. The active alveoli population at peak lactation was highest at very low feeding levels, but lower nutritional status at this feeding level prevented high milk yields from being achieved. Genetic merit had a significant linear effect on the active alveoli population at peak and mid to late lactation, with higher values in animals, which had higher breeding values for milk yields. A type of genetic merit × feeding level scaling effect was observed for total yields of milk and fat, and total number of alveoli produced from conception until the end of lactation with the benefits of increases in genetic merit being greater at high feeding levels. A genetic merit × age scaling effect was observed for total lactation protein yields. Initial rates of differentiation of progenitor cells declined with age. Production levels of alveoli from conception to the end of lactation were lowest in 5- to 8-year-old animals; however, in these older animals, quiescent alveoli were reactivated more frequently. The active alveoli population at peak lactation and rates of active alveoli proceeding to quiescence were highest in animals of intermediate body condition scores of 4.0 to 5.0. The results

  10. Performance Evaluation of Spectral Clustering Algorithm using Various Clustering Validity Indices

    OpenAIRE

    M. T. Somashekara; D. Manjunatha

    2014-01-01

    In spite of the popularity of the spectral clustering algorithm, the evaluation procedures are still in a developmental stage. In this article, we have taken the benchmark IRIS dataset for performing a comparative study of twelve indices for evaluating the spectral clustering algorithm. The results of the spectral clustering technique were also compared with the k-means algorithm. The validity of the indices was also verified with accuracy and (Normalized Mutual Information) NMI score. Spectral clustering algo...

  11. A Family of Algorithms for Computing Consensus about Node State from Network Data

    Science.gov (United States)

    Brush, Eleanor R.; Krakauer, David C.; Flack, Jessica C.

    2013-01-01

    Biological and social networks are composed of heterogeneous nodes that contribute differentially to network structure and function. A number of algorithms have been developed to measure this variation. These algorithms have proven useful for applications that require assigning scores to individual nodes, from ranking websites to determining critical species in ecosystems, yet the mechanistic basis for why they produce good rankings remains poorly understood. We show that a unifying property of these algorithms is that they quantify consensus in the network about a node's state or capacity to perform a function. The algorithms capture consensus by either taking into account the number of a target node's direct connections, and, when the edges are weighted, the uniformity of its weighted in-degree distribution (breadth), or by measuring net flow into a target node (depth). Using data from communication, social, and biological networks we find that how an algorithm measures consensus (through breadth or depth) impacts its ability to correctly score nodes. We also observe variation in sensitivity to source biases in interaction/adjacency matrices: errors arising from systematic error at the node level or direct manipulation of network connectivity by nodes. Our results indicate that the breadth algorithms, which are derived from information theory, correctly score nodes (assessed using independent data) and are robust to errors. However, in cases where nodes "form opinions" about other nodes using indirect information, like reputation, depth algorithms, like Eigenvector Centrality, are required. One caveat is that Eigenvector Centrality is not robust to error unless the network is transitive or assortative. In these cases the network structure allows the depth algorithms to effectively capture breadth as well as depth. Finally, we discuss the algorithms' cognitive and computational demands. This is an important consideration in systems in which
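    As a concrete example of the depth/breadth distinction above, Eigenvector Centrality (a depth algorithm) can be computed by plain power iteration, while a breadth-style score would simply be the weighted in-degree, e.g. adj.sum(axis=0); a textbook sketch, not the paper's code.

```python
import numpy as np

def eigenvector_centrality(adj, iters=200, tol=1e-10):
    """Power iteration on a (weighted) adjacency matrix."""
    x = np.ones(adj.shape[0])
    for _ in range(iters):
        x_new = adj @ x
        x_new /= np.linalg.norm(x_new)  # renormalize each step
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x
```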

  12. Degree of contribution (DoC) feature selection algorithm for structural brain MRI volumetric features in depression detection.

    Science.gov (United States)

    Kipli, Kuryati; Kouzani, Abbas Z

    2015-07-01

    Accurate detection of depression at an individual level using structural magnetic resonance imaging (sMRI) remains a challenge. Brain volumetric changes at a structural level appear to have importance in depression biomarkers studies. An automated algorithm is developed to select brain sMRI volumetric features for the detection of depression. A feature selection (FS) algorithm called degree of contribution (DoC) is developed for selection of sMRI volumetric features. This algorithm uses an ensemble approach to determine the degree of contribution in detection of major depressive disorder. The DoC is the score of feature importance used for feature ranking. The algorithm involves four stages: feature ranking, subset generation, subset evaluation, and DoC analysis. The performance of DoC is evaluated on the Duke University Multi-site Imaging Research in the Analysis of Depression sMRI dataset. The dataset consists of 115 brain sMRI scans of 88 healthy controls and 27 depressed subjects. Forty-four sMRI volumetric features are used in the evaluation. The DoC score of forty-four features was determined as the accuracy threshold (Acc_Thresh) was varied. The DoC performance was compared with that of four existing FS algorithms. At all defined Acc_Threshs, DoC outperformed the four examined FS algorithms for the average classification score and the maximum classification score. DoC has a good ability to generate reduced-size subsets of important features that could yield high classification accuracy. Based on the DoC score, the most discriminant volumetric features are those from the left-brain region.

  13. An algorithm for total variation regularized photoacoustic imaging

    DEFF Research Database (Denmark)

    Dong, Yiqiu; Görner, Torsten; Kunis, Stefan

    2014-01-01

    Recovery of image data from photoacoustic measurements asks for the inversion of the spherical mean value operator. In contrast to direct inversion methods for specific geometries, we consider a semismooth Newton scheme to solve a total variation regularized least squares problem. During the iteration, each matrix vector multiplication is realized in an efficient way using a recently proposed spectral discretization of the spherical mean value operator. All theoretical results are illustrated by numerical experiments.
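    Schematically, and with notation assumed here rather than taken from the paper (M the discretized spherical mean value operator, g the measured data, λ > 0 a regularization weight), the variational problem has the form:

```latex
\min_{f} \; \tfrac{1}{2} \, \| M f - g \|_2^2 + \lambda \, \mathrm{TV}(f)
```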

  14. Test-retest reliability at the item level and total score level of the Norwegian version of the Spinal Cord Injury Falls Concern Scale (SCI-FCS).

    Science.gov (United States)

    Roaldsen, Kirsti Skavberg; Måøy, Åsa Blad; Jørgensen, Vivien; Stanghelle, Johan Kvalvik

    2016-05-01

    Translation of the Spinal Cord Injury Falls Concern Scale (SCI-FCS), and investigation of test-retest reliability at the item level and total score level. Translation, adaptation and test-retest study. A specialized rehabilitation setting in Norway. Fifty-four wheelchair users with a spinal cord injury. The median age of the cohort was 49 years, and the median number of years after injury was 13. Interventions/measurements: The SCI-FCS was translated and back-translated according to guidelines. Individuals answered the SCI-FCS twice over the course of one week. We investigated item-level test-retest reliability using Svensson's rank-based statistical method for disagreement analysis of paired ordinal data. For relative reliability, we analyzed the total-score-level test-retest reliability with intraclass correlation coefficients (ICC2.1), the standard error of measurement (SEM) and the smallest detectable change (SDC) for absolute reliability/measurement-error assessment, and Cronbach's alpha for internal consistency. All items showed satisfactory percentage agreement (≥69%) between test and retest. There were small but non-negligible systematic disagreements among three items; we observed an 11-13% higher chance of a lower second score. There was no disagreement due to random variance. The test-retest agreement (ICC2.1) was excellent (0.83). The SEM was 2.6 (12%), and the SDC was 7.1 (32%). The Cronbach's alpha was high (0.88). The Norwegian SCI-FCS is highly reliable for wheelchair users with chronic spinal cord injuries.
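    The absolute-reliability quantities reported above follow the standard formulas below; plugging in the reported SEM reproduces the SDC up to rounding (1.96 · √2 · 2.6 ≈ 7.2, versus the reported 7.1):

```latex
\mathrm{SEM} = \mathrm{SD}\,\sqrt{1 - \mathrm{ICC}}, \qquad
\mathrm{SDC} = 1.96 \cdot \sqrt{2} \cdot \mathrm{SEM}
```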

  15. Information filtering via weighted heat conduction algorithm

    Science.gov (United States)

    Liu, Jian-Guo; Guo, Qiang; Zhang, Yi-Cheng

    2011-06-01

    In this paper, by taking into account the effects of user and object correlations on a heat conduction (HC) algorithm, a weighted heat conduction (WHC) algorithm is presented. We argue that the edge weight of the user-object bipartite network should be embedded into the HC algorithm to measure object similarity. The numerical results indicate that both accuracy and diversity can be improved greatly compared with the standard HC algorithm, with the optimal values reached simultaneously. On the Movielens and Netflix datasets, the algorithmic accuracy, measured by the average ranking score, can be improved by 39.7% and 56.1% in the optimal case, respectively, and the diversity reaches 0.9587 and 0.9317 when the recommendation list length equals 5. Further statistical analysis indicates that, in the optimal case, the distributions of the edge weight change to the Poisson form, which may be the reason why the HC algorithm's performance can be improved. This work highlights the effect of edge weight on personalized recommendation, which may be an important factor affecting personalized recommendation performance.
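    A minimal sketch of one heat-conduction pass on the user-object bipartite network (neighborhood averaging in both directions); in the WHC variant the 0/1 adjacency entries would be replaced by correlation-based edge weights. The details are assumed for illustration, not taken from the paper.

```python
import numpy as np

def heat_conduction_pass(A, f0):
    """A: user-by-object 0/1 adjacency matrix; f0: initial object scores.
    Objects -> users -> objects by averaging over neighbors."""
    user_deg = np.maximum(A.sum(axis=1), 1)  # objects collected per user
    obj_deg = np.maximum(A.sum(axis=0), 1)   # users per object
    h_user = (A @ f0) / user_deg
    return (A.T @ h_user) / obj_deg
```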

  16. When sparse coding meets ranking: a joint framework for learning sparse codes and ranking scores

    KAUST Repository

    Wang, Jim Jing-Yan

    2017-06-28

    Sparse coding, which represents a data point as a sparse reconstruction code with regard to a dictionary, has been a popular data representation method. Meanwhile, in database retrieval problems, learning the ranking scores from data points plays an important role. Up to now, these two problems have always been considered separately, assuming that data coding and ranking are two independent and irrelevant problems. However, is there any internal relationship between sparse coding and ranking score learning? If yes, how to explore and make use of this internal relationship? In this paper, we try to answer these questions by developing the first joint sparse coding and ranking score learning algorithm. To explore the local distribution in the sparse code space, and also to bridge coding and ranking problems, we assume that in the neighborhood of each data point, the ranking scores can be approximated from the corresponding sparse codes by a local linear function. By considering the local approximation error of ranking scores, the reconstruction error and sparsity of sparse coding, and the query information provided by the user, we construct a unified objective function for learning of sparse codes, the dictionary and ranking scores. We further develop an iterative algorithm to solve this optimization problem.
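    Schematically, such a unified objective couples reconstruction error, sparsity, and the local linear approximation of ranking scores; the notation below is assumed for illustration and is not the paper's exact formulation:

```latex
% X: data, D: dictionary, s_i: sparse codes, r_j: ranking scores,
% (w_i, b_i): local linear model over the neighborhood N(i) of point i.
\min_{D,\,S,\,r,\,w,\,b}\;
\|X - DS\|_F^2 \;+\; \alpha \sum_i \|s_i\|_1
\;+\; \beta \sum_i \sum_{j \in \mathcal{N}(i)} \bigl(r_j - w_i^{\top} s_j - b_i\bigr)^2
```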

  17. A Motion Estimation Algorithm Using DTCWT and ARPS

    Directory of Open Access Journals (Sweden)

    Unan Y. Oktiawati

    2013-09-01

    Full Text Available In this paper, a hybrid motion estimation algorithm utilizing the Dual Tree Complex Wavelet Transform (DTCWT) and the Adaptive Rood Pattern Search (ARPS) block search is presented. The proposed algorithm first transforms each video sequence with the DTCWT. Frame n of the video sequence is used as a reference input, and frame n+2 is used to find the motion vector. Next, the ARPS block search algorithm is carried out, followed by an inverse DTCWT. Motion compensation is then carried out on each inversed frame n and motion vector. The results show that PSNR can be improved for mobile devices without degrading visual quality. The proposed algorithm also uses less memory than the DCT-based algorithm. The main contribution of this work is a hybrid wavelet-based motion estimation algorithm for mobile devices. Another contribution is the visual quality scoring system used in section 6.

  18. Dietary Screener Questionnaire in the NHIS CCS 2010: Data Processing and Scoring Procedures

    Science.gov (United States)

    Our NCI research team followed several steps to formulate the Dietary Screener Questionnaire (DSQ) scoring algorithms. These steps are described for researchers who may be interested in the methodologic process our team used.

  19. Dietary Screener Questionnaire in the NHIS CCS 2015: Data Processing and Scoring Procedures

    Science.gov (United States)

    Our NCI research team followed several steps to formulate the Dietary Screener Questionnaire (DSQ) scoring algorithms. These steps are described for researchers who may be interested in the methodologic process our team used.

  20. A hardware-algorithm co-design approach to optimize seizure detection algorithms for implantable applications.

    Science.gov (United States)

    Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P

    2010-10-30

    Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternate therapy for roughly a third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy does not present a complete picture of the feasibility of the algorithm with limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score is assigned based on detection efficacy and the hardware cost for each of the features, then plotted on a two-dimensional design space. An optimal combination of compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy. Copyright © 2010 Elsevier B.V. All rights reserved.

  1. Using wound care algorithms: a content validation study.

    Science.gov (United States)

    Beitz, J M; van Rijswijk, L

    1999-09-01

    Valid and reliable heuristic devices facilitating optimal wound care are lacking. The objectives of this study were to establish content validation data for a set of wound care algorithms, to identify their associated strengths and weaknesses, and to gain insight into the wound care decision-making process. Forty-four registered nurse wound care experts were surveyed and interviewed at national and regional educational meetings. Using a cross-sectional study design and an 83-item, 4-point Likert-type scale, this purposive sample was asked to quantify the degree of validity of the algorithms' decisions and components. Participants' comments were tape-recorded, transcribed, and themes were derived. On a scale of 1 to 4, the mean score of the entire instrument was 3.47 (SD +/- 0.87), the instrument's Content Validity Index was 0.86, and the individual Content Validity Index of 34 of 44 participants was > 0.8. Item scores were lower for those related to packing deep wounds and for items lacking valid and reliable definitions. The wound care algorithms studied proved valid. However, the lack of valid and reliable wound assessment and care definitions hinders optimal use of these instruments. Further research documenting their clinical use is warranted. Research-based practice recommendations should direct the development of future valid and reliable algorithms designed to help nurses provide optimal wound care.

  2. Scoring relevancy of features based on combinatorial analysis of Lasso with application to lymphoma diagnosis

    Directory of Open Access Journals (Sweden)

    Zare Habil

    2013-01-01

    Full Text Available One challenge in applying bioinformatic tools to clinical or biological data is the high number of features that might be provided to the learning algorithm without any prior knowledge of which ones should be used. In such applications, the number of features can drastically exceed the number of training instances, which is often limited by the number of available samples for the study. The Lasso is one of many regularization methods that have been developed to prevent overfitting and improve prediction performance in high-dimensional settings. In this paper, we propose a novel algorithm for feature selection based on the Lasso, and our hypothesis is that defining a scoring scheme that measures the "quality" of each feature can provide a more robust feature selection method. Our approach is to generate several samples from the training data by bootstrapping, determine the best relevance-ordering of the features for each sample, and finally combine these relevance-orderings to select highly relevant features. In addition to the theoretical analysis of our feature scoring scheme, we provide empirical evaluations on six real datasets from different fields to confirm the superiority of our method in exploratory data analysis and prediction performance. For example, we applied FeaLect, our feature scoring algorithm, to a lymphoma dataset, and according to a human expert, our method led to selecting more meaningful features than those commonly used in the clinics. This case study built a basis for discovering interesting new criteria for lymphoma diagnosis. Furthermore, to facilitate the use of our algorithm in other applications, the source code that implements our algorithm was released as FeaLect, a documented R package in CRAN.
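    A simplified Python sketch of the bootstrap-and-combine idea (the released implementation is the FeaLect R package; the scoring rule here, crediting features that enter the Lasso path early, is an illustrative stand-in):

```python
import numpy as np
from sklearn.linear_model import lars_path

def feature_scores(X, y, n_boot=100, seed=0):
    """Bootstrap the training set, trace the Lasso path on each sample,
    and credit each feature by how early it enters the active set."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    scores = np.zeros(p)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # bootstrap sample
        _, active, _ = lars_path(X[idx], y[idx], method="lasso")
        for rank, j in enumerate(active):
            scores[j] += p - rank                 # earlier entry, higher credit
    return scores / n_boot
```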

  3. Development and Validation of a Diabetic Retinopathy Referral Algorithm Based on Single-Field Fundus Photography.

    Directory of Open Access Journals (Sweden)

    Sangeetha Srinivasan

    Full Text Available (i) To develop a simplified algorithm to identify and refer diabetic retinopathy (DR) from single-field retinal images, specifically for sight-threatening diabetic retinopathy, for appropriate care; (ii) to determine the agreement and diagnostic accuracy of the algorithm as a pilot study among optometrists versus "gold standard" (retinal specialist) grading. The severity of DR was scored based on colour photos using a colour-coded algorithm, which included the lesions of DR and the number of quadrants involved. A total of 99 participants underwent training followed by evaluation, and the data of all 99 participants were analyzed. Fifty posterior pole 45 degree retinal images with all stages of DR were presented. Kappa scores (κ), areas under the receiver operating characteristic curves (AUCs), sensitivity and specificity were determined, with further comparison between working optometrists and optometry students. Mean age of the participants was 22 years (range: 19-43 years), 87% being women. Participants correctly identified 91.5% of images that required immediate referral (κ = 0.696), 62.5% of images as requiring review after 6 months (κ = 0.462), and 51.2% of those requiring review after 1 year (κ = 0.532). The sensitivity and specificity of the optometrists were 91% and 78% for immediate referral, 62% and 84% for review after 6 months, and 51% and 95% for review after 1 year, respectively. The AUC was highest (0.855) for immediate referral, second highest (0.824) for review after 1 year, and 0.727 for the review after 6 months criterion. Optometry students performed better than the working optometrists for all grades of referral. The diabetic retinopathy algorithm assessed in this work is a simple and fairly accurate method for appropriate referral based on single-field 45 degree posterior pole retinal images.

  4. Development of a Simple Clinical Risk Score for Early Prediction of Severe Dengue in Adult Patients.

    Directory of Open Access Journals (Sweden)

    Ing-Kit Lee

    , irrespective of the day of illness onset, suggesting that our simple risk score can be easily implemented in resource-limited countries for early prediction of dengue patients at risk of SD, provided that they have rapid dengue confirmation tests. Patients with other acute febrile illnesses or bacterial infections usually have an SD risk score of >1. Thus, these scoring algorithms cannot totally replace good clinical judgement by the physician; most importantly, early differentiation of dengue from other febrile illnesses is critical for appropriate monitoring and management.

  5. Clinical score and arterial oxygen saturation in children with wheezing associated respiratory illness (WARI).

    Science.gov (United States)

    Sritippayawan, S; Deerojanawong, J; Prapphal, N

    2000-10-01

    To determine the correlation between clinical score (based on respiratory rate, chest wall retractions, air entry, wheezing, consciousness and audible wheezing) and arterial oxygen saturation (SaO2: measured by pulse oximetry), as well as the most appropriate total score for predicting hypoxemia in children with wheezing associated respiratory illness (WARI). 70 children (1 month-5 years old) hospitalized in the Department of Pediatrics, Chulalongkorn Hospital with the diagnosis of WARI from January 1, 1996 to December 31, 1996 were studied. Half of them were diagnosed to have acute lower respiratory tract infection (LRI) with wheezing while the remainder had reactive airway disease (RAD). Cross sectional, analytical study. In each group of patients, the clinical score and SaO2 were assessed by the same pediatrician throughout the study. The correlation between the clinical signs and SaO2, as well as the cut-off point of the total score for predicting hypoxemia, were analyzed. The sensitivity, specificity and accuracy of that total score in predicting hypoxemia were also calculated. In both groups of patients (acute LRI with wheezing and RAD), the clinical signs that correlated with SaO2 were wheezing (rs = -0.67 and -0.47, respectively) and chest wall retractions (rs = -0.57 and -0.59, respectively). Total score was also correlated with SaO2 (rs = -0.68 and -0.5, respectively). The cut-off point of the total score for predicting hypoxemia was 4, providing 80 per cent sensitivity in both groups, with accuracy of 74.3 per cent and 80 per cent, respectively. This clinical score may be used to assess the severity of hypoxemia in WARI patients. Wheezing, chest wall retractions and total score correlated well with SaO2. A total score > 4 was most appropriate for predicting hypoxemia in both children with RAD and children with wheezing associated with LRI.

  6. Diagnosis of bacterial vaginosis in a rural setup: comparison of clinical algorithm, smear scoring and culture by semiquantitative technique.

    Science.gov (United States)

    Rao, P S; Devi, S; Shriyan, A; Rajaram, M; Jagdishchandra, K

    2004-01-01

    This study was undertaken to estimate the prevalence of bacterial vaginosis (BV) and other sexually transmitted infections (STIs) in a rural setup and to compare a smear scoring system with culture by a semiquantitative technique. A total of 505 married women, in the sexually active age group of 15-44 years, were selected from three different villages. High vaginal swabs, endocervical swabs, vaginal discharge and blood were collected and processed in the laboratory. An overall reproductive tract infection prevalence of 29% was detected. Endogenous infection was commonly observed (27.92%), and a very low prevalence of STIs (Trichomonas 1.18%, syphilis 0%, gonorrhea 0%) was detected. Diagnosis of BV was possible in 104 (20.5%) women by smear alone and 88 (17.42%) women by semiquantitative culture.

  7. Quantification of the myocardial area at risk using coronary CT angiography and Voronoi algorithm-based myocardial segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Kurata, Akira; Kono, Atsushi; Coenen, Adriaan; Saru-Chelu, Raluca G.; Krestin, Gabriel P. [Erasmus University Medical Center, Department of Radiology, Rotterdam (Netherlands); Sakamoto, Tsuyoshi [AZE inc, Development Division, Chiyoda, Tokyo (Japan); Kido, Teruhito; Mochizuki, Teruhito [Ehime University Graduate School of Medicine, Department of Radiology, Toon, Ehime (Japan); Higashino, Hiroshi [Yotsuba Circulation Clinic, Department of Radiology, Matsuyama, Ehime (Japan); Abe, Mitsunori [Yotsuba Circulation Clinic, Department of Cardiology, Matsuyama, Ehime (Japan); Feyter, Pim J. de; Nieman, Koen [Erasmus University Medical Center, Department of Radiology, Rotterdam (Netherlands); Erasmus University Medical Center, Department of Cardiology, Rotterdam (Netherlands)

    2015-01-15

    The purpose of this study was to estimate the myocardial area at risk (MAAR) using coronary computed tomography angiography (CTA) and Voronoi algorithm-based myocardial segmentation in comparison with single-photon emission computed tomography (SPECT). Thirty-four patients with coronary artery disease underwent 128-slice coronary CTA, stress/rest thallium-201 SPECT, and coronary angiography (CAG). CTA-based MAAR was defined as the sum of all CAG stenosis (>50 %) related territories (the ratio of the left ventricular volume). Using automated quantification software (17-segment model, 5-point scale), SPECT-based MAAR was defined as the number of segments with a score above zero as compared to the total 17 segments by summed stress score (SSS), difference (SDS) score map, and comprehensive SPECT interpretation with either SSS or SDS best correlating CAG findings (SSS/SDS). Results were compared using Pearson's correlation coefficient. Forty-nine stenoses were observed in 102 major coronary territories. Mean value of CTA-based MAAR was 28.3 ± 14.0 %. SSS-based, SDS-based, and SSS/SDS-based MAAR was 30.1 ± 6.1 %, 20.1 ± 15.8 %, and 26.8 ± 15.7 %, respectively. CTA-based MAAR was significantly related to SPECT-based MAAR (r = 0.531 for SSS; r = 0.494 for SDS; r = 0.814 for SSS/SDS; P < 0.05 in each). CTA-based Voronoi algorithm myocardial segmentation reliably quantifies SPECT-based MAAR. (orig.)
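
    For concreteness, the quoted definition of SPECT-based MAAR reduces to a fraction over the 17-segment model. A toy computation (the segment scores below are invented, not study data) might look like:

```python
# Illustrative computation of the SPECT-based MAAR definition quoted above:
# the fraction of 17-segment-model segments whose score is above zero.
# Segment values are made up for the example.
sss_per_segment = [0, 2, 1, 0, 0, 3, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0, 1]
maar = sum(1 for s in sss_per_segment if s > 0) / len(sss_per_segment)
print(f"SPECT-based MAAR: {maar:.1%}")   # 6/17 ~ 35.3%
```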

  8. Quantification of the myocardial area at risk using coronary CT angiography and Voronoi algorithm-based myocardial segmentation

    International Nuclear Information System (INIS)

    Kurata, Akira; Kono, Atsushi; Coenen, Adriaan; Saru-Chelu, Raluca G.; Krestin, Gabriel P.; Sakamoto, Tsuyoshi; Kido, Teruhito; Mochizuki, Teruhito; Higashino, Hiroshi; Abe, Mitsunori; Feyter, Pim J. de; Nieman, Koen

    2015-01-01

    The purpose of this study was to estimate the myocardial area at risk (MAAR) using coronary computed tomography angiography (CTA) and Voronoi algorithm-based myocardial segmentation in comparison with single-photon emission computed tomography (SPECT). Thirty-four patients with coronary artery disease underwent 128-slice coronary CTA, stress/rest thallium-201 SPECT, and coronary angiography (CAG). CTA-based MAAR was defined as the sum of all CAG stenosis (>50 %) related territories (the ratio of the left ventricular volume). Using automated quantification software (17-segment model, 5-point scale), SPECT-based MAAR was defined as the number of segments with a score above zero as compared to the total 17 segments by summed stress score (SSS), difference (SDS) score map, and comprehensive SPECT interpretation with either SSS or SDS best correlating CAG findings (SSS/SDS). Results were compared using Pearson's correlation coefficient. Forty-nine stenoses were observed in 102 major coronary territories. Mean value of CTA-based MAAR was 28.3 ± 14.0 %. SSS-based, SDS-based, and SSS/SDS-based MAAR was 30.1 ± 6.1 %, 20.1 ± 15.8 %, and 26.8 ± 15.7 %, respectively. CTA-based MAAR was significantly related to SPECT-based MAAR (r = 0.531 for SSS; r = 0.494 for SDS; r = 0.814 for SSS/SDS; P < 0.05 in each). CTA-based Voronoi algorithm myocardial segmentation reliably quantifies SPECT-based MAAR. (orig.)

  9. Dynamic programming algorithms for biological sequence comparison.

    Science.gov (United States)

    Pearson, W R; Miller, W

    1992-01-01

    Efficient dynamic programming algorithms are available for a broad class of protein and DNA sequence comparison problems. These algorithms require computer time proportional to the product of the lengths of the two sequences being compared [O(N²)] but require memory space proportional only to the sum of these lengths [O(N)]. Although the requirement for O(N²) time limits use of the algorithms to the largest computers when searching protein and DNA sequence databases, many other applications of these algorithms, such as calculation of distances for evolutionary trees and comparison of a new sequence to a library of sequence profiles, are well within the capabilities of desktop computers. In particular, the results of library searches with rapid searching programs, such as FASTA or BLAST, should be confirmed by performing a rigorous optimal alignment. Whereas rapid methods do not overlook significant sequence similarities, FASTA limits the number of gaps that can be inserted into an alignment, so that a rigorous alignment may extend the alignment substantially in some cases. BLAST does not allow gaps in the local regions that it reports; a calculation that allows gaps is very likely to extend the alignment substantially. Although a Monte Carlo evaluation of the statistical significance of a similarity score with a rigorous algorithm is much slower than the heuristic approach used by the RDF2 program, the dynamic programming approach should take less than 1 hr on a 386-based PC or desktop Unix workstation. For descriptive purposes, we have limited our discussion to methods for calculating similarity scores and distances that use gap penalties of the form g = rk. Nevertheless, programs for the more general case (g = q+rk) are readily available. Versions of these programs that run either on Unix workstations, IBM-PC class computers, or the Macintosh can be obtained from either of the authors.
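
    As a rough illustration of the quadratic-time, linear-space scoring these algorithms use, here is a minimal global-alignment sketch with a linear gap penalty g = rk; the match/mismatch values and the penalty r are illustrative, not taken from the paper:

```python
# Minimal sketch: global alignment score in O(len_a * len_b) time and
# O(min(len_a, len_b)) space, with a linear gap penalty g = r * k.
# Scoring values (match/mismatch/r) are illustrative.

def global_alignment_score(a, b, match=1, mismatch=-1, r=-2):
    if len(b) > len(a):                 # keep the shorter sequence in the row
        a, b = b, a
    prev = [j * r for j in range(len(b) + 1)]   # row for the empty prefix of a
    for i in range(1, len(a) + 1):
        curr = [i * r] + [0] * len(b)
        for j in range(1, len(b) + 1):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            curr[j] = max(diag, prev[j] + r, curr[j - 1] + r)
        prev = curr
    return prev[len(b)]

print(global_alignment_score("GATTACA", "GCATGCU"))
```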

  10. A novel computer algorithm for modeling and treating mandibular fractures: A pilot study.

    Science.gov (United States)

    Rizzi, Christopher J; Ortlip, Timothy; Greywoode, Jewel D; Vakharia, Kavita T; Vakharia, Kalpesh T

    2017-02-01

    To describe a novel computer algorithm that can model mandibular fracture repair. To evaluate the algorithm as a tool to model mandibular fracture reduction and hardware selection. Retrospective pilot study combined with cross-sectional survey. A computer algorithm utilizing Aquarius Net (TeraRecon, Inc, Foster City, CA) and Adobe Photoshop CS6 (Adobe Systems, Inc, San Jose, CA) was developed to model mandibular fracture repair. Ten different fracture patterns were selected from nine patients who had already undergone mandibular fracture repair. The preoperative computed tomography (CT) images were processed with the computer algorithm to create virtual images that matched the actual postoperative three-dimensional CT images. A survey comparing the true postoperative image with the virtual postoperative images was created and administered to otolaryngology resident and attending physicians. They were asked to rate on a scale from 0 to 10 (0 = completely different; 10 = identical) the similarity between the two images in terms of the fracture reduction and fixation hardware. Ten mandible fracture cases were analyzed and processed. There were 15 survey respondents. The mean score for overall similarity between the images was 8.41 ± 0.91; the mean score for similarity of fracture reduction was 8.61 ± 0.98; and the mean score for hardware appearance was 8.27 ± 0.97. There were no significant differences between attending and resident responses. There were no significant differences based on fracture location. This computer algorithm can accurately model mandibular fracture repair. Images created by the algorithm are highly similar to true postoperative images. The algorithm can potentially assist a surgeon planning mandibular fracture repair. Level of evidence: 4. Laryngoscope, 127:331-336, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  11. Improved collaborative filtering recommendation algorithm of similarity measure

    Science.gov (United States)

    Zhang, Baofu; Yuan, Baoping

    2017-05-01

    The collaborative filtering recommendation algorithm is one of the most widely used recommendation algorithms in personalized recommender systems. The key is to find the nearest-neighbor set of the active user by using a similarity measure. However, traditional similarity measures mainly focus on the similarity of the items users have rated in common, but ignore the relationship between those common items and all the items a user rates. Moreover, because the rating matrix is very sparse, the traditional collaborative filtering recommendation algorithm is not highly efficient. In order to obtain better accuracy, based on consideration of the common preference between users, the difference in rating scale and the scores of common items, this paper presents an improved similarity measure method, and based on this method, a collaborative filtering recommendation algorithm based on similarity improvement is proposed. Experimental results show that the algorithm can effectively improve the quality of recommendation, thus alleviating the impact of data sparseness.
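
    The abstract does not give the exact formula, but the idea it describes can be sketched as follows: a Pearson similarity computed on co-rated items, damped by how large the co-rated set is relative to everything both users rated. The weighting choice and all names below are assumptions, not the paper's method:

```python
# Hedged sketch of the idea described above: damp a co-rated-items Pearson
# similarity by how large the co-rated set is relative to ALL items each
# user rated. Formula and names are illustrative.
import math

def improved_similarity(ratings_u, ratings_v):
    common = set(ratings_u) & set(ratings_v)
    if len(common) < 2:
        return 0.0
    mu = sum(ratings_u[i] for i in common) / len(common)
    mv = sum(ratings_v[i] for i in common) / len(common)
    num = sum((ratings_u[i] - mu) * (ratings_v[i] - mv) for i in common)
    den = math.sqrt(sum((ratings_u[i] - mu) ** 2 for i in common)) * \
          math.sqrt(sum((ratings_v[i] - mv) ** 2 for i in common))
    pearson = num / den if den else 0.0
    # weight: co-rated items relative to everything both users rated
    weight = 2 * len(common) / (len(ratings_u) + len(ratings_v))
    return weight * pearson

u = {"i1": 5, "i2": 3, "i3": 4, "i4": 1}
v = {"i1": 4, "i2": 2, "i5": 5}
print(improved_similarity(u, v))   # Pearson of 1.0 damped to ~0.57
```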

  12. Comparing the Scoring of Human Decomposition from Digital Images to Scoring Using On-site Observations.

    Science.gov (United States)

    Dabbs, Gretchen R; Bytheway, Joan A; Connor, Melissa

    2017-09-01

    When in-person assessment of human decomposition is not possible in forensic casework or empirical research, the sensible substitution is color photographic images. To date, no research has confirmed the utility of color photographic images as a proxy for in situ observation of the level of decomposition. Sixteen observers scored photographs of 13 human cadavers in varying decomposition stages (PMI 2-186 days) using the Total Body Score system (total n = 929 observations). The on-site TBS was compared with recorded observations from digital color images using a paired samples t-test. The average difference between on-site and photographic observations was -0.20 (t = -1.679, df = 928, p = 0.094). Individually, only two observers, both students, scored differently from the on-site assessments. These results suggest that scoring of human decomposition based on digital images can be substituted for assessments based on observation of the corpse in situ, when necessary. © 2017 American Academy of Forensic Sciences.

  13. Clinical use of the ABO-Scoring Index: reliability and subtraction frequency.

    Science.gov (United States)

    Lieber, William S; Carlson, Sean K; Baumrind, Sheldon; Poulton, Donald R

    2003-10-01

    This study tested the reliability and subtraction frequency of the study model-scoring system of the American Board of Orthodontics (ABO). We used a sample of 36 posttreatment study models that were selected randomly from six different orthodontic offices. Intrajudge and interjudge reliability was calculated using nonparametric statistics (Spearman rank coefficient, Wilcoxon, Kruskal-Wallis, and Mann-Whitney tests). We found differences ranging from 3 to 6 subtraction points (total score) for intrajudge scoring between two sessions. For overall total ABO score, the average correlation was .77. Intrajudge correlation was greatest for occlusal relationships and least for interproximal contacts. Interjudge correlation for ABO score averaged r = .85. Correlation was greatest for buccolingual inclination and least for overjet. The data show that some judges, on average, were much more lenient than others and that this resulted in a range of total scores between 19.7 and 27.5. Most of the deductions were found in the buccal segments and most were related to the second molars. We present these findings in the context of clinicians preparing for the ABO phase III examination and for orthodontists in their ongoing evaluation of clinical results.

  14. Outcomes of Total Knee Arthroplasty in Patients With Poliomyelitis.

    Science.gov (United States)

    Gan, Zhi-Wei Jonathan; Pang, Hee Nee

    2016-11-01

    We report our experience with outcomes of total knee arthroplasty in patients with poliomyelitis in an Asian population. Sixteen total knee replacements in 14 patients with polio-affected knees were followed up for at least 18 months. Follow-up assessment included scoring with the American Knee Society Score (AKSS), Oxford knee score, and Short Form 36 Health Survey scores. The mean AKSS improved from 25.59 preoperatively to 82.94 at 24 months, with greater improvement in the knee score. The mean Oxford knee score improved from 40.82 preoperatively to 20.53 at 24 months. The mean AKSS pain score rose from 2.35 to 47.66 at 24 months. The Short Form 36 Health Survey physical functioning and bodily pain scores improved for all patients. Primary total knee arthroplasty of poliomyelitis-affected limbs shows good outcomes, improving quality of life and decreasing pain. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Sounds unheard of evolutionary algorithms as creative tools for the contemporary composer

    DEFF Research Database (Denmark)

    Dahlstedt, Palle

    2004-01-01

    Evolutionary algorithms are studied as tools for generating novel musical material in the form of musical scores and synthesized sounds. The choice of genetic representation defines a space of potential music. This space is explored using evolutionary algorithms, in search of useful musical mater...... composed with the tools described in the thesis are presented....

  16. Metal artifact reduction image reconstruction algorithm for CT of implanted metal orthopedic devices: a work in progress

    International Nuclear Information System (INIS)

    Liu, Patrick T.; Pavlicek, William P.; Peter, Mary B.; Roberts, Catherine C.; Paden, Robert G.; Spangehl, Mark J.

    2009-01-01

    Despite recent advances in CT technology, metal orthopedic implants continue to cause significant artifacts on many CT exams, often obscuring diagnostic information. We performed this prospective study to evaluate the effectiveness of an experimental metal artifact reduction (MAR) image reconstruction program for CT. We examined image quality on CT exams performed in patients with hip arthroplasties as well as other types of implanted metal orthopedic devices. The exam raw data were reconstructed using two different methods, the standard filtered backprojection (FBP) program and the MAR program. Images were evaluated for quality of the metal-cement-bone interfaces, trabeculae ≤1 cm from the metal, trabeculae 5 cm away from the metal, streak artifact, and overall soft tissue detail. The Wilcoxon Rank Sum test was used to compare the image scores from the large and small prostheses. Interobserver agreement was calculated. When all patients were grouped together, the MAR images showed mild to moderate improvement over the FBP images. However, when the cases were divided by implant size, the MAR images consistently received higher image quality scores than the FBP images for large metal implants (total hip prostheses). For small metal implants (screws, plates, staples), conversely, the MAR images received lower image quality scores than the FBP images due to blurring artifact. The difference of image scores for the large and small implants was significant (p=0.002). Interobserver agreement was found to be high for all measures of image quality (k > 0.9). The experimental MAR reconstruction algorithm significantly improved CT image quality for patients with large metal implants. However, the MAR algorithm introduced blurring artifact that reduced image quality with small metal implants. (orig.)

  17. Lower Bounds to the Reliabilities of Factor Score Estimators.

    Science.gov (United States)

    Hessen, David J

    2016-10-06

    Under the general common factor model, the reliabilities of factor score estimators might be of more interest than the reliability of the total score (the unweighted sum of item scores). In this paper, lower bounds to the reliabilities of Thurstone's factor score estimators, Bartlett's factor score estimators, and McDonald's factor score estimators are derived and conditions are given under which these lower bounds are equal. The relative performance of the derived lower bounds is studied using classic example data sets. The results show that estimates of the lower bounds to the reliabilities of Thurstone's factor score estimators are greater than or equal to the estimates of the lower bounds to the reliabilities of Bartlett's and McDonald's factor score estimators.

  18. Benchmarking homogenization algorithms for monthly data

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2012-01-01

    Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve

  19. Enhancing Accuracy of Sediment Total Load Prediction Using Evolutionary Algorithms (Case Study: Gotoorchay River

    Directory of Open Access Journals (Sweden)

    K. Roshangar

    2016-09-01

    Full Text Available Introduction: Exact prediction of the sediment rate transported by rivers is of utmost importance in water resources projects. Erosion and sediment transport are among the most complex hydrodynamic processes. Although different studies have been developed on the application of intelligent models based on neural networks, such models are not widely used because of the lack of explicitness and the complexity governing the choice and architecture of a proper network. In this study, a genetic expression programming model (an important branch of evolutionary algorithms) for predicting sediment load is selected and investigated as an intelligent approach, along with other known classical and empirical methods such as Larsen's equation, Engelund-Hansen's equation and Bagnold's equation. Materials and Methods: In order to improve explicit prediction of the sediment load of Gotoorchay, located in the Aras catchment, northwestern Iran (latitude: 38°24´33.3˝ and longitude: 44°46´13.2˝), genetic programming (GP) and the genetic algorithm (GA) were applied. Moreover, semi-empirical models for predicting total sediment load and the rating curve have been used. Finally all the methods were compared and the best ones were introduced. Two statistical measures were used to compare the performance of the different models, namely root mean square error (RMSE) and determination coefficient (DC). RMSE and DC indicate the discrepancy between the observed and computed values. Results and Discussion: The statistical results obtained from the analysis of the genetic programming method for both selected model groups indicated that model 4, which includes only the discharge of the river, had the highest DC and the lowest RMSE of the studied models in the testing stage (DC = 0.907, RMSE = 0.067). Although several parameters were applied in the other models, those models were complicated and gave weak predictions. Our results showed that the model 9

  20. Diagnostic performance of line-immunoassay based algorithms for incident HIV-1 infection

    Directory of Open Access Journals (Sweden)

    Schüpbach Jörg

    2012-04-01

    Full Text Available Abstract. Background: Serologic testing algorithms for recent HIV seroconversion (STARHS) provide important information for HIV surveillance. We have previously demonstrated that a patient's antibody reaction pattern in a confirmatory line immunoassay (INNO-LIA™ HIV I/II Score) provides information on the duration of infection, which is unaffected by clinical, immunological and viral variables. In this report we set out to determine the diagnostic performance of Inno-Lia algorithms for identifying incident infections in patients with known duration of infection, and evaluated the algorithms in annual cohorts of HIV notifications. Methods: Diagnostic sensitivity was determined in 527 treatment-naive patients infected for up to 12 months. Specificity was determined in 740 patients infected for longer than 12 months. Plasma was tested by Inno-Lia and classified as either incident or older infection. Results: The 10 best algorithms had a mean raw sensitivity of 59.4% and a mean specificity of 95.1%. Adjustment for overrepresentation of patients in the first quarter year of infection further reduced the sensitivity. In the preferred model, the mean adjusted sensitivity was 37.4%. Application of the 10 best algorithms to four annual cohorts of HIV-1 notifications totalling 2'595 patients yielded a mean IIR of 0.35 in 2005/6 (baseline) and of 0.45, 0.42 and 0.35 in 2008, 2009 and 2010, respectively. The increase between baseline and 2008 and the ensuing decreases were highly significant. Other adjustment models yielded different absolute IIR, although the relative changes between the cohorts were identical for all models. Conclusions: The method can be used for comparing IIR in annual cohorts of HIV notifications. The use of several different algorithms in combination, each with its own sensitivity and specificity to detect incident infection, is advisable as this reduces the impact of individual imperfections stemming primarily from relatively low sensitivities and

  1. Modifications of the National Early Warning Score for patients with chronic respiratory disease

    DEFF Research Database (Denmark)

    Pedersen, N. E.; Rasmussen, L. S.; Petersen, J. A.

    2018-01-01

    BACKGROUND: The National Early Warning Score (NEWS) uses physiological variables to detect deterioration in hospitalized patients. However, patients with chronic respiratory disease may have abnormal variables not requiring interventions. We studied how the Capital Region of Denmark NEWS Override System (CROS), the Chronic Respiratory Early Warning Score (CREWS) and the Salford NEWS (S-NEWS) affected NEWS total scores and NEWS performance. METHODS: In an observational study, we included patients with chronic respiratory disease. The frequency of use of CROS and the NEWS total score changes caused ... and specialist consultation' total score intervals to lower intervals. CONCLUSION: Capital Region of Denmark NEWS Override System was frequently used in patients with chronic respiratory disease. CROS, CREWS and S-NEWS reduced sensitivity for 48-h mortality and ICU admission. Using the methodology prevalent ...

  2. Multi-User Identification-Based Eye-Tracking Algorithm Using Position Estimation

    Directory of Open Access Journals (Sweden)

    Suk-Ju Kang

    2016-12-01

    Full Text Available This paper proposes a new multi-user eye-tracking algorithm using position estimation. Conventional eye-tracking algorithms are typically suitable only for a single user, and thereby cannot be used for a multi-user system. Even though they can be used to track the eyes of multiple users, their detection accuracy is low and they cannot identify multiple users individually. The proposed algorithm solves these problems and enhances the detection accuracy. Specifically, the proposed algorithm adopts a classifier to detect faces for the red, green, and blue (RGB and depth images. Then, it calculates features based on the histogram of the oriented gradient for the detected facial region to identify multiple users, and selects the template that best matches the users from a pre-determined face database. Finally, the proposed algorithm extracts the final eye positions based on anatomical proportions. Simulation results show that the proposed algorithm improved the average F1 score by up to 0.490, compared with benchmark algorithms.

  3. Electrical Resistance Tomography for Visualization of Moving Objects Using a Spatiotemporal Total Variation Regularization Algorithm

    Directory of Open Access Journals (Sweden)

    Bo Chen

    2018-05-01

    Full Text Available Electrical resistance tomography (ERT) has been considered as a data collection and image reconstruction method in many multi-phase flow application areas due to its advantages of high speed, low cost and being non-invasive. In order to improve the quality of the reconstructed images, the total variation algorithm attracts abundant attention due to its ability to handle large piecewise-constant and discontinuous conductivity distributions. In industrial process tomography (IPT), techniques such as ERT have been used to extract important flow measurement information. For a moving object inside a pipe, a velocity profile can be calculated from the cross correlation between signals generated from ERT sensors. Many previous studies have used two sets of 2D ERT measurements based on pixel-pixel cross correlation, which requires two ERT systems. In this paper, a method for carrying out flow velocity measurement using a single ERT system is proposed. A novel spatiotemporal total variation regularization approach is utilised to exploit sparsity both in space and time in 4D, and a voxel-voxel cross correlation method is adopted for measurement of the flow profile. Results show that the velocity profile can be calculated with a single ERT system and that the volume fraction and movement can be monitored using the proposed method. Both semi-dynamic experimental and static simulation studies verify the suitability of the proposed method. For the in-plane velocity profile, a 3D image based on temporal 2D images produces a velocity profile with less than 1% error, and a 4D image for 3D velocity profiling shows an error of 4%.
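
    The cross-correlation step described above can be sketched independently of the ERT reconstruction itself: estimate the lag that maximizes the correlation between signals from two measurement planes, then divide the plane spacing by that lag. The signals, frame rate and spacing below are synthetic stand-ins, not ERT data:

```python
# Sketch of the voxel-voxel cross-correlation idea: estimate the transit
# time of a flow feature between two planes from the lag that maximizes
# their cross-correlation, then velocity = spacing / lag.
import numpy as np

fs = 100.0            # frames per second of the ERT system (assumed)
plane_spacing = 0.05  # metres between the two sensing planes (assumed)

t = np.arange(0, 2, 1 / fs)
upstream = np.exp(-((t - 0.5) ** 2) / 0.002)        # a passing conductivity blob
downstream = np.exp(-((t - 0.7) ** 2) / 0.002)      # same blob, 0.2 s later

xcorr = np.correlate(downstream, upstream, mode="full")
lag = (np.argmax(xcorr) - (len(t) - 1)) / fs        # seconds
print(f"velocity ~ {plane_spacing / lag:.3f} m/s")  # ~ 0.25 m/s
```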

  4. Vehicle routing problem with time windows using natural inspired algorithms

    Science.gov (United States)

    Pratiwi, A. B.; Pratama, A.; Sa’diyah, I.; Suprajitno, H.

    2018-03-01

    The process of distributing goods needs a strategy that minimizes the total cost spent on operational activities. However, several constraints have to be satisfied, namely the capacity of the vehicles and the service time of the customers. This makes the Vehicle Routing Problem with Time Windows (VRPTW) a complex constrained problem. This paper proposes nature-inspired algorithms for dealing with the constraints of VRPTW, involving the Bat Algorithm and Cat Swarm Optimization. The Bat Algorithm is hybridized with Simulated Annealing: the worst solution of the Bat Algorithm is replaced by the solution from Simulated Annealing. Cat Swarm Optimization, an algorithm based on the behavior of cats, is improved using the Crow Search Algorithm to achieve simpler and faster convergence. From the computational results, these algorithms give good performance in finding the minimized total distance. A higher population number leads to better computational performance. The improved Cat Swarm Optimization with Crow Search gives better performance than the hybridization of the Bat Algorithm and Simulated Annealing in dealing with big data.
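
    The two constraint families named above (vehicle capacity and customer time windows) can be made concrete as a route feasibility check; the data layout below is hypothetical and travel time is simplified to Euclidean distance:

```python
# Minimal sketch of the two VRPTW constraints: vehicle capacity and
# customer time windows, as a route feasibility check.
import math

def route_feasible(route, customers, depot=(0.0, 0.0), capacity=100.0):
    load, time, pos = 0.0, 0.0, depot
    for c in route:
        demand, (x, y), (ready, due) = customers[c]
        load += demand
        if load > capacity:
            return False                       # capacity constraint violated
        time += math.dist(pos, (x, y))         # arrive after travelling
        time = max(time, ready)                # wait if we arrive early
        if time > due:
            return False                       # time-window constraint violated
        pos = (x, y)
    return True

customers = {                                  # demand, coords, (ready, due)
    "A": (30, (1.0, 0.0), (0.0, 5.0)),
    "B": (50, (1.0, 2.0), (2.0, 8.0)),
}
print(route_feasible(["A", "B"], customers))   # True
```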

  5. Reliability of Modern Scores to Predict Long-Term Mortality After Isolated Aortic Valve Operations.

    Science.gov (United States)

    Barili, Fabio; Pacini, Davide; D'Ovidio, Mariangela; Ventura, Martina; Alamanni, Francesco; Di Bartolomeo, Roberto; Grossi, Claudio; Davoli, Marina; Fusco, Danilo; Perucci, Carlo; Parolari, Alessandro

    2016-02-01

    Contemporary scores for estimating perioperative death have been proposed to also predict long-term death. The aim of the study was to evaluate the performance of the updated European System for Cardiac Operative Risk Evaluation II, The Society of Thoracic Surgeons Predicted Risk of Mortality score, and the Age, Creatinine, Left Ventricular Ejection Fraction score for predicting long-term mortality in a contemporary cohort of isolated aortic valve replacement (AVR). We also sought to develop for each score a simple algorithm based on predicted perioperative risk to predict long-term survival. Complete data on 1,444 patients who underwent isolated AVR in a 7-year period were retrieved from three prospective institutional databases and linked with the Italian Tax Register Information System. Data were evaluated with performance analyses and time-to-event semiparametric regression. Survival was 83.0% ± 1.1% at 5 years and 67.8% ± 1.9% at 8 years. Discrimination and calibration of all three scores both worsened for prediction of death at 1 year and 5 years. Nonetheless, a significant relationship was found between long-term survival and quartiles of all three scores; the hazard ratio was 1.34 (95% CI, 1.28 to 1.40) for The Society of Thoracic Surgeons score and 1.08 (95% CI, 1.06 to 1.10) for the Age, Creatinine, Left Ventricular Ejection Fraction score. The predicted risk generated by the European System for Cardiac Operative Risk Evaluation II, The Society of Thoracic Surgeons score, and Age, Creatinine, Left Ventricular Ejection Fraction scores cannot also be considered a direct estimate of the long-term risk of death. Nonetheless, the three scores can be used to derive an estimate of long-term risk of death in patients who undergo isolated AVR with the use of a simple algorithm. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  6. Flexible and efficient genome tiling design with penalized uniqueness score

    Directory of Open Access Journals (Sweden)

    Du Yang

    2012-12-01

    Full Text Available Abstract Background As a powerful tool in whole genome analysis, the tiling array has been widely used in answering many genomic questions. Now it can also serve as a capture device for library preparation in popular high-throughput sequencing experiments. Thus, a flexible and efficient tiling array design approach is still needed and could assist in various types and scales of transcriptomic experiments. Results In this paper, we address issues and challenges in designing probes suitable for tiling array applications and targeted sequencing. In particular, we define the penalized uniqueness score, which serves as a controlling criterion to eliminate potential cross-hybridization, and a flexible tiling array design pipeline. Unlike BLAST or simple suffix array based methods, computing and using our uniqueness measurement can be more efficient for large scale design and requires less memory. The parameters provided could assist in various types of genomic tiling task. In addition, using both commercial array data and experimental data we show that, contrary to previous claims, palindromic sequences exhibit relatively lower uniqueness. Conclusions Our proposed penalized uniqueness score could serve as a better indicator for cross-hybridization with higher sensitivity and specificity, giving more control of expected array quality. The flexible tiling design algorithm incorporating the penalized uniqueness score was shown to give higher coverage and resolution. The package to calculate the penalized uniqueness score and the described probe selection algorithm are implemented as a Perl program, which is freely available at http://www1.fbn-dummerstorf.de/en/forschung/fbs/fb3/paper/2012-yang-1/OTAD.v1.1.tar.gz.

  7. An Efficient Algorithm for Unconstrained Optimization

    Directory of Open Access Journals (Sweden)

    Sergio Gerardo de-los-Cobos-Silva

    2015-01-01

    Full Text Available This paper presents an original and efficient PSO algorithm, which is divided into three phases: (1) stabilization, (2) breadth-first search, and (3) depth-first search. The proposed algorithm, called PSO-3P, was tested with 47 benchmark continuous unconstrained optimization problems, on a total of 82 instances. The numerical results show that the proposed algorithm is able to reach the global optimum. This work mainly focuses on unconstrained optimization problems from 2 to 1,000 variables.

  8. Diagnosis of bacterial vaginosis in a rural setup: Comparison of clinical algorithm, smear scoring and culture by semiquantitative technique

    Directory of Open Access Journals (Sweden)

    Rao P

    2004-01-01

    Full Text Available This study was undertaken to estimate the prevalence of bacterial vaginosis (BV) and other sexually transmitted infections (STIs) in a rural setup and to compare the smear scoring system to that of culture by semiquantitative technique. A total of 505 married women, who were in the sexually active age group of 15-44 years, were selected from three different villages. High vaginal swabs, endocervical swabs, vaginal discharge and blood were collected and processed in the laboratory. An overall prevalence of 29% reproductive tract infection was detected. Endogenous infection was commonly observed (27.92%), and a very low prevalence of STIs (Trichomonas 1.18%, Syphilis 0%, Gonorrhea 0%) was detected. Diagnosis of BV was possible in 104 (20.5%) women by smear alone and 88 (17.42%) women by semiquantitative culture.

  9. Hip disability and osteoarthritis outcome score (HOOS)--validity and responsiveness in total hip replacement

    DEFF Research Database (Denmark)

    Nilsdotter, Anna K; Lohmander, L Stefan; Klässbo, Maria

    2003-01-01

    The aim of the study was to evaluate if physical functions usually associated with a younger population were of importance for an older population, and to construct an outcome measure for hip osteoarthritis with improved responsiveness compared to the Western Ontario McMaster osteoarthritis score...

  10. An Algorithm for Creating Virtual Controls Using Integrated and Harmonized Longitudinal Data.

    Science.gov (United States)

    Hansen, William B; Chen, Shyh-Huei; Saldana, Santiago; Ip, Edward H

    2018-06-01

    We introduce a strategy for creating virtual control groups: cases generated through computer algorithms that, when aggregated, may serve as experimental comparators where live controls are difficult to recruit, such as when programs are widely disseminated and randomization is not feasible. We integrated and harmonized data from eight archived longitudinal adolescent-focused data sets spanning the decades from 1980 to 2010. Collectively, these studies examined numerous psychosocial variables and assessed past 30-day alcohol, cigarette, and marijuana use. Additional treatment and control group data from two archived randomized control trials were used to test the virtual control algorithm. Both randomized controlled trials (RCTs) assessed intentions, normative beliefs, and values as well as past 30-day alcohol, cigarette, and marijuana use. We developed an algorithm that used percentile scores from the integrated data set to create age- and gender-specific latent psychosocial scores. The algorithm matched treatment case observed psychosocial scores at pretest to create a virtual control case that figuratively "matured" based on age-related changes, holding the virtual case's percentile constant. Virtual controls matched treatment case occurrence, eliminating differential attrition as a threat to validity. Virtual case substance use was estimated from the virtual case's latent psychosocial score using logistic regression coefficients derived from analyzing the treatment group. Averaging across virtual cases created group estimates of prevalence. Two criteria were established to evaluate the adequacy of virtual control cases: (1) virtual control group pretest drug prevalence rates should match those of the treatment group and (2) virtual control group patterns of drug prevalence over time should match live controls. The algorithm successfully matched pretest prevalence for both RCTs. Increases in prevalence were observed, although there were discrepancies between live
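
    The percentile-holding step described above can be sketched as follows; the reference distributions and logistic coefficients here are invented purely for illustration, not taken from the archived data sets:

```python
# Hedged sketch: a virtual control keeps the treatment case's pretest
# percentile as it "matures", taking the reference distribution's value at
# that percentile for each later age; a logistic model (fit on the
# treatment arm in the paper) maps the latent score to use probability.
import numpy as np

rng = np.random.default_rng(0)
reference = {age: rng.normal(loc=0.1 * age, scale=1.0, size=5000)
             for age in (12, 13, 14)}          # pooled archive scores by age

def virtual_control_trajectory(pretest_score, pretest_age, ages):
    pct = (reference[pretest_age] < pretest_score).mean() * 100
    return {a: np.percentile(reference[a], pct) for a in ages}

def use_probability(latent, b0=-2.0, b1=0.8):  # illustrative coefficients
    return 1 / (1 + np.exp(-(b0 + b1 * latent)))

traj = virtual_control_trajectory(pretest_score=1.4, pretest_age=12,
                                  ages=(12, 13, 14))
for age, score in traj.items():
    print(age, round(float(score), 2), round(float(use_probability(score)), 3))
```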

  11. The K tree score: quantification of differences in the relative branch length and topology of phylogenetic trees.

    Science.gov (United States)

    Soria-Carrasco, Víctor; Talavera, Gerard; Igea, Javier; Castresana, Jose

    2007-11-01

    We introduce a new phylogenetic comparison method that measures overall differences in the relative branch length and topology of two phylogenetic trees. To do this, the algorithm first scales one of the trees to have a global divergence as similar as possible to the other tree. Then, the branch length distance, which takes differences in topology and branch lengths into account, is applied to the two trees. We thus obtain the minimum branch length distance or K tree score. Two trees with very different relative branch lengths get a high K score whereas two trees that follow a similar among-lineage rate variation get a low score, regardless of the overall rates in both trees. There are several applications of the K tree score, two of which are explained here in more detail. First, this score allows the evaluation of the performance of phylogenetic algorithms, not only with respect to their topological accuracy, but also with respect to the reproduction of a given branch length variation. In a second example, we show how the K score allows the selection of orthologous genes by choosing those that better follow the overall shape of a given reference tree. http://molevol.ibmb.csic.es/Ktreedist.html
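
    For two trees with identical topology, the scaling-then-distance computation reduces to a one-parameter least-squares fit; the sketch below makes that concrete under that simplifying assumption (the full method also handles topology differences):

```python
# Sketch of the K tree score for two trees given as lists of branch lengths
# over matched branches. The scale factor minimizing sum((K*a_i - b_i)^2)
# is K = sum(a_i*b_i)/sum(a_i^2); the score is the resulting branch length
# distance. Simplified from the full method.
import math

def k_tree_score(ref_branches, cmp_branches):
    k = sum(a * b for a, b in zip(cmp_branches, ref_branches)) / \
        sum(a * a for a in cmp_branches)
    dist = math.sqrt(sum((k * a - b) ** 2
                         for a, b in zip(cmp_branches, ref_branches)))
    return k, dist

ref = [0.10, 0.20, 0.05, 0.30]
cmp_ = [0.22, 0.41, 0.09, 0.61]            # ~2x overall rate, similar shape
print(k_tree_score(ref, cmp_))             # K ~ 0.49, score near 0
```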

  12. A comparative study on assessment procedures and metric properties of two scoring systems of the Coma Recovery Scale-Revised items: standard and modified scores.

    Science.gov (United States)

    Sattin, Davide; Lovaglio, Piergiorgio; Brenna, Greta; Covelli, Venusia; Rossi Sebastiano, Davide; Duran, Dunja; Minati, Ludovico; Giovannetti, Ambra Mara; Rosazza, Cristina; Bersano, Anna; Nigri, Anna; Ferraro, Stefania; Leonardi, Matilde

    2017-09-01

    The study compared the metric characteristics (discriminant capacity and factorial structure) of two different methods for scoring the items of the Coma Recovery Scale-Revised, and analysed scale scores collected using the standard assessment procedure and a newly proposed method. Cross-sectional design/methodological study. Inpatient neurological unit. A total of 153 patients with disorders of consciousness were consecutively enrolled between 2011 and 2013. All patients were assessed with the Coma Recovery Scale-Revised using standard (rater 1) and inverted (rater 2) procedures. Coma Recovery Scale-Revised score, number of cognitive and reflex behaviours and diagnosis. Regarding patient assessment, rater 1 using the standard procedure and rater 2 using the inverted procedure obtained the same best scores for each subscale of the Coma Recovery Scale-Revised for all patients, so no clinical (or statistical) difference was found between the two procedures. In 11 patients (7.7%), rater 2 noted that some Coma Recovery Scale-Revised codified behavioural responses were not found during assessment, although higher response categories were present. A total of 51 (36%) patients presented the same Coma Recovery Scale-Revised score of 7 or 8 using the standard score, whereas no overlap was found using the modified score. Unidimensionality was confirmed for both scoring systems. The Coma Recovery Scale Modified Score showed a higher discriminant capacity than the standard score, and a monofactorial structure was also supported. The inverted assessment procedure could be a useful evaluation method for the assessment of patients with a diagnosis of disorder of consciousness.

  13. Fast magnetic resonance imaging based on high degree total variation

    Science.gov (United States)

    Wang, Sujie; Lu, Liangliang; Zheng, Junbao; Jiang, Mingfeng

    2018-04-01

    In order to eliminate the artifacts and the "staircase effect" of total variation in compressive sensing MRI, a high-degree total variation model is proposed for dynamic MRI reconstruction. The high-degree total variation regularization term is used as a constraint to reconstruct the magnetic resonance image, and the iterative weighted MM algorithm is proposed to solve the convex optimization problem of the reconstructed MR image model. In addition, one set of cardiac magnetic resonance data is used to verify the proposed algorithm for MRI. The results show that the high-degree total variation method has a better reconstruction effect than total variation and total generalized variation, obtaining higher reconstruction SNR and better structural similarity.
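
    In generic notation (not necessarily the paper's), the reconstruction being described takes roughly the following form, illustrated here at degree two; penalizing second-order derivatives rather than first-order ones is what suppresses the staircase effect of ordinary TV:

```latex
% Generic form of a high-degree total variation (HDTV) reconstruction,
% with F_u the undersampled Fourier operator, y the measured k-space data,
% D^2 the second-order derivative operators, and lambda the trade-off
% between data fidelity and regularity.
\min_{x} \; \tfrac{1}{2}\,\bigl\| F_u x - y \bigr\|_2^2
        \; + \; \lambda \sum_{i} \bigl\| (D^2 x)_i \bigr\|_2
```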

  14. Greedy algorithms with weights for construction of partial association rules

    KAUST Repository

    Moshkov, Mikhail; Piliszczu, Marcin; Zielosko, Beata Marta

    2009-01-01

    This paper is devoted to the study of approximate algorithms for minimization of the total weight of attributes occurring in partial association rules. We consider mainly greedy algorithms with weights for construction of rules. The paper contains bounds on the precision of these algorithms and bounds on the minimal weight of partial association rules based on information obtained during the run of the greedy algorithm.

  15. Greedy algorithms with weights for construction of partial association rules

    KAUST Repository

    Moshkov, Mikhail

    2009-09-10

    This paper is devoted to the study of approximate algorithms for minimization of the total weight of attributes occurring in partial association rules. We consider mainly greedy algorithms with weights for construction of rules. The paper contains bounds on the precision of these algorithms and bounds on the minimal weight of partial association rules based on information obtained during the run of the greedy algorithm.
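
    A minimal sketch of the greedy-with-weights pattern studied here, phrased as weighted partial covering; the data model and the stopping fraction are illustrative, not the paper's formalism:

```python
# Hedged sketch: choose attributes one at a time, maximizing "uncovered
# items separated per unit weight", and stop once only a tolerated fraction
# of items remains uncovered (a *partial* rule).
def greedy_weighted(universe, coverage, weights, alpha=0.1):
    """coverage[a] = set of items attribute a handles; alpha = allowed
    uncovered fraction. Returns chosen attributes and their total weight."""
    uncovered = set(universe)
    chosen = []
    while len(uncovered) > alpha * len(universe):
        best = max(coverage, key=lambda a:
                   len(coverage[a] & uncovered) / weights[a])
        if not coverage[best] & uncovered:
            break                      # nothing more can be covered
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen, sum(weights[a] for a in chosen)

universe = range(10)
coverage = {"a1": {0, 1, 2, 3}, "a2": {3, 4, 5}, "a3": {5, 6, 7, 8, 9}}
weights = {"a1": 2.0, "a2": 1.0, "a3": 2.5}
print(greedy_weighted(universe, coverage, weights))
```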

  16. Quantum algorithms for testing Boolean functions

    Directory of Open Access Journals (Sweden)

    Erika Andersson

    2010-06-01

    Full Text Available We discuss quantum algorithms, based on the Bernstein-Vazirani algorithm, for finding which variables a Boolean function depends on. There are 2^n possible linear Boolean functions of n variables; given a linear Boolean function, the Bernstein-Vazirani quantum algorithm can deterministically identify which one of these Boolean functions we are given using just a single function query. The same quantum algorithm can also be used to learn which input variables other types of Boolean functions depend on, with a success probability that depends on the form of the Boolean function that is tested, but does not depend on the total number of input variables. We also outline a procedure to further amplify the success probability, based on another quantum algorithm, the Grover search.
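
    A toy state-vector simulation conveys why a single oracle query suffices for linear Boolean functions: after the phase oracle and a layer of Hadamards, all amplitude sits on the hidden string. This is an idealized sketch, not a circuit-level implementation:

```python
# Toy simulation of the Bernstein-Vazirani idea: for f(x) = a.x (mod 2),
# one phase-oracle "query" followed by Hadamards reveals the hidden a.
import numpy as np
from itertools import product

def bernstein_vazirani(a_bits):
    n = len(a_bits)
    xs = list(product([0, 1], repeat=n))
    # uniform superposition with oracle phases (-1)^f(x)
    state = np.array([(-1) ** (sum(ai * xi for ai, xi in zip(a_bits, x)) % 2)
                      for x in xs], dtype=float) / np.sqrt(2 ** n)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)            # Hadamard on every qubit
    out = Hn @ state                   # all amplitude lands on index a
    return xs[int(np.argmax(np.abs(out)))]

print(bernstein_vazirani((1, 0, 1, 1)))   # -> (1, 0, 1, 1)
```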

  17. Evidence-based algorithm for heparin dosing before cardiopulmonary bypass. Part 1: Development of the algorithm.

    Science.gov (United States)

    McKinney, Mark C; Riley, Jeffrey B

    2007-12-01

    The incidence of heparin resistance during adult cardiac surgery with cardiopulmonary bypass has been reported at 15%-20%. The consistent use of a clinical decision-making algorithm may increase the consistency of patient care and likely reduce the total required heparin dose and other problems associated with heparin dosing. After a directed survey of practicing perfusionists regarding treatment of heparin resistance and a literature search for high-level evidence regarding the diagnosis and treatment of heparin resistance, an evidence-based decision-making algorithm was constructed. The face validity of the algorithm's decisive steps and logic was confirmed by a second survey of practicing perfusionists. The algorithm begins with a review of the patient history to identify predictors of heparin resistance. The definition of heparin resistance contained in the algorithm is an inadequate activated clotting time despite a >450 IU/kg heparin loading dose. Based on the literature, the treatment for heparin resistance used in the algorithm is antithrombin III supplementation. The algorithm seems to be valid and is supported by high-level evidence and clinician opinion. The next step is a human randomized clinical trial to test the clinical procedure guideline algorithm vs. current standard clinical practice.

  18. Benchmarking homogenization algorithms for monthly data

    Science.gov (United States)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratiannil, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.; Willett, K.

    2013-09-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies. The algorithms were validated against a realistic benchmark dataset. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including i) the centered root mean square error relative to the true homogeneous values at various averaging scales, ii) the error in linear trend estimates and iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that currently automatic algorithms can perform as well as manual ones.

  19. Totally opportunistic routing algorithm (TORA) for underwater wireless sensor network

    Science.gov (United States)

    Hashim, Fazirulhisyam; Rasid, Mohd Fadlee A.; Othman, Mohamed

    2018-01-01

    Underwater Wireless Sensor Network (UWSN) has emerged as a promising networking technique to monitor and explore oceans. Research on acoustic communication has been conducted for decades, but has focused mostly on issues related to the physical layer such as high latency, low bandwidth, and high bit error rates. However, the data gathering process is still severely limited in UWSN due to channel impairment. One way to improve data collection in UWSN is the design of the routing protocol. Opportunistic Routing (OR) is an emerging technique that has the ability to improve the performance of wireless networks, notably acoustic networks. In this paper, we propose an anycast, geographical and totally opportunistic routing algorithm for UWSN, called TORA. Our proposed scheme is designed to avoid horizontal transmission, reduce end-to-end delay, overcome the problem of void nodes and maximize throughput and energy efficiency. We use TOA (Time of Arrival) and range-based equations to localize nodes recursively within a network. Once nodes are localized, their location coordinates and residual energy are used as a metric to select the best available forwarder. All data packets may or may not be acknowledged based on the status of sender and receiver. Thus, the number of acknowledgments for a particular data packet may vary from zero to 2-hop. Extensive simulations were performed to evaluate the performance of the proposed scheme for high network traffic load under very sparse and very dense network scenarios. Simulation results show that TORA significantly improves the network performance when compared to some relevant existing routing protocols, such as VBF, HHVBF, VAPR, and H2DAB, for energy consumption, packet delivery ratio, average end-to-end delay, average hop-count and propagation deviation factor. TORA reduces energy consumption by an average of 35% of VBF, 40% of HH-VBF, 15% of VAPR, and 29% of H2DAB, whereas the packet delivery ratio has been improved by an average of 43% of VBF, 26

  20. Totally opportunistic routing algorithm (TORA) for underwater wireless sensor network.

    Science.gov (United States)

    Rahman, Ziaur; Hashim, Fazirulhisyam; Rasid, Mohd Fadlee A; Othman, Mohamed

    2018-01-01

    Underwater Wireless Sensor Network (UWSN) has emerged as a promising networking technique to monitor and explore oceans. Research on acoustic communication has been conducted for decades, but has focused mostly on issues related to the physical layer such as high latency, low bandwidth, and high bit error rates. However, the data gathering process is still severely limited in UWSN due to channel impairment. One way to improve data collection in UWSN is the design of the routing protocol. Opportunistic Routing (OR) is an emerging technique that has the ability to improve the performance of wireless networks, notably acoustic networks. In this paper, we propose an anycast, geographical and totally opportunistic routing algorithm for UWSN, called TORA. Our proposed scheme is designed to avoid horizontal transmission, reduce end-to-end delay, overcome the problem of void nodes and maximize throughput and energy efficiency. We use TOA (Time of Arrival) and range-based equations to localize nodes recursively within a network. Once nodes are localized, their location coordinates and residual energy are used as a metric to select the best available forwarder. All data packets may or may not be acknowledged based on the status of sender and receiver. Thus, the number of acknowledgments for a particular data packet may vary from zero to 2-hop. Extensive simulations were performed to evaluate the performance of the proposed scheme for high network traffic load under very sparse and very dense network scenarios. Simulation results show that TORA significantly improves the network performance when compared to some relevant existing routing protocols, such as VBF, HHVBF, VAPR, and H2DAB, for energy consumption, packet delivery ratio, average end-to-end delay, average hop-count and propagation deviation factor. TORA reduces energy consumption by an average of 35% of VBF, 40% of HH-VBF, 15% of VAPR, and 29% of H2DAB, whereas the packet delivery ratio has been improved by an average of 43% of VBF, 26
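
    The forwarder-selection idea quoted above (location coordinates plus residual energy) can be sketched as a scored choice among neighbors that make progress toward the surface sink, which also captures the avoidance of horizontal transmission; the weighting below is invented for illustration and the paper's actual metric may differ:

```python
# Hedged sketch: among candidate neighbors, prefer nodes that make depth
# progress toward the sink and still have battery left.
import math

def pick_forwarder(sender, candidates, sink, w_dist=0.7, w_energy=0.3):
    d0 = math.dist(sender["pos"], sink)
    best, best_score = None, float("-inf")
    for node in candidates:
        progress = d0 - math.dist(node["pos"], sink)   # metres gained
        if progress <= 0:
            continue                    # would be horizontal or backward
        score = w_dist * progress + w_energy * node["energy"]
        if score > best_score:
            best, best_score = node, score
    return best

sender = {"id": "s", "pos": (0, 0, -100)}
sink = (0, 0, 0)                        # surface buoy
candidates = [{"id": "n1", "pos": (5, 0, -80), "energy": 40.0},
              {"id": "n2", "pos": (0, 5, -60), "energy": 5.0}]
print(pick_forwarder(sender, candidates, sink)["id"])   # "n2"
```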

  1. NEUTRON ALGORITHM VERIFICATION TESTING

    International Nuclear Information System (INIS)

    COWGILL, M.; MOSBY, W.; ARGONNE NATIONAL LABORATORY-WEST

    2000-01-01

    Active well coincidence counter assays have been performed on uranium metal highly enriched in 235U. The data obtained in the present program, together with highly enriched uranium (HEU) metal data obtained in other programs, have been analyzed using two approaches, the standard approach and an alternative approach developed at BNL. Analysis of the data with the standard approach revealed that the form of the relationship between the measured reals and the 235U mass varied, being sometimes linear and sometimes a second-order polynomial. In contrast, application of the BNL algorithm, which takes into consideration the totals, consistently yielded linear relationships between the totals-corrected reals and the 235U mass. The constants in these linear relationships varied with geometric configuration and level of enrichment. This indicates that, when the BNL algorithm is used, calibration curves can be established with fewer data points and with more certainty than if a standard algorithm is used. However, this potential advantage has only been established for assays of HEU metal. In addition, the method is sensitive to the stability of natural background in the measurement facility.

  2. Cloud Computing Task Scheduling Based on Cultural Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Li Jian-Wen

    2016-01-01

    Full Text Available A task scheduling strategy based on a cultural genetic algorithm (CGA) is proposed in order to improve the efficiency of task scheduling in the cloud computing platform, targeting minimization of the total time and cost of task scheduling. The improved genetic algorithm is used to construct the main population space and the knowledge space under the cultural framework; the two spaces evolve independently and in parallel, forming a mechanism of mutual promotion for dispatching cloud tasks. Simultaneously, in order to counter the tendency of the genetic algorithm to fall into local optima, a non-uniform mutation operator is introduced to improve the search performance of the algorithm. The experimental results show that CGA reduces the total time and lowers the cost of scheduling, making it an effective algorithm for cloud task scheduling.
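
    A bare-bones genetic-algorithm sketch of the underlying scheduling objective (minimizing completion time over task-to-VM assignments) is given below; the cultural knowledge-space layer and the non-uniform mutation operator of the paper are deliberately omitted, and all numbers are invented:

```python
# Minimal GA sketch: a chromosome assigns each task to a VM; fitness is the
# makespan (time until the busiest VM finishes).
import random

random.seed(1)
task_len = [8, 3, 7, 2, 5, 9, 4]        # task lengths (arbitrary units)
vm_speed = [2.0, 1.0, 1.5]              # processing speed of each VM

def makespan(assign):
    loads = [0.0] * len(vm_speed)
    for t, vm in enumerate(assign):
        loads[vm] += task_len[t] / vm_speed[vm]
    return max(loads)

def evolve(pop_size=30, gens=100, pmut=0.2):
    pop = [[random.randrange(len(vm_speed)) for _ in task_len]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=makespan)                        # elitist selection
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(task_len))
            child = a[:cut] + b[cut:]                 # one-point crossover
            if random.random() < pmut:                # uniform mutation
                child[random.randrange(len(child))] = \
                    random.randrange(len(vm_speed))
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print(best, round(makespan(best), 2))
```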

  3. Identification of altered pathways in breast cancer based on individualized pathway aberrance score.

    Science.gov (United States)

    Shi, Sheng-Hong; Zhang, Wei; Jiang, Jing; Sun, Long

    2017-08-01

    The objective of the present study was to identify altered pathways in breast cancer based on the individualized pathway aberrance score (iPAS) method combined with the normal reference (nRef). There were 4 steps to identify altered pathways using the iPAS method: data preprocessing conducted by the robust multi-array average (RMA) algorithm; gene-level statistics based on average Z-scores; pathway-level statistics according to iPAS; and a significance test dependent on the 1-sample Wilcoxon test. The altered pathways were validated by calculating the changed percentage of each pathway in tumor samples and comparing them with pathways from differentially expressed genes (DEGs). A total of 688 altered pathways with P < 0.05 were identified, and known cancer-related pathways were involved among them, which may validate the present results. In addition, there were 324 DEGs and 155 common genes between DEGs and pathway genes. DEGs and common genes were enriched in the same 9 significant terms, which also were members of altered pathways. The iPAS method was suitable for identifying altered pathways in breast cancer. Altered pathways (such as KIF- and PLK-mediated events) were important for understanding breast cancer mechanisms and for the future application of customized therapeutic decisions.

  4. On constructing optimistic simulation algorithms for the discrete event system specification

    International Nuclear Information System (INIS)

    Nutaro, James J.

    2008-01-01

    This article describes a Time Warp simulation algorithm for discrete event models that are described in terms of the Discrete Event System Specification (DEVS). The article shows how the total state transition and total output function of a DEVS atomic model can be transformed into an event processing procedure for a logical process. A specific Time Warp algorithm is constructed around this logical process, and it is shown that the algorithm correctly simulates a DEVS coupled model that consists entirely of interacting atomic models. The simulation algorithm is presented abstractly; it is intended to provide a basis for implementing efficient and scalable parallel algorithms that correctly simulate DEVS models

  5. Effects of aggregation of drug and diagnostic codes on the performance of the high-dimensional propensity score algorithm: an empirical example.

    Science.gov (United States)

    Le, Hoa V; Poole, Charles; Brookhart, M Alan; Schoenbach, Victor J; Beach, Kathleen J; Layton, J Bradley; Stürmer, Til

    2013-11-19

    The High-Dimensional Propensity Score (hd-PS) algorithm can select and adjust for baseline confounders of treatment-outcome associations in pharmacoepidemiologic studies that use healthcare claims data. How hd-PS performance is affected by aggregating medications or medical diagnoses has not been assessed. We evaluated the effects of aggregating medications or diagnoses on hd-PS performance in an empirical example using resampled cohorts with small sample size, rare outcome incidence, or low exposure prevalence. In a cohort study comparing the risk of upper gastrointestinal complications among initiators of celecoxib or traditional NSAIDs (diclofenac, ibuprofen) with rheumatoid arthritis and osteoarthritis, we (1) aggregated medications and International Classification of Diseases-9 (ICD-9) diagnoses into hierarchies of the Anatomical Therapeutic Chemical classification (ATC) and the Clinical Classification Software (CCS), respectively, and (2) sampled the full cohort using techniques validated by simulations to create 9,600 samples to compare 16 aggregation scenarios across 50% and 20% samples with varying outcome incidence and exposure prevalence. We applied hd-PS to estimate relative risks (RR) using 5 dimensions, predefined confounders, ≤ 500 hd-PS covariates, and propensity score deciles. For each scenario, we calculated: (1) the geometric mean RR; (2) the difference between the scenario mean ln(RR) and the ln(RR) from published randomized controlled trials (RCT); and (3) the proportional difference in the degree of estimated confounding between that scenario and the base scenario (no aggregation). Compared with the base scenario, aggregations of medications into ATC level 4 alone or in combination with aggregation of diagnoses into CCS level 1 improved the hd-PS confounding adjustment in most scenarios, reducing residual confounding compared with the RCT findings by up to 19%. Aggregation of codes using hierarchical coding systems may improve the performance of the hd-PS algorithm in claims-based pharmacoepidemiologic studies.
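    Evaluation metrics (1) and (2) above reduce to simple arithmetic on the log scale. A sketch with made-up RR values (the benchmark RR is hypothetical):

        import numpy as np

        # Hypothetical RR estimates from resampled cohorts under one aggregation scenario.
        rr_samples = np.array([1.10, 1.25, 0.95, 1.30, 1.05])
        rr_rct = 1.20   # hypothetical benchmark RR from randomized trials

        # (1) Geometric mean RR over the samples.
        geo_mean_rr = np.exp(np.mean(np.log(rr_samples)))

        # (2) Bias on the log scale relative to the RCT benchmark.
        bias_ln_rr = np.mean(np.log(rr_samples)) - np.log(rr_rct)

        print(f"geometric mean RR = {geo_mean_rr:.3f}, ln(RR) bias = {bias_ln_rr:+.3f}")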

  6. Description and validation of a scoring system for tomosynthesis in pulmonary cystic fibrosis

    Energy Technology Data Exchange (ETDEWEB)

    Vult von Steyern, Kristina; Bjoerkman-Burtscher, Isabella M.; Bozovic, Gracijela; Wiklund, Marie; Geijer, Mats [Skaane University Hospital, Lund University, Centre for Medical Imaging and Physiology, Lund (Sweden); Hoeglund, Peter [Skaane University Hospital, Competence Centre for Clinical Research, Lund (Sweden)

    2012-12-15

    To design and validate a scoring system for tomosynthesis (digital tomography) in pulmonary cystic fibrosis. A scoring system dedicated to tomosynthesis in pulmonary cystic fibrosis was designed. Three radiologists independently scored 88 pairs of radiographs and tomosynthesis examinations of the chest in 60 patients with cystic fibrosis and 7 oncology patients. Radiographs were scored according to the Brasfield scoring system and tomosynthesis examinations were scored using the new scoring system. Observer agreements for the tomosynthesis score were almost perfect for the total score with square-weighted kappa >0.90, and generally substantial to almost perfect for subscores. Correlation between the tomosynthesis score and the Brasfield score was good for the three observers (Kendall's rank correlation tau 0.68, 0.77 and 0.78). Tomosynthesis was generally scored higher as a percentage of the maximum score. Observer agreements for the total score for Brasfield score were almost perfect (square-weighted kappa 0.80, 0.81 and 0.85). The tomosynthesis scoring system seems robust and correlates well with the Brasfield score. Compared with radiography, tomosynthesis is more sensitive to cystic fibrosis changes, especially bronchiectasis and mucus plugging, and the new tomosynthesis scoring system offers the possibility of more detailed and accurate scoring of disease severity. (orig.)

  7. Description and validation of a scoring system for tomosynthesis in pulmonary cystic fibrosis

    International Nuclear Information System (INIS)

    Vult von Steyern, Kristina; Bjoerkman-Burtscher, Isabella M.; Bozovic, Gracijela; Wiklund, Marie; Geijer, Mats; Hoeglund, Peter

    2012-01-01

    To design and validate a scoring system for tomosynthesis (digital tomography) in pulmonary cystic fibrosis. A scoring system dedicated to tomosynthesis in pulmonary cystic fibrosis was designed. Three radiologists independently scored 88 pairs of radiographs and tomosynthesis examinations of the chest in 60 patients with cystic fibrosis and 7 oncology patients. Radiographs were scored according to the Brasfield scoring system and tomosynthesis examinations were scored using the new scoring system. Observer agreements for the tomosynthesis score were almost perfect for the total score with square-weighted kappa >0.90, and generally substantial to almost perfect for subscores. Correlation between the tomosynthesis score and the Brasfield score was good for the three observers (Kendall's rank correlation tau 0.68, 0.77 and 0.78). Tomosynthesis was generally scored higher as a percentage of the maximum score. Observer agreements for the total score for Brasfield score were almost perfect (square-weighted kappa 0.80, 0.81 and 0.85). The tomosynthesis scoring system seems robust and correlates well with the Brasfield score. Compared with radiography, tomosynthesis is more sensitive to cystic fibrosis changes, especially bronchiectasis and mucus plugging, and the new tomosynthesis scoring system offers the possibility of more detailed and accurate scoring of disease severity. (orig.)

  8. Parallel Implementation of the Terrain Masking Algorithm

    Science.gov (United States)

    1994-03-01

    contains behavior rules which can define a computation or an algorithm. It can communicate with other process nodes, it can contain local data, and it can... terrain masking calculation is being performed. It is this algorithm that consumes about seventy percent of the total terrain masking calculation time.

  9. A Hybrid Optimization Algorithm for Low RCS Antenna Design

    Directory of Open Access Journals (Sweden)

    W. Shao

    2012-12-01

    In this article, a simple and efficient method is presented to design low radar cross section (RCS) patch antennas. The method combines a hybrid optimization algorithm, pairing a genetic algorithm (GA) with a tabu search algorithm (TSA), with an electromagnetic field solver. The TSA, embedded into the GA frame, defines the acceptable neighborhood region of parameters and screens out poor-scoring individuals. Thus, repeated searches are avoided and the number of time-consuming electromagnetic simulations is largely reduced. Moreover, the whole design procedure is automated through the VBScript language. A slot patch antenna example is provided to verify the accuracy and efficiency of the proposed method.

  10. Edinburgh Cognitive and Behavioural ALS Screen (ECAS)-Italian version: regression based norms and equivalent scores.

    Science.gov (United States)

    Siciliano, Mattia; Trojano, Luigi; Trojsi, Francesca; Greco, Roberta; Santoro, Manuela; Basile, Giuseppe; Piscopo, Fausta; D'Iorio, Alfonsina; Patrone, Manila; Femiano, Cinzia; Monsurrò, Mariarosaria; Tedeschi, Gioacchino; Santangelo, Gabriella

    2017-06-01

    Cognitive assessment for individuals with Amyotrophic Lateral Sclerosis (ALS) can be difficult because of frequent occurrence of difficulties with speech, writing, and drawing. The Edinburgh Cognitive and Behavioural ALS Screen (ECAS) is a recent multi-domain neuropsychological screening tool specifically devised for this purpose, and it assesses the following domains: executive functions, social cognition, verbal fluency and language (ALS-specific), but also memory and visuospatial abilities (non-ALS specific). The ECAS total score ranges from 0 (worst performance) to 136 (best performance). Moreover, a brief caregiver interview provides an assessment of behaviour changes and psychotic symptoms usually associated with ALS. The aim of the present study was to provide normative values for the ECAS total score and sub-scores in a sample of Italian healthy subjects. Two hundred and seventy-seven Italian healthy subjects (151 women and 126 men; age range 30-79 years; educational level from primary school to university) underwent the ECAS and the Montreal Cognitive Assessment (MoCA). Multiple linear regression analysis revealed that age and education significantly influenced performance on the ECAS total score and sub-scale scores. From the derived linear equation, a correction grid for raw scores was built. Inferential cut-off scores were estimated using a non-parametric technique and equivalent scores (ES) were computed. Correlation analysis showed a good significant correlation between adjusted ECAS total scores and adjusted MoCA total scores (rho = 0.669, p < 0.0001). The present study provided normative data for the ECAS in an Italian population useful for both clinical and research purposes.

  11. Multivariate algorithms for initiating event detection and identification in nuclear power plants

    International Nuclear Information System (INIS)

    Wu, Shun-Chi; Chen, Kuang-You; Lin, Ting-Han; Chou, Hwai-Pwu

    2018-01-01

    Highlights: •Multivariate algorithms for NPP initiating event detection and identification. •Recordings from multiple sensors are simultaneously considered for detection. •Both spatial and temporal information is used for event identification. •Untrained event isolation avoids falsely relating an untrained event. •Efficacy of the algorithms is verified with data from the Maanshan NPP simulator. -- Abstract: To prevent escalation of an initiating event into a severe accident, promptly detecting its occurrence and precisely identifying its type are essential. In this study, several multivariate algorithms for initiating event detection and identification are proposed to help maintain safe operations of nuclear power plants (NPPs). By monitoring changes in the NPP sensing variables, an event is detected when the preset thresholds are exceeded. Unlike existing approaches, recordings from sensors of the same type are simultaneously considered for detection, and no subjective reasoning is involved in setting these thresholds. To facilitate efficient event identification, a spatiotemporal feature extractor is proposed. The extracted features consist of the temporal traits used by existing techniques and the spatial signature of an event. Through an F-score-based feature ranking, only those that are most discriminant in classifying the events under consideration will be retained for identification. Moreover, an untrained event isolation scheme is introduced to avoid relating an untrained event to those in the event dataset, so that improper recovery actions can be prevented. Results from experiments covering 12 event classes and a total of 125 events generated using Taiwan's Maanshan NPP simulator are provided to illustrate the efficacy of the proposed algorithms.

  12. Fisher's method of scoring in statistical image reconstruction: comparison of Jacobi and Gauss-Seidel iterative schemes.

    Science.gov (United States)

    Hudson, H M; Ma, J; Green, P

    1994-01-01

    Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
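    For readers unfamiliar with Fisher's method of scoring, a generic sketch of the update beta <- beta + I(beta)^{-1} score(beta), shown here for logistic regression rather than the tomography likelihoods discussed in the paper:

        import numpy as np

        def fisher_scoring_logistic(X, y, n_iter=25):
            """Fisher's method of scoring for logistic regression: iterate
            beta += I(beta)^{-1} * score(beta), where the expected information
            is X^T W X with W = diag(p(1-p)). Illustrates the FS idea only,
            not the Jacobi/Gauss-Seidel tomography variants."""
            beta = np.zeros(X.shape[1])
            for _ in range(n_iter):
                p = 1.0 / (1.0 + np.exp(-X @ beta))
                score = X.T @ (y - p)                        # gradient of log-likelihood
                info = X.T @ (X * (p * (1.0 - p))[:, None])  # expected information
                beta += np.linalg.solve(info, score)
            return beta

        rng = np.random.default_rng(1)
        X = np.column_stack([np.ones(200), rng.normal(size=200)])
        y = (rng.random(200) < 1 / (1 + np.exp(-(0.5 + 1.5 * X[:, 1])))).astype(float)
        print(fisher_scoring_logistic(X, y))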

  13. Does present use of cardiovascular medication reflect elevated cardiovascular risk scores estimated ten years ago? A population based longitudinal observational study

    Directory of Open Access Journals (Sweden)

    Straand Jørund

    2011-03-01

    Background: It is desirable that those at highest risk of cardiovascular disease should have priority for preventive measures, e.g. treatment with prescription drugs to modify their risk. We wanted to investigate to what extent present use of cardiovascular medication (CVM) correlates with cardiovascular risk estimated by three different risk scores (Framingham, SCORE and NORRISK) ten years ago. Methods: Prospective longitudinal observational study of 20,252 participants in the Hordaland Health Study, born 1950-57, not using CVM in 1997-99. Prescription data were obtained from the Norwegian Prescription Database in 2008. Results: 26% of men and 22% of women aged 51-58 years had started to use some CVM during the previous decade. As a group, persons using CVM scored significantly higher on the Framingham, SCORE and NORRISK risk algorithms than those not treated. 16-20% of men and 20-22% of women with risk scores below the high-risk thresholds for the three risk scores were treated with CVM, while 60-65% of men and 25-45% of women with scores above the high-risk thresholds received no treatment. Among women using CVM, only 2.2% (NORRISK), 4.4% (SCORE) and 14.5% (Framingham) had risk scores above the high-risk values. Low education, poor self-reported general health, muscular pains, mental distress (in females only) and a family history of premature cardiovascular disease correlated with use of CVM. Elevated blood pressure was the single factor most strongly predictive of CVM treatment. Conclusion: Prescription of CVM to middle-aged individuals by and large seems to occur independently of estimated total cardiovascular risk, and this applies especially to females.

  14. Research of Subgraph Estimation Page Rank Algorithm for Web Page Rank

    Directory of Open Access Journals (Sweden)

    LI Lan-yin

    2017-04-01

    The traditional PageRank algorithm cannot efficiently handle large-scale web page ranking. This paper proposes an accelerated algorithm named topK-Rank, based on PageRank on the MapReduce platform. It can find the top k nodes efficiently for a given graph without sacrificing accuracy. In order to identify the top k nodes, the topK-Rank algorithm prunes unnecessary nodes and edges in each iteration to dynamically construct subgraphs, and iteratively estimates lower/upper bounds of PageRank scores through the subgraphs. Theoretical analysis shows that this method guarantees result exactness. Experiments show that the topK-Rank algorithm can find the top k nodes much faster than existing approaches.
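    For contrast with the pruning approach, a plain power-iteration PageRank that returns the top-k nodes; this is the baseline topK-Rank accelerates, not the bound-estimation algorithm itself:

        import numpy as np

        def pagerank_topk(adj, k, d=0.85, tol=1e-10):
            """Power-iteration PageRank returning the k highest-scoring nodes.
            The subgraph/bound machinery of topK-Rank is not reproduced here."""
            n = adj.shape[0]
            out_deg = adj.sum(axis=1)
            out_deg[out_deg == 0] = 1.0           # dangling-node guard
            M = adj / out_deg[:, None]            # row-stochastic transition matrix
            r = np.full(n, 1.0 / n)
            while True:
                r_new = (1 - d) / n + d * (M.T @ r)
                if np.abs(r_new - r).sum() < tol:
                    break
                r = r_new
            return np.argsort(-r)[:k], r

        adj = np.array([[0, 1, 1, 0],
                        [0, 0, 1, 0],
                        [1, 0, 0, 1],
                        [0, 0, 1, 0]], dtype=float)
        top, scores = pagerank_topk(adj, k=2)
        print(top, scores.round(4))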

  15. Spinal appearance questionnaire: factor analysis, scoring, reliability, and validity testing.

    Science.gov (United States)

    Carreon, Leah Y; Sanders, James O; Polly, David W; Sucato, Daniel J; Parent, Stefan; Roy-Beaudry, Marjolaine; Hopkins, Jeffrey; McClung, Anna; Bratcher, Kelly R; Diamond, Beverly E

    2011-08-15

    Cross-sectional. This study presents the factor analysis of the Spinal Appearance Questionnaire (SAQ) and its psychometric properties. Although the SAQ has been administered to a large sample of patients with adolescent idiopathic scoliosis (AIS) treated surgically, its psychometric properties have not been fully evaluated. This study presents the factor analysis and scoring of the SAQ and evaluates its psychometric properties. The SAQ and the Scoliosis Research Society-22 (SRS-22) were administered to AIS patients who were being observed, braced or scheduled for surgery. Standard demographic data and radiographic measures including Lenke type and curve magnitude were also collected. Of the 1802 patients, 83% were female, with a mean age of 14.8 years and a mean initial Cobb angle of 55.8° (range, 0°-123°). From the 32 items of the SAQ, 15 loaded on two factors with consistent and significant correlations across all Lenke types: an Appearance factor (items 1-10) and an Expectations factor (items 12-15). Responses are summed, giving a range of 5 to 50 for the Appearance domain and 5 to 20 for the Expectations domain. Cronbach's α was 0.88 for both domains and the Total score, with test-retest reliability of 0.81 for Appearance and 0.91 for Expectations. Correlations with major curve magnitude were higher for the SAQ Appearance and SAQ Total scores than for the SRS Appearance and SRS Total scores. The SAQ and SRS-22 scores were statistically significantly different in patients who were scheduled for surgery compared with those who were observed or braced. The SAQ is a valid measure of self-image in patients with AIS, with greater correlation to curve magnitude than the SRS Appearance and Total scores. It also discriminates between patients who require surgery and those who do not.

  16. Walk Score(TM), Perceived Neighborhood Walkability, and walking in the US.

    Science.gov (United States)

    Tuckel, Peter; Milczarski, William

    2015-03-01

    To investigate both the Walk Score(TM) and a self-reported measure of neighborhood walkability ("Perceived Neighborhood Walkability") as estimators of transport and recreational walking among Americans. The study is based upon a survey of a nationally-representative sample of 1224 American adults. The survey gauged walking for both transport and recreation and included a self-reported measure of neighborhood walkability and each respondent's Walk Score(TM). Binary logistic and linear regression analyses were performed on the data. The Walk Score(TM) is associated with walking for transport, but not recreational walking nor total walking. Perceived Neighborhood Walkability is associated with transport, recreational and total walking. Perceived Neighborhood Walkability captures the experiential nature of walking more than the Walk Score(TM).

  17. An efficient non-dominated sorting method for evolutionary algorithms.

    Science.gov (United States)

    Fang, Hongbing; Wang, Qian; Tu, Yi-Cheng; Horstemeyer, Mark F

    2008-01-01

    We present a new non-dominated sorting algorithm to generate the non-dominated fronts in multi-objective optimization with evolutionary algorithms, particularly the NSGA-II. The non-dominated sorting algorithm used by NSGA-II has a time complexity of O(MN^2) in generating non-dominated fronts in one generation (iteration) for a population size N and M objective functions. Since generating non-dominated fronts takes the majority of total computational time (excluding the cost of fitness evaluations) of NSGA-II, making this algorithm faster will significantly improve the overall efficiency of NSGA-II and other genetic algorithms using non-dominated sorting. The new non-dominated sorting algorithm proposed in this study reduces the number of redundant comparisons existing in the algorithm of NSGA-II by recording the dominance information among solutions from their first comparisons. By utilizing a new data structure called the dominance tree and the divide-and-conquer mechanism, the new algorithm is faster than NSGA-II for different numbers of objective functions. Although the number of solution comparisons by the proposed algorithm is close to that of NSGA-II when the number of objectives becomes large, the total computational time shows that the proposed algorithm still has better efficiency because of the adoption of the dominance tree structure and the divide-and-conquer mechanism.
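    The O(MN^2) reference procedure that the dominance-tree method improves upon can be stated compactly; a sketch for minimization problems:

        def fast_nondominated_sort(pop):
            """NSGA-II-style non-dominated sorting (minimization). The paper's
            dominance-tree variant avoids redundant comparisons; this is the
            standard O(MN^2) reference version."""
            dominates = lambda p, q: all(a <= b for a, b in zip(p, q)) and any(
                a < b for a, b in zip(p, q))
            n = len(pop)
            S = [[] for _ in range(n)]   # solutions each i dominates
            counts = [0] * n             # how many solutions dominate i
            fronts = [[]]
            for i in range(n):
                for j in range(n):
                    if i == j:
                        continue
                    if dominates(pop[i], pop[j]):
                        S[i].append(j)
                    elif dominates(pop[j], pop[i]):
                        counts[i] += 1
                if counts[i] == 0:
                    fronts[0].append(i)
            while fronts[-1]:
                nxt = []
                for i in fronts[-1]:
                    for j in S[i]:
                        counts[j] -= 1
                        if counts[j] == 0:
                            nxt.append(j)
                fronts.append(nxt)
            return fronts[:-1]

        print(fast_nondominated_sort([(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]))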

  18. Z3-vertex magic total labeling and Z3-edge magic total labeling for the extended duplicate graph of quadrilateral snake

    Science.gov (United States)

    Indira, P.; Selvam, B.; Thirusangu, K.

    2018-04-01

    Based on the works of Kotzig, Rosa and MacDougall et al., we present algorithms and prove the existence of Z3-vertex magic total labeling and Z3-edge magic total labeling for the extended duplicate graph of the quadrilateral snake.
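    A definition sketch in LaTeX of what such labelings require, following the usual group-labeling convention (assumed here; the paper's exact notation may differ):

        % Definition sketch: A-magic total labelings for A = Z_3.
        Let $G=(V,E)$ and let $f : V \cup E \to \mathbb{Z}_3$ be a total labeling.
        $f$ is a $\mathbb{Z}_3$-\emph{vertex} magic total labeling if there is a
        constant $k \in \mathbb{Z}_3$ (the magic constant) such that for every
        vertex $v \in V$
        \[
          f(v) + \sum_{uv \in E} f(uv) \equiv k \pmod{3},
        \]
        and a $\mathbb{Z}_3$-\emph{edge} magic total labeling if for every edge
        $uv \in E$
        \[
          f(u) + f(v) + f(uv) \equiv k \pmod{3}.
        \]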

  19. The Impact of EuroSCORE II Risk Factors on Prediction of Long-Term Mortality.

    Science.gov (United States)

    Barili, Fabio; Pacini, Davide; D'Ovidio, Mariangela; Dang, Nicholas C; Alamanni, Francesco; Di Bartolomeo, Roberto; Grossi, Claudio; Davoli, Marina; Fusco, Danilo; Parolari, Alessandro

    2016-10-01

    The European System for Cardiac Operative Risk Evaluation (EuroSCORE) II has not yet been tested for predicting long-term mortality. This study was undertaken to evaluate the relationship between EuroSCORE II and long-term mortality and to develop a new algorithm based on EuroSCORE II factors to predict long-term survival after cardiac surgery. Complete data on 10,033 patients who underwent major cardiac surgery during a 7-year period were retrieved from three prospective institutional databases and linked with the Italian Tax Register Information System. Mortality at follow-up was analyzed with time-to-event analysis. The Kaplan-Meier estimates of survival at 1 and 5 years were, respectively, 95.0% ± 0.2% and 84.7% ± 0.4%. Both discrimination and calibration of EuroSCORE II decreased in the prediction of 1-year and 5-year mortality. Nonetheless, EuroSCORE II was confirmed to be an independent predictor of long-term mortality with a nonlinear trend. Several EuroSCORE II variables were independent risk factors for long-term mortality in a regression model, most of all very low ejection fraction (less than 20%), salvage operation, and dialysis. In the final model, isolated mitral valve surgery and isolated coronary artery bypass graft surgery were associated with improved long-term survival. The EuroSCORE II cannot be considered a direct estimator of long-term risk of death, as its performance fades for mortality at follow-up longer than 30 days. Nonetheless, it is nonlinearly associated with long-term mortality, and most of its variables are risk factors for long-term mortality. Hence, they can be used in a different algorithm to stratify the risk of long-term mortality after surgery. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  20. A new algorithm for hip fracture surgery

    DEFF Research Database (Denmark)

    Palm, Henrik; Krasheninnikoff, Michael; Holck, Kim

    2012-01-01

    Background and purpose: Treatment of hip fracture patients is controversial. We implemented a new operative and supervision algorithm (the Hvidovre algorithm) for surgical treatment of all hip fractures, primarily based on our own previously published results. Methods: 2,000 consecutive patients over 50 years of age who were admitted and operated on because of a hip fracture were prospectively included. 1,000 of these patients were included after implementation of the algorithm. Demographic parameters, hospital treatment, and reoperations within the first postoperative year were assessed from patient records. … Hospitalization caused by reoperations was reduced from 24% of total hospitalization before the algorithm was introduced to 18% after it was introduced. Interpretation: It is possible to implement an algorithm for treatment of all hip fracture patients in a large teaching hospital. In our case, the Hvidovre algorithm both raised…

  1. An improved recommended algorithm for network structure based on two partial graphs

    Directory of Open Access Journals (Sweden)

    Deng Song

    2017-08-01

    In this paper, we introduce an improved recommendation algorithm based on network structure. Building on the standard material diffusion algorithm, and considering the influence of users' scores on the recommendation, we improve the adjustment factor of the initial resource allocation vector and the resource transfer matrix in the recommendation algorithm. Using a practical data set from the GroupLens website to evaluate the performance of the proposed algorithm, we performed a series of experiments. The experimental results reveal that it yields better recommendation accuracy and a higher hit rate than collaborative filtering and network-based inference. It solves the cold-start and scalability problems of the standard material diffusion algorithm, and it also diversifies the recommendation results.
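    A sketch of the standard (unimproved) material diffusion scores on a small user-item matrix; the paper's rating-based adjustment factors are omitted:

        import numpy as np

        def material_diffusion_scores(A, user):
            """Standard material-diffusion (network-based inference) recommendation
            on a user-item bipartite adjacency matrix A (users x items)."""
            k_items = A.sum(axis=0)   # item degrees
            k_users = A.sum(axis=1)   # user degrees
            # Resource-transfer matrix W[i, j]: share of item j's resource passed to item i.
            W = (A / k_users[:, None]).T @ (A / k_items[None, :])
            scores = W @ A[user]              # diffuse the user's collected items
            scores[A[user] > 0] = -np.inf     # do not re-recommend known items
            return scores

        A = np.array([[1, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 1, 1]], dtype=float)
        print(material_diffusion_scores(A, user=0).round(3))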

  2. Peritumoral Artery Scoring System: a Novel Scoring System to Predict Renal Function Outcome after Laparoscopic Partial Nephrectomy.

    Science.gov (United States)

    Zhang, Ruiyun; Wu, Guangyu; Huang, Jiwei; Shi, Oumin; Kong, Wen; Chen, Yonghui; Xu, Jianrong; Xue, Wei; Zhang, Jin; Huang, Yiran

    2017-06-06

    The present study aimed to assess the impact of peritumoral artery characteristics on renal function outcome prediction using a novel Peritumoral Artery Scoring System based on computed tomography arteriography. Peritumoral artery characteristics and renal function were evaluated in 220 patients who underwent laparoscopic partial nephrectomy and then validated in 51 patients with split and total glomerular filtration rate (GFR). In particular, peritumoral artery classification and diameter were measured to assign arteries into low, moderate, and high Peritumoral Artery Scoring System risk categories. Univariable and multivariable logistic regression analyses were then used to determine risk factors for major renal function decline. The Peritumoral Artery Scoring System and four other nephrometry systems were compared using receiver operating characteristic curve analysis. The Peritumoral Artery Scoring System was significantly superior to the other systems for predicting postoperative renal function decline, and it was a superior independent predictor of estimated glomerular filtration rate (eGFR) decline (area under the curve = 0.865), indicating its value for predicting renal function outcome after laparoscopic partial nephrectomy.

  3. Predictive values of D-dimer assay, GRACE scores and TIMI scores for adverse outcome in patients with non-ST-segment elevation myocardial infarction

    Science.gov (United States)

    Satilmisoglu, Muhammet Hulusi; Ozyilmaz, Sinem Ozbay; Gul, Mehmet; Ak Yildirim, Hayriye; Kayapinar, Osman; Gokturk, Kadir; Aksu, Huseyin; Erkanli, Korhan; Eksik, Abdurrahman

    2017-01-01

    Purpose To determine the predictive values of D-dimer assay, Global Registry of Acute Coronary Events (GRACE) and Thrombolysis in Myocardial Infarction (TIMI) risk scores for adverse outcome in patients with non-ST-segment elevation myocardial infarction (NSTEMI). Patients and methods A total of 234 patients (mean age: 57.2±11.7 years, 75.2% were males) hospitalized with NSTEMI were included. Data on D-dimer assay, GRACE and TIMI risk scores were recorded. Logistic regression analysis was conducted to determine the risk factors predicting increased mortality. Results Median D-dimer levels were 349.5 (48.0–7,210.0) ng/mL, the average TIMI score was 3.2±1.2 and the GRACE score was 90.4±27.6 with high GRACE scores (>118) in 17.5% of patients. The GRACE score was correlated positively with both the D-dimer assay (r=0.215, P=0.01) and TIMI scores (r=0.504, P=0.000). Multivariate logistic regression analysis revealed that higher creatinine levels (odds ratio =18.465, 95% confidence interval: 1.059–322.084, P=0.046) constituted the only significant predictor of increased mortality risk with no predictive values for age, D-dimer assay, ejection fraction, glucose, hemoglobin A1c, sodium, albumin or total cholesterol levels for mortality. Conclusion Serum creatinine levels constituted the sole independent determinant of mortality risk, with no significant values for D-dimer assay, GRACE or TIMI scores for predicting the risk of mortality in NSTEMI patients. PMID:28408834

  4. Cognitive radio resource allocation based on coupled chaotic genetic algorithm

    International Nuclear Information System (INIS)

    Zu Yun-Xiao; Zhou Jie; Zeng Chang-Chang

    2010-01-01

    A coupled chaotic genetic algorithm for cognitive radio resource allocation, based on a genetic algorithm and a coupled Logistic map, is proposed. A fitness function for cognitive radio resource allocation is provided. Simulations of cognitive radio resource allocation are conducted using the coupled chaotic genetic algorithm, a simple genetic algorithm and a dynamic allocation algorithm, respectively. The simulation results show that, compared with the simple genetic and dynamic allocation algorithms, the coupled chaotic genetic algorithm reduces the total transmission power and the bit error rate in the cognitive radio system, and converges faster.
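    A sketch of the chaotic driver: a ring of coupled Logistic maps whose iterates can seed or perturb GA individuals. The coupling form and parameters are illustrative assumptions:

        import numpy as np

        def coupled_logistic_sequences(n_maps=4, length=100, mu=4.0, eps=0.1, seed=7):
            """Generate chaotic sequences from coupled Logistic maps: each map is
            x <- mu*x*(1-x), then neighbours on a ring are linearly coupled with
            strength eps. Values stay in [0, 1], convenient for encoding genes."""
            rng = np.random.default_rng(seed)
            x = rng.uniform(0.1, 0.9, size=n_maps)
            out = np.empty((length, n_maps))
            for t in range(length):
                fx = mu * x * (1.0 - x)                      # logistic update
                x = (1.0 - eps) * fx + eps * np.roll(fx, 1)  # ring coupling
                out[t] = x
            return out

        print(coupled_logistic_sequences()[:3].round(4))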

  5. Development and Evaluation of an Automated Machine Learning Algorithm for In-Hospital Mortality Risk Adjustment Among Critical Care Patients.

    Science.gov (United States)

    Delahanty, Ryan J; Kaufman, David; Jones, Spencer S

    2018-06-01

    Risk adjustment algorithms for ICU mortality are necessary for measuring and improving ICU performance. Existing risk adjustment algorithms are not widely adopted. Key barriers to adoption include licensing and implementation costs as well as labor costs associated with human-intensive data collection. Widespread adoption of electronic health records makes automated risk adjustment feasible. Using modern machine learning methods and open source tools, we developed and evaluated a retrospective risk adjustment algorithm for in-hospital mortality among ICU patients. The Risk of Inpatient Death score can be fully automated and is reliant upon data elements that are generated in the course of usual hospital processes. One hundred thirty-one ICUs in 53 hospitals operated by Tenet Healthcare. A cohort of 237,173 ICU patients discharged between January 2014 and December 2016. The data were randomly split into training (36 hospitals) and validation (17 hospitals) data sets. Feature selection and model training were carried out using the training set, while the discrimination, calibration, and accuracy of the model were assessed in the validation data set. Model discrimination was evaluated based on the area under the receiver operating characteristic curve; accuracy and calibration were assessed via adjusted Brier scores and visual analysis of calibration curves. Seventeen features, including a mix of clinical and administrative data elements, were retained in the final model. The Risk of Inpatient Death score demonstrated excellent discrimination (area under the receiver operating characteristic curve = 0.94) and calibration (adjusted Brier score = 52.8%) in the validation dataset; these results compare favorably with the published performance statistics for the most commonly used mortality risk adjustment algorithms. Low adoption of ICU mortality risk adjustment algorithms impedes progress toward increasing the value of the healthcare delivered in ICUs. The Risk of Inpatient Death score offers a fully automated alternative built from routinely collected data.

  6. An Automatic Assessment System of Diabetic Foot Ulcers Based on Wound Area Determination, Color Segmentation, and Healing Score Evaluation.

    Science.gov (United States)

    Wang, Lei; Pedersen, Peder C; Strong, Diane M; Tulu, Bengisu; Agu, Emmanuel; Ignotz, Ron; He, Qian

    2015-08-07

    For individuals with type 2 diabetes, foot ulcers represent a significant health issue. The aim of this study is to design and evaluate a wound assessment system to help wound clinics assess patients with foot ulcers in a way that complements their current visual examination and manual measurements. The physical components of the system consist of an image capture box, a smartphone for wound image capture and a laptop for analyzing the wound image. The wound image assessment algorithms calculate the overall wound area, color-segmented wound areas, and a healing score, to provide a quantitative assessment of the wound healing status both for a single wound image and for comparisons of subsequent images to an initial wound image. The system was evaluated by assessing foot ulcers for 12 patients in the Wound Clinic at University of Massachusetts Medical School. As performance measures, the Matthews correlation coefficient (MCC) value for the wound area determination algorithm tested on 32 foot ulcer images was 0.68. The clinical validity of our healing score algorithm relative to the experienced clinicians was measured by Krippendorff's alpha coefficient (KAC) and ranged from 0.42 to 0.81. Our system provides a promising real-time method for wound assessment based on image analysis. Clinical comparisons indicate that the optimized mean-shift-based algorithm is well suited for wound area determination. Clinical evaluation of our healing score algorithm shows its potential to provide clinicians with a quantitative method for evaluating wound healing status. © 2015 Diabetes Technology Society.
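    The MCC reported above is computed from the pixel-level confusion matrix; a minimal sketch with hypothetical counts:

        def matthews_corrcoef_binary(tp, fp, fn, tn):
            """Matthews correlation coefficient from a 2x2 confusion matrix,
            here interpreted as pixel-wise wound/non-wound agreement."""
            num = tp * tn - fp * fn
            den = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
            return num / den if den else 0.0

        # Hypothetical pixel counts for one wound image.
        print(round(matthews_corrcoef_binary(tp=8200, fp=1500, fn=2300, tn=88000), 3))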

  7. Categorizing segmentation quality using a quantitative quality assurance algorithm

    International Nuclear Information System (INIS)

    Rodrigues, George; Louie, Alexander; Best, Lara

    2012-01-01

    Obtaining high levels of contouring consistency is a major limiting step in optimizing the radiotherapeutic ratio. We describe a novel quantitative methodology for the quality assurance (QA) of contour compliance, referenced against a community set of contouring experts. Two clinical tumour site scenarios (10 lung cases and one prostate case) were used with the QA algorithm. For each case, multiple physicians (lung: n = 6, prostate: n = 25) segmented various target/organ-at-risk (OAR) structures to define a set of community reference contours. For each set of community contours, a consensus contour (Simultaneous Truth and Performance Level Estimation, STAPLE) was created. Differences between each individual community contour and the group consensus contour were quantified by consensus-based contouring penalty metric (PM) scores. New observers segmented these same cases to calculate individual PM scores (for each unique target/OAR) for each new observer-STAPLE pair for comparison against the community and consensus contours. Four physicians contoured the 10 lung cases for a total of 72 contours for quality assurance evaluation against the previously derived community consensus contours. A total of 16 outlier contours were identified by the QA system, of which 11 were due to over-contouring discrepancies, three to over-/under-contouring discrepancies, and two to missing/incorrect nodal contours. In the prostate scenario involving six physicians, the QA system detected a missing penile bulb contour, systematic inner-bladder contouring, and under-contouring of the upper/anterior rectum. A practical methodology for QA has been demonstrated, with future applications in clinical trial credentialing, medical education and auto-contouring assessment.

  8. Continued Inpatient Care After Primary Total Knee Arthroplasty Increases 30-Day Post-Discharge Complications: A Propensity Score-Adjusted Analysis.

    Science.gov (United States)

    McLawhorn, Alexander S; Fu, Michael C; Schairer, William W; Sculco, Peter K; MacLean, Catherine H; Padgett, Douglas E

    2017-09-01

    Discharge destination, either home or a skilled care facility, after total knee arthroplasty (TKA) may be associated with significant variation in postacute care outcomes. The purpose of this study was to characterize the 30-day postdischarge outcomes after primary TKA relative to discharge destination. All primary unilateral TKAs performed for osteoarthritis from 2011-2014 were identified in the National Surgical Quality Improvement Program database. Propensity scores based on predischarge characteristics were used to adjust for selection bias in discharge destination. Propensity-adjusted multivariable logistic regressions were used to examine associations between discharge destination and postdischarge complications. Among 101,256 primary TKAs identified, 70,628 were discharged home and 30,628 to skilled care facilities. Patients discharged to facilities were more frequently female, older, in a higher body mass index class, had higher Charlson comorbidity index and American Society of Anesthesiologists scores, had predischarge complications, received general anesthesia, and were classified as nonindependent preoperatively. Propensity adjustment accounted for this selection bias. Patients discharged to skilled care facilities after TKA had higher odds of any major complication (odds ratio = 1.25; 95% confidence interval, 1.13-1.37) and readmission (odds ratio = 1.81; 95% confidence interval, 1.50-2.18). Skilled care was associated with increased odds for respiratory, septic, thromboembolic, and urinary complications. Associations with death, cardiac, and wound complications were not significant. After controlling for predischarge characteristics, discharge to skilled care facilities vs home after primary TKA is associated with higher odds of numerous complications and unplanned readmission. These results support coordination of care pathways to facilitate home discharge after hospitalization for TKA whenever possible. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Development and clinical implementation of an enhanced display algorithm for use in networked electronic portal imaging

    International Nuclear Information System (INIS)

    Heuvel, Frank van den; Han, Ihn; Chungbin, Suzanne; Strowbridge, Amy; Tekyi-Mensah, Sam; Ragan, Don P.

    1999-01-01

    Purpose: To introduce and clinically validate a preprocessing algorithm that allows clinical images from an electronic portal imaging device (EPID) to be displayed on any computer monitor without loss of clinical usability. The introduction of such a system frees EPI systems from the constraints of fixed viewing workstations and increases the mobility of images in a department. Methods and Materials: The preprocessing algorithm, together with its variable parameters, is introduced. Clinically, the algorithm is tested using an observer study of 316 EPID images of the pelvic region in the framework of treatment of carcinoma of the cervix and endometrium. Both anterior-posterior (AP/PA) and latero-lateral (LAT) images were used. The images scored were taken from six different patients, five of whom were obese, female, and postmenopausal. The result is tentatively compared with results from other groups. The scoring system, based on the number of visible landmarks in the port, is proposed and validated. Validation was performed by having the observer panel score images with artificially induced noise levels. A comparative study was undertaken with a standard automatic window and leveling display technique. Finally, some case studies using different image sites and EPI detectors are presented. Results: The image quality for all images in this study was deemed to be clinically useful (mean score > 1). Most of the images received a score which was second highest (AP/PA landmarks ≥ 6 and LAT landmarks ≥ 5). Obesity, which has been an important factor determining image quality, was not seen to be a factor here. Compared with standard techniques, a highly significant improvement was determined with regard to clinical usefulness. The algorithm performs fast (less than 9 seconds) and needs no additional user interaction in most cases. The algorithm works well on both direct-detection portal imagers and camera-based imagers, whether with analog or digital cameras.

  10. Total Lightning as an Indicator of Mesocyclone Behavior

    Science.gov (United States)

    Stough, Sarah M.; Carey, Lawrence D.; Schultz, Christopher J.

    2014-01-01

    The apparent relationship between total lightning (in-cloud and cloud-to-ground) and severe weather suggests its operational utility, the goal being the fusion of total lightning with proven tools (i.e., radar-based algorithms). Preliminary work here investigates circulation from the Weather Surveillance Radar-1988 Doppler (WSR-88D) coupled with total lightning data from Lightning Mapping Arrays.

  11. Are the MDS-UPDRS-based composite scores clinically applicable?

    Science.gov (United States)

    Makkos, Attila; Kovács, Márton; Aschermann, Zsuzsanna; Harmat, Márk; Janszky, József; Karádi, Kázmér; Kovács, Norbert

    2018-02-28

    The International Parkinson and Movement Disorder Society-sponsored UPDRS (MDS-UPDRS) is a powerful clinical outcome measure. The aim was to evaluate the feasibility of various MDS-UPDRS-based composite scores and determine their minimal clinically important difference threshold values. Overall, 1,113 paired investigations of 452 patients were reviewed, implementing three different techniques simultaneously. Based on the ordinal regression modeling, the MDS-UPDRS II+III, MDS-UPDRS I+II+III, and the total score of MDS-UPDRS are clinically applicable outcome measures. Any improvement greater than 4.9 points or any worsening more than 4.2 points on MDS-UPDRS II+III represents a minimal, yet clinically meaningful, change. In reference to MDS-UPDRS I+II+III, the smallest changes considered clinically relevant were 6.7 and 5.2 points for improvement and deterioration, respectively. The thresholds for the total score of MDS-UPDRS were 7.1 points for improvement and 6.3 points for worsening. Our findings support the application of various MDS-UPDRS-based composite scores. © 2018 International Parkinson and Movement Disorder Society.

  12. On algorithm for building of optimal α-decision trees

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

    2010-01-01

    The paper describes an algorithm that constructs approximate decision trees (α-decision trees), which are optimal relative to one of the following complexity measures: depth, total path length or number of nodes. The algorithm uses dynamic programming.

  13. Good validity and reliability of the forgotten joint score in evaluating the outcome of total knee arthroplasty

    DEFF Research Database (Denmark)

    Thomsen, Morten G; Latifi, Roshan; Kallemose, Thomas

    2016-01-01

    We investigated the validity and reliability of the FJS. Patients and methods: A Danish version of the FJS questionnaire was created according to internationally accepted standards. 360 participants who underwent primary TKA were invited to participate in the study. Of these, 315 were included in a validity study and 150 in a reliability study. Correlation between the Oxford knee score (OKS) and the FJS was examined and test-retest evaluation was performed. A ceiling effect was defined as participants reaching a score within 15% of the maximum achievable score. Results: The validity study revealed good correlation between the OKS and the FJS, and test-retest evaluation showed good reliability of the FJS (ICC = 0.79). We found a high level of internal consistency (Cronbach's α = 0.96). The ceiling effect for the FJS was 16%, as compared to 37% for the OKS. Interpretation: The FJS showed good construct validity and test-retest reliability. It had a lower ceiling effect than the OKS. The FJS appears to be a useful outcome measure after TKA.

  14. Naive scoring of human sleep based on a hidden Markov model of the electroencephalogram.

    Science.gov (United States)

    Yaghouby, Farid; Modur, Pradeep; Sunderam, Sridhar

    2014-01-01

    Clinical sleep scoring involves tedious visual review of overnight polysomnograms by a human expert. Many attempts have been made to automate the process by training computer algorithms such as support vector machines and hidden Markov models (HMMs) to replicate human scoring. Such supervised classifiers are typically trained on scored data and then validated on scored out-of-sample data. Here we describe a methodology based on HMMs for scoring an overnight sleep recording without the benefit of a trained initial model. The number of states in the data is not known a priori and is optimized using a Bayes information criterion. When tested on a 22-subject database, this unsupervised classifier agreed well with human scores (mean of Cohen's kappa > 0.7). The HMM also outperformed other unsupervised classifiers (Gaussian mixture models, k-means, and linkage trees), that are capable of naive classification but do not model dynamics, by a significant margin (p < 0.05).
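    A sketch of the state-count selection step, assuming Gaussian emissions and the hmmlearn package (an assumed third-party dependency; the paper's feature set and HMM details differ):

        import numpy as np
        from hmmlearn.hmm import GaussianHMM   # assumed dependency (pip install hmmlearn)

        def select_hmm_states(X, max_states=6):
            """Fit Gaussian HMMs with increasing state counts and pick the number
            minimizing BIC, mirroring the unsupervised selection step above."""
            n, d = X.shape
            best = None
            for k in range(2, max_states + 1):
                model = GaussianHMM(n_components=k, covariance_type="diag",
                                    n_iter=50, random_state=0).fit(X)
                log_l = model.score(X)
                # Free parameters: transitions k(k-1) + initial (k-1) + means kd + diag covars kd.
                p = k * (k - 1) + (k - 1) + 2 * k * d
                bic = -2.0 * log_l + p * np.log(n)
                if best is None or bic < best[0]:
                    best = (bic, k, model)
            return best

        rng = np.random.default_rng(2)
        X = np.concatenate([rng.normal(0, 1, (300, 2)), rng.normal(4, 1, (300, 2))])
        bic, k, _ = select_hmm_states(X)
        print(f"selected {k} states (BIC={bic:.1f})")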

  15. Novel Semiquantitative Bone Marrow Oedema Score and Fracture Score for the Magnetic Resonance Imaging Assessment of the Active Charcot Foot in Diabetes

    Science.gov (United States)

    Meacock, L.; Donaldson, Ana; Isaac, A.; Briody, A.; Ramnarine, R.; Edmonds, M. E.; Elias, D. A.

    2017-01-01

    There are no accepted methods to grade bone marrow oedema (BMO) and fracture on magnetic resonance imaging (MRI) scans in Charcot osteoarthropathy. The aim was to devise semiquantitative BMO and fracture scores for foot and ankle MRI scans in diabetic patients with active osteoarthropathy and to assess the agreement in using these scores. Three radiologists assessed 45 scans (Siemens Avanto 1.5T, dedicated foot and ankle coil) and independently scored twenty-two bones (proximal phalanges, medial and lateral sesamoids, metatarsals, tarsals, distal tibial plafond, and medial and lateral malleoli) for BMO (0—no oedema, 1—oedema involving <50% of bone volume, 2—oedema involving ≥50% of bone volume) and fracture (0—no fracture, 1—fracture, and 2—collapse/fragmentation). Interobserver agreement and intraobserver agreement were measured using multilevel modelling and intraclass correlation (ICC). The interobserver agreement for the total BMO and fracture scores was very good (ICC = 0.83, 95% confidence interval (CI) 0.76, 0.91) and good (ICC = 0.62; 95% CI 0.48, 0.76), respectively. The intraobserver agreement for the total BMO and fracture scores was good (ICC = 0.78, 95% CI 0.6, 0.95) and fair to moderate (ICC = 0.44; 95% CI 0.14, 0.74), respectively. The proposed BMO and fracture scores are reliable and can be used to grade the extent of bone damage in the active Charcot foot. PMID:29230422

  16. Description and validation of a scoring system for tomosynthesis in pulmonary cystic fibrosis.

    Science.gov (United States)

    Vult von Steyern, Kristina; Björkman-Burtscher, Isabella M; Höglund, Peter; Bozovic, Gracijela; Wiklund, Marie; Geijer, Mats

    2012-12-01

    To design and validate a scoring system for tomosynthesis (digital tomography) in pulmonary cystic fibrosis. A scoring system dedicated to tomosynthesis in pulmonary cystic fibrosis was designed. Three radiologists independently scored 88 pairs of radiographs and tomosynthesis examinations of the chest in 60 patients with cystic fibrosis and 7 oncology patients. Radiographs were scored according to the Brasfield scoring system and tomosynthesis examinations were scored using the new scoring system. Observer agreements for the tomosynthesis score were almost perfect for the total score with square-weighted kappa >0.90, and generally substantial to almost perfect for subscores. Correlation between the tomosynthesis score and the Brasfield score was good for the three observers (Kendall's rank correlation tau 0.68, 0.77 and 0.78). Tomosynthesis was generally scored higher as a percentage of the maximum score. Observer agreements for the total score for Brasfield score were almost perfect (square-weighted kappa 0.80, 0.81 and 0.85). The tomosynthesis scoring system seems robust and correlates well with the Brasfield score. Compared with radiography, tomosynthesis is more sensitive to cystic fibrosis changes, especially bronchiectasis and mucus plugging, and the new tomosynthesis scoring system offers the possibility of more detailed and accurate scoring of disease severity. Tomosynthesis is more sensitive than conventional radiography for pulmonary cystic fibrosis changes. The radiation dose from chest tomosynthesis is low compared with computed tomography. Tomosynthesis may become useful in the regular follow-up of patients with cystic fibrosis.

  17. Evaluation of modified Alvarado scoring system and RIPASA scoring system as diagnostic tools of acute appendicitis.

    Science.gov (United States)

    Shuaib, Abdullah; Shuaib, Ali; Fakhra, Zainab; Marafi, Bader; Alsharaf, Khalid; Behbehani, Abdullah

    2017-01-01

    Acute appendicitis is the most common surgical condition presented in emergency departments worldwide. Clinical scoring systems, such as the Alvarado and modified Alvarado scoring systems, were developed with the goal of reducing the negative appendectomy rate to 5%-10%. The Raja Isteri Pengiran Anak Saleha Appendicitis (RIPASA) scoring system was established in 2008 specifically for Asian populations. The aim of this study was to compare the modified Alvarado with the RIPASA scoring system in a Kuwaiti population. This study included 180 patients who underwent appendectomies and were documented as having "acute appendicitis" or "abdominal pain" in the operating theatre logbook (unit B) from November 2014 to March 2016. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), diagnostic accuracy, predicted negative appendectomy rate and receiver operating characteristic (ROC) curve of the modified Alvarado and RIPASA scoring systems were derived using SPSS statistical software. A total of 136 patients were included in this study according to our criteria. The cut-off threshold point of the modified Alvarado score was set at 7.0, which yielded a sensitivity of 82.8% and a specificity of 56%. The PPV was 89.3% and the NPV was 42.4%. The cut-off threshold point of the RIPASA score was set at 7.5, which yielded a 94.5% sensitivity and an 88% specificity. The PPV was 97.2% and the NPV was 78.5%. The predicted negative appendectomy rates were 10.7% and 2.2% for the modified Alvarado and RIPASA scoring systems, respectively. The negative appendectomy rate decreased significantly, from 18.4% to 10.7% for the modified Alvarado and to 2.2% for the RIPASA scoring system. The RIPASA scoring system, designed specifically for Asian populations, consists of 14 clinical parameters that can be obtained from a good patient history, clinical examination and laboratory investigations. In this population it was more accurate and specific than the modified Alvarado scoring system.
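    The reported sensitivity, specificity, PPV and NPV follow from a 2x2 table at the chosen cut-off; a sketch with hypothetical counts close to the figures above:

        def diagnostic_metrics(tp, fp, fn, tn):
            """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table,
            the quantities compared between the two scoring systems above."""
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn),
            }

        # Hypothetical counts for a cut-off of 7.5 in a 136-patient cohort.
        m = diagnostic_metrics(tp=104, fp=3, fn=6, tn=23)
        print({k: round(v, 3) for k, v in m.items()})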

  18. Quantifying the impact of using Coronary Artery Calcium Score for risk categorization instead of Framingham Score or European Heart SCORE in lipid lowering algorithms in a Middle Eastern population.

    Science.gov (United States)

    Isma'eel, Hussain A; Almedawar, Mohamad M; Harbieh, Bernard; Alajaji, Wissam; Al-Shaar, Laila; Hourani, Mukbil; El-Merhi, Fadi; Alam, Samir; Abchee, Antoine

    2015-10-01

    The use of the Coronary Artery Calcium Score (CACS) for risk categorization instead of the Framingham Risk Score (FRS) or European Heart SCORE (EHS) to improve classification of individuals is well documented. However, the impact of reclassifying individuals using CACS on initiating lipid lowering therapy is not well understood. We aimed to determine the percentage of individuals not requiring lipid lowering therapy as per the FRS and EHS models but are found to require it using CACS and vice versa; and to determine the level of agreement between CACS, FRS and EHS based models. Data was collected for 500 consecutive patients who had already undergone CACS. However, only 242 patients met the inclusion criteria and were included in the analysis. Risk stratification comparisons were conducted according to CACS, FRS, and EHS, and the agreement (Kappa) between them was calculated. In accordance with the models, 79.7% to 81.5% of high-risk individuals were down-classified by CACS, while 6.8% to 7.6% of individuals at intermediate risk were up-classified to high risk by CACS, with slight to moderate agreement. Moreover, CACS recommended treatment to 5.7% and 5.8% of subjects untreated according to European and Canadian guidelines, respectively; whereas 75.2% to 81.2% of those treated in line with the guidelines would not be treated based on CACS. In this simulation, using CACS for risk categorization warrants lipid lowering treatment for 5-6% and spares 70-80% from treatment in accordance with the guidelines. Current strong evidence from double randomized clinical trials is in support of guideline recommendations. Our results call for a prospective trial to explore the benefits/risks of a CACS-based approach before any recommendations can be made.

  19. Statistical distributions of optimal global alignment scores of random protein sequences

    Directory of Open Access Journals (Sweden)

    Tang Jiaowei

    2005-10-01

    Background: The inference of homology from statistically significant sequence similarity is a central issue in sequence alignments. So far the statistical distribution function underlying the optimal global alignments has not been completely determined. Results: In this study, random and real but unrelated sequences prepared in six different ways were selected as reference datasets to obtain their respective statistical distributions of global alignment scores. All alignments were carried out with the Needleman-Wunsch algorithm and optimal scores were fitted to the Gumbel, normal and gamma distributions, respectively. The three-parameter gamma distribution performs best as the theoretical distribution function of global alignment scores, as it agrees well with the observed distribution of alignment scores. The normal distribution also agrees well with the score distribution frequencies when the shape parameter of the gamma distribution is sufficiently large, for this is the scenario in which the normal distribution can be viewed as an approximation of the gamma distribution. Conclusion: We have shown that the optimal global alignment scores of random protein sequences fit the three-parameter gamma distribution function. This is useful for the inference of homology between sequences whose relationship is unknown, through the evaluation of gamma distribution significance between sequences.
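    A sketch of the fitting-and-comparison step using SciPy, with simulated scores standing in for real alignment scores:

        import numpy as np
        from scipy import stats

        # Simulated stand-in for optimal global alignment scores of random sequences.
        rng = np.random.default_rng(3)
        scores = stats.gamma.rvs(a=5.0, loc=100.0, scale=8.0, size=2000, random_state=rng)

        # Fit the three-parameter gamma (shape, location, scale) and a normal.
        a, loc, scale = stats.gamma.fit(scores)
        mu, sigma = stats.norm.fit(scores)

        # Compare goodness of fit with Kolmogorov-Smirnov statistics (smaller is better).
        ks_gamma = stats.kstest(scores, "gamma", args=(a, loc, scale)).statistic
        ks_norm = stats.kstest(scores, "norm", args=(mu, sigma)).statistic
        print(f"gamma KS={ks_gamma:.4f}  normal KS={ks_norm:.4f}")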

  20. Design of Compressed Sensing Algorithm for Coal Mine IoT Moving Measurement Data Based on a Multi-Hop Network and Total Variation

    Directory of Open Access Journals (Sweden)

    Gang Wang

    2018-05-01

    With the application of the coal mine Internet of Things (IoT), mobile measurement devices, such as intelligent mine lamps, have caused the volume of moving measurement data to increase. How to transmit these large amounts of mobile measurement data effectively has become an urgent problem. This paper presents a compressed sensing algorithm for the large amount of coal mine IoT moving measurement data, based on a multi-hop network and total variation. Taking the gas data in mobile measurement data as an example, two network models for the transmission of gas data flow, namely single-hop and multi-hop transmission modes, are investigated in depth, and a gas data compressed sensing collection model is built on a multi-hop network. To exploit the sparse characteristics of gas data, the concept of total variation is introduced and a high-efficiency gas data compression and reconstruction method, Total Variation Sparsity based on Multi-Hop (TVS-MH), is proposed. According to the simulation results, by using the proposed method the moving measurement data flow from an underground distributed mobile network can be acquired and transmitted efficiently.

  1. Design of Compressed Sensing Algorithm for Coal Mine IoT Moving Measurement Data Based on a Multi-Hop Network and Total Variation.

    Science.gov (United States)

    Wang, Gang; Zhao, Zhikai; Ning, Yongjie

    2018-05-28

    With the application of the coal mine Internet of Things (IoT), mobile measurement devices, such as intelligent mine lamps, have caused the volume of moving measurement data to increase. How to transmit these large amounts of mobile measurement data effectively has become an urgent problem. This paper presents a compressed sensing algorithm for the large amount of coal mine IoT moving measurement data, based on a multi-hop network and total variation. Taking the gas data in mobile measurement data as an example, two network models for the transmission of gas data flow, namely single-hop and multi-hop transmission modes, are investigated in depth, and a gas data compressed sensing collection model is built on a multi-hop network. To exploit the sparse characteristics of gas data, the concept of total variation is introduced and a high-efficiency gas data compression and reconstruction method, Total Variation Sparsity based on Multi-Hop (TVS-MH), is proposed. According to the simulation results, by using the proposed method the moving measurement data flow from an underground distributed mobile network can be acquired and transmitted efficiently.
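    A toy single-hop analogue of the TV-based reconstruction idea: recover a piecewise-constant signal from a few random projections by minimizing a least-squares term plus a smoothed total-variation penalty. All parameters are illustrative assumptions, not the TVS-MH algorithm itself:

        import numpy as np
        from scipy.optimize import minimize

        def tv_reconstruct(A, y, lam=0.1, eps=1e-8):
            """Minimize 0.5*||Ax - y||^2 + lam*TV(x), with TV smoothed so that
            L-BFGS applies; suited to signals with sparse gradients such as
            slowly varying gas readings."""
            n = A.shape[1]

            def objective(x):
                r = A @ x - y
                d = np.diff(x)
                s = np.sqrt(d * d + eps)
                f = 0.5 * r @ r + lam * s.sum()
                t = d / s                      # derivative of smoothed |d|
                g_tv = np.zeros(n)
                g_tv[1:] += t
                g_tv[:-1] -= t
                return f, A.T @ r + lam * g_tv

            res = minimize(objective, np.zeros(n), jac=True, method="L-BFGS-B")
            return res.x

        rng = np.random.default_rng(4)
        x_true = np.concatenate([np.full(40, 1.0), np.full(40, 3.0), np.full(48, 2.0)])
        A = rng.normal(size=(64, 128)) / np.sqrt(64)   # random sensing matrix, m << n
        y = A @ x_true + 0.01 * rng.normal(size=64)
        x_hat = tv_reconstruct(A, y, lam=0.05)
        print(f"relative error: {np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true):.3f}")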

  2. PCTFPeval: a web tool for benchmarking newly developed algorithms for predicting cooperative transcription factor pairs in yeast.

    Science.gov (United States)

    Lai, Fu-Jou; Chang, Hong-Tsun; Wu, Wei-Sheng

    2015-01-01

    Computational identification of cooperative transcription factor (TF) pairs helps understand the combinatorial regulation of gene expression in eukaryotic cells. Many advanced algorithms have been proposed to predict cooperative TF pairs in yeast. However, it is still difficult to conduct a comprehensive and objective performance comparison of different algorithms because of the lack of sufficient performance indices and adequate overall performance scores. To solve this problem, in our previous study (published in BMC Systems Biology 2014), we adopted/proposed eight performance indices and designed two overall performance scores to compare the performance of 14 existing algorithms for predicting cooperative TF pairs in yeast. Most importantly, our performance comparison framework can be applied to comprehensively and objectively evaluate the performance of a newly developed algorithm. However, to use our framework, researchers have to put a lot of effort into constructing it first. To save researchers time and effort, here we develop a web tool to implement our performance comparison framework, featuring fast data processing, a comprehensive performance comparison and an easy-to-use web interface. The developed tool is called PCTFPeval (Predicted Cooperative TF Pair evaluator), written in the PHP and Python programming languages. The friendly web interface allows users to input a list of predicted cooperative TF pairs from their algorithm and select (i) the compared algorithms among the 15 existing algorithms, (ii) the performance indices among the eight existing indices, and (iii) the overall performance scores from two possible choices. The comprehensive performance comparison results are then generated in tens of seconds and shown as both bar charts and tables. The original comparison results of each compared algorithm and each selected performance index can be downloaded as text files for further analyses. Allowing users to select among the eight existing performance indices and 15 existing algorithms makes the comparison both comprehensive and flexible.

  3. Performance comparison of extracellular spike sorting algorithms for single-channel recordings.

    Science.gov (United States)

    Wild, Jiri; Prekopcsak, Zoltan; Sieger, Tomas; Novak, Daniel; Jech, Robert

    2012-01-30

    Proper classification of action potentials from extracellular recordings is essential for making an accurate study of neuronal behavior. Many spike sorting algorithms have been presented in the technical literature. However, no comparative analysis has hitherto been performed. In our study, three widely-used publicly-available spike sorting algorithms (WaveClus, KlustaKwik, OSort) were compared with regard to their parameter settings. The algorithms were evaluated using 112 artificial signals (publicly available online) with 2-9 different neurons and varying noise levels between 0.00 and 0.60. An optimization technique based on Adjusted Mutual Information was employed to find near-optimal parameter settings for a given artificial signal and algorithm. All three algorithms performed significantly better (p < 0.05) with optimized parameters than with the default ones. WaveClus was the most accurate spike sorting algorithm, receiving the best evaluation score for 60% of all signals. OSort operated at almost five times the speed of the other algorithms. In terms of accuracy, OSort performed significantly less well (p < 0.05) than the other two algorithms. None of the algorithms was optimal in general. The accuracy of the algorithms depended on proper choice of the algorithm parameters and also on specific properties of the examined signal. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. Association between sleep stages and hunger scores in 36 children.

    Science.gov (United States)

    Arun, R; Pina, P; Rubin, D; Erichsen, D

    2016-10-01

    Childhood obesity is a growing health challenge. Recent studies show that children with late bedtime and late awakening are more obese independent of total sleep time. In adolescents and adults, a delayed sleep phase has been associated with higher caloric intake. Furthermore, an adult study showed a positive correlation between REM sleep and energy balance. This relationship has not been demonstrated in children. However, it may be important, as a delayed sleep phase would increase the proportion of REM sleep. This study investigated the relationship between hunger score and sleep physiology in a paediatric population. Thirty-six patients referred for a polysomnogram for suspected obstructive sleep apnoea were enrolled in the study. Sleep stages were recorded as part of the polysomnogram. Hunger scores were obtained using a visual analogue scale. Mean age was 9.6 ± 3.5 years. Mean hunger scores were 2.07 ± 2.78. Hunger scores were positively correlated with percentage of total rapid eye movement (REM) sleep (r = 0.438, P < 0.05), whereas percentage of slow wave sleep (SWS) was negatively correlated with hunger score (r = -0.360, P < 0.05); other variables were not significantly correlated with hunger scores. These findings suggest that delayed bedtime, which increases the proportion of REM sleep and decreases the proportion of SWS, results in higher hunger levels in children. © 2015 World Obesity.

  5. Apply lightweight recognition algorithms in optical music recognition

    Science.gov (United States)

    Pham, Viet-Khoi; Nguyen, Hai-Dang; Nguyen-Khac, Tung-Anh; Tran, Minh-Triet

    2015-02-01

    The problems of digitalization and transformation of musical scores into machine-readable format are necessary to be solved since they help people to enjoy music, to learn music, to conserve music sheets, and even to assist music composers. However, the results of existing methods still require improvements for higher accuracy. Therefore, the authors propose lightweight algorithms for Optical Music Recognition to help people to recognize and automatically play musical scores. In our proposal, after removing staff lines and extracting symbols, each music symbol is represented as a grid of identical M ∗ N cells, and the features are extracted and classified with multiple lightweight SVM classifiers. Through experiments, the authors find that the size of 10 ∗ 12 cells yields the highest precision value. Experimental results on the dataset consisting of 4929 music symbols taken from 18 modern music sheets in the Synthetic Score Database show that our proposed method is able to classify printed musical scores with accuracy up to 99.56%.
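
    A minimal sketch of the grid-feature idea follows. The data are random stand-ins for the Synthetic Score Database symbols, and both the feature (per-cell ink density) and the classifier (one-vs-rest linear SVMs via scikit-learn) are a plausible reading of the abstract rather than the authors' exact pipeline.

```python
# Hedged sketch: each symbol image is reduced to an M x N grid of cells whose
# ink densities form the feature vector fed to lightweight linear SVMs.
import numpy as np
from sklearn.svm import LinearSVC

M, N = 10, 12  # the grid size reported to give the highest precision

def grid_features(img: np.ndarray) -> np.ndarray:
    """Fraction of foreground pixels in each cell of an M x N grid."""
    rows = np.array_split(img, M, axis=0)
    cells = [np.array_split(r, N, axis=1) for r in rows]
    return np.array([[c.mean() for c in row] for row in cells]).ravel()

# toy data: 200 random binary "symbols" in 4 classes (placeholder for the
# 4929 symbols of the Synthetic Score Database used in the paper)
rng = np.random.default_rng(0)
X = np.stack([grid_features(rng.random((40, 24)) < 0.3) for _ in range(200)])
y = rng.integers(0, 4, size=200)

clf = LinearSVC().fit(X, y)   # one-vs-rest linear SVMs
print("training accuracy:", clf.score(X, y))
```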

  6. Randomized clinical trial of Appendicitis Inflammatory Response score-based management of patients with suspected appendicitis.

    Science.gov (United States)

    Andersson, M; Kolodziej, B; Andersson, R E

    2017-10-01

    The role of imaging in the diagnosis of appendicitis is controversial. This prospective interventional study and nested randomized trial analysed the impact of implementing a risk stratification algorithm based on the Appendicitis Inflammatory Response (AIR) score, and compared routine imaging with selective imaging after clinical reassessment. Patients presenting with suspicion of appendicitis between September 2009 and January 2012 from age 10 years were included at 21 emergency surgical centres and from age 5 years at three university paediatric centres. Registration of clinical characteristics, treatments and outcomes started during the baseline period. The AIR score-based algorithm was implemented during the intervention period. Intermediate-risk patients were randomized to routine imaging or selective imaging after clinical reassessment. The baseline period included 1152 patients, and the intervention period 2639, of whom 1068 intermediate-risk patients were randomized. In low-risk patients, use of the AIR score-based algorithm resulted in less imaging (19·2 versus 34·5 per cent; P < 0·001), fewer admissions, and fewer operations for non-perforated appendicitis (6·8 versus 9·7 per cent; P = 0·034). Intermediate-risk patients randomized to the imaging and observation groups had the same proportion of negative appendicectomies (6·4 versus 6·7 per cent respectively; P = 0·884), number of admissions, number of perforations and length of hospital stay, but routine imaging was associated with an increased proportion of patients treated for appendicitis (53·4 versus 46·3 per cent; P = 0·020). AIR score-based risk classification can safely reduce the use of diagnostic imaging and hospital admissions in patients with suspicion of appendicitis. Registration number: NCT00971438 ( http://www.clinicaltrials.gov). © 2017 BJS Society Ltd Published by John Wiley & Sons Ltd.

  7. Imaging Total Stations - Modular and Integrated Concepts

    Science.gov (United States)

    Hauth, Stefan; Schlüter, Martin

    2010-05-01

    Keywords: 3D-Metrology, Engineering Geodesy, Digital Image Processing. Initialized in 2009, the Institute for Spatial Information and Surveying Technology i3mainz, Mainz University of Applied Sciences, has been pushing research towards modular concepts for imaging total stations. On the one hand, this research is driven by the successful setup of high-precision imaging motor theodolites in the recent past; on the other hand, it is pushed by the recent introduction of integrated imaging total stations to the positioning market by the manufacturers Topcon and Trimble. Modular concepts for imaging total stations are manufacturer independent to a large extent and consist of a particular combination of accessory hardware, software and algorithmic procedures. The hardware part consists mainly of an interchangeable eyepiece adapter offering opportunities for digital imaging and motorized focus control. Easy assembly and disassembly in the field is possible, allowing the user to switch between the classical and the imaging use of a robotic total station. The software part primarily has to ensure hardware control, but several levels of algorithmic support might be added and have to be distinguished. Algorithmic procedures allow several levels of calibration to be reached concerning the geometry of the external digital camera and the total station. We give insight into our recent developments and quality characteristics. Both the modular and the integrated approach seem to have their individual strengths and weaknesses; therefore, we expect that the two approaches might point at different target applications. Our aim is a better understanding of appropriate applications for robotic imaging total stations. First results are presented.

  8. Duration and frequency of every day activities in total hip patients.

    Science.gov (United States)

    Morlock, M; Schneider, E; Bluhm, A; Vollmer, M; Bergmann, G; Müller, V; Honl, M

    2001-07-01

    Little knowledge about the frequency and duration of daily activities in patients after total hip arthroplasty is available. Such information is required for the definition of realistic load collectives for pre-clinical testing of prostheses. It could also be helpful for the quantitative evaluation of surgery outcome with different prosthesis types. The purpose of this study was to develop and apply a system for the determination of frequency and duration of patient activities in their habitual environment and to compare the results to a clinical outcome score (Harris hip score). A portable activity monitoring system (weight 1.6 kg including batteries) was designed using a palmtop computer, two inclination sensors for the thigh and calf, and one goniometer positioned at the knee joint. An algorithm was developed to identify the frequency and duration of the activities lying, sitting, standing, walking and stair climbing from the signals of the three sensors. 42 patients participated in the study and were equipped with the system in the morning at their home. Datasets of 31 patients (age 62.5 ± 11.5 years) covered more than 6 h (9.8 ± 1.6 h) and were included in the analysis. Prosthesis-specific data as well as the Harris hip score were collected. The most frequent patient activity was sitting (44.3% of the time), followed by standing (24.5%), walking (10.2%), lying (5.8%) and stair climbing (0.4%). The median number of steps/stairs was 6048/164. The number of step cycles representing one year of in vivo use should, consequently, be increased to 1.1 million. The Harris hip score (91.4 ± 9.8) correlated significantly with the number of stairs (r(2) = 0.26, p = 0.003) and showed a positive tendency with the number of steps per day. No differences in activity levels between prosthesis-specific factors were found.
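
    The paper does not publish its classification rules, but an epoch classifier of the kind described, driven by thigh/calf inclination and the knee goniometer, might look like the sketch below. All thresholds are hypothetical, and the cadence input stands in for step periodicity that would be derived from the raw sensor signals.

```python
# Rough, hypothetical reimplementation of the activity-identification idea
# (the study's actual thresholds are not given in the abstract): classify
# each epoch from segment inclinations (degrees from vertical), knee angle
# (degrees of flexion) and cadence (steps per second).
def classify_epoch(thigh_incl: float, calf_incl: float,
                   knee_angle: float, cadence: float) -> str:
    """Return one of: lying, sitting, standing, walking, stairs."""
    if thigh_incl > 60 and calf_incl > 60:   # both segments near horizontal
        return "lying"
    if thigh_incl > 45 and calf_incl < 45:   # horizontal thigh, vertical calf
        return "sitting"
    if cadence < 0.5:                        # upright, hardly any steps
        return "standing"
    if knee_angle > 60:                      # large knee flexion per cycle
        return "stairs"
    return "walking"

print(classify_epoch(thigh_incl=10, calf_incl=15, knee_angle=35, cadence=1.7))
# -> walking
```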

  9. Automated essay scoring and the future of educational assessment in medical education.

    Science.gov (United States)

    Gierl, Mark J; Latifi, Syed; Lai, Hollis; Boulais, André-Philippe; De Champlain, André

    2014-10-01

    Constructed-response tasks, which range from short-answer tests to essay questions, are included in assessments of medical knowledge because they allow educators to measure students' ability to think, reason, solve complex problems, communicate and collaborate through their use of writing. However, constructed-response tasks are also costly to administer and challenging to score because they rely on human raters. One alternative to the manual scoring process is to integrate computer technology with writing assessment. The process of scoring written responses using computer programs is known as 'automated essay scoring' (AES). An AES system uses a computer program that builds a scoring model by extracting linguistic features from a constructed-response prompt that has been pre-scored by human raters and then, using machine learning algorithms, maps the linguistic features to the human scores so that the computer can be used to classify (i.e. score or grade) the responses of a new group of students. The accuracy of the score classification can be evaluated using different measures of agreement. Automated essay scoring provides a method for scoring constructed-response tests that complements the current use of selected-response testing in medical education. The method can serve medical educators by providing the summative scores required for high-stakes testing. It can also serve medical students by providing them with detailed feedback as part of a formative assessment process. Automated essay scoring systems yield scores that consistently agree with those of human raters at a level as high as, if not higher than, the level of agreement among human raters themselves. The system offers medical educators many benefits for scoring constructed-response tasks, such as improving the consistency of scoring, reducing the time required for scoring and reporting, minimising the costs of scoring, and providing students with immediate feedback on constructed-response tasks. © 2014
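
    As a toy illustration of the AES pipeline described above, in the sketch below a few crude surface features and a ridge regression stand in for a real scoring engine's linguistic features and machine learning model; the essays and human scores are invented.

```python
# Minimal AES sketch (not any operational scoring engine): extract simple
# features from pre-scored essays, learn a mapping to human scores, then
# predict a score for a new response.
from sklearn.linear_model import Ridge

def features(text: str) -> list[float]:
    words = text.split()
    sents = max(text.count("."), 1)
    return [
        float(len(words)),                     # essay length
        len(set(words)) / max(len(words), 1),  # lexical diversity
        len(words) / sents,                    # mean sentence length
    ]

train_essays = ["short answer.",
                "a longer, more developed response. it has two sentences."]
human_scores = [1.0, 3.0]                      # ratings from human markers

model = Ridge(alpha=1.0).fit([features(t) for t in train_essays], human_scores)
print(model.predict([features("another new essay to grade. reasonably long.")]))
```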

  10. TUW at the First Total Recall Track

    Science.gov (United States)

    2015-11-20

    TUW AT THE FIRST TOTAL RECALL TRACK. MIHAI LUPU. Abstract: For the first participation in the TREC Total Recall track, we set out to try some basic… significantly and consistently outperformed it. 1. Introduction. As the organizers point out, the focus of the Total Recall Track is to evaluate methods to… The only change we made was at a higher level. The Sofia ML library provides 5 more ML algorithms. The following…

  11. A diagnostic scoring system for myxedema coma.

    Science.gov (United States)

    Popoveniuc, Geanina; Chandra, Tanu; Sud, Anchal; Sharma, Meeta; Blackman, Marc R; Burman, Kenneth D; Mete, Mihriye; Desale, Sameer; Wartofsky, Leonard

    2014-08-01

    To develop diagnostic criteria for myxedema coma (MC), a decompensated state of extreme hypothyroidism with a high mortality rate if untreated, in order to facilitate its early recognition and treatment. The frequencies of characteristics associated with MC were assessed retrospectively in patients from our institutions in order to derive a semiquantitative diagnostic point scale that was further applied on selected patients whose data were retrieved from the literature. Logistic regression analysis was used to test the predictive power of the score. Receiver operating characteristic (ROC) curve analysis was performed to test the discriminative power of the score. Of the 21 patients examined, 7 were reclassified as not having MC (non-MC), and they were used as controls. The scoring system included a composite of alterations of thermoregulatory, central nervous, cardiovascular, gastrointestinal, and metabolic systems, and presence or absence of a precipitating event. All 14 of our MC patients had a score of ≥60, whereas 6 of 7 non-MC patients had scores of 25 to 50. A total of 16 of 22 MC patients whose data were retrieved from the literature had a score ≥60, and 6 of 22 of these patients scored between 45 and 55. The odds ratio per each score unit increase as a continuum was 1.09 (95% confidence interval [CI], 1.01 to 1.16; P = .019); a score of 60 identified coma, with an odds ratio of 1.22. The area under the ROC curve was 0.88 (95% CI, 0.65 to 1.00), and the score of 60 had 100% sensitivity and 85.71% specificity. A score ≥60 in the proposed scoring system is potentially diagnostic for MC, whereas scores between 45 and 59 could classify patients at risk for MC.

  12. Dual Kidney Allocation Score: A Novel Algorithm Utilizing Expanded Donor Criteria for the Allocation of Dual Kidneys in Adults.

    Science.gov (United States)

    Johnson, Adam P; Price, Thea P; Lieby, Benjamin; Doria, Cataldo

    2016-09-08

    BACKGROUND Dual kidney transplantation (DKT) of expanded-criteria donors is a cost-intensive procedure that aims to increase the pool of available deceased organ donors and has demonstrated equivalent outcomes to expanded-criteria single kidney transplantation (eSKT). The objective of this study was to develop an allocation score based on predicted graft survival from historical dual and single kidney donors. MATERIAL AND METHODS We analyzed United Network for Organ Sharing (UNOS) data for 1547 DKT and 26 381 eSKT performed between January 1994 and September 2013. We utilized multivariable Cox regression to identify variables independently associated with graft survival in dual and single kidney transplantations. We then derived a weighted multivariable product score from calculated hazard ratios to model the benefit of transplantation as dual kidneys. RESULTS Of 36 donor variables known at the time of listing, 13 were significantly associated with graft survival. The derived dual allocation score demonstrated good internal validity with strong correlation to improved survival in dual kidney transplants. Donors with scores less than 2.1 transplanted as dual kidneys had a worsened median survival of 594 days (24%, p-value 0.031) and donors with scores greater than 3.9 had improved median survival of 1107 days (71%, p-value 0.002). There were 17 733 eSKT (67%) and 1051 DKT (67%) with scores in between these values and no differences in survival (p-values 0.676 and 0.185). CONCLUSIONS We have derived a dual kidney allocation score (DKAS) with good internal validity. Future prospective studies will be required to demonstrate external validity, but this score may help to standardize organ allocation for dual kidney transplantation.
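
    The abstract's "weighted multivariable product score" suggests multiplying together the hazard ratios of the risk factors a given donor carries. The sketch below is hypothetical: the factor names and hazard ratios are invented for illustration, while the 2.1 and 3.9 decision thresholds come from the abstract.

```python
# Hypothetical hazard-ratio product score (the 13 real variables and their
# hazard ratios are in the paper; the values below are invented).
hazard_ratios = {
    "age_over_60": 1.35,
    "diabetes": 1.28,
    "terminal_creatinine_high": 1.22,
}

def dkas(donor_factors: set[str]) -> float:
    """Multiply the hazard ratios of the risk factors this donor carries."""
    score = 1.0
    for factor, hr in hazard_ratios.items():
        if factor in donor_factors:
            score *= hr
    return score

s = dkas({"age_over_60", "diabetes", "terminal_creatinine_high"})
# thresholds from the abstract: < 2.1 -> worse survival as dual transplant,
# > 3.9 -> better survival as dual transplant, in between -> equivalent
print(round(s, 2), "dual recommended" if s > 3.9 else "single or equivalent")
```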

  13. On algorithm for building of optimal α-decision trees

    KAUST Repository

    Alkhalid, Abdulaziz

    2010-01-01

    The paper describes an algorithm that constructs approximate decision trees (α-decision trees), which are optimal relative to one of the following complexity measures: depth, total path length or number of nodes. The algorithm uses dynamic programming and extends methods described in [4] to constructing approximate decision trees. An adjustable approximation rate allows controlling algorithm complexity. The algorithm is applied to build optimal α-decision trees for two data sets from the UCI Machine Learning Repository [1]. © 2010 Springer-Verlag Berlin Heidelberg.

  14. Genomic multiple sequence alignments: refinement using a genetic algorithm

    Directory of Open Access Journals (Sweden)

    Lefkowitz Elliot J

    2005-08-01

    Full Text Available Abstract Background Genomic sequence data cannot be fully appreciated in isolation. Comparative genomics – the practice of comparing genomic sequences from different species – plays an increasingly important role in understanding the genotypic differences between species that result in phenotypic differences as well as in revealing patterns of evolutionary relationships. One of the major challenges in comparative genomics is producing a high-quality alignment between two or more related genomic sequences. In recent years, a number of tools have been developed for aligning large genomic sequences. Most utilize heuristic strategies to identify a series of strong sequence similarities, which are then used as anchors to align the regions between the anchor points. The resulting alignment is globally correct, but in many cases is suboptimal locally. We describe a new program, GenAlignRefine, which improves the overall quality of global multiple alignments by using a genetic algorithm to improve local regions of alignment. Regions of low quality are identified, realigned using the program T-Coffee, and then refined using a genetic algorithm. Because a better COFFEE (Consistency based Objective Function For alignmEnt Evaluation) score generally reflects greater alignment quality, the algorithm searches for an alignment that yields a better COFFEE score. To improve the intrinsic slowness of the genetic algorithm, GenAlignRefine was implemented as a parallel, cluster-based program. Results We tested the GenAlignRefine algorithm by running it on a Linux cluster to refine sequences from a simulation, as well as refine a multiple alignment of 15 Orthopoxvirus genomic sequences approximately 260,000 nucleotides in length that initially had been aligned by Multi-LAGAN. It took approximately 150 minutes for a 40-processor Linux cluster to optimize some 200 fuzzy (poorly aligned) regions of the orthopoxvirus alignment. Overall sequence identity increased only slightly.
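
    A schematic of the refinement loop might look like the following. Here `coffee_score` is a crude stand-in for the real COFFEE consistency score, and the mutation operator (moving a gap within one sequence) is one plausible choice, not necessarily GenAlignRefine's.

```python
# Schematic GA refinement sketch (GenAlignRefine itself is a parallel
# T-Coffee-based pipeline; this is only the control flow).
import random

def coffee_score(window: list[str]) -> float:
    """Placeholder fitness: fraction of fully conserved columns."""
    return sum(len(set(col)) == 1 for col in zip(*window)) / len(window[0])

def mutate(window: list[str]) -> list[str]:
    """Move one gap within one sequence - a typical GA mutation operator."""
    seqs = [list(s) for s in window]
    row = random.randrange(len(seqs))
    gaps = [i for i, c in enumerate(seqs[row]) if c == "-"]
    if gaps:
        i, j = random.choice(gaps), random.randrange(len(seqs[row]))
        seqs[row][i], seqs[row][j] = seqs[row][j], seqs[row][i]
    return ["".join(s) for s in seqs]

def refine(window: list[str], generations: int = 200, pop_size: int = 20):
    population = [window] + [mutate(window) for _ in range(pop_size - 1)]
    for _ in range(generations):
        population.sort(key=coffee_score, reverse=True)
        parents = population[: pop_size // 2]          # truncation selection
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=coffee_score)

best = refine(["AC-GT", "ACG-T", "AC-GT"])   # a low-quality toy window
print(best, coffee_score(best))
```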

  15. Dutch validation of the low anterior resection syndrome score.

    Science.gov (United States)

    Hupkens, B J P; Breukink, S O; Olde Reuver Of Briel, C; Tanis, P J; de Noo, M E; van Duijvendijk, P; van Westreenen, H L; Dekker, J W T; Chen, T Y T; Juul, T

    2018-04-21

    The aim of this study was to validate the Dutch translation of the low anterior resection syndrome (LARS) score in a population of Dutch rectal cancer patients. Patients who underwent surgery for rectal cancer received the LARS score questionnaire, a single quality of life (QoL) category question and the European Organization for Research and Treatment of Cancer (EORTC) QLQ-C30 questionnaire. A subgroup of patients received the LARS score twice to assess the test-retest reliability. A total of 165 patients were included in the analysis, identified in six Dutch centres. The response rate was 62.0%. The percentage of patients who reported 'major LARS' was 59.4%. There was a high proportion of patients with a perfect or moderate fit between the QoL category question and the LARS score, showing a good convergent validity. The LARS score was able to discriminate between patients with or without neoadjuvant radiotherapy (P = 0.003), between total and partial mesorectal excision (P = 0.008) and between age groups (P = 0.039). There was a statistically significant association between a higher LARS score and an impaired function on the global QoL subscale and the physical, role, emotional and social functioning subscales of the EORTC QLQ-C30 questionnaire. The test-retest reliability of the LARS score was good, with an interclass correlation coefficient of 0.79. The good psychometric properties of the Dutch version of the LARS score are comparable overall to the earlier validations in other countries. Therefore, the Dutch translation can be considered to be a valid tool for assessing LARS in Dutch rectal cancer patients. Colorectal Disease © 2018 The Association of Coloproctology of Great Britain and Ireland.

  16. Total-variation regularization with bound constraints

    International Nuclear Information System (INIS)

    Chartrand, Rick; Wohlberg, Brendt

    2009-01-01

    We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.
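
    The splitting idea, decoupling TV minimization from constraint enforcement so that an existing TV solver can be reused unchanged, can be caricatured in a few lines. The naive alternation below of an off-the-shelf TV step (scikit-image's Chambolle solver) with projection onto the bounds only conveys the flavour of the approach; it is not the paper's algorithm.

```python
# Sketch under stated assumptions: alternate an unconstrained TV denoising
# step with projection onto [lo, hi], so the TV solver needs no modification.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def tv_denoise_bounded(noisy, weight=0.15, lo=0.0, hi=1.0, iters=5):
    x = np.clip(noisy, lo, hi)
    for _ in range(iters):
        x = denoise_tv_chambolle(x, weight=weight)  # unconstrained TV step
        x = np.clip(x, lo, hi)                      # enforce the bounds
    return x

rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0   # bounded test image
noisy = img + 0.2 * rng.standard_normal(img.shape)
print("mean abs error:", np.abs(tv_denoise_bounded(noisy) - img).mean())
```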

  17. The Motivated Strategies for Learning Questionnaire: score validity among medicine residents.

    Science.gov (United States)

    Cook, David A; Thompson, Warren G; Thomas, Kris G

    2011-12-01

    The Motivated Strategies for Learning Questionnaire (MSLQ) purports to measure motivation using the expectancy-value model. Although it is widely used in other fields, this instrument has received little study in health professions education. The purpose of this study was to evaluate the validity of MSLQ scores. We conducted a validity study evaluating the relationships of MSLQ scores to other variables and their internal structure (reliability and factor analysis). Participants included 210 internal medicine and family medicine residents participating in a web-based course on ambulatory medicine at an academic medical centre. Measurements included pre-course MSLQ scores, pre- and post-module motivation surveys, post-module knowledge test and post-module Instructional Materials Motivation Survey (IMMS) scores. Internal consistency was universally high for all MSLQ items together (Cronbach's α = 0.93) and for each domain (α ≥ 0.67). Total MSLQ scores showed statistically significant positive associations with post-test knowledge scores. For example, a 1-point rise in total MSLQ score was associated with a 4.4% increase in post-test scores (β = 4.4; p < 0.001). Total scores were also positively associated with post-module motivation and satisfaction. Scores on MSLQ domains demonstrated associations that generally aligned with our hypotheses. Self-efficacy and control of learning belief scores demonstrated the strongest domain-specific relationships with knowledge scores (β = 2.9 for both). Confirmatory factor analysis showed a borderline model fit. Follow-up exploratory factor analysis revealed the scores of five factors (self-efficacy, intrinsic interest, test anxiety, extrinsic goals, attribution) demonstrated psychometric and predictive properties similar to those of the original scales. Scores on the MSLQ are reliable and predict meaningful outcomes. However, the factor structure suggests a simplified model might better fit the empiric data. Future research might consider how assessing and responding to motivation could enhance learning.

  18. A botulinum toxin A treatment algorithm for de novo management of torticollis and laterocollis

    Science.gov (United States)

    Kupsch, Andreas; Müngersdorf, Martina; Paus, Sebastian; Stenner, Andrea; Jost, Wolfgang

    2011-01-01

    Objectives Few studies have investigated the injection patterns for botulinum toxin type A for the treatment of heterogeneous forms of cervical dystonia (CD). This large, prospective, open-label, multicentre study aimed to evaluate the effectiveness and safety of 500 U botulinum toxin A for the initial treatment according to a standardised algorithm of the two most frequent forms of CD, predominantly torticollis and laterocollis. Design Patients (aged ≥18 years) with CD not previously treated with botulinum neurotoxin therapy were given one treatment with 500 U Dysport, according to a defined intramuscular injection algorithm based on clinical assessment of direction of head deviation, occurrence of shoulder elevation, occurrence of tremor (all evaluated using the Tsui rating scale) and hypertrophy of the sternocleidomastoid muscle. Results In this study, 516 patients were enrolled, the majority of whom (95.0%) completed treatment. Most patients had torticollis (78.1%). At week 4, mean Tsui scores had significantly decreased by −4.01, −3.76 and −4.09 points in the total, torticollis and laterocollis populations, respectively. Symptom improvement was equally effective between groups. Tsui scores remained significantly below baseline at week 12 in both groups. Treatment was well tolerated; the most frequent adverse events were muscular weakness (13.8%), dysphagia (9.9%) and neck pain (6.6%). Conclusions Dysport 500 U is effective and well tolerated for the de novo management of a range of heterogeneous forms of CD, when using a standardised regimen that allows tailored dosing based on individual symptom assessment. Clinical trials information (NCT00447772; clinicaltrials.gov) PMID:22021883

  19. Dichotomous scoring of Trails B in patients referred for a dementia evaluation.

    Science.gov (United States)

    Schmitt, Andrew L; Livingston, Ronald B; Smernoff, Eric N; Waits, Bethany L; Harris, James B; Davis, Kent M

    2010-04-01

    The Trail Making Test is a popular neuropsychological test and its interpretation has traditionally used time-based scores. This study examined an alternative approach to scoring that is simply based on the examinees' ability to complete the test. If an examinee is able to complete Trails B successfully, they are coded as "completers"; if not, they are coded as "noncompleters." To assess this approach to scoring Trails B, the performance of 97 diagnostically heterogeneous individuals referred for a dementia evaluation was examined. In this sample, 55 individuals successfully completed Trails B and 42 individuals were unable to complete it. Point-biserial correlations indicated a moderate-to-strong association (r(pb)=.73) between the Trails B completion variable and the Total Scale score of the Repeatable Battery for the Assessment of Neurological Status (RBANS), which was larger than the correlation between the Trails B time-based score and the RBANS Total Scale score (r(pb)=.60). As a screen for dementia status, Trails B completion showed a sensitivity of 69% and a specificity of 100% in this sample. These results suggest that dichotomous scoring of Trails B might provide a brief and clinically useful measure of dementia status.
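
    The point-biserial correlation between the dichotomous completion variable and a continuous battery score is a one-liner with scipy; the numbers below are toy data, not the study sample.

```python
# Toy illustration of the dichotomous-scoring analysis: point-biserial
# correlation between Trails B completion (1 = completer) and a continuous
# total score standing in for the RBANS Total Scale score.
from scipy import stats

completed = [1, 1, 1, 0, 0, 1, 0, 1, 0, 1]
total_score = [95, 88, 92, 61, 55, 85, 58, 90, 63, 97]
r_pb, p = stats.pointbiserialr(completed, total_score)
print(f"r_pb = {r_pb:.2f}, p = {p:.3g}")
```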

  20. An Objective Balance Error Scoring System for Sideline Concussion Evaluation Using Duplex Kinect Sensors

    Directory of Open Access Journals (Sweden)

    Mengqi Zhu

    2017-10-01

    Full Text Available Sports-related concussion is a common sports injury that might induce potential long-term consequences without early diagnosis and intervention in the field. However, few such sensor systems are available. The aim of the study is to propose and validate an automated concussion-assessment administration and scoring approach, which is objective, affordable and capable of detecting all balance errors required by the balance error scoring system (BESS) protocol in field conditions. Our approach first captures human body skeleton positions using two Microsoft Kinect sensors in the proposed configuration and merges the data with a custom-made algorithm to remove the self-occlusion of limbs. The standing balance errors according to the BESS protocol were then measured and assessed automatically by the proposed algorithm. Simultaneously, the BESS test was filmed for scoring by an experienced rater. The two results were compared using the Pearson coefficient r, obtaining an excellent consistency (r = 0.93, p < 0.05). In addition, a BESS test–retest was performed after seven days and compared using intraclass correlation coefficients (ICC), showing a good test–retest reliability (ICC = 0.81, p < 0.01). The proposed approach could be an alternative objective tool to assess postural stability for sideline sports concussion diagnosis.

  1. Fostering dental student self-assessment of knowledge by confidence scoring of multiple-choice examinations.

    Science.gov (United States)

    McMahan, C Alex; Pinckard, R Neal; Jones, Anne Cale; Hendricson, William D

    2014-12-01

    Creating a learning environment that fosters student acquisition of self-assessment behaviors and skills is critically important in the education and training of health professionals. Self-assessment is a vital component of competent practice and lifelong learning. This article proposes applying a version of confidence scoring of multiple-choice questions as one avenue to address this crucial educational objective for students to be able to recognize and admit what they do not know. The confidence scoring algorithm assigns one point for a correct answer and deducts fractional points for an incorrect answer, but rewards students fractional points for leaving the question unanswered as an admission that they are unsure of the correct answer. The magnitude of the reward relative to the deduction is selected such that the expected gain due to random guessing, even after elimination of all but one distractor, is never greater than the reward. Curricular implementation of this confidence scoring algorithm should motivate health professions students to develop self-assessment behaviors and enable them to acquire the skills necessary to critically evaluate the extent of their current knowledge throughout their professional careers. This is a professional development competency that is emphasized in the educational standards of the Commission on Dental Accreditation (CODA).
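
    The scoring rule is easy to check numerically. In the sketch below the deduction d and reward r are hypothetical values, chosen so that the expected gain from guessing among the k options left after eliminating distractors never exceeds the reward for leaving the item blank.

```python
# Worked check of the confidence-scoring condition described above: +1 for a
# correct answer, -d for an incorrect one, +r for a blank. The fractions are
# a hypothetical choice, not the authors' published values.
def expected_guess_gain(k: int, d: float) -> float:
    """Expected score when guessing uniformly among k remaining options."""
    return (1.0 / k) * 1.0 - ((k - 1.0) / k) * d

d, r = 0.25, 0.40        # hypothetical deduction and omission reward
n_options = 5
for k in range(2, n_options + 1):  # k = 2: all but one distractor eliminated
    gain = expected_guess_gain(k, d)
    assert gain <= r, f"guessing with {k} options left would beat omitting"
    print(f"k={k}: expected guessing gain = {gain:+.3f} (blank reward = {r})")
```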

  2. Is there a relation between priapism occurring after penile doppler ultrasonography and international erectile function index score and erection hardness score levels?

    Science.gov (United States)

    Sönmez, Mehmet Giray; Öztürk, Ahmet

    2017-12-01

    The relation between the Erection Hardness Score (EHS) and the International Erectile Function Index (IIEF) Questionnaire - Erectile Function Domain score (IIEF-EF score) used in erectile dysfunction (ED) evaluation and the prevalence of priapism after penile Doppler ultrasonography (PDU) was examined in this study. A total of 62 patients who had PDU were included in the study. Patients were divided into two groups: there were 33 patients in the IIEF-EF score ≤10, EHS <2 group (Group 1) and 29 patients in the IIEF-EF score >10, EHS ≥2 group (Group 2). The two groups separated according to their scores were compared for age, body mass index (BMI), prevalence of priapism, vascular comorbidities and duration of erection. Compared with Group 2 (IIEF-EF score >10 and EHS ≥2), median age, the rate of vascular comorbidities and BMI were higher in Group 1 (IIEF-EF score ≤10 and EHS <2) (p<0.001, p=0.027, p=0.049, respectively). High IIEF-EF and EHS scores, younger age and lower rates of vascular comorbidities in patients referred for PDU increase the prevalence of priapism.

  3. Comparison of Firefly algorithm and Artificial Immune System algorithm for lot streaming in m-machine flow shop scheduling

    Directory of Open Access Journals (Sweden)

    G. Vijay Chakaravarthy

    2012-11-01

    Full Text Available Lot streaming is a technique used to split the processing of lots into several sublots (transfer batches) to allow the overlapping of operations in multistage manufacturing systems, thereby shortening the production time (makespan). The objective of this paper is to minimize the makespan and total flow time of the n-job, m-machine lot streaming problem in a flow shop with equal and variable size sublots, and also to determine the optimal sublot size. In recent times researchers have been concentrating on and applying intelligent heuristics to solve flow shop problems with lot streaming. In this research, the Firefly Algorithm (FA) and Artificial Immune System (AIS) algorithms are used to solve the problem. The results obtained by the proposed algorithms are also compared with the performance of other worked-out traditional heuristics. The computational results show that the identified algorithms are more efficient and effective than the algorithms already tested for this problem.

  4. An improved multi-domain convolution tracking algorithm

    Science.gov (United States)

    Sun, Xin; Wang, Haiying; Zeng, Yingsen

    2018-04-01

    Along with the wide application of deep learning in computer vision, deep learning has become a mainstream direction in the field of object tracking. The tracking algorithm in this paper is based on an improved multi-domain convolutional neural network, pre-trained on the VOT video set with a multi-domain training strategy. In the process of online tracking, the network evaluates candidate targets sampled with a Gaussian distribution from the vicinity of the target predicted in the previous frame, and the candidate with the highest score is taken as the prediction for the current frame. A bounding-box regression model is introduced to bring the predicted target closer to the ground-truth target box of the test set. A grouping-update strategy is employed to extract and select useful update samples in each frame, which effectively prevents overfitting and adapts to changes in both the target and the environment. To improve the speed of the algorithm while maintaining performance, the number of candidate targets is adjusted dynamically by a self-adaptive parameter strategy. Finally, the algorithm is tested on the OTB set and compared with other high-performance tracking algorithms; the success-rate and accuracy plots illustrate the outstanding performance of the proposed tracker.

  5. Noise reduction technology reduces radiation dose in chronic total occlusions percutaneous coronary intervention: a propensity score-matched analysis.

    Science.gov (United States)

    Maccagni, Davide; Benincasa, Susanna; Bellini, Barbara; Candilio, Luciano; Poletti, Enrico; Carlino, Mauro; Colombo, Antonio; Azzalini, Lorenzo

    2018-03-23

    Chronic total occlusions (CTO) percutaneous coronary intervention (PCI) is associated with high radiation dose. Our study aim was to evaluate the impact of the implementation of a noise reduction technology (NRT) on patient radiation dose during CTO PCI. A total of 187 CTO PCIs performed between February 2016 and May 2017 were analyzed according to the angiographic systems utilized: Standard (n = 60) versus NRT (n = 127). Propensity score matching (PSM) was performed to control for differences in baseline characteristics. Primary endpoints were Cumulative Air Kerma at Interventional Reference Point (AK at IRP), which correlates with patient's tissue reactions; and Kerma Area Product (KAP), a surrogate measure of patient's risk of stochastic radiation effects. An Efficiency Index (defined as fluoroscopy time/AK at IRP) was calculated for each procedure. Image quality was evaluated using a 5-grade Likert-like scale. After PSM, n = 55 pairs were identified. Baseline and angiographic characteristics were well matched between groups. Compared to the Standard system, NRT was associated with lower AK at IRP [2.38 (1.80-3.66) vs. 3.24 (2.04-5.09) Gy, p = 0.035], a trend towards reduction for KAP [161 (93-244) vs. 203 (136-363) Gy·cm², p = 0.069], and a better Efficiency Index [16.75 (12.73-26.27) vs. 13.58 (9.92-17.63) min/Gy, p = 0.003]. Image quality was similar between the two groups (4.39 ± 0.53 Standard vs. 4.34 ± 0.47 NRT, p = 0.571). In conclusion, compared with a Standard system, the use of NRT in CTO PCI is associated with lower patient radiation dose and similar image quality.

  6. Translation and validation of the new version of the Knee Society Score - The 2011 KS Score - into Brazilian Portuguese.

    Science.gov (United States)

    Silva, Adriana Lucia Pastore E; Croci, Alberto Tesconi; Gobbi, Riccardo Gomes; Hinckel, Betina Bremer; Pecora, José Ricardo; Demange, Marco Kawamura

    2017-01-01

    Translation, cultural adaptation, and validation of the new version of the Knee Society Score - The 2011 KS Score - into Brazilian Portuguese and verification of its measurement properties, reproducibility, and validity. In 2012, the new version of the Knee Society Score was developed and validated. This scale comprises four separate subscales: (a) objective knee score (seven items: 100 points); (b) patient satisfaction score (five items: 40 points); (c) patient expectations score (three items: 15 points); and (d) functional activity score (19 items: 100 points). A total of 90 patients aged 55-85 years were evaluated in a clinical cross-sectional study. The pre-operative translated version was applied to patients with TKA referral, and the post-operative translated version was applied to patients who underwent TKA. Each patient answered the same questionnaire twice and was evaluated by two experts in orthopedic knee surgery. Evaluations were performed pre-operatively and three, six, or 12 months post-operatively. The reliability of the questionnaire was evaluated using the intraclass correlation coefficient (ICC) between the two applications. Internal consistency was evaluated using Cronbach's alpha. The ICC found no difference between the means of the pre-operative, three-month, and six-month post-operative evaluations between sub-scale items. The Brazilian Portuguese version of The 2011 KS Score is a valid and reliable instrument for objective and subjective evaluation of the functionality of Brazilian patients who undergo TKA and revision TKA.

  7. The BRICS (Bronchiectasis Radiologically Indexed CT Score): A Multicenter Study Score for Use in Idiopathic and Postinfective Bronchiectasis.

    Science.gov (United States)

    Bedi, Pallavi; Chalmers, James D; Goeminne, Pieter C; Mai, Cindy; Saravanamuthu, Pira; Velu, Prasad Palani; Cartlidge, Manjit K; Loebinger, Michael R; Jacob, Joe; Kamal, Faisal; Schembri, Nicola; Aliberti, Stefano; Hill, Uta; Harrison, Mike; Johnson, Christopher; Screaton, Nicholas; Haworth, Charles; Polverino, Eva; Rosales, Edmundo; Torres, Antoni; Benegas, Michael N; Rossi, Adriano G; Patel, Dilip; Hill, Adam T

    2018-05-01

    The goal of this study was to develop a simplified radiological score that could assess clinical disease severity in bronchiectasis. The Bronchiectasis Radiologically Indexed CT Score (BRICS) was devised based on a multivariable analysis of the Bhalla score and its ability in predicting clinical parameters of severity. The score was then externally validated in six centers in 302 patients. A total of 184 high-resolution CT scans were scored for the validation cohort. In a multiple logistic regression model, disease severity markers significantly associated with the Bhalla score were percent predicted FEV1, sputum purulence, and exacerbations requiring hospital admission. Components of the Bhalla score that were significantly associated with the disease severity markers were bronchial dilatation and number of bronchopulmonary segments with emphysema. The BRICS was developed with these two parameters. The receiver operating-characteristic curve values for BRICS in the derivation cohort were 0.79 for percent predicted FEV1, 0.71 for sputum purulence, and 0.75 for hospital admissions per year; these values were 0.81, 0.70, and 0.70, respectively, in the validation cohort. Sputum free neutrophil elastase activity was significantly elevated in the group with emphysema on CT imaging. A simplified CT scoring system can be used as an adjunct to clinical parameters to predict disease severity in patients with idiopathic and postinfective bronchiectasis. Copyright © 2017 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.

  8. New reconstruction algorithm for digital breast tomosynthesis: better image quality for humans and computers.

    Science.gov (United States)

    Rodriguez-Ruiz, Alejandro; Teuwen, Jonas; Vreemann, Suzan; Bouwman, Ramona W; van Engen, Ruben E; Karssemeijer, Nico; Mann, Ritse M; Gubern-Merida, Albert; Sechopoulos, Ioannis

    2017-01-01

    Background The image quality of digital breast tomosynthesis (DBT) volumes depends greatly on the reconstruction algorithm. Purpose To compare two DBT reconstruction algorithms used by the Siemens Mammomat Inspiration system, filtered back projection (FBP), and FBP with iterative optimizations (EMPIRE), using qualitative analysis by human readers and detection performance of machine learning algorithms. Material and Methods Visual grading analysis was performed by four readers specialized in breast imaging who scored 100 cases reconstructed with both algorithms (70 lesions). Scoring (5-point scale: 1 = poor to 5 = excellent quality) was performed on presence of noise and artifacts, visualization of skin-line and Cooper's ligaments, contrast, and image quality, and, when present, lesion visibility. In parallel, a three-dimensional deep-learning convolutional neural network (3D-CNN) was trained (n = 259 patients, 51 positives with BI-RADS 3, 4, or 5 calcifications) and tested (n = 46 patients, nine positives), separately with FBP and EMPIRE volumes, to discriminate between samples with and without calcifications. The partial area under the receiver operating characteristic curve (pAUC) of each 3D-CNN was used for comparison. Results EMPIRE reconstructions showed better contrast (3.23 vs. 3.10, P = 0.010) and image quality (3.22 vs. 3.03, P < 0.05), and the 3D-CNN reached a higher pAUC with EMPIRE than with FBP volumes. Conclusion The EMPIRE algorithm provides DBT volumes with better contrast and image quality, fewer artifacts, and improved visibility of calcifications for human observers, as well as improved detection performance with deep-learning algorithms.

  9. An optimized outlier detection algorithm for jury-based grading of engineering design projects

    DEFF Research Database (Denmark)

    Thompson, Mary Kathryn; Espensen, Christina; Clemmensen, Line Katrine Harder

    2016-01-01

    This work characterizes and optimizes an outlier detection algorithm to identify potentially invalid scores produced by jury members while grading engineering design projects. The paper describes the original algorithm and the associated adjudication process in detail. The impact of the various … (the base rule and the three additional conditions) play a role in the algorithm's performance and should be included in the algorithm. Because there is significant interaction between the base rule and the additional conditions, many acceptable combinations that balance the FPR and FNR can be found, but no true optimum seems to exist. The performance of the best optimizations and the original algorithm are similar. Therefore, it should be possible to choose new coefficient values for jury populations in other cultures and contexts logically and empirically without a full optimization as long …

  10. The entity-to-algorithm allocation problem: Extending the analysis

    CSIR Research Space (South Africa)

    Grobler, J

    2014-12-01

    Full Text Available . HYPOTHESES ANALYSIS OF ALTERNATIVE MULTI-METHOD ALGORITHMS. HMHH EIHH EEA-SLPS HMHH NA 4− 19− 5 11− 8− 9 EIHH 5− 19− 4 NA 6− 16− 6 EEA-SLPS 9− 8− 11 6− 16− 6 NA Multi-EA 3− 4− 21 3− 1− 24 2− 3− 23 Multi-EA TOTAL HMHH 21− 4− 3 36− 3− 17 EIHH 24− 1− 3 35− 36... ANALYSIS OF THE VARIOUS ALGORITHMS VERSUS THEIR CONSTITUENT ALGORITHMS. Algorithm HMHH EIHH EEA-SLPS Multi-EA CMAES 0-3-25 4-2-22 4-2-22 2-2-24 SaNSDE 17-2-9 16-8-4 12-12-4 5-0-23 GA 22-3-3 23-2-3 23-4-1 4-5-19 GCPSO 20-1-7 20-3-5 19-3-6 8-3-17 TOTAL 55...

  11. FHSA-SED: Two-Locus Model Detection for Genome-Wide Association Study with Harmony Search Algorithm.

    Directory of Open Access Journals (Sweden)

    Shouheng Tuo

    Full Text Available The two-locus model is a typical significant disease model to be identified in genome-wide association studies (GWAS). Due to the intensive computational burden and the diversity of disease models, existing methods suffer from low detection power, high computation cost, and a preference for some types of disease models. In this study, two scoring functions (the Bayesian network based K2-score and the Gini-score) are used for characterizing a pair of SNP loci as a candidate model; the two criteria are adopted simultaneously to improve identification power and to tackle the preference problem for disease models. The harmony search algorithm (HSA) is improved for quickly finding the most likely candidate models among all two-locus models, in which a local search algorithm with a two-dimensional tabu table is presented to avoid repeatedly evaluating disease models that have a strong marginal effect. Finally, the G-test statistic is used to further test the candidate models. We investigate our method, named FHSA-SED, on 82 simulated datasets and a real AMD dataset, and compare it with two typical methods (MACOED and CSE) that have been developed recently based on swarm intelligence search algorithms. The results of the simulation experiments indicate that our method outperforms the two compared algorithms in terms of detection power, computation time, evaluation times, sensitivity (TPR), specificity (SPC), positive predictive value (PPV) and accuracy (ACC). Our method has identified two SNPs (rs3775652 and rs10511467) that may also be associated with disease in the AMD dataset.

  12. A New Algorithm to Reduce the Incidence of Gastroesophageal Reflux Symptoms after Laparoscopic Sleeve Gastrectomy.

    Science.gov (United States)

    Ece, Ilhan; Yilmaz, Huseyin; Acar, Fahrettin; Colak, Bayram; Yormaz, Serdar; Sahin, Mustafa

    2017-06-01

    Laparoscopic sleeve gastrectomy (LSG) is one of the most preferred treatment options for morbid obesity. However, the effects of LSG on gastroesophageal reflux disease (GERD) are controversial. Asymptomatic GERD and hiatal hernia (HH) are common in obese patients. Therefore, it is important to identify high-risk patients prior to surgery. This study aims to evaluate the efficacy of cruroplasty for HH during LSG in morbidly obese patients using ambulatory pH monitoring (APM) results, and to investigate the patient selection criteria for this procedure. This retrospective study includes the outcomes of 59 patients who underwent LSG and HH repair according to our patient selection algorithm. Outcomes included the preoperative GERD Health-Related Quality of Life (GERD-HRQL) questionnaire, APM results, percentage of postoperative excess weight loss, and total weight loss. Of a total of 402 patients, APM was applied in 70 patients who had a positive GERD-HRQL score, and 59 patients with a DeMeester score of 14.7 or above underwent LSG and concomitant HH repair. There was no statistically significant difference in weight loss at 6- and 12-month follow-up. Two patients (3.3%) had symptoms of GERD at 12 months postoperatively, and only one (1.6%) patient required proton pump inhibitor treatment for reflux. In the total cohort, 11 (2.7%) patients developed de novo GERD symptoms. This study confirms that careful attention to patient selection and surgical technique can reduce the symptoms of GERD in the short term. Routine bilateral crus exploration could be a major risk factor for postoperative GERD.

  13. Development and validation of the International Hidradenitis Suppurativa Severity Score System (IHS4), a novel dynamic scoring system to assess HS severity

    DEFF Research Database (Denmark)

    Zouboulis, C C; Tzellos, T; Kyrgidis, A

    2017-01-01

    BACKGROUND: A validated tool for the dynamic severity assessment of hidradenitis suppurativa/acne inversa (HS) is lacking. OBJECTIVES: To develop and validate a novel dynamic scoring system to assess the severity of HS. METHODS: A Delphi voting procedure was conducted among the members of the European Hidradenitis Suppurativa Foundation (EHSF). Statistical analysis, as well as examination for correlation (Spearman's rho) and agreement (Cohen's kappa) with existing scores, was engaged to recognize the variables for a new International HS4 (IHS4) that was established by a second Delphi round. RESULTS: The consensus IHS4 was based on the number of skin lesions, the number of skin … Three candidate scores were presented to the second Delphi round. The resulting IHS4 score is arrived at by the number of nodules (multiplied by 1) plus the number of abscesses (multiplied by 2) plus the number of draining tunnels (multiplied by 4). A total score of 3 or less signifies mild, 4-10 moderate, and 11 or higher severe HS.
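
    The IHS4 formula quoted above translates directly into code; the severity bands follow the published score (3 or less mild, 4-10 moderate, 11 or higher severe).

```python
# Direct transcription of the IHS4 formula and severity bands given above.
def ihs4(nodules: int, abscesses: int, draining_tunnels: int) -> int:
    return nodules * 1 + abscesses * 2 + draining_tunnels * 4

def severity(score: int) -> str:
    if score <= 3:
        return "mild"
    return "moderate" if score <= 10 else "severe"

s = ihs4(nodules=2, abscesses=1, draining_tunnels=1)
print(s, severity(s))   # -> 8 moderate
```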

  14. A 15-year review of midface reconstruction after total and subtotal maxillectomy: part I. Algorithm and outcomes.

    Science.gov (United States)

    Cordeiro, Peter G; Chen, Constance M

    2012-01-01

    Reconstruction of complex midfacial defects is best approached with a clear algorithm. The goals of reconstruction are functional and aesthetic. Over a 15-year period (1992 to 2006), a single surgeon (P.G.C.) performed 100 flaps to reconstruct the following midfacial defects: type I, limited maxillectomy (n = 20); type IIA, subtotal maxillectomy with resection of less than 50 percent of the palate (n = 8); type IIB, subtotal maxillectomy with resection of greater than 50 percent of the palate (n = 8); type IIIA, total maxillectomy with preservation of the orbital contents (n = 22); type IIIB, total maxillectomy with orbital exenteration (n = 23); and type IV, orbitomaxillectomy (n = 19). Free flaps were used in 94 cases (94 percent), and pedicled flaps were used in six (6 percent). One hundred flaps were performed in 96 patients (69 males, 72 percent; 27 females, 28 percent); four patients underwent a second flap reconstruction due to recurrent disease (n = 4, 4 percent). Average patient age was 49.2 years (range, 13 to 81 years). Free-flap survival was 100 percent, with one partial flap loss (1 percent). Five patients suffered systemic complications (5.2 percent), and four died within 30 days of hospitalization (4.2 percent). Over 50 percent of patients returned to normal diet and speech. Almost 60 percent were judged to have an excellent aesthetic result. Free-tissue transfer offers the most effective and reliable form of reconstruction for complex maxillectomy defects. Rectus abdominis and radial forearm free flaps in combination with immediate bone grafting or as osteocutaneous flaps consistently provide the best functional and aesthetic results. Therapeutic, IV.

  15. The effect of Scratch environment on student’s achievement in teaching algorithm

    Directory of Open Access Journals (Sweden)

    Mehmet Tekerek

    2014-08-01

    Full Text Available In this study, the effect of the Scratch environment on teaching algorithms in the elementary school 6th grade Information and Communication Technologies course was examined. An experimental method with a pretest-posttest control group design and a convenience sample of 60 6th grade students was used. The research instrument was an achievement test designed to determine the effect of Scratch on learning algorithms. During the implementation process, the experimental group studied with Scratch and the control group studied with traditional methods. The data were analyzed using independent-samples t-tests, paired-samples t-tests and ANCOVA statistics. According to the findings, there is no statistically significant difference between the posttest achievement scores of the experimental and control groups. Similarly, in terms of gender there is no statistically significant difference between the posttest scores of the experimental and control groups.

  16. Radionuclide identification algorithm for organic scintillator-based radiation portal monitor

    Energy Technology Data Exchange (ETDEWEB)

    Paff, Marc Gerrit, E-mail: mpaff@umich.edu; Di Fulvio, Angela; Clarke, Shaun D.; Pozzi, Sara A.

    2017-03-21

    We have developed an algorithm for on-the-fly radionuclide identification for radiation portal monitors using organic scintillation detectors. The algorithm was demonstrated on experimental data acquired with our pedestrian portal monitor on moving special nuclear material and industrial sources at a purpose-built radiation portal monitor testing facility. The experimental data also included common medical isotopes. The algorithm takes the power spectral density of the cumulative distribution function of the measured pulse height distributions and matches these to reference spectra using a spectral angle mapper. F-score analysis showed that the new algorithm exhibited significant performance improvements over previously implemented radionuclide identification algorithms for organic scintillators. Reliable on-the-fly radionuclide identification would help portal monitor operators more effectively screen out the hundreds of thousands of nuisance alarms they encounter annually due to recent nuclear-medicine patients and cargo containing naturally occurring radioactive material. Portal monitor operators could instead focus on the rare but potentially high impact incidents of nuclear and radiological material smuggling detection for which portal monitors are intended.

  17. Radionuclide identification algorithm for organic scintillator-based radiation portal monitor

    Science.gov (United States)

    Paff, Marc Gerrit; Di Fulvio, Angela; Clarke, Shaun D.; Pozzi, Sara A.

    2017-03-01

    We have developed an algorithm for on-the-fly radionuclide identification for radiation portal monitors using organic scintillation detectors. The algorithm was demonstrated on experimental data acquired with our pedestrian portal monitor on moving special nuclear material and industrial sources at a purpose-built radiation portal monitor testing facility. The experimental data also included common medical isotopes. The algorithm takes the power spectral density of the cumulative distribution function of the measured pulse height distributions and matches these to reference spectra using a spectral angle mapper. F-score analysis showed that the new algorithm exhibited significant performance improvements over previously implemented radionuclide identification algorithms for organic scintillators. Reliable on-the-fly radionuclide identification would help portal monitor operators more effectively screen out the hundreds of thousands of nuisance alarms they encounter annually due to recent nuclear-medicine patients and cargo containing naturally occurring radioactive material. Portal monitor operators could instead focus on the rare but potentially high impact incidents of nuclear and radiological material smuggling detection for which portal monitors are intended.

  18. Revisiting the utility of technical performance scores following tetralogy of Fallot repair.

    Science.gov (United States)

    Lodin, Daud; Mavrothalassitis, Orestes; Haberer, Kim; Sunderji, Sherzana; Quek, Ruben G W; Peyvandi, Shabnam; Moon-Grady, Anita; Karamlou, Tara

    2017-08-01

    Although an important quality metric, current technical performance scores may not be generalizable and may omit operative factors that influence outcomes. We examined factors not included in current technical performance scores that may contribute to increased postoperative length of stay, major complications, and cost after primary repair of tetralogy of Fallot. This is a retrospective single-site study of patients younger than age 2 years with tetralogy of Fallot undergoing complete repair between 2007 and 2015. Medical record data and discharge echocardiograms were reviewed to ascertain component and composite technical performance scores. Primary outcomes included postoperative length of stay, major complications, and total hospital costs. Multivariable logistic and linear regression identified determinants of each outcome. The patient population (n = 115) had a median postoperative length of stay of 8 days (interquartile range, 6-10 days) and a median total cost of $71,147. Major complications occurred in 33 patients (29%), with 1 death. Technical performance scores assigned were optimum in 28 patients (25%), adequate in 59 patients (52%), and inadequate in 26 patients (23%). Neither technical performance score components nor composite scores were associated with increased postoperative length of stay. Optimum or adequate repairs versus inadequate had equal risk of a complication (P = .79) and equivalent mean total cost ($100,000 vs $187,000; P = .25). Longer cardiopulmonary bypass time (per 1-minute increase) was associated with the study outcomes, whereas technical performance scores were not associated with the selected outcomes in our postoperative population. Although returning to bypass and bypass length are not included as components in the current score, these are important factors influencing complications and resource use in our population. Revisions anticipated from a prospective trial should consider including these variables. Copyright © 2017 The American Association for Thoracic Surgery.

  19. The sum of friends’ and lovers’ self-control scores predicts relationship quality

    NARCIS (Netherlands)

    Vohs, K.D.; Finkenauer, C.; Baumeister, R.F.

    2011-01-01

    What combination of partners' trait self-control levels produces the best relationship outcomes? The authors tested three hypotheses-complementarity (large difference in trait self-control scores), similarity (small difference in self-control scores), and totality (large sum of self-control scores).

  20. Virtual Non-Contrast CT Using Dual-Energy Spectral CT: Feasibility of Coronary Artery Calcium Scoring.

    Science.gov (United States)

    Song, Inyoung; Yi, Jeong Geun; Park, Jeong Hee; Kim, Sung Mok; Lee, Kyung Soo; Chung, Myung Jin

    2016-01-01

    To evaluate the feasibility of coronary artery calcium scoring based on three virtual noncontrast-enhanced (VNC) images derived from single-source spectral dual-energy CT (DECT) as compared with true noncontrast-enhanced (TNC) images. This prospective study was conducted with the approval of our Institutional Review Board. Ninety-seven patients underwent noncontrast CT followed by contrast-enhanced chest CT using single-source spectral DECT. Iodine-eliminated VNC images were reconstructed using two kinds of 2-material decomposition algorithms (material density iodine-water pair [MDW], material density iodine-calcium pair [MDC]) and a material suppressed algorithm (material suppressed iodine [MSI]). Two readers independently quantified calcium on VNC and TNC images. The Spearman correlation coefficient test and Bland-Altman method were used for statistical analyses. Coronary artery calcium scores from all three VNC images showed excellent correlation with those from the TNC images (Spearman's correlation coefficient [ρ] = 0.94, 0.88, and 0.89 for MDW, MDC, and MSI, respectively). Calcium volumes from the VNC images also correlated well with those from the TNC images (ρ = 0.92, 0.87, and 0.91 for MDW, MDC, and MSI, respectively). Among the three VNC images, coronary calcium from MDW correlated best with that from TNC. The coronary artery calcium scores and volumes were significantly lower from the VNC images than from the TNC images. Generating VNC images from contrast-enhanced CT using dual-energy material decomposition/suppression is feasible for coronary calcium scoring. The absolute value from VNC tends to be smaller than that from TNC.

  1. All-cause mortality in asymptomatic persons with extensive Agatston scores above 1000.

    Science.gov (United States)

    Patel, Jaideep; Blaha, Michael J; McEvoy, John W; Qadir, Sadia; Tota-Maharaj, Rajesh; Shaw, Leslee J; Rumberger, John A; Callister, Tracy Q; Berman, Daniel S; Min, James K; Raggi, Paolo; Agatston, Arthur A; Blumenthal, Roger S; Budoff, Matthew J; Nasir, Khurram

    2014-01-01

    Risk assessment in the extensive calcified plaque phenotype has been limited by small sample size. We studied all-cause mortality rates among asymptomatic patients with markedly elevated Agatston scores > 1000. We studied a clinical cohort of 44,052 asymptomatic patients referred for coronary calcium scans. Mean follow-up was 5.6 years (range, 1-13 years). All-cause mortality rates were calculated after stratifying by Agatston score (0, 1-1000, 1001-1500, 1501-2000, and >2000). A multivariable Cox regression model adjusting for self-reported traditional risk factors was created to assess the relative mortality hazard of Agatston scores 1001 to 1500, 1501 to 2000, and >2000. With the use of post-estimation modeling, we assessed for the presence of an upper threshold of risk with high Agatston scores. A total of 1593 patients (4% of the total population) had Agatston score > 1000. There was a continuous graded decrease in estimated 10-year survival across increasing Agatston score, continuing when Agatston score > 1000 (Agatston score 1001-1500, 78%; Agatston score 1501-2000, 74%; Agatston score > 2000, 51%). After multivariable adjustment, Agatston scores 1001 to 1500, 1501 to 2000, and >2000 were associated with an 8.05-, 7.45-, and 13.26-fold greater mortality risk, respectively, than an Agatston score of 0. Compared with Agatston score 1001 to 1500, Agatston score 1501 to 2000 had a similar all-cause mortality risk, whereas Agatston score > 2000 had an increased relative risk (Agatston score 1501-2000: hazard ratio [HR], 1.01 [95% CI, 0.67-1.51]; Agatston score > 2000: HR, 1.79 [95% CI, 1.30-2.46]). Graphical assessment of the predicted survival model suggests no upper threshold for risk associated with calcified plaque in coronary arteries. Increasing calcified plaque in coronary arteries continues to predict a graded decrease in survival among patients with extensive Agatston score > 1000, with no apparent upper threshold. Published by Elsevier Inc.

  2. SCALCE: boosting sequence compression algorithms using locally consistent encoding.

    Science.gov (United States)

    Hach, Faraz; Numanagic, Ibrahim; Alkan, Can; Sahinalp, S Cenk

    2012-12-01

    SCALCE provides up to 2.01 times better compression while improving the running time by a factor of 5.17. SCALCE also provides the option to compress the quality scores as well as the read names, in addition to the reads themselves. This is achieved by compressing the quality scores through order-3 Arithmetic Coding (AC) and the read names through gzip, using the reordering SCALCE provides on the reads. This way, in comparison with gzip compression of the unordered FASTQ files (including reads, read names and quality scores), SCALCE (together with gzip and arithmetic encoding) can provide up to 3.34 times improvement in the compression rate and 1.26 times improvement in running time. Our algorithm, SCALCE (Sequence Compression Algorithm using Locally Consistent Encoding), is implemented in C++ with both gzip and bzip2 compression options. It also supports multithreading when the gzip option is selected and the pigz binary is available. It is available at http://scalce.sourceforge.net. fhach@cs.sfu.ca or cenk@cs.sfu.ca Supplementary data are available at Bioinformatics online.

  3. PROGNOSTIC ALGORITHM FOR DISEASE FLOW IN PULMONARY AND THORACIC LYMPH NODES SARCOIDOSIS

    Directory of Open Access Journals (Sweden)

    S. A. Terpigorev

    2014-01-01

    Full Text Available Background: Sarcoidosis is a systemic granulomatosis commonly affecting the respiratory system. The variable and often unpredictable course of the disease provides a rationale for the development of a prognostic algorithm. Aim: To detect predictive parameters in pulmonary and thoracic lymph node sarcoidosis and to develop a prognostic algorithm. Materials and methods: The results of examination of 113 patients (85 women and 28 men, 19–77 years old) with morphologically verified sarcoidosis were assessed. Clinical manifestations and functional, radiographic (including CT numerical scores) and morphological features of the disease were analyzed against 3-year outcomes in prednisolone/hydroxychloroquine-treated or treatment-naive patients. Results: Radiographic stage, CT-pattern scores, several parameters of pulmonary function tests (DLCO, RV, FEV1, FVC) and dyspnoea had the greatest prognostic significance for the course of the disease. Prognostic accuracy was 87.8% and increased to 94.5% after one-year dynamics of symptoms were taken into account. Therapy with systemic glucocorticosteroids did not influence outcomes in sarcoidosis with asymptomatic enlargement of thoracic lymph nodes. Conclusion: We have developed an algorithm for prognosis assessment in pulmonary sarcoidosis. Taking the results of patient follow-up into account significantly improves the accuracy of the prognosis.

  4. Cross-cultural adaptation and validation of the Japanese version of the new Knee Society Scoring System for osteoarthritic knee with total knee arthroplasty.

    Science.gov (United States)

    Hamamoto, Yosuke; Ito, Hiromu; Furu, Moritoshi; Ishikawa, Masahiro; Azukizawa, Masayuki; Kuriyama, Shinichi; Nakamura, Shinichiro; Matsuda, Shuichi

    2015-09-01

    The purposes of this study were to translate the new Knee Society Score (KSS) into Japanese and to evaluate the construct and content validity, test-retest reliability, and internal consistency of the Japanese version of the new KSS. The Japanese version of the KSS was developed according to cross-cultural guidelines by using the "translation-back translation" method to ensure content validity. KSS data were then obtained from patients who had undergone total knee arthroplasty (TKA). The psychometric properties evaluated were as follows: for feasibility, response rate and floor and ceiling effects; for construct validity, internal consistency using Cronbach's alpha and correlations with quality of life. Construct validity was evaluated by using Spearman's correlation coefficient to quantify the correlation between the KSS and the Japanese version of the Oxford 12-item Knee Score or Short Form 36 Health Survey (SF-36) questionnaires. The Japanese version of the KSS was sent to 93 consecutive osteoarthritic patients who underwent primary TKA in our institution. Fifty-five patients completed the questionnaires and were included in this study. Neither a floor nor a ceiling effect was observed. The reliability proved excellent in the majority of domains, with intraclass correlation coefficients of 0.65-0.88. Internal consistency, assessed by Cronbach's alpha, was good to excellent for all domains (0.78-0.94). All four domains of the KSS correlated significantly with the Oxford 12-item Knee Score. The activity and satisfaction domains of the KSS correlated significantly with all and the majority of subscales of the SF-36, respectively, whereas the symptoms and expectation domains showed significant correlations only with the bodily pain and vitality subscales and with the physical function, bodily pain, and vitality subscales, respectively. The Japanese version of the new KSS is a valid, reliable, and responsive instrument to capture subjective aspects of the functional outcome after TKA.

  5. Numerical algorithms for contact problems in linear elastostatics

    International Nuclear Information System (INIS)

    Barbosa, H.J.C.; Feijoo, R.A.

    1984-01-01

    In this work contact problems in linear elasticity are analysed by means of Finite Elements and Mathematical Programming Techniques. The principle of virtual work leads in this case to a variational inequality, which in turn is equivalent, for Hookean materials and infinitesimal strains, to the minimization of the total potential energy over the set of all admissible virtual displacements. The use of the Gauss-Seidel algorithm with relaxation and projection, as well as Lemke's and Uzawa's algorithms, for solving the minimization problem is discussed. Finally, numerical examples are presented. (Author) [pt
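
    The Gauss-Seidel variant mentioned above can be sketched compactly. Assuming the discretized problem reduces to minimizing 1/2 xᵀKx − fᵀx subject to a non-penetration bound x ≥ g (K a symmetric positive-definite stiffness matrix), each sweep relaxes one unknown and projects it back onto the admissible set; the names and the simple bound form are illustrative, not from the paper.

```python
import numpy as np

def projected_gauss_seidel(K, f, g, iters=200, omega=1.0):
    """Minimize 0.5*x'Kx - f'x subject to x >= g by relaxation and projection."""
    x = np.array(g, dtype=float)            # start from the constraint boundary
    for _ in range(iters):
        for i in range(len(f)):
            # Gauss-Seidel target for unknown i (residual without the diagonal term)
            xi = (f[i] - K[i] @ x + K[i, i] * x[i]) / K[i, i]
            # over/under-relax, then project onto the admissible set
            x[i] = max(g[i], x[i] + omega * (xi - x[i]))
    return x
```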

  6. A Coulomb collision algorithm for weighted particle simulations

    Science.gov (United States)

    Miller, Ronald H.; Combi, Michael R.

    1994-01-01

    A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pair-wise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing equal fractions of the total particle density). If that algorithm is applied as is to simulations incorporating weighted particles, the plasma temperatures equilibrate to an incorrect temperature compared with theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and produces the appropriate relaxation time scales as compared to theoretical predictions. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.
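
    A minimal sketch of the weighted-pairing idea, under simplifying assumptions not in the abstract (equal masses, non-relativistic velocities, and an isotropic post-collision direction standing in for the actual Coulomb scattering-angle sampling): each partner accepts the momentum-conserving velocity kick with probability proportional to the other particle's weight, so total momentum is conserved on average over many pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

def collide_weighted_pair(v1, w1, v2, w2):
    """Scatter one weighted macro-particle pair; velocities are length-3 arrays."""
    g = v1 - v2                                # relative velocity
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)                     # placeholder isotropic direction
    dv = 0.5 * (np.linalg.norm(g) * n - g)     # equal-mass velocity change
    wmax = max(w1, w2)
    if rng.random() < w2 / wmax:               # accept with prob. set by partner weight
        v1 = v1 + dv
    if rng.random() < w1 / wmax:
        v2 = v2 - dv
    return v1, v2
```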

  7. Predictive values of D-dimer assay, GRACE scores and TIMI scores for adverse outcome in patients with non-ST-segment elevation myocardial infarction

    Directory of Open Access Journals (Sweden)

    Satilmisoglu MH

    2017-03-01

    Full Text Available Muhammet Hulusi Satilmisoglu,1 Sinem Ozbay Ozyilmaz,1 Mehmet Gul,1 Hayriye Ak Yildirim,2 Osman Kayapinar,3 Kadir Gokturk,4 Huseyin Aksu,1 Korhan Erkanli,5 Abdurrahman Eksik1 1Department of Cardiology, 2Department of Biochemistry, Mehmet Akif Ersoy Thoracic and Cardiovascular Surgery Training and Research Hospital, Istanbul, 3Department of Cardiology, Duzce University Faculty of Medicine, Duzce, 4Department of Infectious Diseases, 5Department of Thoracic and Cardiovascular Surgery, Mehmet Akif Ersoy Thoracic and Cardiovascular Surgery Training and Research Hospital, Istanbul, Turkey Purpose: To determine the predictive values of D-dimer assay, Global Registry of Acute Coronary Events (GRACE) and Thrombolysis in Myocardial Infarction (TIMI) risk scores for adverse outcome in patients with non-ST-segment elevation myocardial infarction (NSTEMI). Patients and methods: A total of 234 patients (mean age: 57.2±11.7 years; 75.2% males) hospitalized with NSTEMI were included. Data on D-dimer assay, GRACE and TIMI risk scores were recorded. Logistic regression analysis was conducted to determine the risk factors predicting increased mortality. Results: Median D-dimer levels were 349.5 (48.0–7,210.0) ng/mL, the average TIMI score was 3.2±1.2 and the GRACE score was 90.4±27.6, with high GRACE scores (>118) in 17.5% of patients. The GRACE score was correlated positively with both the D-dimer assay (r=0.215, P=0.01) and TIMI scores (r=0.504, P=0.000). Multivariate logistic regression analysis revealed that higher creatinine levels (odds ratio = 18.465, 95% confidence interval: 1.059–322.084, P=0.046) constituted the only significant predictor of increased mortality risk, with no predictive value for age, D-dimer assay, ejection fraction, glucose, hemoglobin A1c, sodium, albumin or total cholesterol levels. Conclusion: Serum creatinine levels constituted the sole independent determinant of mortality risk, with no significant predictive value for D-dimer assay or the GRACE and TIMI risk scores.

  8. The R.I.R.S. scoring system: An innovative scoring system for predicting stone-free rate following retrograde intrarenal surgery.

    Science.gov (United States)

    Xiao, Yinglong; Li, Deng; Chen, Lei; Xu, Yaoting; Zhang, Dingguo; Shao, Yi; Lu, Jun

    2017-11-21

    To establish and internally validate an innovative R.I.R.S. scoring system that allows urologists to preoperatively estimate the stone-free rate (SFR) after retrograde intrarenal surgery (RIRS). This study included 382 eligible samples from a total of 573 patients who underwent RIRS from January 2014 to December 2016. Four reproducible factors in the R.I.R.S. scoring system, including renal stone density, inferior pole stone, renal infundibular length and stone burden, were measured based on preoperative computed tomography urography to evaluate the possibility of stone clearance after RIRS. The median cumulative diameter of the stones was 14 mm, and the interquartile range was 10 to 21. The SFR on postoperative day 1 in the present cohort was 61.5% (235 of 382), and the final SFR after 1 month was 73.6% (281 of 382). We established an innovative scoring system to evaluate SFR after RIRS using four preoperative characteristics. The range of the R.I.R.S. scoring system was 4 to 10. The overall score was significantly associated with stone-free status, and the area under the curve for the R.I.R.S. scoring system was 0.904. The R.I.R.S. scoring system is associated with SFR after RIRS. This innovative scoring system can preoperatively assess treatment success after intrarenal surgery and can be used for preoperative surgical arrangement and comparisons of outcomes among different centers and within a center over time.

  9. Comparison of Statistical Algorithms for the Detection of Infectious Disease Outbreaks in Large Multiple Surveillance Systems

    Science.gov (United States)

    Farrington, C. Paddy; Noufaily, Angela; Andrews, Nick J.; Charlett, Andre

    2016-01-01

    A large-scale multiple surveillance system for infectious disease outbreaks has been in operation in England and Wales since the early 1990s. Changes to the statistical algorithm at the heart of the system were proposed and the purpose of this paper is to compare two new algorithms with the original algorithm. Test data to evaluate performance are created from weekly counts of the number of cases of each of more than 2000 diseases over a twenty-year period. The time series of each disease is separated into one series giving the baseline (background) disease incidence and a second series giving disease outbreaks. One series is shifted forward by twelve months and the two are then recombined, giving a realistic series in which it is known where outbreaks have been added. The metrics used to evaluate performance include a scoring rule that appropriately balances sensitivity against specificity and is sensitive to variation in probabilities near 1. In the context of disease surveillance, a scoring rule can be adapted to reflect the size of outbreaks and this was done. Results indicate that the two new algorithms are comparable to each other and better than the algorithm they were designed to replace. PMID:27513749
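
    As a stand-in for the paper's scoring rule (whose exact form is not reproduced here), a generic proper scoring rule such as the Brier score illustrates how alarm probabilities can be evaluated against known outbreak weeks, optionally weighting weeks by outbreak size as the paper's adaptation does; all names below are illustrative.

```python
import numpy as np

def weighted_brier(p_alarm, is_outbreak, size=None):
    """Mean squared error between alarm probabilities and 0/1 outbreak labels.

    size: optional per-week outbreak sizes used as weights, so missing a large
    outbreak costs more than missing a small one.
    """
    p = np.asarray(p_alarm, dtype=float)
    y = np.asarray(is_outbreak, dtype=float)
    w = np.ones_like(p) if size is None else np.asarray(size, dtype=float)
    return np.average((p - y) ** 2, weights=w)
```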

  10. Perturbation resilience and superiorization of iterative algorithms

    International Nuclear Information System (INIS)

    Censor, Y; Davidi, R; Herman, G T

    2010-01-01

    Iterative algorithms aimed at solving some problems are discussed. For certain problems, such as finding a common point in the intersection of a finite number of convex sets, there often exist iterative algorithms that impose very little demand on computer resources. For other problems, such as finding that point in the intersection at which the value of a given function is optimal, algorithms tend to need more computer memory and longer execution time. A methodology is presented whose aim is to produce automatically for an iterative algorithm of the first kind a 'superiorized version' of it that retains its computational efficiency but nevertheless goes a long way toward solving an optimization problem. This is possible to do if the original algorithm is 'perturbation resilient', which is shown to be the case for various projection algorithms for solving the consistent convex feasibility problem. The superiorized versions of such algorithms use perturbations that steer the process in the direction of a superior feasible point, which is not necessarily optimal, with respect to the given function. After presenting these intuitive ideas in a precise mathematical form, they are illustrated in image reconstruction from projections for two different projection algorithms superiorized for the function whose value is the total variation of the image.
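
    A minimal sketch of the superiorization pattern for a consistent convex feasibility problem given by halfspaces aᵢ·x ≤ bᵢ: the basic algorithm is a sweep of orthogonal projections, and between sweeps a summable sequence of perturbations steps toward lower values of a target function φ (total variation in the article's reconstructions; here φ is any smooth function supplied via its gradient, and all names are illustrative).

```python
import numpy as np

def superiorized_projections(A, b, grad_phi, x0, iters=100, a=0.5):
    """Projections onto halfspaces a_i.x <= b_i, superiorized for phi."""
    x = np.array(x0, dtype=float)
    beta = 1.0
    for _ in range(iters):
        g = grad_phi(x)
        norm = np.linalg.norm(g)
        if norm > 0:
            x -= beta * g / norm            # perturbation toward smaller phi
        beta *= a                           # beta_k = a**k is summable, keeping
                                            # the total perturbation bounded
        for ai, bi in zip(A, b):            # one sweep of the basic algorithm
            viol = ai @ x - bi
            if viol > 0:
                x -= viol * ai / (ai @ ai)  # orthogonal projection onto halfspace
    return x
```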

  11. The Performance of the Upper Limb scores correlate with pulmonary function test measures and Egen Klassifikation scores in Duchenne muscular dystrophy.

    Science.gov (United States)

    Lee, Ha Neul; Sawnani, Hemant; Horn, Paul S; Rybalsky, Irina; Relucio, Lani; Wong, Brenda L

    2016-01-01

    The Performance of the Upper Limb scale was developed as an outcome measure specifically for ambulant and non-ambulant patients with Duchenne muscular dystrophy and is implemented in clinical trials needing longitudinal data. The aim of this study was to determine whether this novel tool correlates with functional ability, using pulmonary function tests, cardiac function tests and Egen Klassifikation scale scores as clinical measures. In this cross-sectional study, 43 non-ambulatory Duchenne males aged 10 to 30 years and on long-term glucocorticoid treatment were enrolled. Cardiac and pulmonary function test results were analyzed to assess cardiopulmonary function, and Egen Klassifikation scores were analyzed to assess functional ability. The Performance of the Upper Limb scores correlated with pulmonary function measures and had an inverse correlation with Egen Klassifikation scores. There was no correlation with left ventricular ejection fraction or left ventricular dysfunction. Body mass index and decreased joint range of motion affected total Performance of the Upper Limb scores and should be considered in clinical trial designs. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Bit Loading Algorithms for Cooperative OFDM Systems

    Directory of Open Access Journals (Sweden)

    Gui Bo

    2008-01-01

    Full Text Available Abstract We investigate the resource allocation problem for an OFDM cooperative network with a single source-destination pair and multiple relays. Assuming knowledge of the instantaneous channel gains for all links in the entire network, we propose several bit and power allocation schemes aiming at minimizing the total transmission power under a target rate constraint. First, an optimal and efficient bit loading algorithm is proposed when the relay node uses the same subchannel to relay the information transmitted by the source node. To further improve the performance gain, subchannel permutation, in which the subchannels are reallocated at relay nodes, is considered. An optimal subchannel permutation algorithm is first proposed and then an efficient suboptimal algorithm is considered to achieve a better complexity-performance tradeoff. A distributed bit loading algorithm is also proposed for ad hoc networks. Simulation results show that significant performance gains can be achieved by the proposed bit loading algorithms, especially when subchannel permutation is employed.

  13. Bit Loading Algorithms for Cooperative OFDM Systems

    Directory of Open Access Journals (Sweden)

    Bo Gui

    2007-12-01

    Full Text Available We investigate the resource allocation problem for an OFDM cooperative network with a single source-destination pair and multiple relays. Assuming knowledge of the instantaneous channel gains for all links in the entire network, we propose several bit and power allocation schemes aiming at minimizing the total transmission power under a target rate constraint. First, an optimal and efficient bit loading algorithm is proposed when the relay node uses the same subchannel to relay the information transmitted by the source node. To further improve the performance gain, subchannel permutation, in which the subchannels are reallocated at relay nodes, is considered. An optimal subchannel permutation algorithm is first proposed and then an efficient suboptimal algorithm is considered to achieve a better complexity-performance tradeoff. A distributed bit loading algorithm is also proposed for ad hoc networks. Simulation results show that significant performance gains can be achieved by the proposed bit loading algorithms, especially when subchannel permutation is employed.
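
    The single-channel core of such schemes is the classic greedy bit-loading step: repeatedly grant the next bit to the subchannel where it costs the least extra transmit power. The sketch below uses the standard P(b) ∝ (2ᵇ − 1)/gain power model and illustrative names; relay pairing and the subchannel permutation from the paper are beyond this fragment.

```python
import numpy as np

def greedy_bit_loading(gains, target_bits, gamma=1.0):
    """Distribute target_bits over subchannels, minimizing total power."""
    gains = np.asarray(gains, dtype=float)   # per-subchannel power gains
    bits = np.zeros(len(gains), dtype=int)
    power = np.zeros(len(gains))
    for _ in range(target_bits):
        # incremental power to add one more bit on each subchannel
        inc = gamma * (2.0 ** (bits + 1) - 1) / gains - power
        k = int(np.argmin(inc))              # cheapest next bit wins
        bits[k] += 1
        power[k] += inc[k]
    return bits, power.sum()
```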

  14. A novel scoring system for predicting adherent placenta in women with placenta previa.

    Science.gov (United States)

    Tanimura, Kenji; Morizane, Mayumi; Deguchi, Masashi; Ebina, Yasuhiko; Tanaka, Utaru; Ueno, Yoshiko; Kitajima, Kazuhiro; Maeda, Tetsuo; Sugimura, Kazuro; Yamada, Hideto

    2018-04-01

    Placenta previa (PP) is one of the most significant risk factors for adherent placenta (AP). The aim of this study was to evaluate the diagnostic efficacy of a novel scoring system for predicting AP in pregnant women with PP. This prospective cohort study enrolled 175 women with PP. The placenta previa with adherent placenta score (PPAP score) is composed of 2 categories: (1) past history of cesarean section (CS), surgical abortion, and/or uterine surgery; and (2) ultrasonography and magnetic resonance imaging findings. Each category is graded as 0, 1, 2, or 4 points, yielding a total score between 0 and 24. When women with PP had a PPAP score ≥8, they were considered to be at a high risk for AP and received placement of preoperative internal iliac artery occlusion balloon catheters. If they were found to have AP during CS, they underwent hysterectomy or placenta removal using advanced bipolar devices with balloon catheter occlusion. The predictive accuracy of the PPAP score was evaluated. In total, 23 of the 175 women with PP were diagnosed as having AP, histopathologically or clinically. Twenty-one of 24 women with a PPAP score ≥8 had AP, whereas two of 151 women with a PPAP score <8 had AP. The PPAP scoring system may be useful for predicting AP in women with PP. Copyright © 2018 Elsevier Ltd. All rights reserved.
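
    The additive structure described above is easy to make concrete. A minimal sketch (item names hypothetical; per the abstract, each item is graded 0, 1, 2, or 4 and the 0-24 total is dichotomized at 8):

```python
def ppap_total(history_items, imaging_items):
    """Sum item grades from the two categories; each grade is 0, 1, 2, or 4."""
    allowed = {0, 1, 2, 4}
    grades = list(history_items) + list(imaging_items)
    assert all(g in allowed for g in grades), "grades must be 0, 1, 2, or 4"
    return sum(grades)

def high_risk_for_ap(total, threshold=8):
    """Flag women whose total PPAP score meets the reported cutoff."""
    return total >= threshold
```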

  15. Characteristics of Youth Food Preparation in Low-Income, African American Homes: Associations with Healthy Eating Index Scores.

    Science.gov (United States)

    Sattler, Melissa; Hopkins, Laura; Anderson Steeves, Elizabeth; Cristello, Angelica; Mccloskey, Morgan; Gittelsohn, Joel; Hurley, Kristen

    2015-01-01

    This study explores food preparation behaviors, including types of food prepared, methods of preparation, and frequency of preparation, of low-income urban African American youth ages 9-15 in Baltimore City (n = 289) and analyzes a potential association with diet quality as measured through Healthy Eating Index 2010 (HEI) scores. Overall, the youth prepared their own food 6.7 ± 0.33 times per week without significant differences between age groups or genders as measured through pairwise comparison of means. Cereal, noodles, and sandwiches were amongst the foods prepared most frequently. Linear regression analysis found youth food preparation frequency was not significantly associated with total HEI (p = 0.59), sodium (p = 0.58), empty calories (p = 0.96), or dairy scores (p = 0.12). Younger age was associated with higher total HEI scores (p = 0.012) and higher dairy scores (p = 0.01), and female gender was associated with higher total HEI scores (p = 0.03), higher sodium scores (p = 0.03), and lower dairy scores (p = 0.008).

  16. SU-F-T-352: Development of a Knowledge Based Automatic Lung IMRT Planning Algorithm with Non-Coplanar Beams

    International Nuclear Information System (INIS)

    Zhu, W; Wu, Q; Yuan, L

    2016-01-01

    Purpose: To improve the robustness of a knowledge-based automatic lung IMRT planning method and to further validate the reliability of this algorithm by utilizing it for the planning of clinical cases with non-coplanar beams. Methods: A lung IMRT planning method which automatically determines both plan optimization objectives and beam configurations with non-coplanar beams has been reported previously. A beam efficiency index map is constructed to guide beam angle selection in this algorithm. This index takes into account both the dose contributions from individual beams and the combined effect of multiple beams, which is represented by a beam separation score. We studied the effect of this beam separation score on plan quality and determined the optimal weight for this score. Fourteen clinical plans were re-planned with the knowledge-based algorithm. Significant dosimetric metrics for the PTV and OARs in the automatic plans are compared with those in the clinical plans by the two-sample t-test. In addition, a composite dosimetric quality index was defined to obtain the relationship between the plan quality and the beam separation score. Results: On average, we observed more than 15% reduction in conformity index and homogeneity index for the PTV and in V40 and V60 for the heart, while an 8% and 3% increase in V5 and V20 for the lungs, respectively. The variation curve of the composite index as a function of the angle spread score shows that 0.6 is the best value for the weight of the beam separation score. Conclusion: The optimal value for the beam angle spread score in automatic lung IMRT planning is obtained. With this value, the model can result in statistically the “best” achievable plans. This method can potentially improve the quality and planning efficiency for IMRT plans with non-coplanar angles.

  17. Conversion Between Mini-Mental State Examination, Montreal Cognitive Assessment, and Dementia Rating Scale-2 Scores in Parkinson’s Disease

    Science.gov (United States)

    van Steenoven, Inger; Aarsland, Dag; Hurtig, Howard; Chen-Plotkin, Alice; Duda, John E.; Rick, Jacqueline; Chahine, Lama M.; Dahodwala, Nabila; Trojanowski, John Q.; Roalf, David R.; Moberg, Paul J.; Weintraub, Daniel

    2015-01-01

    Cognitive impairment is one of the earliest, most common, and most disabling non-motor symptoms in Parkinson’s disease (PD). Thus, routine screening of global cognitive abilities is important for the optimal management of PD patients. Few global cognitive screening instruments have been developed for or validated in PD patients. The Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), and Dementia Rating Scale-2 (DRS-2) have been used extensively for cognitive screening in both clinical and research settings. Determining how to convert the scores between instruments would facilitate the longitudinal assessment of cognition in clinical settings and the comparison and synthesis of cognitive data in multicenter and longitudinal cohort studies. The primary aim of this study was to apply a simple and reliable algorithm for the conversion of MoCA to MMSE scores in PD patients. A secondary aim was to apply this algorithm for the conversion of DRS-2 to both MMSE and MoCA scores. The cognitive performance of a convenience sample of 360 patients with idiopathic PD was assessed by at least two of these cognitive screening instruments. We then developed conversion scores between the MMSE, MoCA, and DRS-2 using equipercentile equating and log-linear smoothing. The conversion score tables reported here enable direct and easy comparison of three routinely used cognitive screening assessments in PD patients. PMID:25381961
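
    A minimal sketch of the equipercentile idea (the paper's log-linear smoothing is omitted; the arrays are assumed to be paired samples of the two tests from the same cohort): a MoCA value is mapped to the MMSE score sitting at the same percentile rank.

```python
import numpy as np

def equipercentile_convert(moca_sample, mmse_sample, moca_value):
    """Map moca_value to the MMSE score at the same percentile rank."""
    moca = np.sort(np.asarray(moca_sample, dtype=float))
    # percentile rank of the MoCA value within the MoCA distribution
    rank = np.searchsorted(moca, moca_value, side="right") / len(moca)
    # MMSE score at that same percentile
    return float(np.quantile(np.asarray(mmse_sample, dtype=float), rank))
```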

  18. An empirical study using range of motion and pain score as determinants for continuous passive motion: outcomes following total knee replacement surgery in an adult population.

    Science.gov (United States)

    Tabor, Danielle

    2013-01-01

    The continuous passive motion (CPM) machine is one means by which to rehabilitate the knee after total knee replacement surgery. This study sought to determine which total knee replacement patients, if any, benefit from the use of the CPM machine. For the study period, most patients received active physical therapy. Patients were placed in the CPM machine if, on postoperative day 1, they had a range of motion of less than or equal to 45° and/or a pain score of 8 or greater on a numeric rating scale of 0-10 (0 being no pain and 10 being the worst pain). Both groups of patients healed at similar rates. The incidence of adverse events, length of stay, and functional outcomes were comparable between groups. Given the demonstrated lack of relative benefit to the patient and the cost of the CPM, this study supported discontinuing the routine use of the CPM.

  19. Evaluation of the Environmental Scoring System in Multiple Child Asthma Intervention Programs in Boston, Massachusetts.

    Science.gov (United States)

    Dong, Zhao; Nath, Anjali; Guo, Jing; Bhaumik, Urmi; Chin, May Y; Dong, Sherry; Marshall, Erica; Murphy, Johnna S; Sandel, Megan T; Sommer, Susan J; Ursprung, W W Sanouri; Woods, Elizabeth R; Reid, Margaret; Adamkiewicz, Gary

    2018-01-01

    To test the applicability of the Environmental Scoring System, a quick and simple approach for quantitatively measuring environmental triggers collected during home visits, and to evaluate its contribution to improving asthma outcomes across various child asthma programs. We pooled and analyzed data from multiple child asthma programs in the Greater Boston Area, Massachusetts, collected from 2011 to 2016, to examine the association of environmental scores (ES) with measures of asthma outcomes and compare the results across programs. Our analysis showed that demographics were important contributors to variability in asthma outcomes and total ES, and largely explained the differences among programs at baseline. Across all programs in general, we found that asthma outcomes were significantly improved and total ES significantly reduced over visits, with the total Asthma Control Test score negatively associated with total ES. Our study demonstrated that the Environmental Scoring System is a useful tool for measuring home asthma triggers and can be applied regardless of program and survey designs, and that demographics of the target population may influence the improvement in asthma outcomes.

  20. High Baseline Postconcussion Symptom Scores and Concussion Outcomes in Athletes.

    Science.gov (United States)

    Custer, Aimee; Sufrinko, Alicia; Elbin, R J; Covassin, Tracey; Collins, Micky; Kontos, Anthony

    2016-02-01

    Some healthy athletes report high levels of baseline concussion symptoms, which may be attributable to several factors (eg, illness, personality, somaticizing). However, the role of baseline symptoms in outcomes after sport-related concussion (SRC) has not been empirically examined. To determine if athletes with high symptom scores at baseline performed worse than athletes without baseline symptoms on neurocognitive testing after SRC. Cohort study. High school and collegiate athletic programs. A total of 670 high school and collegiate athletes participated in the study. Participants were divided into groups with either no baseline symptoms (Postconcussion Symptom Scale [PCSS] score = 0, n = 247) or a high level of baseline symptoms (PCSS score > 18 [top 10% of sample], n = 68). Participants were evaluated at baseline and 2 to 7 days after SRC with the Immediate Post-concussion Assessment and Cognitive Test and PCSS. Outcome measures were Immediate Post-concussion Assessment and Cognitive Test composite scores (verbal memory, visual memory, visual motor processing speed, and reaction time) and total symptom score on the PCSS. The groups were compared using repeated-measures analyses of variance with Bonferroni correction to assess interactions between group and time for symptoms and neurocognitive impairment. The no-symptoms group represented 38% of the original sample, whereas the high-symptoms group represented 11% of the sample. The high-symptoms group experienced a larger decline from preinjury to postinjury than the no-symptoms group in verbal (P = .03) and visual memory (P = .05). However, total concussion-symptom scores increased from preinjury to postinjury for the no-symptoms group (P = .001) but remained stable for the high-symptoms group. Reported baseline symptoms may help identify athletes at risk for worse outcomes after SRC. Clinicians should examine baseline symptom levels to better identify patients for earlier referral and treatment for their symptoms.

  1. Propensity score methodology for confounding control in health care utilization databases

    Directory of Open Access Journals (Sweden)

    Elisabetta Patorno

    2013-06-01

    Full Text Available Propensity score (PS methodology is a common approach to control for confounding in nonexperimental studies of treatment effects using health care utilization databases. This methodology offers researchers many advantages compared with conventional multivariate models: it directly focuses on the determinants of treatment choice, facilitating the understanding of the clinical decision-making process by the researcher; it allows for graphical comparisons of the distribution of propensity scores and truncation of subjects without overlapping PS indicating a lack of equipoise; it allows transparent assessment of the confounder balance achieved by the PS at baseline; and it offers a straightforward approach to reduce the dimensionality of sometimes large arrays of potential confounders in utilization databases, directly addressing the “curse of dimensionality” in the context of rare events. This article provides an overview of the use of propensity score methodology for pharmacoepidemiologic research with large health care utilization databases, covering recent discussions on covariate selection, the role of automated techniques for addressing unmeasurable confounding via proxies, strategies to maximize clinical equipoise at baseline, and the potential of machine-learning algorithms for optimized propensity score estimation. The appendix discusses the available software packages for PS methodology. Propensity scores are a frequently used and versatile tool for transparent and comprehensive adjustment of confounding in pharmacoepidemiology with large health care databases.
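
    A minimal sketch of the core steps described above (scikit-learn assumed available; the covariate matrix and treatment indicator are illustrative): estimate the PS with logistic regression, then trim subjects outside the region of overlap, making the graphical "lack of equipoise" check programmatic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_scores(X, treated):
    """P(treatment | covariates) from a logistic regression model."""
    model = LogisticRegression(max_iter=1000).fit(X, treated)
    return model.predict_proba(X)[:, 1]

def overlap_mask(ps, treated):
    """Keep subjects whose PS lies inside the common support of both groups."""
    lo = max(ps[treated == 1].min(), ps[treated == 0].min())
    hi = min(ps[treated == 1].max(), ps[treated == 0].max())
    return (ps >= lo) & (ps <= hi)
```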

  2. SeqCompress: an algorithm for biological sequence compression.

    Science.gov (United States)

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically to design bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage cost has become a noticeable proportion of the total cost of generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity and may soon exceed it. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm has better compression gain than other existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Upper-Lower Bounds Candidate Sets Searching Algorithm for Bayesian Network Structure Learning

    Directory of Open Access Journals (Sweden)

    Guangyi Liu

    2014-01-01

    Full Text Available Bayesian networks are an important theoretical model in the field of artificial intelligence and a powerful tool for processing uncertainty. Considering the slow convergence speed of current Bayesian network structure learning algorithms, a fast hybrid learning method is proposed in this paper. We start with a further analysis of the information provided by low-order conditional independence testing, and then two methods are given for constructing the graph model of the network, which are theoretically proved to be upper and lower bounds of the structure space of the target network, so that candidate sets are obtained; after that, a search-and-score algorithm is run on the candidate sets to find the final structure of the network. Simulation results show that the algorithm proposed in this paper is more efficient than similar algorithms with the same learning precision.

  4. The Effectiveness of Neurofeedback Training in Algorithmic Thinking Skills Enhancement.

    Science.gov (United States)

    Plerou, Antonia; Vlamos, Panayiotis; Triantafillidis, Chris

    2017-01-01

    Although research on learning difficulties is overall at an advanced stage, studies related to algorithmic thinking difficulties are limited, since interest in this field has been raised only recently. In this paper, an interactive evaluation screener enhanced with neurofeedback elements, referring to the evaluation of algorithmic task solving, is proposed. The effect of HCI, color, narration and neurofeedback elements was evaluated in the case of algorithmic task assessment. Results suggest enhanced performance in the case of the neurofeedback-trained group in terms of total correct and optimal algorithmic task solutions. Furthermore, findings suggest that skills concerning the way an algorithm is conceived, designed, applied and evaluated are essentially improved.

  5. Research on multirobot pursuit task allocation algorithm based on emotional cooperation factor.

    Science.gov (United States)

    Fang, Baofu; Chen, Lu; Wang, Hao; Dai, Shuanglu; Zhong, Qiubo

    2014-01-01

    Multirobot task allocation is a hot issue in the field of robot research. A new emotional model is used with the self-interested robot, which gives a new way to measure a self-interested robot's individual cooperative willingness in the problem of multirobot task allocation. An emotional cooperation factor is introduced into the self-interested robot; it is updated based on emotional attenuation and external stimuli. A multirobot pursuit task allocation algorithm based on this emotional cooperation factor is then proposed: combined with a two-step auction, it recruits team leaders and team collaborators, sets up pursuit teams, and finally uses certain strategies to complete the pursuit task. In order to verify the effectiveness of this algorithm, comparison experiments have been done with the instantaneous greedy optimal auction algorithm; the results show that the total pursuit time and total team revenue can be optimized by using this algorithm.

  6. Prosthetic alignment after total knee replacement is not associated with dissatisfaction or change in Oxford Knee Score: A multivariable regression analysis.

    Science.gov (United States)

    Huijbregts, Henricus J T A M; Khan, Riaz J K; Fick, Daniel P; Jarrett, Olivia M; Haebich, Samantha

    2016-06-01

    Approximately 18% of the patients are dissatisfied with the result of total knee replacement. However, the relation between dissatisfaction and prosthetic alignment has not been investigated before. We retrospectively analysed prospectively gathered data of all patients who had a primary TKR, preoperative and one-year postoperative Oxford Knee Scores (OKS) and postoperative computed tomography (CT). The CT protocol measures hip-knee-ankle (HKA) angle, and coronal, sagittal and axial component alignment. Satisfaction was defined using a five-item Likert scale. We dichotomised dissatisfaction by combining '(very) dissatisfied' and 'neutral/not sure'. Associations with dissatisfaction and change in OKS were calculated using multivariable logistic and linear regression models. 230 TKRs were implanted in 105 men and 106 women. At one year, 12% were (very) dissatisfied and 10% neutral. Coronal alignment of the femoral component was 0.5 degrees more accurate in patients who were satisfied at one year. The other alignment measurements were not different between satisfied and dissatisfied patients. All radiographic measurements had a P-value > 0.10 on univariate analyses. At one year, dissatisfaction was associated with the three-month OKS. Change in OKS was associated with the three-month OKS, preoperative physical SF-12, preoperative pain and cruciate-retaining design. Neither mechanical axis, nor component alignment, is associated with dissatisfaction at one year following TKR. Patients get the best outcome when pain reduction and function improvement are optimal during the first three months and when the indication to embark on surgery is based on physical limitations rather than on a high pain score. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. The Effect of English Language on Multiple Choice Question Scores of Thai Medical Students.

    Science.gov (United States)

    Phisalprapa, Pochamana; Muangkaew, Wayuda; Assanasen, Jintana; Kunavisarut, Tada; Thongngarm, Torpong; Ruchutrakool, Theera; Kobwanthanakun, Surapon; Dejsomritrutai, Wanchai

    2016-04-01

    Universities in Thailand are preparing for Thailand's integration into the ASEAN Economic Community (AEC) by increasing the number of tests in English. English is not the native language of Thailand. Differences in English language proficiency may affect scores among test-takers, even when subject knowledge among test-takers is comparable, and may falsely represent the knowledge level of the test-taker. To study the impact of English language multiple choice test questions on test scores of medical students. The final examination of fourth-year medical students completing the internal medicine rotation contains 120 multiple choice questions (MCQ). The languages used on the test are Thai and English at a ratio of 3:1. Individual scores of tests taken in both languages were collected and the effect of English language on MCQ scores was analyzed. Individual MCQ scores were then compared with individual student English language proficiency and student grade point average (GPA). Two hundred ninety-five fourth-year medical students were enrolled. The mean percentages of MCQ scores in Thai and English were significantly different (65.0 ± 8.4 and 56.5 ± 12.4, respectively). The correlation between scores in the two languages was fair (Spearman's correlation coefficient = 0.41), and scores were lower in English than in Thai. Students were classified into six grade categories (A, B+, B, C+, C, and D+), which cumulatively measured total internal medicine rotation performance score plus final examination score. MCQ scores from the Thai language examination were more closely correlated with total course grades than were the scores from the English language examination (Spearman's correlation coefficient = 0.73). The mean English proficiency score was very high, at 3.71 ± 0.35 from a total of 4.00. Mean student GPA was 3.40 ± 0.33 from a possible 4.00. English language MCQ examination scores were more highly associated with GPA than with English language proficiency. The use of English language multiple choice questions may decrease the test scores of Thai medical students.

  8. Conversion between mini-mental state examination, montreal cognitive assessment, and dementia rating scale-2 scores in Parkinson's disease.

    Science.gov (United States)

    van Steenoven, Inger; Aarsland, Dag; Hurtig, Howard; Chen-Plotkin, Alice; Duda, John E; Rick, Jacqueline; Chahine, Lama M; Dahodwala, Nabila; Trojanowski, John Q; Roalf, David R; Moberg, Paul J; Weintraub, Daniel

    2014-12-01

    Cognitive impairment is one of the earliest, most common, and most disabling non-motor symptoms in Parkinson's disease (PD). Thus, routine screening of global cognitive abilities is important for the optimal management of PD patients. Few global cognitive screening instruments have been developed for or validated in PD patients. The Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), and Dementia Rating Scale-2 (DRS-2) have been used extensively for cognitive screening in both clinical and research settings. Determining how to convert the scores between instruments would facilitate the longitudinal assessment of cognition in clinical settings and the comparison and synthesis of cognitive data in multicenter and longitudinal cohort studies. The primary aim of this study was to apply a simple and reliable algorithm for the conversion of MoCA to MMSE scores in PD patients. A secondary aim was to apply this algorithm for the conversion of DRS-2 to both MMSE and MoCA scores. The cognitive performance of a convenience sample of 360 patients with idiopathic PD was assessed by at least two of these cognitive screening instruments. We then developed conversion scores between the MMSE, MoCA, and DRS-2 using equipercentile equating and log-linear smoothing. The conversion score tables reported here enable direct and easy comparison of three routinely used cognitive screening assessments in PD patients. © 2014 International Parkinson and Movement Disorder Society.

  9. DeepSleepNet: A Model for Automatic Sleep Stage Scoring Based on Raw Single-Channel EEG.

    Science.gov (United States)

    Supratak, Akara; Dong, Hao; Wu, Chao; Guo, Yike

    2017-11-01

    This paper proposes a deep learning model, named DeepSleepNet, for automatic sleep stage scoring based on raw single-channel EEG. Most of the existing methods rely on hand-engineered features, which require prior knowledge of sleep analysis. Only a few of them encode the temporal information, such as transition rules, which is important for identifying the next sleep stages, into the extracted features. In the proposed model, we utilize convolutional neural networks to extract time-invariant features, and bidirectional long short-term memory to learn transition rules among sleep stages automatically from EEG epochs. We implement a two-step training algorithm to train our model efficiently. We evaluated our model using different single-channel EEGs (F4-EOG (left), Fpz-Cz, and Pz-Oz) from two public sleep data sets that have different properties (e.g., sampling rate) and scoring standards (AASM and R&K). The results showed that our model achieved similar overall accuracy and macro F1-score (MASS: 86.2%-81.7, Sleep-EDF: 82.0%-76.9) compared with the state-of-the-art methods (MASS: 85.9%-80.5, Sleep-EDF: 78.9%-73.7) on both data sets. This demonstrated that, without changing the model architecture and the training algorithm, our model could automatically learn features for sleep stage scoring from different raw single-channel EEGs from different data sets without utilizing any hand-engineered features.
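
    A minimal sketch (PyTorch assumed; layer sizes illustrative, not the published architecture) of the two-stage idea above: a CNN extracts time-invariant features from each raw 30-s EEG epoch, and a bidirectional LSTM learns stage-transition structure across the epoch sequence.

```python
import torch
import torch.nn as nn

class TinySleepNet(nn.Module):
    def __init__(self, n_stages=5, fs=100):
        super().__init__()
        self.features = nn.Sequential(      # per-epoch, time-invariant features
            nn.Conv1d(1, 32, kernel_size=fs // 2, stride=fs // 4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=8), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.rnn = nn.LSTM(64, 64, bidirectional=True, batch_first=True)
        self.head = nn.Linear(128, n_stages)

    def forward(self, x):                   # x: (batch, epochs, samples)
        b, s, t = x.shape
        f = self.features(x.reshape(b * s, 1, t)).squeeze(-1)  # (b*s, 64)
        out, _ = self.rnn(f.reshape(b, s, 64))  # transitions across epochs
        return self.head(out)               # per-epoch stage logits
```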

  10. Duality based optical flow algorithms with applications

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau

    We consider the popular TV-L1 optical flow formulation, and the so-called duality based algorithm for minimizing the TV-L1 energy. The original formulation is extended to allow for vector valued images, and minimization results are given. In addition we consider different definitions of total variation regularization, and related formulations of the optical flow problem that may be used with a duality based algorithm. We present a highly optimized algorithmic setup to estimate optical flows, and give five novel applications. The first application is registration of medical images, where X-ray images of different hands, taken using different imaging devices, are registered using a TV-L1 optical flow algorithm. We propose to regularize the input images, using sparsity enhancing regularization of the image gradient to improve registration results. The second application is registration of 2D
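
    A minimal sketch of the duality-based total variation machinery these flow algorithms build on, shown for the simpler ROF denoising model min_u TV(u) + (1/2λ)‖u − f‖² (Chambolle's dual projection algorithm, step size τ ≤ 1/8); in TV-L1 optical flow essentially the same dual update regularizes each flow component, alternating with a pointwise data-term step.

```python
import numpy as np

def grad(u):
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]          # forward differences,
    gy[:-1, :] = u[1:, :] - u[:-1, :]          # Neumann boundary condition
    return gx, gy

def div(px, py):                               # negative adjoint of grad
    d = np.zeros_like(px)
    d[:, 0] += px[:, 0]; d[:, 1:] += px[:, 1:] - px[:, :-1]
    d[0, :] += py[0, :]; d[1:, :] += py[1:, :] - py[:-1, :]
    return d

def tv_denoise(f, lam=10.0, tau=0.125, iters=100):
    """Chambolle's dual projection algorithm for the ROF model."""
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(div(px, py) - f / lam)
        scale = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / scale           # keeps p in the dual unit ball
        py = (py + tau * gy) / scale
    return f - lam * div(px, py)
```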

  11. Recalibration of the ACC/AHA Risk Score in Two Population-Based German Cohorts.

    Science.gov (United States)

    de Las Heras Gala, Tonia; Geisel, Marie Henrike; Peters, Annette; Thorand, Barbara; Baumert, Jens; Lehmann, Nils; Jöckel, Karl-Heinz; Moebus, Susanne; Erbel, Raimund; Meisinger, Christine; Mahabadi, Amir Abbas; Koenig, Wolfgang

    2016-01-01

    The 2013 ACC/AHA guidelines introduced an algorithm for risk assessment of atherosclerotic cardiovascular disease (ASCVD) within 10 years. In Germany, risk assessment with the ESC SCORE is limited to cardiovascular mortality. Applicability of the novel ACC/AHA risk score to the German population has not yet been assessed. We therefore sought to recalibrate and evaluate the ACC/AHA risk score in two German cohorts and to compare it to the ESC SCORE. We studied 5,238 participants from the KORA surveys S3 (1994-1995) and S4 (1999-2001) and 4,208 subjects from the Heinz Nixdorf Recall (HNR) Study (2000-2003). There were 383 (7.3%) and 271 (6.4%) first non-fatal or fatal ASCVD events within 10 years in KORA and in HNR, respectively. Risk scores were evaluated in terms of calibration and discrimination performance. The original ACC/AHA risk score overestimated 10-year ASCVD rates by 37% in KORA and 66% in HNR. After recalibration, miscalibration diminished to 8% underestimation in KORA and 12% overestimation in HNR. Discrimination performance of the ACC/AHA risk score was not affected by the recalibration (KORA: C = 0.78, HNR: C = 0.74). The ESC SCORE overestimated by 5% in KORA and by 85% in HNR. The corresponding C-statistic was 0.82 in KORA and 0.76 in HNR. The recalibrated ACC/AHA risk score showed strongly improved calibration compared to the original ACC/AHA risk score. Predicting only cardiovascular mortality, discrimination performance of the commonly used ESC SCORE remained somewhat superior to the ACC/AHA risk score. Nevertheless, the recalibrated ACC/AHA risk score may provide a meaningful tool for estimating 10-year risk of fatal and non-fatal cardiovascular disease in Germany.
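
    One simple flavor of the recalibration step, sketched below: the guideline approach re-estimates baseline survival and mean risk-factor levels in the target cohort, whereas this sketch only rescales predicted risks so that the cohort's mean predicted risk equals its observed event rate, which conveys the idea.

```python
import numpy as np

def recalibrate_by_event_rate(pred_risk, observed_rate):
    """Scale predicted 10-year risks by the observed/expected ratio."""
    pred = np.asarray(pred_risk, dtype=float)
    expected_rate = pred.mean()
    return np.clip(pred * observed_rate / expected_rate, 0.0, 1.0)
```

    A monotone rescaling like this changes calibration but not ranking, which is consistent with the paper's observation that discrimination (the C-statistic) was unaffected by recalibration.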

  12. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.

  14. A Prognostic Scoring Tool for Cesarean Organ/Space Surgical Site Infections: Derivation and Internal Validation.

    Science.gov (United States)

    Assawapalanggool, Srisuda; Kasatpibal, Nongyao; Sirichotiyakul, Supatra; Arora, Rajin; Suntornlimsiri, Watcharin

    Organ/space surgical site infections (SSIs) are serious complications after cesarean delivery. However, no scoring tool to predict these complications has yet been developed. This study sought to develop and validate a prognostic scoring tool for cesarean organ/space SSIs. Data for cases and non-cases of cesarean organ/space SSI between January 1, 2007 and December 31, 2012 from a tertiary care hospital in Thailand were analyzed. Stepwise multivariable logistic regression was used to select the best predictor combination, and the predictors' coefficients were transformed into a risk scoring tool. The positive likelihood ratio for each risk category and the area under the receiver operating characteristic (AUROC) curve were analyzed on total scores. Internal validation using bootstrap re-sampling was tested for reproducibility. The predictors of 243 organ/space SSIs from 4,988 eligible cesarean delivery cases comprised the presence of foul-smelling amniotic fluid (four points), vaginal examination five or more times before incision (two points), wound class III or greater (two points), being referred from a local setting (two points), hemoglobin less than 11 g/dL (one point), and ethnic minority status (one point). The likelihood ratios of cesarean organ/space SSIs with 95% confidence intervals among the low (total score of 0-1 point), medium (total score of 2-5 points), and high risk (total score of ≥6 points) categories were 0.11 (0.07-0.19), 1.03 (0.89-1.18), and 13.25 (10.87-16.14), respectively. Both AUROCs of the derivation and validation data were comparable (87.57% versus 86.08%; p = 0.418). This scoring tool showed a high predictive ability regarding cesarean organ/space SSIs on the derivation data, and reproducibility was demonstrated on internal validation. It could assist practitioners in prioritizing patient care and management depending on risk category and in decreasing SSI rates in cesarean deliveries.
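
    The point values quoted in the abstract translate directly into a small scorer; the sketch below reproduces that arithmetic and the three risk bands (function and argument names are ours).

```python
def cesarean_ssi_score(foul_amniotic_fluid, vaginal_exams, wound_class,
                       referred, hemoglobin_g_dl, ethnic_minority):
    # Point values as quoted in the abstract
    score = 0
    score += 4 if foul_amniotic_fluid else 0
    score += 2 if vaginal_exams >= 5 else 0
    score += 2 if wound_class >= 3 else 0
    score += 2 if referred else 0
    score += 1 if hemoglobin_g_dl < 11 else 0
    score += 1 if ethnic_minority else 0
    # Risk bands: low 0-1, medium 2-5, high >= 6
    category = "low" if score <= 1 else ("medium" if score <= 5 else "high")
    return score, category

# Example: referred patient, 6 exams, Hb 10.2 g/dL -> score 5, medium risk
print(cesarean_ssi_score(False, 6, 2, True, 10.2, False))
```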

  15. The Pooling-score (P-score): inter- and intra-rater reliability in endoscopic assessment of the severity of dysphagia.

    Science.gov (United States)

    Farneti, D; Fattori, B; Nacci, A; Mancini, V; Simonelli, M; Ruoppolo, G; Genovese, E

    2014-04-01

    This study evaluated the intra- and inter-rater reliability of the Pooling score (P-score) in the clinical endoscopic evaluation of the severity of swallowing disorders, considering excess residue in the pharynx and larynx. The score (minimum 4 - maximum 11) is obtained as the sum of the scores given to the site of the bolus, the amount, and the ability to control residue/bolus pooling, the latter assessed on the basis of cough, raclage, and the number of dry voluntary or reflex swallowing acts ( 5). Four judges evaluated 30 short films of pharyngeal transit of 10 solid (1/4 of a cracker), 11 creamy (1 tablespoon of jam) and 9 liquid (1 tablespoon of 5 cc of water coloured with methylene blue, 1 ml in 100 ml) boluses in 23 subjects (10 M/13 F, age from 31 to 76 yrs, mean age 58.56±11.76 years) with different pathologies. The films were randomly distributed on two CDs, which differed in terms of the sequence of the films, and were given to the judges (after an explanatory session) at time 0, 24 hours later (time 1) and after 7 days (time 2). The inter- and intra-rater reliability of the P-score was calculated using the intra-class correlation coefficient (ICC; 3,k). The possibility that the consistency of the boluses could affect the scoring of the films was considered. The ICC for site, amount, management and the P-score total was found to be, respectively, 0.999, 0.997, 1.00 and 0.999. Clinical evaluation of a criterion of severity of a swallowing disorder remains a crucial point in the management of patients with pathologies that predispose to complications. The P-score, derived from static and dynamic parameters, yielded a very high correlation among the scores attributed by the four judges during observations carried out at different times. Bolus consistencies did not affect the outcome of the test: the analysis of variance, performed to verify whether the scores attributed by the four judges to the selected parameters might be influenced by the different consistencies of the boluses, was not significant.

  16. Multilevel processor-sharing algorithm for M/G/1 systems with priorities

    Energy Technology Data Exchange (ETDEWEB)

    Yassouridis, A.; Koller, R.

    1983-01-01

    The well-known multilevel processor-sharing algorithm for M/G/1 systems without priorities is extended to M/G/1 systems with priority classes. The average response time t_j(x) and the average waiting time w_j(x) for a j-class job, which requires a total service of x seconds, are analytically calculated. Some figures demonstrate how the priority classes and the total number of different levels affect the behaviour of the functions t_j(x) and w_j(x). In addition, the foreground-background algorithm with priorities, which is not yet covered in the literature, is treated as a special case of the multilevel processor-sharing algorithm. 8 references.

  17. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Directory of Open Access Journals (Sweden)

    Dazhi Jiang

    2015-01-01

    Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is “are there any algorithms that can design evolutionary algorithms automatically?” A more complete formulation of the question is “can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?” In this paper, a novel evolutionary algorithm based on automatic designing of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space like most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that algorithms designed automatically by computers can compete with algorithms designed by human beings.

  18. Algorithms evaluation for fundus images enhancement

    International Nuclear Information System (INIS)

    Braem, V; Marcos, M; Bizai, G; Drozdowicz, B; Salvatelli, A

    2011-01-01

    Color images of the retina inherently involve noise and illumination artifacts. In order to improve the diagnostic quality of the images, it is desirable to homogenize the non-uniform illumination and increase contrast while preserving color characteristics. The visual results of different pre-processing techniques can be very dissimilar, and it is necessary to make an objective assessment of the techniques in order to select the most suitable one. In this article the performance of eight algorithms for non-uniform illumination correction, contrast modification and color preservation was evaluated. In order to choose the most suitable one, a general score was proposed. The results made a good impression on experts, although some differences suggest that the image with the best statistical quality is not necessarily the one with the best diagnostic quality to the trained doctor's eye. This means that the best pre-processing algorithm for automatic classification may be different from the most suitable one for visual diagnosis. However, both should result in the same final diagnosis.

  19. D-score: a search engine independent MD-score.

    Science.gov (United States)

    Vaudel, Marc; Breiter, Daniela; Beck, Florian; Rahnenführer, Jörg; Martens, Lennart; Zahedi, René P

    2013-03-01

    While peptides carrying PTMs are routinely identified in gel-free MS, the localization of the PTMs onto the peptide sequences remains challenging. Search engine scores of secondary peptide matches have been used in different approaches in order to infer the quality of site inference, by penalizing the localization whenever the search engine similarly scored two candidate peptides with different site assignments. In the present work, we show how the estimation of posterior error probabilities for peptide candidates allows the estimation of a PTM score called the D-score, for multiple search engine studies. We demonstrate the applicability of this score to three popular search engines: Mascot, OMSSA, and X!Tandem, and evaluate its performance using an already published high resolution data set of synthetic phosphopeptides. For those peptides with phosphorylation site inference uncertainty, the number of spectrum matches with correctly localized phosphorylation increased by up to 25.7% when compared to using Mascot alone, although the actual increase depended on the fragmentation method used. Since this method relies only on search engine scores, it can be readily applied to the scoring of the localization of virtually any modification at no additional experimental or in silico cost. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
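
    The abstract does not give the D-score formula; as a rough illustration of the underlying idea — penalizing site assignments whose best and second-best candidates receive similar posterior error probabilities (PEPs) — one might compare the two candidates' confidences as below. This is purely a sketch of the concept, not the published definition.

```python
def localization_confidence(pep_best, pep_second_best):
    # Confidence in the top-ranked site assignment falls as the
    # runner-up's posterior error probability approaches the winner's.
    return (1.0 - pep_best) - (1.0 - pep_second_best)

print(localization_confidence(0.01, 0.90))  # well-separated sites -> 0.89
print(localization_confidence(0.10, 0.12))  # ambiguous site       -> 0.02
```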

  20. ContextD: an algorithm to identify contextual properties of medical terms in a Dutch clinical corpus.

    Science.gov (United States)

    Afzal, Zubair; Pons, Ewoud; Kang, Ning; Sturkenboom, Miriam C J M; Schuemie, Martijn J; Kors, Jan A

    2014-11-29

    In order to extract meaningful information from electronic medical records, such as signs and symptoms, diagnoses, and treatments, it is important to take into account the contextual properties of the identified information: negation, temporality, and experiencer. Most work on automatic identification of these contextual properties has been done on English clinical text. This study presents ContextD, an adaptation of the English ConText algorithm to the Dutch language, and a Dutch clinical corpus. We created a Dutch clinical corpus containing four types of anonymized clinical documents: entries from general practitioners, specialists' letters, radiology reports, and discharge letters. Using a Dutch list of medical terms extracted from the Unified Medical Language System, we identified medical terms in the corpus with exact matching. The identified terms were annotated for negation, temporality, and experiencer properties. To adapt the ConText algorithm, we translated English trigger terms to Dutch and added several general and document-specific enhancements, such as negation rules for general practitioners' entries and a regular-expression-based temporality module. The ContextD algorithm utilized 41 unique triggers to identify the contextual properties in the clinical corpus. For the negation property, the algorithm obtained an F-score from 87% to 93% for the different document types. For the experiencer property, the F-score was 99% to 100%. For the historical and hypothetical values of the temporality property, F-scores ranged from 26% to 54% and from 13% to 44%, respectively. ContextD showed good performance in identifying negation and experiencer property values across all Dutch clinical document types. Accurate identification of the temporality property proved to be difficult and requires further work. The anonymized and annotated Dutch clinical corpus can serve as a useful resource for further algorithm development.
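
    A minimal sketch of the trigger-and-scope mechanism that ConText-style algorithms use is shown below; the Dutch trigger list and the fixed token window are illustrative assumptions, not the 41 triggers or scope rules of ContextD.

```python
NEGATION_TRIGGERS = {"geen", "niet", "zonder"}   # illustrative Dutch triggers
SCOPE = 5  # tokens before a mention considered in scope (assumed window)

def negated_concepts(tokens, concept_indices):
    # Mark a concept mention as negated if a trigger term occurs
    # within SCOPE tokens before the mention, ConText-style.
    flags = {}
    for i in concept_indices:
        window = tokens[max(0, i - SCOPE):i]
        flags[i] = any(t.lower() in NEGATION_TRIGGERS for t in window)
    return flags

tokens = "patient heeft geen koorts".split()   # "patient has no fever"
print(negated_concepts(tokens, concept_indices=[3]))  # {3: True}
```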

  1. Sequential Organ Failure Assessment Score for Evaluating Organ Failure and Outcome of Severe Maternal Morbidity in Obstetric Intensive Care

    Directory of Open Access Journals (Sweden)

    Antonio Oliveira-Neto

    2012-01-01

    Full Text Available Objective. To evaluate the performance of the Sequential Organ Failure Assessment (SOFA) score in cases of severe maternal morbidity (SMM). Design. Retrospective study of diagnostic validation. Setting. An obstetric intensive care unit (ICU) in Brazil. Population. 673 women with SMM. Main Outcome Measures. Mortality and SOFA score. Methods. Organ failure was evaluated according to the maximum score for each one of its six components. The total maximum SOFA score was calculated using the poorest result of each component, reflecting the maximum degree of alteration in systemic organ function. Results. A higher total maximum SOFA score was associated with mortality: 12.06 ± 5.47 for women who died versus 1.87 ± 2.56 for survivors. There was also a significant correlation between the number of failing organs and maternal mortality, ranging from 0.2% (no failure) to 85.7% (≥3 organs). Analysis of the area under the receiver operating characteristic (ROC) curve (AUC) confirmed the excellent performance of the total maximum SOFA score for cases of SMM (AUC = 0.958). Conclusions. The total maximum SOFA score proved to be an effective tool for evaluating severity and estimating prognosis in cases of SMM. The maximum SOFA score may be used to conceptually define and stratify the degree of severity in cases of SMM.
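
    The total maximum SOFA computation described above — the worst value of each of the six components over the stay, then summed — is simple to express; a sketch with made-up day scores:

```python
def total_maximum_sofa(daily_component_scores):
    # daily_component_scores: one dict per ICU day mapping the six SOFA
    # components (each scored 0-4) to that day's value. Take the worst
    # value of each component over the stay, then sum.
    components = daily_component_scores[0].keys()
    return sum(max(day[c] for day in daily_component_scores) for c in components)

days = [
    {"resp": 2, "coag": 1, "liver": 0, "cardio": 3, "cns": 0, "renal": 1},
    {"resp": 3, "coag": 1, "liver": 1, "cardio": 2, "cns": 1, "renal": 2},
]
print(total_maximum_sofa(days))  # 11: worst of each component, summed
```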

  2. Research on Multirobot Pursuit Task Allocation Algorithm Based on Emotional Cooperation Factor

    Directory of Open Access Journals (Sweden)

    Baofu Fang

    2014-01-01

    Full Text Available Multirobot task allocation is a hot issue in the field of robot research. A new emotional model is used with the self-interested robot, which gives a new way to measure a self-interested robot's individual cooperative willingness in the problem of multirobot task allocation. An emotional cooperation factor is introduced into the self-interested robot and is updated based on emotional attenuation and external stimuli. A multirobot pursuit task allocation algorithm based on the emotional cooperation factor is then proposed. Combined with a two-step auction algorithm, it recruits team leaders and team collaborators, sets up pursuit teams, and finally uses certain strategies to complete the pursuit task. In order to verify the effectiveness of this algorithm, comparative experiments have been done with the instantaneous greedy optimal auction algorithm; the results of the experiments show that the total pursuit time and total team revenue can be optimized by using this algorithm.
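
    The abstract specifies only that the cooperation factor decays over time (emotional attenuation) and rises with external stimuli; a toy update rule under those assumptions might look like the following — the exponential decay form, the bounds, and all names are ours, not the paper's.

```python
import math

class EmotionalRobot:
    # Toy model: cooperation factor decays over time and jumps on stimuli.
    def __init__(self, coop=0.5, decay=0.1):
        self.coop, self.decay = coop, decay

    def step(self, dt=1.0, stimulus=0.0):
        self.coop *= math.exp(-self.decay * dt)        # emotional attenuation
        self.coop = min(1.0, self.coop + stimulus)     # external stimulus
        return self.coop

robot = EmotionalRobot()
print(robot.step(dt=2.0))            # decays toward indifference
print(robot.step(stimulus=0.4))      # a teammate's call raises willingness
```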

  3. Non-alcoholic fatty liver disease - histological scoring systems: a large cohort single-center, evaluation study.

    Science.gov (United States)

    Rastogi, Archana; Shasthry, Saggere Muralikrishna; Agarwal, Ayushi; Bihari, Chhagan; Jain, Priyanka; Jindal, Ankur; Sarin, Shiv

    2017-11-01

    Non-alcoholic fatty liver disease (NAFLD) is an increasingly common cause of chronic liver disease. To date, liver biopsy remains the gold standard for identification and quantification of the wide histological spectrum of NAFLD. Histological scoring systems are very useful and widely applied for diagnosis and management in clinical trials and follow-up studies of non-alcoholic steatohepatitis (NASH). However, in view of the scarce published literature, there is a need to evaluate them in large cohorts of NAFLD. This study aimed to evaluate two histological scoring systems (NAS-CRN, SAF) in the diagnosis of NAFLD and to assess the role of histological characteristics as injury markers in NAFLD. A retrospective histological study of liver biopsies from 1000 patients diagnosed with NAFLD between 2010 and 2016 was conducted. Histopathologic evaluation and semiquantitative scoring based on the NAS-CRN and SAF algorithms, and their correlation with serum aminotransferases and fibrosis, were performed. Liver biopsies were classified according to the NAS-CRN scoring as NAS <3 (not NASH) in 72 (7.2%), NAS 3-4 (borderline NASH) in 310 (31%), and NAS ≥5 (definite NASH) in 618 (61.8%), while SAF classified 117 (11.7%) as not NASH and 883 (88.3%) as definite NASH. There was excellent concordance for definite NASH and not NASH; however, 88.06% of borderline NASH was classified as NASH by SAF. 76.39% by NAS and 78.63% by the SAF algorithm who were diagnosed as not NASH showed the presence of fibrosis; however, higher stages of fibrosis were significantly more prevalent in definite NASH, excluding burnt-out cirrhosis. Serum ALT was significantly associated with increasing stages of fibrosis (p < 0.001) and with the three categories (not NASH, borderline NASH, and definite NASH) when classified as with/without fibrosis (p < 0.001). Steatosis of higher grades, more ballooned cells, and more foci of lobular inflammation were found in a significantly higher proportion of patients with NASH (p < 0

  4. Shower reconstruction in TUNKA-HiSCORE

    Energy Technology Data Exchange (ETDEWEB)

    Porelli, Andrea; Wischnewski, Ralf [DESY-Zeuthen, Platanenallee 6, 15738 Zeuthen (Germany)

    2015-07-01

    The Tunka-HiSCORE detector is a non-imaging wide-angle EAS Cherenkov array designed as an alternative technology for gamma-ray physics above 10 TeV and for studying the spectrum and composition of cosmic rays above 100 TeV. An engineering array with nine stations (HiS-9) was deployed in October 2013 on the site of the Tunka experiment in Russia. In November 2014, 20 more HiSCORE stations were installed, covering a total array area of 0.24 km². We describe the detector setup and the role of precision time measurement, and give results from the innovative WhiteRabbit time synchronization technology. Results of air shower reconstruction are presented and compared with MC simulations, for both the HiS-9 and the HiS-29 detector arrays.

  6. Supervised Machine Learning Algorithms Can Classify Open-Text Feedback of Doctor Performance With Human-Level Accuracy.

    Science.gov (United States)

    Gibbons, Chris; Richards, Suzanne; Valderas, Jose Maria; Campbell, John

    2017-03-15

    Machine learning techniques may be an effective and efficient way to classify open-text reports on doctor's activity for the purposes of quality assurance, safety, and continuing professional development. The objective of the study was to evaluate the accuracy of machine learning algorithms trained to classify open-text reports of doctor performance and to assess the potential for classifications to identify significant differences in doctors' professional performance in the United Kingdom. We used 1636 open-text comments (34,283 words) relating to the performance of 548 doctors collected from a survey of clinicians' colleagues using the General Medical Council Colleague Questionnaire (GMC-CQ). We coded 77.75% (1272/1636) of the comments into 5 global themes (innovation, interpersonal skills, popularity, professionalism, and respect) using a qualitative framework. We trained 8 machine learning algorithms to classify comments and assessed their performance using several training samples. We evaluated doctor performance using the GMC-CQ and compared scores between doctors with different classifications using t tests. Individual algorithm performance was high (range F score=.68 to .83). Interrater agreement between the algorithms and the human coder was highest for the "popular" (recall=.97), "innovator" (recall=.98), and "respected" (recall=.87) codes and was lower for the "interpersonal" (recall=.80) and "professional" (recall=.82) codes. A 10-fold cross-validation demonstrated similar performance in each analysis. When combined together into an ensemble of multiple algorithms, mean human-computer interrater agreement was .88. Comments that were classified as "respected," "professional," and "interpersonal" related to higher doctor scores on the GMC-CQ compared with comments that were not classified (P<.05). Machine learning algorithms can classify open-text feedback of doctor performance into multiple themes derived by human raters with high accuracy.

  7. Validation of the 12-gene colon cancer recurrence score as a predictor of recurrence risk in stage II and III rectal cancer patients.

    Science.gov (United States)

    Reimers, Marlies S; Kuppen, Peter J K; Lee, Mark; Lopatin, Margarita; Tezcan, Haluk; Putter, Hein; Clark-Langone, Kim; Liefers, Gerrit Jan; Shak, Steve; van de Velde, Cornelis J H

    2014-11-01

    The 12-gene Recurrence Score assay is a validated predictor of recurrence risk in stage II and III colon cancer patients. We conducted a prospectively designed study to validate this assay for prediction of recurrence risk in stage II and III rectal cancer patients from the Dutch Total Mesorectal Excision (TME) trial. RNA was extracted from fixed paraffin-embedded primary rectal tumor tissue from stage II and III patients randomized to TME surgery alone, without (neo)adjuvant treatment. Recurrence Score was assessed by quantitative real time-polymerase chain reaction using previously validated colon cancer genes and algorithm. Data were analysed by Cox proportional hazards regression, adjusting for stage and resection margin status. All statistical tests were two-sided. Recurrence Score predicted risk of recurrence (hazard ratio [HR] = 1.57, 95% confidence interval [CI] = 1.11 to 2.21, P = .01), risk of distant recurrence (HR = 1.50, 95% CI = 1.04 to 2.17, P = .03), and rectal cancer-specific survival (HR = 1.64, 95% CI = 1.15 to 2.34, P = .007). The effect of Recurrence Score was most prominent in stage II patients and attenuated with more advanced stage (P(interaction) ≤ .007 for each endpoint). In stage II, five-year cumulative incidence of recurrence ranged from 11.1% in the predefined low Recurrence Score group (48.5% of patients) to 43.3% in the high Recurrence Score group (23.1% of patients). The 12-gene Recurrence Score is a predictor of recurrence risk and cancer-specific survival in rectal cancer patients treated with surgery alone, suggesting a similar underlying biology in colon and rectal cancers. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Isothermal Gravitational Segregation: Algorithms and Specifications

    DEFF Research Database (Denmark)

    Halldórsson, Snorri; Stenby, Erling Halfdan

    2000-01-01

    New algorithms for calculating the isothermal equilibrium state of reservoir fluids under the influence of gravity are presented. Two types of specifications are considered: the specification of pressure and composition at a reference depth; and the specification of the total overall content of t...

  9. Algorithm for automatic analysis of electro-oculographic data.

    Science.gov (United States)

    Pettersson, Kati; Jagadeesan, Sharman; Lukander, Kristian; Henelius, Andreas; Haeggström, Edward; Müller, Kiti

    2013-10-25

    Large amounts of electro-oculographic (EOG) data, recorded during electroencephalographic (EEG) measurements, go underutilized. We present an automatic, auto-calibrating algorithm that allows efficient analysis of such data sets. The auto-calibration is based on automatic threshold value estimation. Amplitude threshold values for saccades and blinks are determined based on features in the recorded signal. The performance of the developed algorithm was tested by analyzing 4854 saccades and 213 blinks recorded in two different conditions: a task where the eye movements were controlled (saccade task) and a task with free viewing (multitask). The results were compared with results from a video-oculography (VOG) device and manually scored blinks. The algorithm achieved 93% detection sensitivity for blinks with a 4% false positive rate. The detection sensitivity for horizontal saccades was between 98% and 100%, and for oblique saccades between 95% and 100%. The classification sensitivity for horizontal and large oblique saccades (10 deg) was larger than 89%, and for vertical saccades larger than 82%. The duration and peak velocities of the detected horizontal saccades were similar to those in the literature. In the multitask measurement the detection sensitivity for saccades was 97% with a 6% false positive rate. The developed algorithm enables reliable analysis of EOG data recorded both during EEG and as a stand-alone measurement.
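
    One way to read "automatic threshold value estimation" is a robust, data-driven velocity threshold. The sketch below uses a median-absolute-deviation rule, which is our assumption — the paper derives its thresholds from features of the recorded signal that are not spelled out in the abstract.

```python
import numpy as np

def detect_eye_events(eog, fs, k=6.0):
    # Velocity signal from the EOG trace
    vel = np.gradient(eog) * fs
    # Auto-calibrated threshold: k robust standard deviations (MAD-based)
    mad = np.median(np.abs(vel - np.median(vel)))
    thresh = k * 1.4826 * mad
    # Indices of samples whose velocity exceeds the threshold
    return np.flatnonzero(np.abs(vel) > thresh), thresh
```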

  10. Economic Load Dispatch of Thermal Generating Units Considering the Addition of Wind Power Generation Using the Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Ridho Syahrial Ibrahim

    2017-03-01

    Full Text Available The rise of global warming concerns and the limited availability of natural resources have led to the construction of many power plants based on renewable energy, one of which is wind power. In this paper, the firefly algorithm is applied to optimize the total generation cost of two test systems, without and with the addition of wind power. The simulation results show that adding wind power generation to an electric power system does not always make the total generation cost cheaper. The simulation results also show that the firefly algorithm as an optimization method can solve the economic load dispatch (ELD) problem better than other previously applied methods, namely particle swarm optimization (PSO), the bat algorithm (BA), biogeography-based optimization (BBO) and the plant growth simulation algorithm (PGSA), with the percentage difference in total cost savings ranging from 0.32% ($50) to 9.27% ($11,884).
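
    A compact firefly-algorithm sketch for an ELD-style problem follows; the unit cost coefficients, generation limits, and the penalty-based handling of the demand constraint are illustrative assumptions, not the paper's test systems.

```python
import numpy as np

def firefly_eld(cost, lo, hi, n=25, iters=200, beta0=1.0, gamma=1e-4, alpha=0.05):
    rng = np.random.default_rng(0)
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n, dim))
    f = np.array([cost(p) for p in x])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:  # move firefly i toward the brighter (cheaper) j
                    r2 = float(np.sum((x[i] - x[j]) ** 2))
                    x[i] = np.clip(
                        x[i] + beta0 * np.exp(-gamma * r2) * (x[j] - x[i])
                        + alpha * (rng.random(dim) - 0.5) * (hi - lo),
                        lo, hi)
                    f[i] = cost(x[i])
    best = int(f.argmin())
    return x[best], f[best]

# Illustrative 3-unit system: quadratic fuel costs, demand met via a penalty.
a = np.array([0.008, 0.009, 0.007]); b = np.array([7.0, 6.3, 6.8]); c = np.array([200.0, 180.0, 140.0])
lo, hi, demand = np.array([10.0, 10.0, 10.0]), np.array([125.0, 150.0, 150.0]), 283.4
fuel = lambda p: float(np.sum(a * p**2 + b * p + c) + 1000.0 * abs(p.sum() - demand))
print(firefly_eld(fuel, lo, hi))
```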

  11. Ganga hospital open injury score in management of open injuries.

    Science.gov (United States)

    Rajasekaran, S; Sabapathy, S R; Dheenadhayalan, J; Sundararajan, S R; Venkatramani, H; Devendra, A; Ramesh, P; Srikanth, K P

    2015-02-01

    Open injuries of the limbs offer challenges in management, as there are still many grey zones in decision making regarding salvage, timing and type of reconstruction. As a result, there is still an unacceptable rate of secondary amputations, which leads to a tremendous waste of resources and psychological devastation of the patient and his family. Gustilo and Anderson's classification was a major milestone in grading the severity of injury but suffers from the disadvantages of imprecise definition, poor interobserver correlation, inability to address the issue of salvage, and inclusion of a wide spectrum of injuries in the Type IIIb category. Numerous scores such as the Mangled Extremity Severity Score, the Predictive Salvage Index, the Limb Salvage Index, the Hannover Fracture Scale-97, etc. have been proposed, but all have the disadvantages of retrospective evaluation, inadequate sample sizes, and poor sensitivity and specificity to amputation, especially in IIIb injuries. The Ganga Hospital Open Injury Score (GHOIS) was proposed in 2004 and is designed to specifically address the outcome in IIIb injuries of the tibia without vascular deficit. It evaluates the severity of injury to the three components of the limb--the skin, the bone and the musculotendinous structures--separately on a grade from 0 to 5. Seven comorbid factors which influence the treatment and the outcome are included in the score with two marks each. The application of the total score and the individual tissue scores in the management of IIIb injuries is discussed. The total score was shown to predict salvage when the value was 14 or less, and amputation when the score was 17 or more. A grey zone of 15 and 16 is provided, where the decision has to be made on a case-by-case basis. The additional value of GHOIS was its ability to guide the timing and type of reconstruction. A skin score of more than 3 always required a flap, and hence indicated the need for an orthoplastic approach from the index procedure. Bone
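
    The thresholds quoted above can be wrapped into a small decision helper; the names are ours, with tissue scores of 0-5 each and two marks per comorbid factor as described.

```python
def ghois_decision(skin, bone, musculotendinous, comorbid_count):
    # Total GHOIS: three tissue scores (0-5 each) plus two marks per
    # comorbid factor (seven factors are listed in the paper).
    total = skin + bone + musculotendinous + 2 * comorbid_count
    if total <= 14:
        advice = "salvage"
    elif total >= 17:
        advice = "amputation"
    else:
        advice = "grey zone (15-16): decide case by case"
    flap_needed = skin > 3   # a skin score above 3 always required a flap
    return total, advice, flap_needed

print(ghois_decision(skin=4, bone=3, musculotendinous=2, comorbid_count=2))
# (13, 'salvage', True) -> salvageable, but plan an orthoplastic approach
```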

  12. SHARPEN-Systematic Hierarchical Algorithms for Rotamers and Proteins on an Extended Network

    KAUST Repository

    Loksha, Ilya V.

    2009-04-30

    Algorithms for discrete optimization of proteins play a central role in recent advances in protein structure prediction and design. We wish to improve the resources available for computational biologists to rapidly prototype such algorithms and to easily scale these algorithms to many processors. To that end, we describe the implementation and use of two new open source resources, citing potential benefits over existing software. We discuss CHOMP, a new object-oriented library for macromolecular optimization, and SHARPEN, a framework for scaling CHOMP scripts to many computers. These tools allow users to develop new algorithms for a variety of applications including protein repacking, protein-protein docking, loop rebuilding, or homology model remediation. Particular care was taken to allow modular energy function design; protein conformations may currently be scored using either the OPLSaa molecular mechanical energy function or an all-atom semiempirical energy function employed by Rosetta. © 2009 Wiley Periodicals, Inc.

  13. Interpreting Quality of Life after Brain Injury Scores: Cross-Walk with the Short Form-36.

    Science.gov (United States)

    Wilson, Lindsay; Marsden-Loftus, Isaac; Koskinen, Sanna; Bakx, Wilbert; Bullinger, Monika; Formisano, Rita; Maas, Andrew; Neugebauer, Edmund; Powell, Jane; Sarajuuri, Jaana; Sasse, Nadine; von Steinbuechel, Nicole; von Wild, Klaus; Truelle, Jean-Luc

    2017-01-01

    The Quality of Life after Brain Injury (QOLIBRI) instruments are traumatic brain injury (TBI)-specific assessments of health-related quality of life (HRQoL), with established validity and reliability. The purpose of the study is to help improve the interpretability of the two QOLIBRI summary scores (the QOLIBRI Total score and the QOLIBRI Overall Scale [OS] score). An analysis was conducted of 761 patients with TBI who took part in the QOLIBRI validation studies. A cross-walk between QOLIBRI scores and the SF-36 Mental Component Summary norm-based scoring system was performed using geometric mean regression analysis. The exercise supports a previous suggestion concerning the interpretation of QOLIBRI Total scores. Scores on the Glasgow Outcome Scale Extended (GOSE), as a measure of global function, are presented in the form of means and standard deviations that allow comparison with other studies, and data on age and sex are presented for the QOLIBRI-OS. While bearing in mind the potential imprecision of the comparison, the findings provide a framework for evaluating QOLIBRI summary scores in relation to generic HRQoL that improves their interpretability.

  14. Extension of the lod score: the mod score.

    Science.gov (United States)

    Clerget-Darpoux, F

    2001-01-01

    In 1955 Morton proposed the lod score method both for testing linkage between loci and for estimating the recombination fraction between them. If a disease is controlled by a gene at one of these loci, the lod score computation requires the prior specification of an underlying model that assigns the probabilities of genotypes from the observed phenotypes. To address the case of linkage studies for diseases with unknown mode of inheritance, we suggested (Clerget-Darpoux et al., 1986) extending the lod score function to a so-called mod score function. In this function, the variables are both the recombination fraction and the disease model parameters. Maximizing the mod score function over all these parameters amounts to maximizing the probability of marker data conditional on the disease status. Under the absence of linkage, the mod score conforms to a chi-square distribution, with extra degrees of freedom in comparison to the lod score function (MacLean et al., 1993). The mod score is asymptotically maximum for the true disease model (Clerget-Darpoux and Bonaïti-Pellié, 1992; Hodge and Elston, 1994). Consequently, the power to detect linkage through mod score will be highest when the space of models where the maximization is performed includes the true model. On the other hand, one must avoid overparametrization of the model space. For example, when the approach is applied to affected sibpairs, only two constrained disease model parameters should be used (Knapp et al., 1994) for the mod score maximization. It is also important to emphasize the existence of a strong correlation between the disease gene location and the disease model. Consequently, there is poor resolution of the location of the susceptibility locus when the disease model at this locus is unknown. Of course, this is true regardless of the statistics used. The mod score may also be applied in a candidate gene strategy to model the potential effect of this gene in the disease. Since, however, it

  15. Despeckling PolSAR Images Based on Relative Total Variation Model

    Science.gov (United States)

    Jiang, C.; He, X. F.; Yang, L. J.; Jiang, J.; Wang, D. Y.; Yuan, Y.

    2018-04-01

    The relative total variation (RTV) algorithm, which can effectively separate structure and texture in an image, is employed to extract the main structures of the image. However, applying RTV directly to polarimetric SAR (PolSAR) image filtering will not preserve polarimetric information. A new RTV approach based on the complex Wishart distribution is proposed that accounts for the polarimetric properties of PolSAR data. The proposed polarization RTV (PolRTV) algorithm can be used for PolSAR image filtering. The L-band Airborne SAR (AIRSAR) San Francisco data are used to demonstrate the effectiveness of the proposed algorithm in speckle suppression, structural information preservation, and polarimetric property preservation.

  16. Image noise reduction algorithm for digital subtraction angiography: clinical results.

    Science.gov (United States)

    Söderman, Michael; Holmin, Staffan; Andersson, Tommy; Palmgren, Charlotta; Babic, Draženko; Hoornaert, Bart

    2013-11-01

    To test the hypothesis that an image noise reduction algorithm designed for digital subtraction angiography (DSA) in interventional neuroradiology enables a reduction in the patient entrance dose by a factor of 4 while maintaining image quality. This clinical prospective study was approved by the local ethics committee, and all 20 adult patients provided informed consent. DSA was performed with the default reference DSA program, a quarter-dose DSA program with modified acquisition parameters (to reduce patient radiation dose exposure), and a real-time noise-reduction algorithm. Two consecutive biplane DSA data sets were acquired in each patient. The dose-area product (DAP) was calculated for each image and compared. A randomized, blinded, offline reading study was conducted to show noninferiority of the quarter-dose image sets. Overall, 40 samples per treatment group were necessary to acquire 80% power, which was calculated by using a one-sided α level of 2.5%. The mean DAP with the quarter-dose program was 25.3% ± 0.8 of that with the reference program. The median overall image quality scores with the reference program were 9, 13, and 12 for readers 1, 2, and 3, respectively. These scores increased slightly to 12, 15, and 12, respectively, with the quarter-dose program imaging chain. In DSA, a change in technique factors combined with a real-time noise-reduction algorithm will reduce the patient entrance dose by 75%, without a loss of image quality. RSNA, 2013

  17. Distinguishing benign from malignant pelvic mass utilizing an algorithm with HE4, menopausal status, and ultrasound findings

    Science.gov (United States)

    Chan, Karen KL; Chen, Chi-An; Nam, Joo-Hyun; Ochiai, Kazunori; Aw, Tar-Choon; Sabaratnam, Subathra; Hebbar, Sudarshan; Sickan, Jaganathan; Schodin, Beth A; Charakorn, Chuenkamon; Sumpaico, Walfrido W

    2015-01-01

    Objective The purpose of this study was to develop a risk prediction score for distinguishing benign ovarian mass from malignant tumors using CA-125, human epididymis protein 4 (HE4), ultrasound findings, and menopausal status. The risk prediction score was compared to the risk of malignancy index and risk of ovarian malignancy algorithm (ROMA). Methods This was a prospective, multicenter (n=6) study with patients from six Asian countries. Patients had a pelvic mass upon imaging and were scheduled to undergo surgery. Serum CA-125 and HE4 were measured on preoperative samples, and ultrasound findings were recorded. Regression analysis was performed and a risk prediction model was developed based on the significant factors. A bootstrap technique was applied to assess the validity of the HE4 model. Results A total of 414 women with a pelvic mass were enrolled in the study, of which 328 had documented ultrasound findings. The risk prediction model that contained HE4, menopausal status, and ultrasound findings exhibited the best performance compared to models with CA-125 alone, or a combination of CA-125 and HE4. This model classified 77.2% of women with ovarian cancer as medium or high risk, and 86% of women with benign disease as very-low, low, or medium-low risk. This model exhibited better sensitivity than ROMA, but ROMA exhibited better specificity. Both models performed better than CA-125 alone. Conclusion Combining ultrasound with HE4 can improve the sensitivity for detecting ovarian cancer compared to other algorithms. PMID:25310857

  18. Personalized Risk Scoring for Critical Care Prognosis Using Mixtures of Gaussian Processes.

    Science.gov (United States)

    Alaa, Ahmed M; Yoon, Jinsung; Hu, Scott; van der Schaar, Mihaela

    2018-01-01

    In this paper, we develop a personalized real-time risk scoring algorithm that provides timely and granular assessments for the clinical acuity of ward patients based on their (temporal) lab tests and vital signs; the proposed risk scoring system ensures timely intensive care unit admissions for clinically deteriorating patients. The risk scoring system is based on the idea of sequential hypothesis testing under an uncertain time horizon. The system learns a set of latent patient subtypes from the offline electronic health record data, and trains a mixture of Gaussian Process experts, where each expert models the physiological data streams associated with a specific patient subtype. Transfer learning techniques are used to learn the relationship between a patient's latent subtype and her static admission information (e.g., age, gender, transfer status, ICD-9 codes, etc.). Experiments conducted on data from a heterogeneous cohort of 6321 patients admitted to Ronald Reagan UCLA Medical Center show that our score significantly outperforms the currently deployed risk scores, such as the Rothman index, MEWS, APACHE, and SOFA scores, in terms of timeliness, true positive rate, and positive predictive value. Our results reflect the importance of adopting the concepts of personalized medicine in critical care settings; significant accuracy and timeliness gains can be achieved by accounting for the patients' heterogeneity. The proposed risk scoring methodology can confer huge clinical and social benefits on a massive number of critically ill inpatients who exhibit adverse outcomes including, but not limited to, cardiac arrests, respiratory arrests, and septic shocks.

  19. Multi-AGV path planning with double-path constraints by using an improved genetic algorithm.

    Directory of Open Access Journals (Sweden)

    Zengliang Han

    Full Text Available This paper investigates an improved genetic algorithm for multiple automated guided vehicle (multi-AGV) path planning. The innovations lie in two aspects. First, three-exchange crossover heuristic operators are used to produce better offspring, exploiting more information than the traditional two-exchange crossover heuristic operators in the improved genetic algorithm. Second, double-path constraints of both minimizing the total path distance of all AGVs and minimizing the single path distance of each AGV are exerted, yielding the optimal shortest total path distance. The simulation results show that the total path distance of all AGVs and the longest single AGV path distance are shortened by using the improved genetic algorithm.
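
    The double-path objective — the total distance of all AGVs together with the longest single-AGV distance — can be expressed as one fitness function for the GA; the weighted-sum form and weights below are our assumptions, since the paper treats the two goals as constraints.

```python
def double_path_fitness(path_lengths, w_total=1.0, w_max=1.0):
    # Lower is better: penalize both the fleet's total distance and the
    # single worst AGV route (the double-path constraints as one scalar).
    return w_total * sum(path_lengths) + w_max * max(path_lengths)

print(double_path_fitness([12.0, 9.5, 14.2]))  # 49.9 for three AGV routes
```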

  20. A Hybrid Genetic Algorithm Approach for Optimal Power Flow

    Directory of Open Access Journals (Sweden)

    Sydulu Maheswarapu

    2011-08-01

    Full Text Available This paper puts forward a reformed hybrid genetic algorithm (GA) based approach to the optimal power flow. In the approach followed here, continuous variables are designed using real-coded GA and discrete variables are processed as binary strings. The outcomes are compared with many other methods like the simple genetic algorithm (GA), adaptive genetic algorithm (AGA), differential evolution (DE), particle swarm optimization (PSO) and music based harmony search (MBHS) on an IEEE 30-bus test bed, with a total load of 283.4 MW. The proposed algorithm is found to offer the lowest fuel cost, and to be computationally faster, robust, superior and promising in its convergence characteristics.

  1. An Algorithmic Approach to Total Breast Reconstruction with Free Tissue Transfer

    Directory of Open Access Journals (Sweden)

    Seong Cheol Yu

    2013-05-01

    Full Text Available As microvascular techniques continue to improve, perforator flap free tissue transfer is now the gold standard for autologous breast reconstruction. Various options are available for breast reconstruction with autologous tissue. These include the free transverse rectus abdominis myocutaneous (TRAM) flap, deep inferior epigastric perforator flap, superficial inferior epigastric artery flap, superior gluteal artery perforator flap, and transverse/vertical upper gracilis flap. In addition, pedicled flaps can be very successful in the right hands and the right patient, such as the pedicled TRAM flap, latissimus dorsi flap, and thoracodorsal artery perforator flap. Each flap comes with its own advantages and disadvantages related to tissue properties and donor-site morbidity. Currently, the problem is how to determine the most appropriate flap for a particular patient among those potential candidates. Based on a thorough review of the literature and accumulated experience in the author’s institution, this article provides a logical approach to autologous breast reconstruction. The algorithms presented here can be helpful in customizing breast reconstruction to individual patient needs.

  2. Total least squares for anomalous change detection

    Science.gov (United States)

    Theiler, James; Matsekh, Anna M.

    2010-04-01

    A family of subtraction-based anomalous change detection algorithms is derived from a total least squares (TLSQ) framework. This provides an alternative to the well-known chronochrome algorithm, which is derived from ordinary least squares. In both cases, the most anomalous changes are identified with the pixels that exhibit the largest residuals with respect to the regression of the two images against each other. The family of TLSQ-based anomalous change detectors is shown to be equivalent to the subspace RX formulation for straight anomaly detection, but applied to the stacked space. However, this family is not invariant to linear coordinate transforms. On the other hand, whitened TLSQ is coordinate invariant, and special cases of it are equivalent to canonical correlation analysis and optimized covariance equalization. What whitened TLSQ offers is a generalization of these algorithms with the potential for better performance.
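
    A sketch of the stacked-space idea: treat each pixel's spectra from both images as one vector, and score anomalousness by the energy in the lowest-variance directions of the stacked covariance (the TLS residual directions). The specific whitening used here is one plausible choice, not necessarily the paper's preferred variant.

```python
import numpy as np

def tlsq_anomalousness(X, Y, k=1):
    # X, Y: (n_pixels, n_bands) co-registered images, one row per pixel.
    Z = np.hstack([X - X.mean(0), Y - Y.mean(0)])   # stacked space
    C = np.cov(Z, rowvar=False)
    w, V = np.linalg.eigh(C)             # eigenvalues in ascending order
    w = np.maximum(w, 1e-12)             # guard against degenerate directions
    U = V[:, :k]                         # smallest-variance (TLS residual) axes
    resid = Z @ U
    return np.sum(resid**2 / w[:k], axis=1)   # whitened residual energy

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
Y = X @ rng.normal(size=(4, 4)) + 0.1 * rng.normal(size=(1000, 4))
scores = tlsq_anomalousness(X, Y, k=2)
print(scores.argmax())   # pixel whose change fits the joint model worst
```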

  3. Validation study of the Forgotten Joint Score-12 as a universal patient-reported outcome measure.

    Science.gov (United States)

    Matsumoto, Mikio; Baba, Tomonori; Homma, Yasuhiro; Kobayashi, Hideo; Ochi, Hironori; Yuasa, Takahito; Behrend, Henrik; Kaneko, Kazuo

    2015-10-01

    The Forgotten Joint Score-12 (FJS-12) assesses the extent to which patients are able to forget their artificial joint and is reportedly a useful patient-reported outcome tool for artificial joints. The purpose of this study was to determine whether the FJS-12 is as useful as the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) or the Japanese Orthopaedic Association Hip Disease Evaluation Questionnaire (JHEQ) in Japan. All patients who visited our hospital's hip joint specialists following unilateral THA from August 2013 to July 2014 were evaluated. Medical staff members other than physicians administered the three questionnaires. Items evaluated were (1) the reliability of the FJS-12 and (2) correlations between the FJS-12 and the total and subscale scores of the WOMAC or JHEQ. Of 130 patients, 22 were excluded. Cronbach's α coefficient was 0.97 for the FJS-12. The FJS-12 showed a significantly lower score than the WOMAC or JHEQ (p < 0.01). The FJS-12 was moderately correlated with the total WOMAC score (r = 0.522) and its subscale scores for "stiffness" (r = 0.401) and "function" (r = 0.539), and was weakly correlated with the score for "pain" (r = 0.289). The FJS-12 correlated well with the total JHEQ score (r = 0.686) and its subscale scores (r = 0.530-0.643). The FJS-12 was correlated with and showed reliability similar to that of the JHEQ and WOMAC. The FJS-12, which is not affected by culture or lifestyle, may be useful in Japan.

  4. Electron dose map inversion based on several algorithms

    International Nuclear Information System (INIS)

    Li Gui; Zheng Huaqing; Wu Yican; Fds Team

    2010-01-01

    The reconstruction of the electron dose map in radiation therapy was investigated by constructing inversion models of the electron dose map with different algorithms. An inversion model of the electron dose map based on nonlinear programming was used, and this model was applied to the penetration dose map to invert the total-space dose map. The inversion model was realized with several inversion algorithms. Test results with seven samples show that, except for the NMinimize algorithm, which worked for just one sample, and with great error at that, all the inversion algorithms could realize our inversion model rapidly and accurately. The Levenberg-Marquardt algorithm, having the greatest accuracy and speed, can be considered the first choice for electron dose map inversion. Further tests show that more error is introduced when data close to the electron range are used (tail error). The tail error might be caused by the approximation of mean energy spectra, and this should be considered to improve the method. The time-saving and accurate algorithms can be used to achieve real-time dose map inversion. By selecting the best inversion algorithm, the clinical need for real-time dose verification can be satisfied. (authors)
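
    Independent of the dosimetric details, a Levenberg-Marquardt fit of a forward model to measured penetration data is a one-liner with scipy. The exponential forward model below is a stand-in for illustration only, not the paper's dose model.

```python
import numpy as np
from scipy.optimize import least_squares

def forward(theta, depths):
    # Hypothetical forward model: dose falls off exponentially with depth.
    return theta[0] * np.exp(-theta[1] * depths)

depths = np.linspace(0.0, 4.0, 40)
measured = forward([1.0, 0.8], depths) \
    + 1e-3 * np.random.default_rng(1).normal(size=40)   # noisy "measurement"

fit = least_squares(lambda th: forward(th, depths) - measured,
                    x0=[0.5, 0.5], method="lm")          # Levenberg-Marquardt
print(fit.x)   # close to the generating parameters (1.0, 0.8)
```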

  5. CiSE: a circular spring embedder layout algorithm.

    Science.gov (United States)

    Dogrusoz, Ugur; Belviranli, Mehmet E; Dilek, Alptug

    2013-06-01

    We present a new algorithm for automatic layout of clustered graphs using a circular style. The algorithm tries to determine optimal location and orientation of individual clusters intrinsically within a modified spring embedder. Heuristics such as reversal of the order of nodes in a cluster and swap of neighboring node pairs in the same cluster are employed intermittently to further relax the spring embedder system, resulting in reduced inter-cluster edge crossings. Unlike other algorithms generating circular drawings, our algorithm does not require the quotient graph to be acyclic, nor does it sacrifice the edge crossing number of individual clusters to improve respective positioning of the clusters. Moreover, it reduces the total area required by a cluster by using the space inside the associated circle. Experimental results show that the execution time and quality of the produced drawings with respect to commonly accepted layout criteria are quite satisfactory, surpassing previous algorithms. The algorithm has also been successfully implemented and made publicly available as part of a compound and clustered graph editing and layout tool named CHISIO.

  6. Foam: A general purpose Monte Carlo cellular algorithm

    International Nuclear Information System (INIS)

    Jadach, S.

    2003-01-01

    A general-purpose, self-adapting Monte Carlo (MC) algorithm implemented in the program Foam is described. The high efficiency of the MC, that is, a small maximum weight or variance of the MC weight, is achieved by dividing the integration domain into small cells. The cells can be n-dimensional simplices or hyperrectangles. The next cell to be divided and the position/direction of the division hyperplane are chosen by the algorithm, which optimizes the ratio of the maximum weight to the average weight or (optionally) the total variance. The algorithm is able to deal, in principle, with an arbitrary pattern of singularities in the distribution

  7. A New Feedback-Analysis based Reputation Algorithm for E-Commerce Communities

    Directory of Open Access Journals (Sweden)

    Hasnae Rahimi

    2014-12-01

    Full Text Available Dealing with the ever-growing content generated by users in e-commerce applications, Trust Reputation Systems (TRS) are widely used online to provide the trust reputation of each product using customers’ ratings. However, there is also a good amount of online customer reviews and feedback that must be used by the TRS. As a result, we propose in this work a new architecture for TRS in e-commerce applications which includes feedback mining in order to calculate reputation scores. This architecture is based on an intelligent layer that proposes to each user (i.e. “feedback provider”) who has already given his recommendation a collection of prefabricated feedback to like or dislike. The proposed reputation algorithm then calculates the trust degree of the user and the feedback’s trustworthiness, and generates the global reputation score of the product according to his ‘likes’ and ‘dislikes’. In this work, we also present a state of the art of text mining tools and algorithms that can be used to generate the prefabricated feedback and to classify it into different categories.
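
    A hedged sketch of how likes/dislikes, the user's trust degree, and the feedback's trustworthiness could combine into a global reputation score follows; the weighting and Laplace smoothing are our assumptions, not the paper's exact algorithm.

```python
def product_reputation(feedbacks):
    # feedbacks: (user_trust, feedback_trustworthiness, likes, dislikes) tuples.
    num = den = 0.0
    for trust, fb_trust, likes, dislikes in feedbacks:
        support = (likes + 1) / (likes + dislikes + 2)  # Laplace-smoothed approval
        weight = trust * fb_trust                        # weight by credibility
        num += weight * support
        den += weight
    return num / den if den else 0.5   # neutral prior when no usable feedback

print(product_reputation([(0.9, 0.8, 40, 3), (0.4, 0.5, 2, 10)]))
```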

  8. PeptX: Using Genetic Algorithms to optimize peptides for MHC binding

    Directory of Open Access Journals (Sweden)

    Ribarics Reiner

    2011-06-01

    Full Text Available Abstract Background The binding between the major histocompatibility complex and the presented peptide is an indispensable prerequisite for the adaptive immune response. There is a plethora of different in silico techniques for predicting the binding affinity of peptides to major histocompatibility complexes. Most studies screen a set of peptides for promising candidates to predict possible T cell epitopes. In this study we ask the reverse question: which peptides have the highest binding affinities to a given major histocompatibility complex according to certain in silico scoring functions? Results Since a full screening of all possible peptides is not feasible in reasonable runtime, we introduce a heuristic approach. We developed a framework for Genetic Algorithms to optimize peptides for binding to major histocompatibility complexes. In an extensive benchmark we tested various operator combinations. We found that (1) selection operators have a strong influence on the convergence of the population while recombination operators have minor influence, and (2) five different binding prediction methods lead to five different sets of "optimal" peptides for the same major histocompatibility complex. The consensus peptides were experimentally verified as high affinity binders. Conclusion We provide a generalized framework to calculate sets of high affinity binders based on different previously published scoring functions in reasonable runtime. Furthermore we give insight into the different behaviours of operators and scoring functions of the Genetic Algorithm.

  9. Atrial Fibrillation Screening in Nonmetropolitan Areas Using a Telehealth Surveillance System With an Embedded Cloud-Computing Algorithm: Prospective Pilot Study

    Science.gov (United States)

    Chen, Ying-Hsien; Hung, Chi-Sheng; Huang, Ching-Chang; Hung, Yu-Chien

    2017-01-01

    Background Atrial fibrillation (AF) is a common form of arrhythmia that is associated with increased risk of stroke and mortality. Detecting AF before the first complication occurs is a recognized priority. No previous studies have examined the feasibility of undertaking AF screening using a telehealth surveillance system with an embedded cloud-computing algorithm; we address this issue in this study. Objective The objective of this study was to evaluate the feasibility of AF screening in nonmetropolitan areas using a telehealth surveillance system with an embedded cloud-computing algorithm. Methods We conducted a prospective AF screening study in a nonmetropolitan area using a single-lead electrocardiogram (ECG) recorder. All ECG measurements were reviewed on the telehealth surveillance system and interpreted by the cloud-computing algorithm and a cardiologist. The process of AF screening was evaluated with a satisfaction questionnaire. Results Between March 11, 2016 and August 31, 2016, 967 ECGs were recorded from 922 residents in nonmetropolitan areas. A total of 22 (2.4%, 22/922) residents with AF were identified by the physician’s ECG interpretation, and only 0.2% (2/967) of ECGs contained significant artifacts. The novel cloud-computing algorithm for AF detection had a sensitivity of 95.5% (95% CI 77.2%-99.9%) and specificity of 97.7% (95% CI 96.5%-98.5%). The overall satisfaction score for the process of AF screening was 92.1%. Conclusions AF screening in nonmetropolitan areas using a telehealth surveillance system with an embedded cloud-computing algorithm is feasible. PMID:28951384
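
    The reported operating characteristics can be checked with simple counting; the confusion-matrix entries below are reconstructed from the abstract's percentages (22 AF cases among 922 residents) and are therefore approximate.

```python
# Sensitivity 95.5% over 22 AF patients corresponds to 21 true positives
# (21/22 = 0.9545); specificity 97.7% over the ~900 non-AF residents
# implies roughly 879 true negatives and 21 false positives.
tp, fn = 21, 1
tn, fp = 879, 21
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"{sensitivity:.1%}, {specificity:.1%}")  # 95.5%, 97.7%
```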

  10. Bad Clade Deletion Supertrees: A Fast and Accurate Supertree Algorithm.

    Science.gov (United States)

    Fleischauer, Markus; Böcker, Sebastian

    2017-09-01

    Supertree methods merge a set of overlapping phylogenetic trees into a supertree containing all taxa of the input trees. The challenge in supertree reconstruction is the way of dealing with conflicting information in the input trees. Many different algorithms for different objective functions have been suggested to resolve these conflicts. In particular, there exist methods based on encoding the source trees in a matrix, where the supertree is constructed applying a local search heuristic to optimize the respective objective function. We present a novel heuristic supertree algorithm called Bad Clade Deletion (BCD) supertrees. It uses minimum cuts to delete a locally minimal number of columns from such a matrix representation so that it is compatible. This is the complement problem to Matrix Representation with Compatibility (Maximum Split Fit). Our algorithm has guaranteed polynomial worst-case running time and performs swiftly in practice. Different from local search heuristics, it guarantees to return the directed perfect phylogeny for the input matrix, corresponding to the parent tree of the input trees, if one exists. Comparing supertrees to model trees for simulated data, BCD shows a better accuracy (F1 score) than the state-of-the-art algorithms SuperFine (up to 3%) and Matrix Representation with Parsimony (up to 7%); at the same time, BCD is up to 7 times faster than SuperFine, and up to 600 times faster than Matrix Representation with Parsimony. Finally, using the BCD supertree as a starting tree for a combined Maximum Likelihood analysis using RAxML, we reach significantly improved accuracy (1% higher F1 score) and running time (1.7-fold speedup). © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  11. Behavioral rules of bank’s point-of-sale for segments description and scoring prediction

    Directory of Open Access Journals (Sweden)

    Mehdi Bizhani

    2011-04-01

    Full Text Available One of the important factors for the success of the banking industry is to monitor customers' behavior at their point-of-sale (POS) terminals. The bank needs to know its merchants' behavior to find interesting ones to attract more transactions, which results in the growth of its income and assets. The recency, frequency and monetary (RFM) analysis is a well-known approach for extracting customer behavior and is a basis for marketing and customer relationship management (CRM), but it is not well aligned with the banking context. Introducing RF*M* in this article results in a better understanding of groups of merchants. Another artifact of RF*M* is RF*M* scoring, which is applied in two ways: preprocessing the POSs and assigning behaviorally meaningful labels to the merchants' segments. The class labels and the RF*M* parameters are entered into a rule-based classification algorithm to achieve descriptive rules of the clusters. These descriptive rules outline the boundaries of the RF*M* parameters for each cluster. Since the rules are generated by a classification algorithm, they can also be applied for predicting the behavioral label and score of upcoming POSs. These rules are called behavioral rules.
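
    The sketch below shows conventional quintile-based RFM scoring of POS transaction records. It is a baseline, not the paper's RF*M* variant, and the merchant_id/date/amount column layout is an assumed schema.

    ```python
    import pandas as pd

    def rfm_scores(tx, now):
        """Quintile-based RFM scoring of POS transactions.

        tx: DataFrame with columns merchant_id, date (datetime), amount.
        """
        g = tx.groupby("merchant_id").agg(
            recency=("date", lambda d: (now - d.max()).days),
            frequency=("date", "count"),
            monetary=("amount", "sum"),
        )
        # Rank before qcut so ties do not create duplicate bin edges;
        # lower recency is better, so its quintile labels are reversed.
        g["R"] = pd.qcut(g["recency"].rank(method="first"), 5,
                         labels=[5, 4, 3, 2, 1]).astype(int)
        g["F"] = pd.qcut(g["frequency"].rank(method="first"), 5,
                         labels=[1, 2, 3, 4, 5]).astype(int)
        g["M"] = pd.qcut(g["monetary"].rank(method="first"), 5,
                         labels=[1, 2, 3, 4, 5]).astype(int)
        g["RFM"] = g["R"] * 100 + g["F"] * 10 + g["M"]
        return g
    ```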

  12. Inter-device reliability of an automatic-scoring actigraph for measuring sleep in healthy adults

    Directory of Open Access Journals (Sweden)

    Matthew Driller

    2016-07-01

    Full Text Available Actigraphy has become a common method of measuring sleep due to its non-invasive, cost-effective nature. An actigraph (Readiband™) that utilizes automatic scoring algorithms has been used in research, but is yet to be evaluated for its inter-device reliability. A total of 77 nights of sleep data from 11 healthy adult participants were collected while participants were concomitantly wearing two Readiband™ actigraphs attached together (ACT1 and ACT2). Sleep indices including total sleep time (TST), sleep latency (SL), sleep efficiency (SE%), wake after sleep onset (WASO), total time in bed (TTB), wake episodes per night (WE), sleep onset variance (SOV) and wake variance (WV) were assessed between the two devices using mean differences, 95% levels of agreement, intraclass correlation coefficients (ICC), typical error of measurement (TEM) and coefficient of variation (CV%) analysis. There were no significant differences between devices for any of the measured sleep variables (p>0.05). TST, SE, SL, TTB, SOV and WV all resulted in very high ICCs (>0.90), with WASO and WE resulting in high ICCs between devices (0.85 and 0.80, respectively). Mean differences of −2.1 and 0.2 min for TST and SL were associated with a low TEM between devices (9.5 and 3.8 min, respectively). SE resulted in a 0.3% mean difference between devices. The Readiband™ is a reliable tool for researchers using multiple devices of this brand in sleep studies to assess basic measures of sleep quality and quantity in healthy adult populations.
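
    A minimal sketch of the agreement statistics used above (mean difference, 95% limits of agreement, TEM, and CV%) for paired device measurements; the formulas are the standard ones (TEM = SD of the differences divided by √2), not code from the study.

    ```python
    import numpy as np

    def inter_device_agreement(act1, act2):
        """Agreement statistics for paired device measurements (e.g., nightly TST)."""
        act1, act2 = np.asarray(act1, float), np.asarray(act2, float)
        diff = act1 - act2
        mean_diff = diff.mean()
        sd = diff.std(ddof=1)
        loa = (mean_diff - 1.96 * sd, mean_diff + 1.96 * sd)  # 95% limits of agreement
        tem = sd / np.sqrt(2)                                 # typical error of measurement
        cv = tem / np.mean((act1 + act2) / 2) * 100           # CV% relative to grand mean
        return mean_diff, loa, tem, cv
    ```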

  13. A new hybrid metaheuristic algorithm for wind farm micrositing

    International Nuclear Information System (INIS)

    Massan, S.U.R.; Wagan, A.I.; Shaikh, M.M.

    2017-01-01

    This work focuses on proposing a new algorithm, referred to as HMA (Hybrid Metaheuristic Algorithm), for the solution of the WTO (Wind Turbine Optimization) problem. It is well documented that turbines located behind one another suffer a power loss because upstream turbines obstruct the wind (wake loss). This wake loss can be reduced by the effective placement of turbines using the new HMA. The HMA is derived from two basic algorithms, the DEA (Differential Evolution Algorithm) and the FA (Firefly Algorithm). Optimization is performed on the N.O. Jensen wake model. The blending of DEA and FA into HMA is discussed, and the new algorithm is implemented to maximize power and minimize cost in a WTO problem. The results of HMA have been compared with the GA (Genetic Algorithm) used in some previous studies. The total power produced and the cost per unit turbine for a wind farm calculated using HMA, and their comparison with past approaches using single algorithms, show a significant advantage of the HMA over single algorithms. The first-time implementation of a new algorithm blending two single algorithms is a significant step towards understanding the behavior of algorithms and the added advantage of using them together. (author)

  14. A New Hybrid Metaheuristic Algorithm for Wind Farm Micrositing

    Directory of Open Access Journals (Sweden)

    SHAFIQ-UR-REHMAN MASSAN

    2017-07-01

    Full Text Available This work focuses on proposing a new algorithm, referred to as HMA (Hybrid Metaheuristic Algorithm), for the solution of the WTO (Wind Turbine Optimization) problem. It is well documented that turbines located behind one another suffer a power loss because upstream turbines obstruct the wind (wake loss). This wake loss can be reduced by the effective placement of turbines using the new HMA. The HMA is derived from two basic algorithms, the DEA (Differential Evolution Algorithm) and the FA (Firefly Algorithm). Optimization is performed on the N.O. Jensen wake model. The blending of DEA and FA into HMA is discussed, and the new algorithm is implemented to maximize power and minimize cost in a WTO problem. The results of HMA have been compared with the GA (Genetic Algorithm) used in some previous studies. The total power produced and the cost per unit turbine for a wind farm calculated using HMA, and their comparison with past approaches using single algorithms, show a significant advantage of the HMA over single algorithms. The first-time implementation of a new algorithm blending two single algorithms is a significant step towards understanding the behavior of algorithms and the added advantage of using them together.
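
    Since the paper does not spell out the blending in code, the sketch below shows one plausible way to hybridize DE and FA: a DE/rand/1 trial vector followed by a firefly-style attraction move toward the current best. All parameter names and the blending order here are assumptions, not the authors' specification.

    ```python
    import numpy as np

    def hybrid_de_fa(objective, bounds, pop=30, iters=200,
                     F=0.8, CR=0.9, beta0=1.0, gamma=1.0, alpha=0.1):
        """Minimization by a hypothetical DE + FA blend (illustrative only)."""
        lo, hi = np.asarray(bounds, float).T
        X = lo + np.random.rand(pop, len(lo)) * (hi - lo)
        fit = np.array([objective(x) for x in X])
        for _ in range(iters):
            for i in range(pop):
                # DE/rand/1 mutation with binomial crossover (donor choice simplified).
                a, b, c = X[np.random.choice(pop, 3, replace=False)]
                trial = np.where(np.random.rand(len(lo)) < CR, a + F * (b - c), X[i])
                # Firefly-style attraction toward the current best individual.
                best = X[fit.argmin()]
                r2 = np.sum((trial - best) ** 2)
                trial = trial + beta0 * np.exp(-gamma * r2) * (best - trial) \
                        + alpha * (np.random.rand(len(lo)) - 0.5)
                trial = np.clip(trial, lo, hi)
                f = objective(trial)
                if f < fit[i]:          # greedy replacement
                    X[i], fit[i] = trial, f
        return X[fit.argmin()], fit.min()
    ```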

  15. The predictive value of an adjusted COPD assessment test score on the risk of respiratory-related hospitalizations in severe COPD patients.

    Science.gov (United States)

    Barton, Christopher A; Bassett, Katherine L; Buckman, Julie; Effing, Tanja W; Frith, Peter A; van der Palen, Job; Sloots, Joanne M

    2017-02-01

    We evaluated whether a chronic obstructive pulmonary disease (COPD) assessment test (CAT) with adjusted weights for the CAT items could better predict future respiratory-related hospitalizations than the original CAT. Two focus groups (respiratory nurses and physicians) generated two adjusted CAT algorithms. Two multivariate logistic regression models for infrequent (≤1/year) versus frequent (>1/year) future respiratory-related hospitalizations were defined: one with the adjusted CAT score that correlated best with future hospitalizations and one with the original CAT score. Patient characteristics related to future hospitalizations (p ≤ 0.2) were also entered. Eighty-two COPD patients were included. The CAT algorithm derived from the nurse focus group was a borderline significant predictor of hospitalization risk (odds ratio (OR): 1.07; 95% confidence interval (CI): 1.00-1.14; p = 0.050) in a model that also included hospitalization frequency in the previous year (OR: 3.98; 95% CI: 1.30-12.16; p = 0.016) and anticholinergic risk score (OR: 3.08; 95% CI: 0.87-10.89; p = 0.081). Presence of ischemic heart disease and/or heart failure appeared 'protective' (OR: 0.17; 95% CI: 0.05-0.62; p = 0.007). The original CAT score was not significantly associated with hospitalization risk. In conclusion, as a predictor of respiratory-related hospitalizations, an adjusted CAT score was marginally significant (although the original CAT score was not). 'Previous respiratory-related hospitalizations' was the strongest factor in this equation.

  16. Total enteral nutrition vs. total parenteral nutrition in patients with severe acute pancreatitis

    Directory of Open Access Journals (Sweden)

    M. Casas

    2007-05-01

    Full Text Available Objective: to compare the efficacy of early total enteral nutrition (TEN) versus total parenteral nutrition (TPN) in patients with severe acute pancreatitis (SAP). Methods: prospective randomized study. Twenty-two consecutive patients with SAP were included, applying the APACHE II criteria, C-reactive protein (CRP) values and the Balthazar CT score. Group I (n = 11) received TPN and group II (n = 12) TEN. The acute inflammatory response (CRP, TNF-alpha, IL-6), visceral proteins (pre-albumin, albumin), complication rate (systemic inflammatory response syndrome, multiorgan failure, infections), surgical interventions, length of hospital stay and mortality were assessed. Results: during the first 10 days there were no significant differences between the two groups in the evolution of APACHE II scores, in CRP, TNF-alpha and IL-6 concentrations, or in pre-albumin and albumin values. Seven patients in group I developed severe complications versus four in group II. Three patients in group I required surgical intervention. Length of hospital stay was similar in both groups. Two patients in group I died. Conclusions: a trend toward a better outcome was observed in SAP patients receiving TEN compared with those receiving TPN.

  17. Optimal solution for travelling salesman problem using heuristic shortest path algorithm with imprecise arc length

    Science.gov (United States)

    Bakar, Sumarni Abu; Ibrahim, Milbah

    2017-08-01

    The shortest path problem is a popular problem in graph theory. It concerns finding a path with minimum length between a specified pair of vertices. In a network, the weight of each edge is usually represented as a crisp real number, and these weights are used when computing shortest paths with deterministic algorithms. In practice, however, uncertainty is often encountered, and the weights of the network's edges may be uncertain and imprecise. In this paper, a modified algorithm that combines a heuristic shortest path method with a fuzzy approach is proposed for solving a network with imprecise arc lengths. Interval numbers and triangular fuzzy numbers are considered for representing the arc lengths of the network. The modified algorithm is then applied to a specific example of the Travelling Salesman Problem (TSP). The total shortest distance obtained from this algorithm is compared with the total distance obtained from the traditional nearest neighbour heuristic. The results show that the modified algorithm produces a sequence of visited cities similar to the traditional approach while providing a shorter total distance. Hence, this research contributes to the enrichment of methods used in solving the TSP.
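
    A minimal sketch of a nearest-neighbour TSP heuristic when each arc length is an interval (lo, hi). Intervals are ranked by their midpoint here, which is one simple stand-in for the paper's fuzzy ranking, not the method it actually uses.

    ```python
    def nearest_neighbour_interval(dist, start=0):
        """Nearest-neighbour tour over interval-valued arc lengths."""
        mid = lambda iv: (iv[0] + iv[1]) / 2   # rank intervals by midpoint
        n = len(dist)
        tour, unvisited = [start], set(range(n)) - {start}
        while unvisited:
            cur = tour[-1]
            nxt = min(unvisited, key=lambda j: mid(dist[cur][j]))
            tour.append(nxt)
            unvisited.remove(nxt)
        tour.append(start)  # return to the starting city
        total = [sum(dist[a][b][k] for a, b in zip(tour, tour[1:])) for k in (0, 1)]
        return tour, tuple(total)  # tour and (lower, upper) total distance

    # Example: 4 cities with interval distances.
    D = [[(0, 0), (2, 4), (5, 7), (1, 3)],
         [(2, 4), (0, 0), (4, 6), (3, 5)],
         [(5, 7), (4, 6), (0, 0), (2, 3)],
         [(1, 3), (3, 5), (2, 3), (0, 0)]]
    print(nearest_neighbour_interval(D))  # ([0, 3, 2, 1, 0], (9, 16))
    ```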

  18. An Efficient Meta Heuristic Algorithm to Solve Economic Load Dispatch Problems

    Directory of Open Access Journals (Sweden)

    R Subramanian

    2013-12-01

    Full Text Available The Economic Load Dispatch (ELD) problem in power generation systems is to reduce the fuel cost by reducing the total cost of generating electric power. This paper presents an efficient Modified Firefly Algorithm (MFA) for solving the ELD problem. The main objective is to minimize the total fuel cost of generating units having quadratic cost functions, subject to limits on generator true power output and transmission losses. The MFA is a stochastic metaheuristic approach based on the idealized flashing behaviour of fireflies. This paper presents an application of MFA to ELD for a six-generator test case system. MFA is applied to the ELD problem and its solution quality and computational efficiency are compared to Genetic Algorithm (GA), Differential Evolution (DE), Particle Swarm Optimization (PSO), Artificial Bee Colony optimization (ABC), Biogeography-Based Optimization (BBO), Bacterial Foraging Optimization (BFO) and Firefly Algorithm (FA) techniques. The simulation results show that the proposed algorithm outperforms previous optimization methods.
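
    The quadratic fuel-cost objective that any of the compared metaheuristics would minimize can be written as a penalized evaluation function. The sketch below assumes a simple dict-based unit description and ignores transmission losses for brevity; neither detail is taken from the paper.

    ```python
    def eld_cost(P, units, demand, penalty=1e6):
        """Fuel cost of a dispatch P over quadratic-cost generating units.

        units: list of dicts with coefficients a, b, c and limits pmin, pmax
        (an assumed data layout). Power-balance and limit violations are
        penalized so infeasible dispatches score poorly.
        """
        cost = sum(u["a"] + u["b"] * p + u["c"] * p ** 2 for u, p in zip(units, P))
        violation = abs(sum(P) - demand)
        violation += sum(max(0.0, u["pmin"] - p) + max(0.0, p - u["pmax"])
                         for u, p in zip(units, P))
        return cost + penalty * violation
    ```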

  19. A fast iterative soft-thresholding algorithm for few-view CT reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng; Mou, Xuanqin; Zhang, Yanbo [Jiaotong Univ., Xi'an (China). Inst. of Image Processing and Pattern Recognition

    2011-07-01

    Iterative soft-thresholding algorithms with total variation regularization can produce high-quality reconstructions from few views and even in the presence of noise. However, these algorithms are known to converge quite slowly, with a proven theoretical global convergence rate O(1/k), where k is the iteration number. In this paper, we present a fast iterative soft-thresholding algorithm for few-view fan-beam CT reconstruction with a global convergence rate O(1/k²), which is significantly faster than the iterative soft-thresholding algorithm. Simulation results demonstrate the superior performance of the proposed algorithm in terms of convergence speed and reconstruction quality. (orig.)
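
    For reference, a generic FISTA loop with the O(1/k²) momentum schedule of Beck and Teboulle is sketched below; the CT-specific system matrix and TV regularizer of the paper are abstracted into grad_f and prox_g (soft-thresholding is shown as the L1 prox), so this is a template rather than the authors' algorithm.

    ```python
    import numpy as np

    def fista(grad_f, prox_g, x0, L, iters=100):
        """Accelerated proximal gradient (FISTA) with O(1/k^2) convergence.

        grad_f: gradient of the smooth data-fidelity term
        prox_g: proximal operator of the regularizer
        L:      Lipschitz constant of grad_f (step size is 1/L)
        """
        x = y = x0.copy()
        t = 1.0
        for _ in range(iters):
            x_new = prox_g(y - grad_f(y) / L, 1.0 / L)
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
            x, t = x_new, t_new
        return x

    # Soft-thresholding: the proximal operator of lam * ||x||_1.
    soft = lambda v, step, lam=0.1: np.sign(v) * np.maximum(np.abs(v) - lam * step, 0.0)
    ```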

  20. Clinical effectiveness of a Bayesian algorithm for the diagnosis and management of heparin-induced thrombocytopenia.

    Science.gov (United States)

    Raschke, R A; Gallo, T; Curry, S C; Whiting, T; Padilla-Jones, A; Warkentin, T E; Puri, A

    2017-08-01

    Essentials We previously published a diagnostic algorithm for heparin-induced thrombocytopenia (HIT). In this study, we validated the algorithm in an independent large healthcare system. The accuracy was 98%, sensitivity 82% and specificity 99%. The algorithm has potential to improve accuracy and efficiency in the diagnosis of HIT. Background Heparin-induced thrombocytopenia (HIT) is a life-threatening drug reaction caused by antiplatelet factor 4/heparin (anti-PF4/H) antibodies. Commercial tests to detect these antibodies have suboptimal operating characteristics. We previously developed a diagnostic algorithm for HIT that incorporated 'four Ts' (4Ts) scoring and a stratified interpretation of an anti-PF4/H enzyme-linked immunosorbent assay (ELISA) and yielded a discriminant accuracy of 0.97 (95% confidence interval [CI], 0.93-1.00). Objectives The purpose of this study was to validate the algorithm in an independent patient population and quantitate the effects that algorithm adherence could have on clinical care. Methods A retrospective cohort comprised patients who had undergone anti-PF4/H ELISA and serotonin release assay (SRA) testing in our healthcare system from 2010 to 2014. We determined the algorithm recommendation for each patient, compared recommendations with the clinical care received, and enumerated the consequences of discrepancies. Operating characteristics were calculated for algorithm recommendations using the SRA as the reference standard. Results Analysis was performed on 181 patients, 10 of whom were ruled in for HIT. The algorithm accurately stratified 98% of patients (95% CI, 95-99%), ruling out HIT in 158, ruling in HIT in 10 and recommending an SRA in 13 patients. Algorithm adherence would have obviated 165 SRAs and prevented 30 courses of unnecessary antithrombotic therapy for HIT. Diagnostic sensitivity was 0.82 (95% CI, 0.48-0.98), specificity 0.99 (95% CI, 0.97-1.00), PPV 0.90 (95% CI, 0.56-0.99) and NPV 0.99 (95% CI, 0.96-1.00). Conclusions An algorithm combining 4Ts scoring with a stratified interpretation of the anti-PF4/H ELISA has the potential to improve accuracy and efficiency in the diagnosis and management of HIT.

  1. Unraveling cognitive traits using the Morris water maze unbiased strategy classification (MUST-C) algorithm.

    Science.gov (United States)

    Illouz, Tomer; Madar, Ravit; Louzon, Yoram; Griffioen, Kathleen J; Okun, Eitan

    2016-02-01

    The assessment of spatial cognitive learning in rodents is a central approach in neuroscience, as it enables one to assess and quantify the effects of treatments and genetic manipulations from a broad perspective. Although the Morris water maze (MWM) is a well-validated paradigm for testing spatial learning abilities, manual categorization of performance in the MWM into behavioral strategies is subject to individual interpretation, and thus to biases. Here we offer a support vector machine (SVM)-based, automated, MWM unbiased strategy classification (MUST-C) algorithm, as well as a cognitive score scale. This model was examined and validated by analyzing data obtained from five MWM experiments with changing platform sizes, revealing a limitation in the spatial capacity of the hippocampus. We have further employed this algorithm to extract novel mechanistic insights on the impact of members of the Toll-like receptor pathway on cognitive spatial learning and memory. The MUST-C algorithm can greatly benefit MWM users as it provides a standardized method of strategy classification as well as a cognitive scoring scale, which cannot be derived from typical analysis of MWM data. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
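
    A minimal sketch of SVM-based strategy classification in the spirit of MUST-C: swim-path features per trial are standardized and fed to an RBF-kernel SVM. The feature set and hyperparameters are assumptions for illustration, not the published model.

    ```python
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def train_strategy_classifier(X, y):
        """X: one row of swim-path features per MWM trial (e.g., path length,
        mean distance to platform, quadrant occupancy; names are illustrative).
        y: manually assigned strategy labels used for supervised training."""
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
        return clf.fit(X, y)
    ```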

  2. Association of serum total antioxidant capacity and total oxidant status with pain perception in patients with myofascial pain dysfunction.

    Science.gov (United States)

    Etoz, Osman A; Ataoglu, Hanife; Erel, Ozcan; Celik, Hakim; Herken, Emine Nur; Bayazit, Yildirim Ahmet

    2009-01-01

    We aimed to determine the association of total antioxidant capacity (TAC) and total oxidant status (TOS) with generalized pressure pain thresholds (PPT) in patients with myofascial pain dysfunction (MPD). PPT scores of patients with MPD (n = 37) and healthy individuals (n = 43) were measured on the hypothenar region of the hand using a mechanical algometer. Serum samples were collected and TAC and TOS were measured by novel methods. The TAC of patients was significantly lower than that of the control subjects. The difference between the TOS measurements of patients and control subjects was not significant. The PPT scores of the patients were significantly lower than those of control subjects. There may be an association between serum antioxidant capacity and MPD. Low serum TAC might also be related to pain perception.

  3. Application of the FOUR Score in Intracerebral Hemorrhage Risk Analysis.

    Science.gov (United States)

    Braksick, Sherri A; Hemphill, J Claude; Mandrekar, Jay; Wijdicks, Eelco F M; Fugate, Jennifer E

    2018-06-01

    The Full Outline of Unresponsiveness (FOUR) Score is a validated scale describing the essentials of a coma examination, including motor response, eye opening and eye movements, brainstem reflexes, and respiratory pattern. We incorporated the FOUR Score into the existing ICH Score and evaluated its accuracy of risk assessment in spontaneous intracerebral hemorrhage (ICH). Consecutive patients admitted to our institution from 2009 to 2012 with spontaneous ICH were reviewed. The ICH Score was calculated using patient age, hemorrhage location, hemorrhage volume, evidence of intraventricular extension, and the Glasgow Coma Scale (GCS). The FOUR Score was then incorporated into the ICH Score as a substitute for the GCS (ICH Score-FS). The ability of the 2 scores to predict mortality at 1 month was then compared. In total, 274 patients met the inclusion criteria. The median age was 73 years (interquartile range 60-82) and 138 (50.4%) were male. Overall mortality at 1 month was 28.8% (n = 79). The area under the receiver operating characteristic curve was 0.91 for the ICH Score and 0.89 for the ICH Score-FS. For ICH Scores of 1, 2, 3, 4, and 5, 1-month mortality was 4.2%, 29.9%, 62.5%, 95.0%, and 100%. In the ICH Score-FS model, mortality was 10.7%, 26.5%, 64.5%, 88.9%, and 100% for scores of 1, 2, 3, 4, and 5, respectively. The ICH Score and the ICH Score-FS predict 1-month mortality with comparable accuracy. As the FOUR Score provides additional clinical information regarding patient status, it may be a reasonable substitute for the GCS in the ICH Score. Copyright © 2018 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  4. The Zhongshan Score

    Science.gov (United States)

    Zhou, Lin; Guo, Jianming; Wang, Hang; Wang, Guomin

    2015-01-01

    Abstract In the zero-ischemia era of nephron-sparing surgery (NSS), a new anatomic classification system (ACS) is needed to adjust to these new surgical techniques. We devised a novel and simple ACS, and compared it with the RENAL and PADUA scores to predict the risk of NSS outcomes. We retrospectively evaluated 789 patients who underwent NSS with available imaging between January 2007 and July 2014. Demographic and clinical data were assessed. The Zhongshan (ZS) score consisted of three parameters. RENAL, PADUA, and ZS scores were divided into three groups, that is, high, moderate, and low scores. For operative time (OT), significant differences were seen between any two groups of the ZS score and the PADUA score (all P < 0.05); RENAL showed no significant difference between moderate and high complexity in OT, warm ischemia time (WIT), estimated blood loss, and increase in SCr. Compared with patients with a low ZS score, those with a high or moderate score had 8.1-fold or 3.3-fold higher risk of surgical complications, respectively (all P < 0.05). For the RENAL score, patients with a high or moderate score had 5.7-fold or 1.9-fold higher risk of surgical complications, respectively (all P < 0.05), with similar results for the PADUA score. The ZS score could be used to reflect surgical complexity and predict the risk of surgical complications in patients undergoing NSS. PMID:25654399

  5. Zero-block mode decision algorithm for H.264/AVC.

    Science.gov (United States)

    Lee, Yu-Ming; Lin, Yinyi

    2009-03-01

    In a previous paper, we proposed a zero-block intermode decision algorithm for H.264 video coding based upon the number of zero-blocks of 4 x 4 DCT coefficients between the current macroblock and the co-located macroblock. That algorithm achieves a significant reduction in computation, but its performance is limited for high bit-rate coding. To improve computation efficiency, in this paper we suggest an enhanced zero-block decision algorithm, which uses an early zero-block detection method to compute the number of zero-blocks instead of direct DCT and quantization (DCT/Q) calculation, and which incorporates two adequate decision methods for the semi-stationary and nonstationary regions of a video sequence. In addition, the zero-block decision algorithm is also applied to intramode prediction in the P frame. The enhanced zero-block decision algorithm yields an average 27% reduction in total encoding time compared to the zero-block decision algorithm.

  6. Reliability and Validity of Composite Scores from the NIH Toolbox Cognition Battery in Adults

    Science.gov (United States)

    Heaton, Robert K.; Akshoomoff, Natacha; Tulsky, David; Mungas, Dan; Weintraub, Sandra; Dikmen, Sureyya; Beaumont, Jennifer; Casaletto, Kaitlin B.; Conway, Kevin; Slotkin, Jerry; Gershon, Richard

    2014-01-01

    This study describes psychometric properties of the NIH Toolbox Cognition Battery (NIHTB-CB) Composite Scores in an adult sample. The NIHTB-CB was designed for use in epidemiologic studies and clinical trials for ages 3 to 85. A total of 268 self-described healthy adults were recruited at four university-based sites, using stratified sampling guidelines to target demographic variability for age (20–85 years), gender, education, and ethnicity. The NIHTB-CB contains seven computer-based instruments assessing five cognitive sub-domains: Language, Executive Function, Episodic Memory, Processing Speed, and Working Memory. Participants completed the NIHTB-CB, corresponding gold standard validation measures selected to tap the same cognitive abilities, and sociodemographic questionnaires. Three Composite Scores were derived for both the NIHTB-CB and gold standard batteries: “Crystallized Cognition Composite,” “Fluid Cognition Composite,” and “Total Cognition Composite” scores. NIHTB Composite Scores showed acceptable internal consistency (Cronbach’s alphas = 0.84 Crystallized, 0.83 Fluid, 0.77 Total), excellent test–retest reliability (r: 0.86–0.92), strong convergent (r: 0.78–0.90) and discriminant (r: 0.19–0.39) validities versus gold standard composites, and expected age effects (r = 0.18 crystallized, r = − 0.68 fluid, r = − 0.26 total). Significant relationships with self-reported prior school difficulties and current health status, employment, and presence of a disability provided evidence of external validity. The NIH Toolbox Cognition Battery Composite Scores have excellent reliability and validity, suggesting they can be used effectively in epidemiologic and clinical studies. PMID:24960398

  7. Validation of a 5-year risk score of hip fracture in postmenopausal women. The Danish Nurse Cohort Study

    DEFF Research Database (Denmark)

    Hundrup, Y A; Jacobsen, R K; Andreasen, A H

    2010-01-01

    We evaluated the Women's Health Initiative (WHI) hip fracture risk score in 15,648 postmenopausal Danish nurses. The algorithm was well calibrated for Denmark. However, the sensitivity was poor at common decision-making thresholds. Obtaining sensitivity better than 80% led to a low specificity. The aim of this study was to test the clinical performance of the algorithm in a large Danish cohort of postmenopausal Caucasian women against hip fracture. METHODS: The Danish Nurse Cohort is a prospective risk factor and hormone therapy (HT) study established in 1993. Participants in the present analysis were 15,648 postmenopausal nurses. The calibration and diagnostic performance of the WHI algorithm were evaluated using fracture events captured in the Danish National Hospital Registry. RESULTS: During 5 years of follow-up, 122 participants suffered a hip fracture (1.8/1,000 person-years). The WHI algorithm predicted…

  8. The variability in Oxford hip and knee scores in the preoperative period: is there an ideal time to score?

    Science.gov (United States)

    Quah, C; Holmes, D; Khan, T; Cockshott, S; Lewis, J; Stephen, A

    2018-01-01

    Background All NHS-funded providers are required to collect and report patient-reported outcome measures for hip and knee arthroplasty. Although there are established guidelines for the timing of such measures following arthroplasty, there are no specific time-points for collection in the preoperative period. The primary aim of this study was to identify whether there was a significant amount of variability in the Oxford hip and knee scores prior to surgical intervention when completed in the outpatient clinic at the time of listing for arthroplasty or when completed at the preoperative assessment clinic. Methods A prospective cohort study of patients listed for primary hip or knee arthroplasty was conducted. Patients were asked to fill in a preoperative Oxford score in the outpatient clinic at the time of listing. They were then invited to fill in the official outcome measures questionnaire at the preoperative assessment clinic. The postoperative Oxford score was then completed when the patient was seen again at their postoperative follow-up in clinic. Results Of the 109 patients included in this study, 18 (17%) had a worse score (a difference of 4 or more points) and 43 (39.4%) had an improved score (a difference of 4 or more points) when the scores at the time of listing were compared with those at the preoperative assessment clinic. There was a statistically significant difference (P = 0.0054) in the mean Oxford scores. Conclusions The results of our study suggest that the timing of completion of the preoperative patient-reported outcome measures should be standardised.

  9. The Score: an algorithm for investigating Body Knowledge

    Directory of Open Access Journals (Sweden)

    Letizia Gioa Monda

    2015-03-01

    Full Text Available Both inside and outside theatre, body work is the first and principal aspect of work on the 'self'. It is the first inescapable step on the path toward aware action, which in theatre is "action in perception", i.e. thinking-in-movement. This aware thought involves the whole background of the person. It is an action that arises only when the person is present and at the same time dynamically involved in the environment as a whole: it is the perceptual consciousness that integrates the planning of an action with its execution. "Thinking-in-movement" is a dynamic process that cannot be codified: it is a language that must be experienced in order to be understood and learned. What, then, is the Score? Is it a concept, a method of movement, or a digital tool? Perhaps it is each of these things. The Score is an indispensable algorithm for reading the dance that a human writes in order to obtain information, transfer it, and so continue the evolution of body knowledge. The Score is the medium in which the human being's structured tensions take shape: an alternative site for understanding the potential instigation of the human body and the organization of its residual actions. In this article I read the score like a crystal: a metaphor for the shaped dynamism innate in expressive movement. I explain how this crystal is the necessary channel for the choreographic counterpoint to show up during live performance. I explore what this means for the choreographers involved in Motion Bank - William Forsythe, Deborah Hay, Jonathan Burrows and Matteo Fargion - analyzing the somatic connections between their dance-making and their score-creation.

  10. Scheduling Diet for Diabetes Mellitus Patients using Genetic Algorithm

    Science.gov (United States)

    Syahputra, M. F.; Felicia, V.; Rahmat, R. F.; Budiarto, R.

    2017-01-01

    Diabetes Mellitus (DM) is a metabolic disease which affects productivity and lowers the quality of human resources. This disease can be controlled by maintaining a balanced and healthy lifestyle, especially the daily diet. However, there is currently no system able to help DM patients obtain information on a proper diet. Therefore, an approach is required to provide a diet schedule for every day of the week, with appropriate nutrition, to help DM patients regulate their daily diet in managing this disease. In this research, we calculate caloric needs using the Harris-Benedict equation and propose a genetic algorithm for scheduling the diet of a DM patient. The results show that the greater the number of individuals, the greater the possibility that changes in the fitness score approach the best fitness score. Moreover, the more generations are created, the more opportunities there are to obtain the best individual, with a fitness score approaching or equal to 0.
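
    The Harris-Benedict step is concrete enough to show directly. The sketch below uses the original equation coefficients with a standard activity multiplier; the multiplier range is a common convention, not a detail specified by the paper.

    ```python
    def harris_benedict(sex, weight_kg, height_cm, age_yr, activity=1.2):
        """Daily caloric need via the original Harris-Benedict equation,
        scaled by an activity factor (1.2 = sedentary ... 1.9 = very active)."""
        if sex == "male":
            bmr = 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_yr
        else:
            bmr = 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr
        return bmr * activity

    # A sedentary 45-year-old man, 70 kg, 170 cm: about 1891 kcal/day.
    print(round(harris_benedict("male", 70, 170, 45)))
    ```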

  11. Re-Scoring the Game’s Score

    DEFF Research Database (Denmark)

    Gasselseder, Hans-Peter

    2014-01-01

    This study explores immersive presence as well as emotional valence and arousal in the context of dynamic and non-dynamic music scores in the 3rd person action-adventure video game genre, while also considering relevant personality traits of the player. 60 subjects answered self-report questionnaires… spatio-temporal alignment in the resulting emotional congruency of nondiegetic music. Whereas imaginary aspects of immersive presence are systemically affected by the presentation of dynamic music, sensory spatial aspects show higher sensitivity towards the arousal potential of the music score. It is argued…

  12. Virtual non-contrast CT using dual energy spectral CT: Feasibility of coronary artery calcium scoring

    International Nuclear Information System (INIS)

    Song, In Young; Yi, Jeong Geun; Park, Jeong Hee; Lee, Sung Mok; Lee, Kyung Soo; Chung, Myung Jin

    2016-01-01

    To evaluate the feasibility of coronary artery calcium scoring based on three virtual noncontrast-enhanced (VNC) images derived from single-source spectral dual-energy CT (DECT) as compared with true noncontrast-enhanced (TNC) images. This prospective study was conducted with the approval of our Institutional Review Board. Ninety-seven patients underwent noncontrast CT followed by contrast-enhanced chest CT using single-source spectral DECT. Iodine eliminated VNC images were reconstructed using two kinds of 2-material decomposition algorithms (material density iodine-water pair [MDW], material density iodine-calcium pair [MDC]) and a material suppressed algorithm (material suppressed iodine [MSI]). Two readers independently quantified calcium on VNC and TNC images. The Spearman correlation coefficient test and Bland-Altman method were used for statistical analyses. Coronary artery calcium scores from all three VNC images showed excellent correlation with those from the TNC images (Spearman's correlation coefficient [ρ] = 0.94, 0.88, and 0.89 for MDW, MDC, and MSI, respectively; p < 0.001 for all pairs). Measured coronary calcium volumes from VNC images also correlated well with those from TNC images (ρ = 0.92, 0.87, and 0.91 for MDW, MDC, and MSI, respectively; p < 0.001 for all pairs). Among the three VNC images, coronary calcium from MDW correlated best with that from TNC. The coronary artery calcium scores and volumes were significantly lower from the VNC images than from the TNC images (p < 0.001 for all pairs). The use of VNC images from contrast-enhanced CT using dual-energy material decomposition/suppression is feasible for coronary calcium scoring. The absolute value from VNC tends to be smaller than that from TNC.

  13. Virtual non-contrast CT using dual energy spectral CT: Feasibility of coronary artery calcium scoring

    Energy Technology Data Exchange (ETDEWEB)

    Song, In Young; Yi, Jeong Geun; Park, Jeong Hee [Dept. of Radiology, Konkuk University School of Medicine, Seoul (Korea, Republic of); Lee, Sung Mok; Lee, Kyung Soo; Chung, Myung Jin [Dept. of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of)

    2016-06-15

    To evaluate the feasibility of coronary artery calcium scoring based on three virtual noncontrast-enhanced (VNC) images derived from single-source spectral dual-energy CT (DECT) as compared with true noncontrast-enhanced (TNC) images. This prospective study was conducted with the approval of our Institutional Review Board. Ninety-seven patients underwent noncontrast CT followed by contrast-enhanced chest CT using single-source spectral DECT. Iodine eliminated VNC images were reconstructed using two kinds of 2-material decomposition algorithms (material density iodine-water pair [MDW], material density iodine-calcium pair [MDC]) and a material suppressed algorithm (material suppressed iodine [MSI]). Two readers independently quantified calcium on VNC and TNC images. The Spearman correlation coefficient test and Bland-Altman method were used for statistical analyses. Coronary artery calcium scores from all three VNC images showed excellent correlation with those from the TNC images (Spearman's correlation coefficient [ρ] = 0.94, 0.88, and 0.89 for MDW, MDC, and MSI, respectively; p < 0.001 for all pairs). Measured coronary calcium volumes from VNC images also correlated well with those from TNC images (ρ = 0.92, 0.87, and 0.91 for MDW, MDC, and MSI, respectively; p < 0.001 for all pairs). Among the three VNC images, coronary calcium from MDW correlated best with that from TNC. The coronary artery calcium scores and volumes were significantly lower from the VNC images than from the TNC images (p < 0.001 for all pairs). The use of VNC images from contrast-enhanced CT using dual-energy material decomposition/suppression is feasible for coronary calcium scoring. The absolute value from VNC tends to be smaller than that from TNC.

  14. A procalcitonin-based algorithm to guide antibiotic therapy in secondary peritonitis following emergency surgery: a prospective study with propensity score matching analysis.

    Science.gov (United States)

    Huang, Ting-Shuo; Huang, Shie-Shian; Shyu, Yu-Chiau; Lee, Chun-Hui; Jwo, Shyh-Chuan; Chen, Pei-Jer; Chen, Huang-Yang

    2014-01-01

    Procalcitonin (PCT)-based algorithms have been used to guide antibiotic therapy in several clinical settings. However, evidence supporting PCT-based algorithms for secondary peritonitis after emergency surgery is scanty. In this study, we aimed to investigate whether a PCT-based algorithm could safely reduce antibiotic exposure in this population. From April 2012 to March 2013, patients who had secondary peritonitis diagnosed at the emergency department and underwent emergency surgery were screened for eligibility. PCT levels were obtained pre-operatively, on post-operative days 1, 3, 5, and 7, and on subsequent days if needed. Antibiotics were discontinued if PCT fell below the algorithm's predefined cut-off. Advanced age, coexisting pulmonary diseases, and higher severity of illness were significantly associated with longer durations of antibiotic use. The PCT-based algorithm safely reduced antibiotic exposure in this study. Further randomized trials are needed to confirm our findings and incorporate cost-effectiveness analysis. Australian New Zealand Clinical Trials Registry ACTRN12612000601831.

  15. Validation of a 5-year risk score of hip fracture in postmenopausal women. The Danish Nurse Cohort Study

    DEFF Research Database (Denmark)

    Hundrup, Y A; Jacobsen, R K; Andreasen, A H

    2010-01-01

    We evaluated the Women's Health Initiative (WHI) hip fracture risk score in 15,648 postmenopausal Danish nurses. The algorithm was well calibrated for Denmark. However, the sensitivity was poor at common decision making thresholds. Obtaining sensitivity better than 80% led to a low specificity...

  16. JOURNAL CLUB: Plagiarism in Manuscripts Submitted to the AJR: Development of an Optimal Screening Algorithm and Management Pathways.

    Science.gov (United States)

    Taylor, Donna B

    2017-04-01

    The objective of this study was to investigate the incidence of plagiarism in a sample of manuscripts submitted to the AJR using CrossCheck, develop an algorithm to identify significant plagiarism, and formulate management pathways. A sample of 110 of 1610 (6.8%) manuscripts submitted to AJR in 2014 in the categories of Original Research or Review were analyzed using CrossCheck and manual assessment. The overall similarity index (OSI), highest similarity score from a single source, whether duplication was from single or multiple origins, journal section, and presence or absence of referencing the source were recorded. The criteria outlined by the International Committee of Medical Journal Editors were the reference standard for identifying manuscripts containing plagiarism. Statistical analysis was used to develop a screening algorithm to maximize sensitivity and specificity for the detection of plagiarism. Criteria for defining the severity of plagiarism and management pathways based on the severity of the plagiarism were determined. Twelve manuscripts (10.9%) contained plagiarism. Nine had an OSI excluding quotations and references of less than 20%. In seven, the highest similarity score from a single source was less than 10%. The highest similarity score from a single source was the work of the same author or authors in nine. Common sections for duplication were the Materials and Methods, Discussion, and abstract. Referencing the original source was lacking in 11. Plagiarism was undetected at submission in five of these 12 articles; two had been accepted for publication. The most effective screening algorithm was to average the OSI including quotations and references and the highest similarity score from a single source and to submit manuscripts with an average value of more than 12% for further review. The current methods for detecting plagiarism are suboptimal. A new screening algorithm is proposed.
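
    The proposed screening rule reduces to a one-line check: average the OSI (including quotations and references) with the highest single-source similarity score, and flag manuscripts whose average exceeds 12% for further review. A sketch:

    ```python
    def flag_for_review(osi_incl_quotes, top_single_source, threshold=12.0):
        """Screening rule from the study: flag a manuscript when the mean of
        the overall similarity index (including quotations/references) and the
        highest single-source score exceeds the 12% threshold."""
        return (osi_incl_quotes + top_single_source) / 2 > threshold

    print(flag_for_review(18.0, 9.0))  # (18 + 9) / 2 = 13.5 -> True
    ```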

  17. College Math Assessment: SAT Scores vs. College Math Placement Scores

    Science.gov (United States)

    Foley-Peres, Kathleen; Poirier, Dawn

    2008-01-01

    Many colleges and universities use SAT math scores or math placement tests to place students in the appropriate math course. This study compares the use of math placement scores and SAT scores for 188 freshman students. The students' grades and faculty observations were analyzed to determine if the SAT scores and/or college math assessment scores…

  18. The relationship of live animal muscular and skeletal scores, ultrasound measurements and carcass classification scores with carcass composition and value in steers.

    Science.gov (United States)

    Conroy, S B; Drennan, M J; Kenny, D A; McGee, M

    2009-11-01

    …live animal scores/measurements and carcass classification scores explained an appreciable amount of the total variation in carcass meat, fat and bone proportions and carcass value, and a moderate amount of the variation in the proportion of high-value meat cuts in the carcass.

  19. Gleason Score Correlation Between Prostate Biopsy and Radical Prostatectomy Specimens

    Directory of Open Access Journals (Sweden)

    Erdem Öztürk

    2018-04-01

    Full Text Available Objective: Prostate cancer is the most common malignancy in men and the second cause of cancer-related mortality. Prostate biopsy and the Gleason score guide treatment decisions in prostate cancer. Several studies have investigated the correlation between biopsy scores and radical prostatectomy specimen scores. We also evaluated the correlation of the Gleason scores of these specimens in our patient series. Materials and Methods: We retrospectively reviewed the data of 468 men who were diagnosed with prostate cancer and underwent radical prostatectomy between 2008 and 2017. Patients' age, prostate-specific antigen levels at diagnosis, and prostate biopsy and radical prostatectomy specimen Gleason scores were recorded. Upgrading and downgrading were defined as an increase or decrease of the Gleason score of the radical prostatectomy specimen compared to the Gleason score of the prostate biopsy. Results: A total of 442 men diagnosed with prostate cancer were included in the study. The mean age of the patients was 62.62±6.26 years (44-84 years) and the mean prostate-specific antigen level was 9.01±6.84 ng/mL (1.09-49 ng/mL). Prostate biopsy Gleason score was 7 in 27 (6.1%) men. Radical prostatectomy specimen Gleason score was 7 in 62 (14%) men. Gleason correlation was highest in the 240 patients (71.6%) with score <7 and was lowest in the 31 (38.75%) patients with score =7. Conclusion: This study demonstrated that the discordance rate between Gleason scores of prostate biopsy and radical prostatectomy specimens was 35.7%.

  20. The effect of different EEG derivations on sleep staging in rats: the frontal midline–parietal bipolar electrode for sleep scoring

    International Nuclear Information System (INIS)

    Fang, Guangzhan; Zhang, Chunpeng; Xia, Yang; Lai, Yongxiu; Liu, Tiejun; You, Zili; Yao, Dezhong

    2009-01-01

    Most sleep-staging research has focused on developing and optimizing algorithms for sleep scoring, but little attention has been paid to the effect of different electroencephalogram (EEG) derivations on sleep staging. To explore the possible effects of EEG derivations, an automatic computer method was established and confirmed by agreement analysis between the computer and two independent raters, and four fronto-parietal bipolar leads were compared for sleep scoring in rats. The results demonstrated that different bipolar electrodes have significantly different sleep-staging accuracies, and that the optimal frontal electrode for sleep scoring is located at the anterior midline.

  1. Prognostic value of FOUR and GCS scores in determining mortality in patients with traumatic brain injury.

    Science.gov (United States)

    Saika, Amrit; Bansal, Sonia; Philip, Mariamma; Devi, Bhagavatula Indira; Shukla, Dhaval P

    2015-09-01

    The Glasgow Coma Scale (GCS) is considered the gold standard for the assessment of unconsciousness in patients with traumatic brain injury (TBI), against which other scales are compared. To overcome the disadvantages of the GCS, the Full Outline Of Unresponsiveness (FOUR) score was proposed. We aimed to compare the predictability of the FOUR score and the GCS for early mortality after moderate and severe TBI. This is a prospective observational study of patients with moderate and severe TBI. Both FOUR and GCS scores were determined at admission. The primary outcome was mortality at the end of 2 weeks after injury. A total of 138 (117 males) patients were included in the study. Of these, 17 (12.3%) patients died within 2 weeks of injury. The mean GCS and FOUR scores were 9.5 (range, 3-13) and 11 (range, 0-16), respectively. The total GCS and FOUR scores were significantly lower in patients who did not survive. At a cut-off score of 7 for the FOUR score, the AUC was 0.97, with a sensitivity of 97.5% and a specificity of 88.2% (p < 0.001); comparable discrimination was obtained for GCS scores. The predictive value of the FOUR score on admission of patients with TBI is no better than the GCS score.

  2. Enhanced clinical pharmacy service targeting tools: risk-predictive algorithms.

    Science.gov (United States)

    El Hajji, Feras W D; Scullin, Claire; Scott, Michael G; McElnay, James C

    2015-04-01

    This study aimed to determine the value of using a mix of clinical pharmacy data and routine hospital admission spell data in the development of predictive algorithms. Exploration of risk factors in hospitalized patients, together with the targeting strategies devised, will enable the prioritization of clinical pharmacy services to optimize patient outcomes. Predictive algorithms were developed in a number of detailed steps using a 75% sample of integrated medicines management (IMM) patients, and validated using the remaining 25%. IMM patients receive targeted clinical pharmacy input throughout their hospital stay. The algorithms were applied to the validation sample, and a predicted risk probability was generated for each patient from the coefficients. Risk thresholds for the algorithms were determined by identifying the cut-off points of risk scores at which the algorithms would have the highest discriminative performance. Clinical pharmacy staffing levels were obtained from the pharmacy department staffing database. Numbers of previous emergency admissions and admission medicines, together with age-adjusted co-morbidity and diuretic receipt, formed a 12-month post-discharge mortality and/or readmission risk algorithm. Age-adjusted co-morbidity proved to be the best index to predict mortality. Increased numbers of clinical pharmacy staff at ward level were correlated with a reduction in the risk-adjusted mortality index (RAMI). The algorithms created were valid in predicting risk of in-hospital and post-discharge mortality and risk of hospital readmission 3, 6 and 12 months post-discharge. The provision of ward-based clinical pharmacy services is a key component to reducing RAMI and enabling the full benefits of pharmacy input to patient care to be realized. © 2014 John Wiley & Sons, Ltd.
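
    A predicted risk probability "generated from the coefficients" of a fitted logistic model is computed as below. The coefficient values and feature names here are hypothetical stand-ins, not those fitted in the study.

    ```python
    import math

    def predicted_risk(coefs, intercept, features):
        """Risk probability from a fitted logistic model:
        p = 1 / (1 + exp(-(b0 + sum(bi * xi))))."""
        z = intercept + sum(coefs[k] * v for k, v in features.items())
        return 1.0 / (1.0 + math.exp(-z))

    # Hypothetical coefficients and patient values, for illustration only.
    p = predicted_risk({"prev_admissions": 0.4, "n_medicines": 0.1, "comorbidity": 0.3},
                       intercept=-3.0,
                       features={"prev_admissions": 2, "n_medicines": 8, "comorbidity": 3})
    print(f"12-month readmission risk: {p:.1%}")  # about 38%
    ```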

  3. Comparisons of American, Israeli, Italian and Mexican physicians and nurses on the total and factor scores of the Jefferson scale of attitudes toward physician-nurse collaborative relationships.

    Science.gov (United States)

    Hojat, Mohammadreza; Gonnella, Joseph S; Nasca, Thomas J; Fields, Sylvia K; Cicchetti, Americo; Lo Scalzo, Alessandra; Taroni, Francesco; Amicosante, Anna Maria Vincenza; Macinati, Manuela; Tangucci, Massimo; Liva, Carlo; Ricciardi, Gualtiero; Eidelman, Shmuel; Admi, Hanna; Geva, Hana; Mashiach, Tanya; Alroy, Gideon; Alcorta-Gonzalez, Adelina; Ibarra, David; Torres-Ruiz, Antonio

    2003-05-01

    This cross-cultural study was designed to compare the attitudes of physicians and nurses toward physician-nurse collaboration in the United States, Israel, Italy and Mexico. The participants were 2522 physicians and nurses who completed the Jefferson Scale of Attitudes Toward Physician-Nurse Collaboration (15 Likert-type items; Hojat et al., Evaluation and the Health Professions 22 (1999a) 208; Nursing Research 50 (2001) 123). They were compared on the total scores and four factors of the Jefferson Scale (shared education and teamwork, caring as opposed to curing, nurses' autonomy, physicians' dominance). Results showed inter- and intra-cultural similarities and differences among the study groups, providing support for the social role theory (Hardy and Conway, Role Theory: Perspectives for Health Professionals, Appleton-Century-Crofts, New York, 1978) and the principle of least interest (Waller and Hill, The Family: A Dynamic Interpretation, Dryden, New York, 1951) in inter-professional relationships. Implications for promoting physician-nurse education and inter-professional collaboration are discussed.

  4. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    Directory of Open Access Journals (Sweden)

    Haijian Chen

    2015-01-01

    Full Text Available In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building ontology, automatic extraction technology is crucial. Because general text mining algorithms do not perform well on online courses, we designed an automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and weights are designed to optimize the TF-IDF algorithm output values; the highest-scoring terms are selected as knowledge points. Course documents for "C programming language" were selected for the experiment in this study. The results show that the proposed approach can achieve a satisfactory accuracy rate and recall rate.
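
    A minimal sketch of the TF-IDF re-weighting step: candidate terms are scored by TF-IDF summed over documents and then multiplied by externally supplied weights, standing in for the paper's weight-optimized output. It assumes the documents have already been word-segmented into whitespace-separated tokens; the weighting scheme itself is an assumption.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    import numpy as np

    def top_terms(docs, weights=None, k=10):
        """Rank candidate knowledge points by (optionally re-weighted) TF-IDF.

        docs:    pre-segmented course documents (strings of tokens)
        weights: optional dict mapping a term to a multiplier
        """
        vec = TfidfVectorizer()
        tfidf = vec.fit_transform(docs)
        scores = np.asarray(tfidf.sum(axis=0)).ravel()  # aggregate over documents
        terms = vec.get_feature_names_out()
        if weights:
            scores = scores * np.array([weights.get(t, 1.0) for t in terms])
        return [terms[i] for i in scores.argsort()[::-1][:k]]
    ```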

  5. Total and Conceptual Vocabulary in Spanish–English Bilinguals From 22 to 30 Months: Implications for Assessment

    Science.gov (United States)

    Core, Cynthia; Hoff, Erika; Rumiche, Rosario; Señor, Melissa

    2015-01-01

    Purpose Vocabulary assessment holds promise as a way to identify young bilingual children at risk for language delay. This study compares 2 measures of vocabulary in a group of young Spanish–English bilingual children to a single-language measure used with monolingual children. Method Total vocabulary and conceptual vocabulary were used to measure mean vocabulary size and growth in 47 Spanish–English bilingually developing children from 22 to 30 months of age based on results from the MacArthur–Bates Communicative Development Inventory (CDI; Fenson et al., 1993) and the Inventario del Desarrollo de Habilidades Comunicativas (Jackson-Maldonado et al., 2003). Bilingual children’s scores of total vocabulary and conceptual vocabulary were compared with CDI scores for a control group of 56 monolingual children. Results The total vocabulary measure resulted in mean vocabulary scores and average rate of growth similar to monolingual growth, whereas conceptual vocabulary scores were significantly smaller and grew at a slower rate than total vocabulary scores. Total vocabulary identified the same proportion of bilingual children below the 25th percentile on monolingual norms as the CDI did for monolingual children. Conclusion These results support the use of total vocabulary as a means of assessing early language development in young bilingual Spanish–English speaking children. PMID:24023382

  6. Outcomes of total knee arthroplasty in relation to preoperative patient-reported and radiographic measures: data from the osteoarthritis initiative.

    Science.gov (United States)

    Kahn, Timothy L; Soheili, Aydin; Schwarzkopf, Ran

    2013-12-01

    Total knee arthroplasty (TKA) is the preferred surgical treatment for end-stage osteoarthritis. However, substantial numbers of patients still experience poor outcomes. Consequently, it is important to identify which patient characteristics are predictive of outcomes in order to guide clinical decisions. Our hypothesis is that preoperative patient-reported outcome measures and radiographic measures may help to predict TKA outcomes. Using cohort data from the Osteoarthritis Initiative, we studied 172 patients who underwent TKA. For each patient, we compiled pre- and postoperative Western Ontario and McMaster University Arthritis Index (WOMAC) scores. Radiographs were measured for knee joint angles, femorotibial angles, anatomical lateral distal femoral angles, and anatomical medial proximal tibial angles; Kellgren and Lawrence (KL) grades were assigned to each compartment of the knee. All studied measurements were compared to WOMAC outcomes. Preoperative WOMAC disability, pain, and total scores were positively associated with postoperative WOMAC total scores (P = .010, P = .010, and P = .009, respectively) and were associated with improvement in WOMAC total scores (P < .001, P < .001, and P < .001, respectively). For radiographic measurements, preoperative joint angles were positively associated with improvements in postoperative WOMAC total scores (P = .044). Combined KL grades (medial and lateral compartments) were negatively correlated with postoperative WOMAC disability and pain scores (P = .045 and P = .044) and were positively correlated with improvements in WOMAC total scores (P = .001). All preoperative WOMAC scores demonstrated positive associations with postoperative WOMAC scores, while among the preoperative radiographic measurements only combined KL grades and joint angles showed any correlation with postoperative WOMAC scores. Higher preoperative KL grades and joint angles were associated with better (lower) postoperative WOMAC scores, demonstrating an

  7. ASSESSMENT OF PERFORMANCES OF VARIOUS MACHINE LEARNING ALGORITHMS DURING AUTOMATED EVALUATION OF DESCRIPTIVE ANSWERS

    Directory of Open Access Journals (Sweden)

    C. Sunil Kumar

    2014-07-01

    Automation of descriptive answer evaluation is the need of the hour because of the huge increase in the number of students enrolling each year in educational institutions and the limited staff available to spare their time for evaluations. In this paper, we use a machine learning workbench called LightSIDE to accomplish automated evaluation and scoring of descriptive answers. We attempted to identify the best supervised machine learning algorithm for a scenario with a limited training set sample size. We evaluated the performances of Bayes, SVM, logistic regression, random forest, decision stump, and decision tree algorithms. We confirmed SVM as the best-performing algorithm based on quantitative measurements of accuracy, kappa, training speed, and prediction accuracy on the supplied test set.
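
    As a rough illustration of this kind of benchmarking (outside LightSIDE, which the paper actually used), the same comparison can be mocked up with scikit-learn; the feature matrix here is synthetic stand-in data, not real answer features.

        # Sketch: compare several supervised classifiers on a small training set,
        # reporting accuracy and Cohen's kappa as in the study. Illustrative only.
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_predict
        from sklearn.metrics import accuracy_score, cohen_kappa_score
        from sklearn.naive_bayes import GaussianNB
        from sklearn.svm import SVC
        from sklearn.linear_model import LogisticRegression
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.tree import DecisionTreeClassifier

        # Stand-in data; in practice X would hold features extracted from answers.
        X, y = make_classification(n_samples=200, n_features=50, n_informative=10,
                                   random_state=0)
        models = {
            "Bayes": GaussianNB(),
            "SVM": SVC(),
            "Logistic Regression": LogisticRegression(max_iter=1000),
            "Random forest": RandomForestClassifier(random_state=0),
            "Decision tree": DecisionTreeClassifier(random_state=0),
        }
        for name, model in models.items():
            pred = cross_val_predict(model, X, y, cv=5)
            print(f"{name}: accuracy={accuracy_score(y, pred):.3f}, "
                  f"kappa={cohen_kappa_score(y, pred):.3f}")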

  8. A statistical mechanical interpretation of algorithmic information theory: Total statistical mechanical interpretation based on physical argument

    International Nuclear Information System (INIS)

    Tadaki, Kohtaro

    2010-01-01

    The statistical mechanical interpretation of algorithmic information theory (AIT, for short) was introduced and developed by our former works [K. Tadaki, Local Proceedings of CiE 2008, pp. 425-434, 2008] and [K. Tadaki, Proceedings of LFCS'09, Springer's LNCS, vol. 5407, pp. 422-440, 2009], where we introduced the notion of thermodynamic quantities, such as partition function Z(T), free energy F(T), energy E(T), statistical mechanical entropy S(T), and specific heat C(T), into AIT. We then discovered that, in the interpretation, the temperature T equals the partial randomness of the values of all these thermodynamic quantities, where the notion of partial randomness is a stronger representation of the compression rate by means of program-size complexity. Furthermore, we showed that this situation holds for the temperature T itself, which is one of the most typical thermodynamic quantities. Namely, we showed that, for each of the thermodynamic quantities Z(T), F(T), E(T), and S(T) above, the computability of its value at temperature T gives a sufficient condition for T in (0,1) to have the property that the partial randomness of T equals T. In this paper, based on a physical argument on the same level of mathematical rigor as normal statistical mechanics in physics, we develop a total statistical mechanical interpretation of AIT which actualizes a perfect correspondence to normal statistical mechanics. We do this by identifying a microcanonical ensemble in the framework of AIT. As a result, we clarify the statistical mechanical meaning of the thermodynamic quantities of AIT.
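
    For orientation, the partition function behind these quantities can be written as below; this is a reconstruction from the cited Tadaki papers (U denotes an optimal prefix-free machine), and the exact notation may differ from the paper's.

        Z(T) \;=\; \sum_{p \,\in\, \mathrm{dom}\,U} 2^{-|p|/T},
        \qquad
        F(T) \;=\; -\,T \log_2 Z(T)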

  9. Applications of the gambling score in evaluating earthquake predictions and forecasts

    Science.gov (United States)

    Zhuang, Jiancang; Zechar, Jeremy D.; Jiang, Changsheng; Console, Rodolfo; Murru, Maura; Falcone, Giuseppe

    2010-05-01

    This study presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular scheme of forecast and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some points of his reputation. The reference model, which plays the role of the house, determines how many reputation points the forecaster can gain if he succeeds, according to a fair rule, and also takes away the reputation points bet by the forecaster if he loses. This method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. For discrete predictions, we apply this method to evaluate the performance of Shebalin's predictions made using the Reverse Tracing of Precursors (RTP) algorithm and of the predictions from the Annual Consultation Meeting on Earthquake Tendency held by the China Earthquake Administration. For the continuous case, we use it to compare the probability forecasts of seismicity in the Abruzzo region before and after the L'Aquila earthquake based on the ETAS model and the PPE model.
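
    The reputation bookkeeping can be sketched in a few lines. The "fair rule" below, paying r(1 - p0)/p0 on a successful bet of r points against a reference-model probability p0, is an illustrative reconstruction; function and variable names are hypothetical.

        def settle_bet(reputation, bet, p0, event_occurred):
            """Update a forecaster's reputation after one prediction."""
            if event_occurred:
                return reputation + bet * (1.0 - p0) / p0  # house pays fair odds
            return reputation - bet                        # house keeps the stake

        reputation = 100.0
        # Risk 10 points on an event the reference model considers unlikely.
        reputation = settle_bet(reputation, bet=10.0, p0=0.05, event_occurred=True)
        print(reputation)  # 290.0: rare-event successes are rewarded heavily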

  10. Assessing spelling in kindergarten: further comparison of scoring metrics and their relation to reading skills.

    Science.gov (United States)

    Clemens, Nathan H; Oslund, Eric L; Simmons, Leslie E; Simmons, Deborah

    2014-02-01

    Early reading and spelling development share foundational skills, yet spelling assessment is underutilized in evaluating early reading. This study extended research comparing the degree to which methods for scoring spelling skills at the end of kindergarten were associated with reading skills measured at the same time as well as at the end of first grade. Five strategies for scoring spelling responses were compared: totaling the number of words spelled correctly, totaling the number of correct letter sounds, totaling the number of correct letter sequences, using a rubric for scoring invented spellings, and calculating the Spelling Sensitivity Score (Masterson & Apel, 2010b). Students (N=287) who were identified at kindergarten entry as at risk for reading difficulty and who had received supplemental reading intervention were administered a standardized spelling assessment in the spring of kindergarten, and measures of phonological awareness, decoding, word recognition, and reading fluency were administered concurrently and at the end of first grade. The five spelling scoring metrics were similar in their strong relations with factors summarizing reading subskills (phonological awareness, decoding, and word reading) on a concurrent basis. Furthermore, when predicting first-grade reading skills based on spring-of-kindergarten performance, spelling scores from all five metrics explained unique variance over the autoregressive effects of kindergarten word identification. The practical advantages of using a brief spelling assessment for early reading evaluation and the relative tradeoffs of each scoring metric are discussed. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
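
    Of the five metrics, correct letter sequences is the most mechanical to compute; a minimal sketch follows. Matching sequences by position, as done here, is a simplification of hand-scoring conventions, and the function name is ours.

        def correct_letter_sequences(response, target):
            r = f"^{response}$"   # boundary marks let first/last letters score
            t = f"^{target}$"
            pairs_r = {(i, r[i], r[i + 1]) for i in range(len(r) - 1)}
            pairs_t = {(i, t[i], t[i + 1]) for i in range(len(t) - 1)}
            return len(pairs_r & pairs_t)

        print(correct_letter_sequences("kat", "cat"))  # 2 of 4 sequences correct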

  11. An efficient and robust algorithm for parallel groupwise registration of bone surfaces

    NARCIS (Netherlands)

    van de Giessen, Martijn; Vos, Frans M.; Grimbergen, Cornelis A.; van Vliet, Lucas J.; Streekstra, Geert J.

    2012-01-01

    In this paper a novel groupwise registration algorithm is proposed for the unbiased registration of a large number of densely sampled point clouds. The method fits an evolving mean shape to each of the example point clouds thereby minimizing the total deformation. The registration algorithm

  12. A robust firearm identification algorithm of forensic ballistics specimens

    Science.gov (United States)

    Chuan, Z. L.; Jemain, A. A.; Liong, C.-Y.; Ghani, N. A. M.; Tan, L. K.

    2017-09-01

    The existing firearm identification algorithms suffer from several inherent difficulties, including the need for physical interpretation and time-consuming processing. Therefore, the aim of this study is to propose a robust algorithm for firearm identification based on extracting a set of informative features from the segmented region of interest (ROI) in simulated noisy center-firing pin impression images. The proposed algorithm comprises a Laplacian sharpening filter, clustering-based threshold selection, an unweighted least squares estimator, and segmentation of a square ROI from the noisy images. A total of 250 simulated noisy images collected from five different pistols of the same make, model, and caliber are used to evaluate the robustness of the proposed algorithm. This study found that the proposed algorithm is able to perform the identification task on noisy images with noise levels as high as 70% while maintaining a firearm identification accuracy rate of over 90%.

  13. Parallel conjugate gradient algorithms for manipulator dynamic simulation

    Science.gov (United States)

    Fijany, Amir; Scheld, Robert E.

    1989-01-01

    Parallel conjugate gradient algorithms for the computation of multibody dynamics are developed for the specialized case of a robot manipulator. For an n-dimensional positive-definite linear system, the Classical Conjugate Gradient (CCG) algorithm is guaranteed to converge in n iterations, each with a computation cost of O(n); this leads to a total computational cost of O(n²) on a serial processor. A conjugate gradient algorithm is presented that provides greater efficiency by using a preconditioner, which reduces the number of iterations required, and by exploiting parallelism, which reduces the cost of each iteration. Two Preconditioned Conjugate Gradient (PCG) algorithms are proposed which use a diagonal and a tridiagonal matrix, respectively, composed of the diagonal and tridiagonal elements of the mass matrix, as preconditioners. Parallel algorithms are developed to compute the preconditioners and their inverses in O(log₂ n) steps using n processors. A parallel algorithm is also presented which, on the same architecture, achieves a computational time of O(log₂ n) for each iteration. Simulation results for a seven-degree-of-freedom manipulator are presented. Variants of the proposed algorithms are also developed which can be efficiently implemented on the Robot Mathematics Processor (RMP).
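
    The serial core of the method, conjugate gradient with a diagonal (Jacobi) preconditioner, can be sketched as below; the paper's O(log₂ n) parallel scheduling and the tridiagonal variant are not reproduced here.

        import numpy as np

        def pcg_jacobi(A, b, tol=1e-10, max_iter=None):
            n = len(b)
            max_iter = max_iter or n
            M_inv = 1.0 / np.diag(A)      # inverse of the diagonal preconditioner
            x = np.zeros(n)
            r = b - A @ x
            z = M_inv * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = M_inv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        # Small symmetric positive-definite stand-in for a mass matrix.
        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        print(pcg_jacobi(A, b))  # close to the exact solution [1/11, 7/11]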

  14. Exploring a Source of Uneven Score Equity across the Test Score Range

    Science.gov (United States)

    Huggins-Manley, Anne Corinne; Qiu, Yuxi; Penfield, Randall D.

    2018-01-01

    Score equity assessment (SEA) refers to an examination of population invariance of equating across two or more subpopulations of test examinees. Previous SEA studies have shown that score equity may be present for examinees scoring at particular test score ranges but absent for examinees scoring at other score ranges. No studies to date have…

  15. SU-E-T-605: Performance Evaluation of MLC Leaf-Sequencing Algorithms in Head-And-Neck IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Jing, J; Lin, H [Hefei University of Technology, Hefei, Anhui (China); Chow, J [Princess Margaret Hospital, Toronto, ON (Canada)

    2015-06-15

    Purpose: To investigate the efficiency of three multileaf collimator (MLC) leaf-sequencing algorithms proposed by Galvin et al, Chen et al, and Siochi et al using external beam treatment plans for head-and-neck intensity modulated radiation therapy (IMRT). Methods: IMRT plans for head-and-neck were created using the CORVUS treatment planning system. The plans were optimized and the fluence maps for all photon beams determined. The three MLC leaf-sequencing algorithms were used to calculate the final photon segmental fields and their monitor units in delivery. For comparison purposes, the maximum intensity of the fluence map was kept constant across plans. The number of beam segments and the total number of monitor units were calculated for the three algorithms. Results: We found that the algorithm of Galvin et al had the largest total number of monitor units, about 70% larger than the other two algorithms. Moreover, both the algorithms of Galvin et al and Siochi et al produced relatively fewer beam segments than that of Chen et al. Although the numbers of beam segments and total monitor units calculated by the different algorithms varied with the head-and-neck plans, the algorithms of Galvin et al and Siochi et al performed well with a lower number of beam segments, though the algorithm of Galvin et al had a larger total number of monitor units than that of Siochi et al. Conclusion: Although the performance of a leaf-sequencing algorithm varies with different IMRT plans having different fluence maps, an evaluation is possible based on the calculated numbers of beam segments and monitor units. In this study, the algorithm by Siochi et al was found to be more efficient for head-and-neck IMRT. The Project Sponsored by the Fundamental Research Funds for the Central Universities (J2014HGXJ0094) and the Scientific Research Foundation for the

  16. Correlated physical and mental health summary scores for the SF-36 and SF-12 Health Survey, V.1

    Directory of Open Access Journals (Sweden)

    Cunningham William E

    2007-09-01

    Background: The SF-36 and SF-12 summary scores were derived using an uncorrelated (orthogonal) factor solution. We estimate SF-36 and SF-12 summary scores using a correlated (oblique) physical and mental health factor model. Methods: We administered the SF-36 to 7,093 patients who received medical care from an independent association of 48 physician groups in the western United States. Correlated physical health (PCSc) and mental health (MCSc) scores were constructed by multiplying each SF-36 scale z-score by its respective scoring coefficient from the obliquely rotated two-factor solution. PCSc-12 and MCSc-12 scores were estimated using an approach similar to the one used to derive the original SF-12 summary scores. Results: The estimated correlation between SF-36 PCSc and MCSc scores was 0.62. There were far fewer negative factor scoring coefficients for the oblique factor solution compared to the factor scoring coefficients produced by the standard orthogonal factor solution. Similar results were found for the PCSc-12 and MCSc-12 summary scores. Conclusion: Correlated physical and mental health summary scores for the SF-36 and SF-12 derived from an obliquely rotated factor solution should be used along with the uncorrelated summary scores. The new scoring algorithm can reduce the inconsistent results between the SF-36 scale scores and the physical and mental health summary scores reported in some prior studies. (Subscript C denotes correlated; UC, uncorrelated.)
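
    The scoring step itself is a weighted sum of scale z-scores followed by the usual T-score rescaling; the coefficients below are made-up placeholders, not the published oblique solution.

        import numpy as np

        scales = ["PF", "RP", "BP", "GH", "VT", "SF", "RE", "MH"]
        z_scores = np.array([0.5, 0.2, -0.1, 0.3, 0.0, 0.4, 0.1, -0.2])  # one patient
        pcs_coef = np.array([0.30, 0.25, 0.22, 0.18, 0.10, 0.08, 0.03, 0.01])  # hypothetical

        pcs_c = float(z_scores @ pcs_coef)   # correlated physical summary (z units)
        print(50.0 + 10.0 * pcs_c)           # rescaled to the familiar T-score metric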

  17. A comprehensive performance evaluation on the prediction results of existing cooperative transcription factors identification algorithms.

    Science.gov (United States)

    Lai, Fu-Jou; Chang, Hong-Tsun; Huang, Yueh-Min; Wu, Wei-Sheng

    2014-01-01

    Eukaryotic transcriptional regulation is known to be highly connected through the networks of cooperative transcription factors (TFs). Measuring the cooperativity of TFs is helpful for understanding the biological relevance of these TFs in regulating genes. The recent advances in computational techniques led to various predictions of cooperative TF pairs in yeast. As each algorithm integrated different data resources and was developed on a different rationale, each possessed its own merits and claimed to outperform the others. However, such claims are prone to subjectivity because each algorithm was compared with only a few other algorithms, using only a small set of performance indices. This motivated us to propose a series of indices to objectively evaluate the prediction performance of existing algorithms, and based on the proposed performance indices, we conducted a comprehensive performance evaluation. We collected 14 sets of predicted cooperative TF pairs (PCTFPs) in yeast from 14 existing algorithms in the literature. Using the eight performance indices we adopted/proposed, the cooperativity of each PCTFP was measured, and for each performance index a ranking score according to the mean cooperativity of the set was given to each set of PCTFPs under evaluation. It was seen that the ranking scores of a set of PCTFPs vary with different performance indices, implying that an algorithm used in predicting cooperative TF pairs may be strong by some measures but weak by others. We finally made a comprehensive ranking for these 14 sets. The results showed that Wang J's study obtained the best performance evaluation on the prediction of cooperative TF pairs in yeast. In this study, we adopted/proposed eight performance indices to make a comprehensive performance evaluation on the prediction results of 14 existing cooperative TF identification algorithms. Most importantly, these proposed indices can be easily applied to measure the performance of new

  18. An algorithm to assess methodological quality of nutrition and mortality cross-sectional surveys: development and application to surveys conducted in Darfur, Sudan.

    Science.gov (United States)

    Prudhon, Claudine; de Radiguès, Xavier; Dale, Nancy; Checchi, Francesco

    2011-11-09

    Nutrition and mortality surveys are the main tools whereby evidence on the health status of populations affected by disasters and armed conflict is quantified and monitored over time. Several reviews have consistently revealed a lack of rigor in many surveys. We describe an algorithm for analyzing nutrition and mortality survey reports to identify a comprehensive range of errors that may result in sampling, response, or measurement biases, and to score quality. We apply the algorithm to surveys conducted in Darfur, Sudan. We developed an algorithm based on internationally agreed-upon methods and best practices. Penalties are attributed for a list of errors, and an overall score is built from the summation of penalties accrued by the survey as a whole. To test the algorithm's reproducibility, it was independently applied by three raters to 30 randomly selected survey reports. The algorithm was further applied to more than 100 surveys conducted in Darfur, Sudan. The intraclass correlation coefficient was 0.79 for mortality surveys and 0.78 for nutrition surveys. The overall median quality scores and ranges of about 100 surveys conducted in Darfur were 0.60 (0.12-0.93) and 0.675 (0.23-0.86) for mortality and nutrition surveys, respectively. They varied between the organizations conducting the surveys, with no major trend over time. Our study suggests that it is possible to systematically assess survey quality, and it reveals considerable problems with the quality of nutrition and particularly mortality surveys conducted during the Darfur crisis.

  19. Performance of a novel clinical score, the Pediatric Asthma Severity Score (PASS), in the evaluation of acute asthma.

    Science.gov (United States)

    Gorelick, Marc H; Stevens, Molly W; Schultz, Theresa R; Scribano, Philip V

    2004-01-01

    To evaluate the reliability, validity, and responsiveness of a new clinical asthma score, the Pediatric Asthma Severity Score (PASS), in children aged 1 through 18 years in an acute clinical setting. This was a prospective cohort study of children treated for acute asthma at two urban pediatric emergency departments (EDs). A total of 852 patients were enrolled at one site and 369 at the second site. Clinical findings were assessed at the start of the ED visit, after one hour of treatment, and at the time of disposition. Peak expiratory flow rate (PEFR) (for patients aged 6 years and older) and pulse oximetry were also measured. Composite scores including three, four, or five clinical findings were evaluated, and the three-item score (wheezing, prolonged expiration, and work of breathing) was selected as the PASS. Interobserver reliability for the PASS was good to excellent (kappa = 0.72 to 0.83). There was a significant correlation between PASS and PEFR (r = 0.27 to 0.37) and pulse oximetry (r = 0.29 to 0.41) at various time points. The PASS was able to discriminate between those patients who did and did not require hospitalization, with area under the receiver operating characteristic curve of 0.82. Finally, the PASS was shown to be responsive, with a 48% relative increase in score from start to end of treatment and an overall effect size of 0.62, indicating a moderate to large effect. This clinical score, the PASS, based on three clinical findings, is a reliable and valid measure of asthma severity in children and shows both discriminative and responsive properties. The PASS may be a useful tool to assess acute asthma severity for clinical and research purposes.

  1. RESOLVE: A new algorithm for aperture synthesis imaging of extended emission in radio astronomy

    Science.gov (United States)

    Junklewitz, H.; Bell, M. R.; Selig, M.; Enßlin, T. A.

    2016-02-01

    We present resolve, a new algorithm for radio aperture synthesis imaging of extended and diffuse emission in total intensity. The algorithm is derived using Bayesian statistical inference techniques, estimating the surface brightness in the sky assuming a priori log-normal statistics. resolve estimates the measured sky brightness in total intensity, and the spatial correlation structure in the sky, which is used to guide the algorithm to an optimal reconstruction of extended and diffuse sources. During this process, the algorithm succeeds in deconvolving the effects of the radio interferometric point spread function. Additionally, resolve provides a map with an uncertainty estimate of the reconstructed surface brightness. Furthermore, with resolve we introduce a new, optimal visibility weighting scheme that can be viewed as an extension to robust weighting. In tests using simulated observations, the algorithm shows improved performance against two standard imaging approaches for extended sources, Multiscale-CLEAN and the Maximum Entropy Method.

  2. Clinical usefulness of multidetector-row CT to evaluate coronary artery calcium score in type 2 diabetes

    International Nuclear Information System (INIS)

    Nishioka, Makiko; Sakuma, Toru; Sano, Hironari; Utsunomiya, Kazunori; Agata, Toshihiko; Shimizu, Keisuke; Tajima, Naoko

    2004-01-01

    According to recent studies, multidetector-row CT (MDCT) with a retrospective electrocardiogram (ECG)-gating reconstruction algorithm shows a high correlation with the coronary artery calcium score determined using electron-beam CT. Diabetes leads to many macrovascular complications, including coronary artery disease. The aim of this study was to evaluate risk factors for cardiac macroangiopathy in type 2 diabetes using MDCT. An observational cross-sectional study was performed in 90 patients with diabetes mellitus. Coronary calcium data were acquired by MDCT (SOMATOM Volume Zoom, Siemens AG, Medical Solutions, Germany). Physical examinations, laboratory data, glycemic control, and control of other risk factors were analyzed. The coronary artery calcium score increased with age. Multivariate analysis revealed that the coronary calcium score was closely correlated with electrocardiogram findings and control of hypertension. The coronary artery calcium score as determined by MDCT can be used as a screening radiological examination for cardiac macroangiopathy in diabetes patients with electrocardiogram abnormalities and hypertension. (author)

  3. A new comparison of hyperspectral anomaly detection algorithms for real-time applications

    Science.gov (United States)

    Díaz, María.; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    Due to the high spectral resolution that remotely sensed hyperspectral images provide, there has been an increasing interest in anomaly detection. The aim of anomaly detection is to identify pixels whose spectral signature differs significantly from the background spectra. Basically, anomaly detectors mark pixels with a certain score, considering as anomalies those whose scores are higher than a threshold. Receiver Operating Characteristic (ROC) curves have been widely used as an assessment measure to compare the performance of different algorithms. ROC curves are graphical plots which illustrate the trade-off between false positive and true positive rates. However, they are limited for making deep comparisons because they discard relevant factors required in real-time applications such as run times, costs of misclassification, and the ability to mark anomalies with high scores. This last factor is fundamental in anomaly detection in order to distinguish anomalies easily from the background without any posterior processing. An extensive set of simulations has been made using different anomaly detection algorithms, comparing their performances and efficiencies using several extra metrics to complement the ROC curve analysis. Results support our proposal and demonstrate that ROC curves do not by themselves provide a good visualization of detection performance. Moreover, a figure of merit is proposed in this paper which encompasses in a single global metric all the measures yielded by the proposed additional metrics. This figure, named Detection Efficiency (DE), takes into account several crucial types of performance assessment that ROC curves do not consider. Results demonstrate that algorithms with the best detection performances according to ROC curves do not have the highest DE values. Consequently, the recommendation to use extra measures to properly evaluate performance has been supported and justified by
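
    The mechanics are easy to reproduce: rank pixels by score, sweep the threshold to trace the ROC, and add a score-separation statistic of the kind the authors argue for. The margin statistic below is an illustrative stand-in for their Detection Efficiency figure, whose exact definition is not given in the abstract.

        import numpy as np

        def roc_points(scores, labels):
            order = np.argsort(-scores)          # descending score thresholds
            labels = labels[order]
            tpr = np.cumsum(labels) / labels.sum()
            fpr = np.cumsum(1 - labels) / (1 - labels).sum()
            return fpr, tpr

        scores = np.array([0.9, 0.8, 0.35, 0.3, 0.1])
        labels = np.array([1, 1, 0, 1, 0])       # 1 = anomalous pixel
        fpr, tpr = roc_points(scores, labels)
        print(np.trapz(tpr, fpr))                # area under the ROC curve
        print(scores[labels == 1].mean() - scores[labels == 0].mean())  # score margin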

  4. Evaluation of Total Cardiovascular Risk in Patients with Hypertension and Impaired Glucose Tolerance

    Directory of Open Access Journals (Sweden)

    I.V. Cherniavska

    2016-11-01

    Aim: Timely identification of patients at high risk of cardiovascular disease, for whom early intervention to correct cardiovascular risk is most effective. Materials and methods: Seventy patients aged 30–55 years with stage 2 hypertension, impaired glucose tolerance (IGT), and high cardiovascular risk according to Framingham criteria were examined. Cardiovascular risk was compared using the SCORE and PROCAM results. Results: The percentage of males with high cardiovascular risk was 52.3% higher than that of females by SCORE and 2.3% higher by PROCAM. Unlike females, males did not present any significant discrepancy between the two risk scores. The obtained results showed that total cardiovascular risk in females was twofold higher by PROCAM than by SCORE. Conclusions: The total cardiovascular risk level in patients with stage 2 hypertension and IGT is influenced by age, systolic blood pressure level, smoking, lipid metabolism disorder, and carbohydrate metabolism disorder. When evaluating total cardiovascular risk, we should not be limited only to the factors included in SCORE. It is reasonable to evaluate risk factors by PROCAM as well, especially for females.

  5. A clinical algorithm for triaging patients with significant lymphadenopathy in primary health care settings in Sudan

    Directory of Open Access Journals (Sweden)

    Eltahir A.G. Khalil

    2013-06-01

    Background: Tuberculosis is a major health problem in developing countries. The distinction between tuberculous lymphadenitis, non-specific lymphadenitis, and malignant lymph node enlargement has to be made at primary health care levels using easy, simple, and cheap methods. Objective: To develop a reliable clinical algorithm for primary care settings to triage cases of non-specific, tuberculous, and malignant lymphadenopathies. Methods: Odds ratios (OR) for the chosen predictor variables were calculated using logistic regression. The numerical score values of the predictor variables were weighted according to their respective ORs. The performance of the score was evaluated by the receiver operating characteristic (ROC) curve. Results: Four predictor variables, Mantoux reading, erythrocyte sedimentation rate (ESR), nocturnal fever, and discharging sinuses, correlated significantly with TB diagnosis and were included in the reduced model to establish score A. For score B, the reduced model included Mantoux reading, ESR, lymph-node size, and lymph-node number as predictor variables for malignant lymph nodes. Score A ranged from 0 to 12, and a cut-off point of 6 gave a best sensitivity and specificity of 91% and 90%, respectively, whilst score B ranged from -3 to 8, and a cut-off point of 3 gave a best sensitivity and specificity of 83% and 76%, respectively. The calculated areas under the ROC curve were 0.964 (95% CI, 0.949-0.980) and 0.856 (95% CI, 0.787-0.925) for scores A and B, respectively, indicating good performance. Conclusion: The developed algorithm can efficiently triage cases with tuberculous and malignant lymphadenopathies for treatment or referral to specialised centres for further work-up.
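
    Operationally, such a score is a weighted additive rule with a cut-off. The weights and thresholds below are invented for illustration (the real score A weights were derived from the logistic-regression ORs); only the 0-12 range and the cut-off of 6 come from the abstract.

        def score_a(mantoux_mm, esr, nocturnal_fever, discharging_sinus):
            score = 0
            score += 4 if mantoux_mm >= 10 else 0   # hypothetical weight/threshold
            score += 3 if esr >= 50 else 0          # hypothetical weight/threshold
            score += 3 if nocturnal_fever else 0    # hypothetical weight
            score += 2 if discharging_sinus else 0  # hypothetical weight
            return score                            # 0..12, as in the paper

        s = score_a(mantoux_mm=14, esr=60, nocturnal_fever=True, discharging_sinus=False)
        print("probable TB" if s >= 6 else "manage as non-specific")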

  6. An improved filtering algorithm for big read datasets and its application to single-cell assembly.

    Science.gov (United States)

    Wedemeyer, Axel; Kliemann, Lasse; Srivastav, Anand; Schielke, Christian; Reusch, Thorsten B; Rosenstiel, Philip

    2017-07-03

    For single-cell or metagenomic sequencing projects, it is necessary to sequence with a very high mean coverage in order to make sure that all parts of the sample DNA get covered by the reads produced. This leads to huge datasets with much redundant data. Filtering this data prior to assembly is advisable. Brown et al. (2012) presented the algorithm Diginorm for this purpose, which filters reads based on the abundance of their k-mers. We present Bignorm, a faster and quality-conscious read filtering algorithm. An important new algorithmic feature is the use of Phred quality scores together with a detailed analysis of the k-mer counts to decide which reads to keep. We evaluate and recommend parameters for our new read filtering algorithm. Guided by these parameters, we remove a median of 97.15% of the reads while keeping the mean Phred score of the filtered dataset high. Using the SPAdes assembler, we produce assemblies of high quality from these filtered datasets in a fraction of the time needed for an assembly from the datasets filtered with Diginorm. We conclude that read filtering is a practical and efficient method for reducing read data and for speeding up the assembly process. This applies not only to single-cell assembly, as shown in this paper, but also to other projects with high-mean-coverage datasets such as metagenomic sequencing projects. Our Bignorm algorithm allows assemblies of competitive quality in comparison to Diginorm, while being much faster. Bignorm is available for download at https://git.informatik.uni-kiel.de/axw/Bignorm .
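
    A toy version of abundance-based read filtering in this spirit is shown below; real implementations use count-min sketches, and Bignorm additionally weighs quality scores, which this sketch omits.

        from collections import defaultdict

        def filter_reads(reads, k=20, cutoff=20):
            counts = defaultdict(int)
            kept = []
            for read in reads:
                kmers = [read[i:i + k] for i in range(len(read) - k + 1)]
                if not kmers:
                    continue
                # Median k-mer count seen so far estimates the read's coverage.
                median = sorted(counts[km] for km in kmers)[len(kmers) // 2]
                if median < cutoff:              # read still adds new coverage
                    kept.append(read)
                    for km in kmers:
                        counts[km] += 1
            return kept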

  7. Improving Citation Network Scoring by Incorporating Author and Program Committee Reputation

    Directory of Open Access Journals (Sweden)

    Dineshi Peiris

    2016-06-01

    Publication venues play an important role in the scholarly communication process. The number of publication venues has been increasing yearly, making it difficult for researchers to determine the most suitable venue for their publication. Most existing methods use citation count as the metric to measure the reputation of publication venues. However, this does not take into account the quality of citations. Therefore, it is vital to have a publication venue quality estimation mechanism. The ultimate goal of this research project is to develop a novel approach for ranking publication venues by considering publication history. The main aim of this work is to propose a mechanism to identify the key Computer Science journals and conferences from various fields of research. Our approach is based entirely on the citation network formed by publications. A modified version of the PageRank algorithm is used to compute a ranking score for each publication. In our publication ranking method, many aspects contribute to the importance of a publication, including the number of citations, the rating of the citing publications, a time metric, and the authors' reputation. Scores for known publication venues are formulated from the scores of their publications. Ranking of new publication venues is handled through the scores of Program Committee members, which derive from their ranking scores as authors. Experimental results show that our publication ranking method reduces the bias against more recent publications, while also providing a more accurate way to determine publication quality.
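
    At its core sits PageRank over the publication citation graph; a bare power-iteration sketch follows. The paper's modifications (citation quality, the time metric, author and Program Committee reputation) are not reproduced here.

        import numpy as np

        def pagerank(adj, d=0.85, iters=100):
            """adj[i][j] = 1 if publication i cites publication j."""
            adj = np.asarray(adj, dtype=float)
            n = adj.shape[0]
            out = adj.sum(axis=1, keepdims=True)
            out[out == 0] = 1.0            # crude handling of dangling nodes
            M = adj / out                  # row-stochastic transition matrix
            r = np.full(n, 1.0 / n)
            for _ in range(iters):
                r = (1 - d) / n + d * (M.T @ r)
            return r

        cites = [[0, 1, 1],
                 [0, 0, 1],
                 [0, 0, 0]]
        print(pagerank(cites))   # publication 2 accumulates the most score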

  8. Do we need a gender-specific total knee replacement?

    DEFF Research Database (Denmark)

    Thomsen, M G; Husted, H; Bencke, J

    2012-01-01

    The purpose of this study was to investigate whether a gender-specific high-flexion posterior-stabilised (PS) total knee replacement (TKR) would offer advantages over a high-flex PS TKR regarding range of movement (ROM), 'feel' of the knee, pain and satisfaction, as well as during activity. A total …, satisfaction and daily 'feel' of each knee. Patients underwent gait analysis pre-operatively and at one year, which yielded kinematic, kinetic and temporospatial parameters indicative of knee function during gait. At final follow-up we found no statistically significant differences in ROM (p = 0.82). The median pain score was 0 (0 to 8) in both groups (p = 0.95). The median satisfaction score was 9 (4 to 10) in the high-flex group and 8 (0 to 10) in the gender-specific group (p = 0.98). The median 'feel' score was 9 (3 to 10) in the high-flex group and 8 (0 to 10) in the gender-specific group (p = 0…

  9. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics, which is dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming

  10. Evaluation of algorithms used to order markers on genetic maps.

    Science.gov (United States)

    Mollinari, M; Margarido, G R A; Vencovsky, R; Garcia, A A F

    2009-12-01

    When building genetic maps, it is necessary to choose from several marker ordering algorithms and criteria, and the choice is not always simple. In this study, we evaluate the efficiency of the algorithms try (TRY), seriation (SER), rapid chain delineation (RCD), recombination counting and ordering (RECORD), and unidirectional growth (UG), as well as the criteria PARF (product of adjacent recombination fractions), SARF (sum of adjacent recombination fractions), SALOD (sum of adjacent LOD scores), and LHMC (likelihood through hidden Markov chains), used with the RIPPLE algorithm for error verification, in the construction of genetic linkage maps. A linkage map of a hypothetical diploid and monoecious plant species was simulated containing one linkage group and 21 markers with a fixed distance of 3 cM between them. In all, 700 F₂ populations were randomly simulated with 100 and 400 individuals, with different combinations of dominant and co-dominant markers, as well as 10 and 20% missing data. The simulations showed that, in the presence of co-dominant markers only, any combination of algorithm and criterion may be used, even for a reduced population size. In the case of a smaller proportion of dominant markers, any of the algorithms and criteria (except SALOD) investigated may be used. In the presence of high proportions of dominant markers and smaller samples (around 100), the probability of repulsion linkage between them increases and, in this case, use of the algorithms TRY and SER associated with RIPPLE and the criterion LHMC would provide better results.
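
    As an example of the criteria being compared, SARF for a candidate marker order is just the sum of recombination fractions between adjacent markers, with smaller values preferred among candidate orders; the sketch below is illustrative.

        def sarf(order, rf):
            """order: marker ids in sequence; rf[(a, b)]: recombination fraction."""
            return sum(rf[tuple(sorted((a, b)))] for a, b in zip(order, order[1:]))

        rf = {(0, 1): 0.03, (1, 2): 0.03, (0, 2): 0.06}
        print(sarf([0, 1, 2], rf))  # 0.06, preferred over [0, 2, 1] at 0.09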

  11. EVALUATION OF WEB SEARCHING METHOD USING A NOVEL WPRR ALGORITHM FOR TWO DIFFERENT CASE STUDIES

    Directory of Open Access Journals (Sweden)

    V. Lakshmi Praba

    2012-04-01

    The World-Wide Web provides every internet citizen with access to an abundance of information, but it becomes increasingly difficult to identify the relevant pieces of information. Research in web mining tries to address this problem by applying techniques from data mining and machine learning to web data and documents. Web content mining and web structure mining have important roles in identifying the relevant web page. Relevancy of a web page denotes how well a retrieved web page or set of web pages meets the information need of the user. PageRank, Weighted PageRank, and Hypertext Induced Topic Selection (HITS) are existing algorithms that consider only web structure mining. The Vector Space Model (VSM), Cover Density Ranking (CDR), Okapi similarity measurement (Okapi), and the Three-Level Scoring method (TLS) are existing relevancy scoring methods that consider only web content mining. In this paper, we propose a new algorithm, Weighted Page with Relevant Rank (WPRR), which blends web content mining and web structure mining and demonstrates the relevancy of a page with respect to a given query for two different case scenarios. It is shown that WPRR's performance is better than that of the existing algorithms.

  12. Poor WOMAC scores in contralateral knee negatively impact TKA outcomes: data from the osteoarthritis initiative.

    Science.gov (United States)

    Kahn, Timothy L; Soheili, Aydin C; Schwarzkopf, Ran

    2014-08-01

    While total knee arthroplasty (TKA) has been shown to have excellent outcomes, a significant proportion of patients experience relatively poor post-operative function. In this study, we test the hypothesis that the level of osteoarthritic symptoms in the contralateral knee at the time of TKA is associated with poorer post-operative outcomes in the operated knee. Using longitudinal cohort data from the Osteoarthritis Initiative (OAI), we included 171 patients who received a unilateral TKA. We compared pre-operative Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) scores in the contralateral knee to post-operative WOMAC scores in the index knee. Pre-operative contralateral knee WOMAC scores were associated with post-operative index knee WOMAC Total scores, indicating that the health of the pre-operative contralateral knee is a significant factor in TKA outcomes. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. [Prevention and treatment of perioperative period complication of total ankle replacement].

    Science.gov (United States)

    Liao, Xiang; Gao, Zhizeng; Huang, Shanhu; Yang, Shuhua

    2008-01-01

    To explore the causes of perioperative complications of the Scandinavian total ankle replacement (STAR) and to summarize experience in their treatment and prevention. From March 1999 to November 2006, 35 patients were given total ankle replacement (TAR) with the STAR system. There were 19 males and 16 females with an average age of 50.5 years (27 to 68 years), including 12 cases of posttraumatic arthritis, 8 cases of osteoarthritis, and 15 cases of rheumatoid arthritis. All patients had pain of the ankle joint, swelling, and limitation of joint motion. The disease course was 9-64 months. The curative effect was estimated by the Kofoed total ankle scoring system. The mean preoperative ankle score was 29 (6-48); the mean pain score was 18.3 (0-35); the mean function score was 11.7 (6-18); and the mean activity score was 9.2 (3-12). The types of all complications were recorded, and their causes, prevention, and treatment were analyzed. Thirty-three patients achieved healing by first intention; 2 achieved delayed union because of infection. Twenty-eight patients were followed up for 3-80 months (mean 43.5 months). Medial malleolus fracture occurred in 2 cases, unstable ankle joint introversion in 2 cases, and limitation of ankle dorsiflexion in 1 case; 1 case had hypoesthesia at the intermediate dorsal skin of the foot and the 3rd-5th metatarsal skin without obvious dysfunction; all were treated with symptomatic medication. The postoperative mean ankle score was 85.5 (58-95); the mean pain score was 48.3 (35-50); the mean function score was 20.7 (18-30); the mean activity score was 17.2 (16-20). There were statistically significant differences when compared with the preoperative scores (P<0.01). The clinical results were excellent in 16 patients, good in 9 patients, and fair in 3 patients. The X-ray films showed no loosening or subsidence of the prosthesis. Although STAR can retain the functions of the operated joint, it has its special complications. It is important to obey operation principle with

  14. Where to Sit? Type of Sitting Matters for the Framingham Cardiovascular Risk Score

    Directory of Open Access Journals (Sweden)

    Katja Borodulin

    2016-08-01

    Background: Current evidence on associations of type-specific sedentary behavior with cardiovascular disease (CVD) is limited mainly to screen-time sedentary behavior (SB). We aimed to study the associations of type-specific and total time spent sitting with the Framingham 10-year cardiovascular disease risk score (Framingham score) in Finnish adults. Methods: Data comprise the National FINRISK 2007 and 2012 health examination surveys with 10,185 participants aged 25-74 years, apparently free of CVD. Participants reported average daily time spent sitting in different locations: work-related sitting, at home in front of television (TV), at home in front of a computer, in a vehicle, and elsewhere. Total SB time was calculated from these context-specific self-reports. Accelerometer-based sedentary time was assessed in 988 FINRISK 2012 participants. The Framingham score was calculated using information on blood pressure and its medication, cholesterol levels, age, diabetes status, and smoking. Analyses were adjusted for age, study year, education, employment status, leisure time physical activity, and body mass index. Results: Out of several type-specific sitting behaviors, only TV sitting showed systematic associations with the Framingham score in both genders. The lowest Framingham risk was found for TV sitting from 6 minutes to less than 1 hour daily. Of other types of sitting, computer sitting was inversely associated with the Framingham risk in men only. Total self-reported sitting time did not show significant associations with the Framingham score, but higher objectively assessed sedentary time was associated with higher Framingham risk in men. Conclusions: TV sitting showed the most systematic associations with CVD risk score. This suggests that of all types of SB, reducing TV sitting should be targeted for reducing CVD risk.

  15. Protein alignment algorithms with an efficient backtracking routine on multiple GPUs

    Directory of Open Access Journals (Sweden)

    Kierzynka Michal

    2011-05-01

    Background: Pairwise sequence alignment methods are widely used in biological research. The increasing number of sequences is perceived as one of the upcoming challenges for sequence alignment methods in the near future. To overcome this challenge several GPU (Graphics Processing Unit) computing approaches have been proposed lately. These solutions show the great potential of a GPU platform but in most cases address the problem of sequence database scanning and compute only the alignment score, whereas the alignment itself is omitted. Thus, the need arose to implement the global and semiglobal Needleman-Wunsch and Smith-Waterman algorithms with a backtracking procedure, which is needed to construct the alignment. Results: In this paper we present a solution that performs the alignment of every given sequence pair, which is a required step for progressive multiple sequence alignment methods, as well as for DNA recognition at the DNA assembly stage. Performed tests show that the implementation, with performance up to 6.3 GCUPS on a single GPU for affine gap penalties, is very efficient in comparison to other CPU- and GPU-based solutions. Moreover, multiple-GPU support with load balancing makes the application very scalable. Conclusions: The article shows that the backtracking procedure of the sequence alignment algorithms may be designed to fit in with the GPU architecture. Therefore, our algorithm, apart from scores, is able to compute pairwise alignments. This opens a wide range of new possibilities, allowing other methods from the area of molecular biology to take advantage of the new computational architecture. Performed tests show that the efficiency of the implementation is excellent. Moreover, the speed of our GPU-based algorithms can be almost linearly increased when using more than one graphics card.
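
    For reference, the computation being mapped onto GPUs is the classic dynamic-programming alignment with traceback; a CPU sketch with a linear gap penalty follows (the paper handles affine gaps and GPU scheduling, which this omits).

        def needleman_wunsch(a, b, match=2, mismatch=-1, gap=-2):
            n, m = len(a), len(b)
            H = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                H[i][0] = i * gap
            for j in range(1, m + 1):
                H[0][j] = j * gap
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    H[i][j] = max(H[i - 1][j - 1] + s, H[i - 1][j] + gap,
                                  H[i][j - 1] + gap)
            # Traceback from the bottom-right cell reconstructs the alignment.
            out_a, out_b, i, j = [], [], n, m
            while i > 0 or j > 0:
                s = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
                if i > 0 and j > 0 and H[i][j] == H[i - 1][j - 1] + s:
                    out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
                elif i > 0 and H[i][j] == H[i - 1][j] + gap:
                    out_a.append(a[i - 1]); out_b.append("-"); i -= 1
                else:
                    out_a.append("-"); out_b.append(b[j - 1]); j -= 1
            return "".join(reversed(out_a)), "".join(reversed(out_b)), H[n][m]

        print(needleman_wunsch("GATTACA", "GCATGCU"))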

  16. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale.

    Science.gov (United States)

    Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy

    2016-01-01

    Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is lacking. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely used network clustering algorithms: Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.
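
    The information recovery metrics used here are available off the shelf; a quick sketch comparing two candidate clusterings against a planted ground truth:

        from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

        truth = [0, 0, 0, 1, 1, 1]
        good = [1, 1, 1, 0, 0, 0]   # same partition as truth, labels swapped
        poor = [0, 1, 0, 1, 0, 1]

        for name, pred in [("good", good), ("poor", poor)]:
            print(name,
                  adjusted_rand_score(truth, pred),
                  normalized_mutual_info_score(truth, pred))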

  17. Evaluating Carbonate System Algorithms in a Nearshore System: Does Total Alkalinity Matter?

    Science.gov (United States)

    Jones, Jonathan M; Sweet, Julia; Brzezinski, Mark A; McNair, Heather M; Passow, Uta

    2016-01-01

    Ocean acidification is a threat to many marine organisms, especially those that use calcium carbonate to form their shells and skeletons. The ability to accurately measure the carbonate system is the first step in characterizing the drivers behind this threat. Due to logistical realities, regular carbonate system sampling is not possible in many nearshore ocean habitats, particularly in remote, difficult-to-access locations. The ability to autonomously measure the carbonate system in situ relieves many of the logistical challenges; however, it is not always possible to measure the two required carbonate parameters autonomously. Observed relationships between sea surface salinity and total alkalinity can frequently provide a second carbonate parameter, thus allowing for the calculation of the entire carbonate system. Here, we assessed the rigor of estimating total alkalinity from salinity for several carbonate system parameters, sampling water at depth from a pier in southern California. Carbonate system parameters based on measured values were compared with those based on estimated TA values. Total alkalinity was not predictable from salinity, or from a combination of salinity and temperature, at this site. However, dissolved inorganic carbon and the calcium carbonate saturation state of these nearshore surface waters could both be estimated to within, on average, 5% of measured values using measured pH and salinity-derived or regionally averaged total alkalinity. Thus we find that the autonomous measurement of pH and salinity can be used to monitor trends in coastal changes in DIC and saturation state and can be a useful method for high-frequency, long-term monitoring of ocean acidification.

  18. The PER (Preoperative Esophagectomy Risk) Score: A Simple Risk Score to Predict Short-Term and Long-Term Outcome in Patients with Surgically Treated Esophageal Cancer.

    Science.gov (United States)

    Reeh, Matthias; Metze, Johannes; Uzunoglu, Faik G; Nentwich, Michael; Ghadban, Tarik; Wellner, Ullrich; Bockhorn, Maximilian; Kluge, Stefan; Izbicki, Jakob R; Vashist, Yogesh K

    2016-02-01

    Esophageal resection in patients with esophageal cancer (EC) is still associated with high mortality and morbidity rates. We aimed to develop a simple preoperative risk score for the prediction of short-term and long-term outcomes for patients with EC treated by esophageal resection. In total, 498 patients suffering from esophageal carcinoma who underwent esophageal resection were included in this retrospective cohort study. Three preoperative esophagectomy risk (PER) groups were defined based on preoperative functional evaluation of different organ systems by validated tools (revised cardiac risk index, model for end-stage liver disease score, and pulmonary function test). Clinicopathological parameters, morbidity, and mortality as well as disease-free survival (DFS) and overall survival (OS) were correlated to the PER score. The PER score significantly predicted the short-term outcome of patients with EC who underwent esophageal resection. PER 2 and PER 3 patients had at least double the risk of morbidity and mortality compared to PER 1 patients. Furthermore, a higher PER score was associated with shorter DFS, and the PER score was identified as an independent predictor of tumor recurrence (hazard ratio [HR] 2.1). The PER score allows preoperative objective allocation of patients with EC into different risk categories for morbidity, mortality, and long-term outcomes. Thus, multicenter studies are needed for independent validation of the PER score.

  19. Mobile health technology transforms injury severity scoring in South Africa.

    Science.gov (United States)

    Spence, Richard Trafford; Zargaran, Eiman; Hameed, S Morad; Navsaria, Pradeep; Nicol, Andrew

    2016-08-01

    The burden of data collection associated with injury severity scoring has limited its application in areas of the world with the highest incidence of trauma. Since January 2014, electronic records (electronic Trauma Health Records [eTHRs]) have replaced all handwritten records at the Groote Schuur Hospital Trauma Unit in South Africa. Data fields required for Glasgow Coma Scale, Revised Trauma Score, Kampala Trauma Score, Injury Severity Score (ISS), and Trauma Score-Injury Severity Score calculations are now prospectively collected. Fifteen months after implementation of eTHR, the injury severity scores were compared as predictors of mortality on three accounts: (1) ability to discriminate (area under the receiver operating characteristic curve, ROC); (2) ability to calibrate (observed versus expected ratio, O/E); and (3) feasibility of data collection (rate of missing data). A total of 7460 admissions were recorded by eTHR from April 1, 2014 to July 7, 2015, including 770 severely injured patients (ISS > 15) and 950 operations. The mean age was 33.3 y (range 13-94), 77.6% were male, and the mechanism of injury was penetrating in 39.3% of cases. The cohort experienced a mortality rate of 2.5%. Patient reserve predictors required by the scores were 98.7% complete, physiological injury predictors were 95.1% complete, and anatomic injury predictors were 86.9% complete. The discrimination and calibration of the Trauma Score-Injury Severity Score were superior for all admissions (ROC 0.9591 and O/E 1.01) and operatively managed patients (ROC 0.8427 and O/E 0.79). In the severely injured cohort, the discriminatory ability of the Revised Trauma Score was superior (ROC 0.8315), but no score provided adequate calibration. Emerging mobile health technology enables reliable and sustainable injury severity scoring in a high-volume trauma center in South Africa. Copyright © 2016 Elsevier Inc. All rights reserved.
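
    As an example of the scores computed from eTHR fields, the Revised Trauma Score is a weighted sum of coded physiological values. The weights below are the widely published RTS coefficients; the compact banding helper is our own abbreviation of the standard coding tables.

        def rts(gcs, sbp, rr):
            def code(value, cuts):                  # map a raw value to a 0-4 code
                return sum(value >= c for c in cuts)
            gcs_c = code(gcs, (4, 6, 9, 13))        # GCS 3 -> 0 ... GCS 13-15 -> 4
            sbp_c = code(sbp, (1, 50, 76, 90))      # systolic blood pressure, mmHg
            rr_c = 4 if 10 <= rr <= 29 else (code(rr, (1, 6, 10)) if rr < 10 else 3)
            return 0.9368 * gcs_c + 0.7326 * sbp_c + 0.2908 * rr_c

        print(rts(gcs=15, sbp=120, rr=16))  # 7.8408, the maximum RTS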

  20. Algorithm For Hypersonic Flow In Chemical Equilibrium

    Science.gov (United States)

    Palmer, Grant

    1989-01-01

    Implicit, finite-difference, shock-capturing algorithm calculates inviscid, hypersonic flows in chemical equilibrium. Implicit formulation chosen because overcomes limitation on mathematical stability encountered in explicit formulations. For dynamical portion of problem, Euler equations written in conservation-law form in Cartesian coordinate system for two-dimensional or axisymmetric flow. For chemical portion of problem, equilibrium state of gas at each point in computational grid determined by minimizing local Gibbs free energy, subject to local conservation of molecules, atoms, ions, and total enthalpy. Major advantage: resulting algorithm naturally stable and captures strong shocks without help of artificial-dissipation terms to damp out spurious numerical oscillations.

  1. An inexpensive optical sensor system for monitoring total suspended solids in water

    International Nuclear Information System (INIS)

    Sami Gumaan Daraigan; Mohd Zubir Matjafri; Khiruddin Abdullah; Azlan Abdul Aziz; Abdul Aziz Tajuddin; Mohd Firdaus Othman

    2005-01-01

    The objectives of this work are to design and develop an optical transmissometer sensor for measuring total suspended solids (TSS) concentrations in water samples. The proposed optical sensor has the advantages of being relatively inexpensive and easy to make and operate. An optical algorithm has been developed and used for the measurement of total suspended solids concentrations. The developed optical sensor components include light emitting diodes (LEDs) that are used for measuring transmitted light. The concentrations of total suspended solids (TSS) are determined from the light transmitted through the water samples. The transmitted light is measured in terms of the output voltage of the photodetector of the sensor system. The readings are taken using a digital multimeter. The results indicate that the level of the photocurrent is linearly proportional to the total suspended solids concentration. The proposed algorithm produces a high correlation coefficient and low root mean square error. (Author)
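
    The "optical algorithm" reduces to a calibration from output voltage to TSS concentration; a least-squares sketch with invented sample readings (real calibrations use prepared standard suspensions):

        import numpy as np

        tss_mg_l = np.array([0.0, 50.0, 100.0, 150.0, 200.0])  # known standards
        voltage = np.array([4.8, 4.1, 3.4, 2.6, 1.9])          # hypothetical readings

        slope, intercept = np.polyfit(voltage, tss_mg_l, 1)    # linear fit, as reported
        print(slope * 3.0 + intercept)     # estimated TSS of a sample reading 3.0 V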

  2. Dereplication, Aggregation and Scoring Tool (DAS Tool) v1.0

    Energy Technology Data Exchange (ETDEWEB)

    2017-03-01

    Communities of uncultivated microbes are critical to ecosystem function and microorganism health, and a key objective of metagenomic studies is to analyze organism-specific metabolic pathways and reconstruct community interaction networks. This requires accurate assignment of genes to genomes, yet existing binning methods often fail to predict a reasonable number of genomes and report many bins of low quality and completeness. Furthermore, the performance of existing algorithms varies between samples and biotypes. Here, we present a dereplication, aggregation and scoring strategy, DAS Tool, that combines the strengths of a flexible set of established binning algorithms. Applied to a constructed community, DAS Tool generated more accurate bins than any automated method. Further, when applied to samples of different complexity, including soil, natural oil seeps, and the human gut, DAS Tool recovered substantially more near-complete genomes than any single binning method alone, including three genomes from a novel lineage. The ability to reconstruct many near-complete genomes from metagenomics data will greatly advance genome-centric analyses of ecosystems.
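
    The aggregation-and-scoring idea can be sketched as a greedy selection over pooled candidate bins. The linear completeness/contamination score and the overlap threshold below are assumptions for illustration, not DAS Tool's published scoring function.

```python
# Sketch: pool candidate bins from several binners, score each, and greedily
# keep the best bins whose contigs are mostly not yet claimed.
def score(bin_):                       # assumed scoring form, for illustration
    return bin_["completeness"] - 5 * bin_["contamination"]

def dereplicate(bins, max_overlap=0.25):
    kept, used_contigs = [], set()
    for b in sorted(bins, key=score, reverse=True):
        overlap = len(b["contigs"] & used_contigs) / len(b["contigs"])
        if overlap <= max_overlap:     # mostly novel contigs -> keep this bin
            kept.append(b)
            used_contigs |= b["contigs"]
    return kept

bins = [
    {"name": "binnerA_1", "contigs": {"c1", "c2", "c3"},
     "completeness": 0.95, "contamination": 0.02},
    {"name": "binnerB_7", "contigs": {"c2", "c3", "c4"},
     "completeness": 0.80, "contamination": 0.10},
]
print([b["name"] for b in dereplicate(bins)])   # -> ['binnerA_1']
```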

  3. Clinical Course Score (CCS): a new clinical score to evaluate efficacy of neurotrauma treatment in traumatic brain injury and subarachnoid hemorrhage.

    Science.gov (United States)

    Brandner, Sebastian; Kellermann, Isabel; Hore, Nirjhar; Bozhkov, Yavor; Buchfelder, Michael

    2015-01-01

    Neurotrauma continues to represent a challenging public health issue requiring continual improvement in therapeutic approaches. As no such current system exists, we present in this study the Clinical Course Score (CCS) as a new clinical score to evaluate the efficacy of neurotrauma treatment. The CCS was calculated for each neurotrauma patient as the difference between the Glasgow Outcome Scale grade 6 months after discharge from our department and the admission Glasgow Coma Scale reduced to a 1-to-5-point grade. We assessed the CCS in a total of 248 patients (196 traumatic brain injury [TBI] patients and 52 subarachnoid hemorrhage [SAH] patients) who were treated in our Department of Neurosurgery between January 2011 and December 2012. We found negative CCS grades both in mild TBI and in mild SAH patients. In patients with severe TBI or SAH, we found positive CCS grades. In SAH patients, we found higher CCS scores in younger patients compared with elderly subjects in both mild and severe cases. The CCS can be useful in evaluating different therapeutic approaches during neurotrauma therapy. This new score might improve assessment of the beneficial effects of therapeutic procedures.
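
    A minimal sketch of the score as defined above; the banding used to compress the Glasgow Coma Scale onto a 1-to-5-point grade is an assumption for illustration, not the authors' published mapping.

```python
# Sketch: CCS = Glasgow Outcome Scale at 6 months minus reduced admission GCS.
def reduced_gcs(gcs):                 # map GCS 3-15 onto 1-5 (assumed bands)
    bands = [(3, 1), (6, 2), (9, 3), (12, 4), (15, 5)]
    return next(grade for upper, grade in bands if gcs <= upper)

def clinical_course_score(gos_6m, gcs_admission):
    return gos_6m - reduced_gcs(gcs_admission)

# Severe TBI (GCS 6) with good recovery (GOS 5) -> positive CCS of +3:
print(clinical_course_score(gos_6m=5, gcs_admission=6))
```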

  4. A MAP blind image deconvolution algorithm with bandwidth over-constrained

    Science.gov (United States)

    Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong

    2018-03-01

    We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with an over-constrained bandwidth and total variation (TV) regularization to recover a clear image from adaptive optics (AO) corrected images. The point spread functions (PSFs) are estimated with their bandwidth limited to below the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise amplification. The performance is demonstrated on simulated data.
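
    The distinctive step, constraining the PSF estimate's bandwidth to below the optical cutoff, can be sketched with an FFT mask; the cutoff fraction is illustrative, and the alternating image/PSF updates and the TV prior are omitted.

```python
# Sketch: zero the PSF spectrum beyond the cutoff frequency at each iteration.
import numpy as np

def enforce_bandwidth(psf, cutoff_frac=0.3):
    n = psf.shape[0]
    fx = np.fft.fftfreq(n)
    fr = np.sqrt(fx[None, :] ** 2 + fx[:, None] ** 2)  # radial frequency grid
    otf = np.fft.fft2(psf)
    otf[fr > cutoff_frac] = 0.0        # kill out-of-band components
    psf = np.real(np.fft.ifft2(otf))
    psf = np.clip(psf, 0, None)        # PSFs are non-negative
    return psf / psf.sum()             # and normalized to unit volume

psf_estimate = enforce_bandwidth(np.random.rand(64, 64))
```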

  5. A new scoring system for predicting survival in patients with non-small cell lung cancer

    International Nuclear Information System (INIS)

    Schild, Steven E; Tan, Angelina D; Wampfler, Jason A; Ross, Helen J; Yang, Ping; Sloan, Jeff A

    2015-01-01

    This analysis was performed to create a scoring system to estimate the survival of patients with non-small cell lung cancer (NSCLC). Data from 1274 NSCLC patients were analyzed to create and validate a scoring system. Univariate (UV) and multivariate (MV) Cox models were used to evaluate the prognostic importance of each baseline factor. Prognostic factors that were significant on both UV and MV analyses were used to develop the score. These included quality of life, age, performance status, primary tumor diameter, nodal status, distant metastases, and smoking cessation. The score for each factor was determined by dividing its associated 5-year survival rate (%) by 10; these factor scores were summed to form a total score. MV models and the score were validated using bootstrapping with 1000 iterations from the original samples. The score for each prognostic factor ranged from 1 to 7 points, with higher scores reflective of better survival. Total scores (sum of the scores from each independent prognostic factor) of 32–37 correlated with a 5-year survival of 8.3% (95% CI = 0–17.1%), 38–43 with a 5-year survival of 20% (95% CI = 13–27%), 44–47 with a 5-year survival of 48.3% (95% CI = 41.5–55.2%), 48–49 with a 5-year survival of 72.1% (95% CI = 65.6–78.6%), and 50–52 with a 5-year survival of 84.7% (95% CI = 79.6–89.8%). The bootstrap method confirmed the reliability of the score. Prognostic factors significantly associated with survival on both UV and MV analyses were used to construct a valid scoring system that can be used to predict survival of NSCLC patients. Optimally, this score could be used when counseling patients and designing future trials.
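
    A minimal sketch of applying the score: sum the per-factor scores and look up the reported survival band. The factor scores in the example are hypothetical; the bands and survival rates are those quoted above.

```python
# Sketch: total-score lookup against the published survival bands.
BANDS = [(32, 37, 8.3), (38, 43, 20.0), (44, 47, 48.3),
         (48, 49, 72.1), (50, 52, 84.7)]   # (low, high, 5-year survival %)

def predicted_survival(factor_scores):
    total = sum(factor_scores)
    for low, high, survival in BANDS:
        if low <= total <= high:
            return total, survival
    return total, None                     # outside the published bands

# Seven factors, each scored 1-7 (hypothetical patient):
print(predicted_survival([6, 7, 7, 6, 7, 6, 6]))   # -> (45, 48.3)
```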

  6. Atrial Fibrillation Screening in Nonmetropolitan Areas Using a Telehealth Surveillance System With an Embedded Cloud-Computing Algorithm: Prospective Pilot Study.

    Science.gov (United States)

    Chen, Ying-Hsien; Hung, Chi-Sheng; Huang, Ching-Chang; Hung, Yu-Chien; Hwang, Juey-Jen; Ho, Yi-Lwun

    2017-09-26

    Atrial fibrillation (AF) is a common form of arrhythmia that is associated with increased risk of stroke and mortality. Detecting AF before the first complication occurs is a recognized priority. No previous studies have examined the feasibility of undertaking AF screening using a telehealth surveillance system with an embedded cloud-computing algorithm; we address this issue in this study. The objective of this study was to evaluate the feasibility of AF screening in nonmetropolitan areas using a telehealth surveillance system with an embedded cloud-computing algorithm. We conducted a prospective AF screening study in a nonmetropolitan area using a single-lead electrocardiogram (ECG) recorder. All ECG measurements were reviewed on the telehealth surveillance system and interpreted by the cloud-computing algorithm and a cardiologist. The process of AF screening was evaluated with a satisfaction questionnaire. Between March 11, 2016 and August 31, 2016, 967 ECGs were recorded from 922 residents in nonmetropolitan areas. A total of 22 (2.4%, 22/922) residents with AF were identified by the physician's ECG interpretation, and only 0.2% (2/967) of ECGs contained significant artifacts. The novel cloud-computing algorithm for AF detection had a sensitivity of 95.5% (95% CI 77.2%-99.9%) and specificity of 97.7% (95% CI 96.5%-98.5%). The overall satisfaction score for the process of AF screening was 92.1%. AF screening in nonmetropolitan areas using a telehealth surveillance system with an embedded cloud-computing algorithm is feasible. ©Ying-Hsien Chen, Chi-Sheng Hung, Ching-Chang Huang, Yu-Chien Hung, Juey-Jen Hwang, Yi-Lwun Ho. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 26.09.2017.
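
    For illustration, the reported screening performance can be reproduced from a 2x2 confusion table; the counts below are reconstructed to match the quoted rates and are not the study's raw table.

```python
# Sketch: sensitivity and specificity from confusion-table counts.
def sens_spec(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# 22 AF cases and 900 non-AF residents, counts chosen to match ~95.5%/97.7%:
sens, spec = sens_spec(tp=21, fn=1, tn=879, fp=21)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
```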

  7. STAR Algorithm Integration Team - Facilitating operational algorithm development

    Science.gov (United States)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  8. Quantum Adiabatic Algorithms and Large Spin Tunnelling

    Science.gov (United States)

    Boulatov, A.; Smelyanskiy, V. N.

    2003-01-01

    We provide a theoretical study of the quantum adiabatic evolution algorithm with different evolution paths proposed in this paper. The algorithm is applied to a random binary optimization problem (a version of the 3-Satisfiability problem) where the n-bit cost function is symmetric with respect to the permutation of individual bits. The evolution paths are produced using the generic control Hamiltonians H(r) that preserve the bit symmetry of the underlying optimization problem. In the case where the ground state of H(0) coincides with the totally symmetric state of an n-qubit system, the algorithm dynamics is completely described in terms of the motion of a spin-n/2. We show that different control Hamiltonians can be parameterized by a set of independent parameters that are expansion coefficients of H(r) in a certain universal set of operators. Only one of these operators can be responsible for avoiding the tunnelling in the spin-n/2 system during the quantum adiabatic algorithm. We show that it is possible to select a coefficient for this operator that guarantees a polynomial complexity of the algorithm for all problem instances. We show that a successful evolution path of the algorithm always corresponds to the trajectory of a classical spin-n/2 and provide a complete characterization of such paths.

  9. Faster exact algorithms for computing Steiner trees in higher dimensional Euclidean spaces

    DEFF Research Database (Denmark)

    Fonseca, Rasmus; Brazil, Marcus; Winter, Pawel

    The Euclidean Steiner tree problem asks for a network of minimum total length interconnecting a finite set of points in d-dimensional space. For d ≥ 3, only one practical algorithmic approach exists for this problem --- proposed by Smith in 1992. A number of refinements of Smith's algorithm have...

  10. From the SAIN,LIM system to the SENS algorithm: a review of a French approach of nutrient profiling.

    Science.gov (United States)

    Tharrey, Marion; Maillot, Matthieu; Azaïs-Braesco, Véronique; Darmon, Nicole

    2017-08-01

    Nutrient profiling aims to classify or rank foods according to their nutritional composition to assist policies aimed at improving the nutritional quality of foods and diets. The present paper reviews a French approach of nutrient profiling by describing the SAIN,LIM system and its evolution from its early draft to the simplified nutrition labelling system (SENS) algorithm. Considered in 2010 by WHO as the 'French model' of nutrient profiling, SAIN,LIM classifies foods into four classes based on two scores: a nutrient density score (NDS) called SAIN and a score of nutrients to limit called LIM, and one threshold on each score. The system was first developed by the French Food Standard Agency in 2008 in response to the European regulation on nutrition and health claims (European Commission (EC) 1924/2006) to determine foods that may be eligible for bearing claims. Recently, the European regulation (EC 1169/2011) on the provision of food information to consumers allowed simplified nutrition labelling to facilitate consumer information and help them make fully informed choices. In that context, the SAIN,LIM was adapted to obtain the SENS algorithm, a system able to rank foods for simplified nutrition labelling. The implementation of the algorithm followed a step-by-step, systematic, transparent and logical process where shortcomings of the SAIN,LIM were addressed by integrating specificities of food categories in the SENS, reducing the number of nutrients, ordering the four classes and introducing European reference intakes. Through the French example, this review shows how an existing nutrient profiling system can be specifically adapted to support public health nutrition policies.
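
    The two-score, two-threshold structure of SAIN,LIM maps directly onto a four-way classification, sketched below with placeholder thresholds; the system's actual cut-offs differ by food category, and the class labels here are descriptive rather than the published class numbering.

```python
# Sketch: four classes from two scores and one threshold on each.
SAIN_MIN, LIM_MAX = 5.0, 7.5        # placeholder thresholds

def sain_lim_class(sain, lim):
    density_ok = sain >= SAIN_MIN   # favourable nutrient density (SAIN)
    limits_ok = lim <= LIM_MAX      # low in nutrients to limit (LIM)
    if density_ok and limits_ok:
        return "high SAIN / low LIM"
    if density_ok:
        return "high SAIN / high LIM"
    if limits_ok:
        return "low SAIN / low LIM"
    return "low SAIN / high LIM"

print(sain_lim_class(sain=6.2, lim=3.1))   # -> high SAIN / low LIM
```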

  11. The impact of a revised EQ-5D population scoring on preference-based utility scores in an inflammatory arthritis cohort.

    LENUS (Irish Health Repository)

    Adams, Roisin

    2012-02-01

    BACKGROUND AND OBJECTIVE: It is well established that there are problems with the EQ-5D. This is due to the original scoring methods used and how negative time trade-off (TTO) values were treated. A revised scoring method has been published. This article applies this to an inflammatory arthritis cohort. The objective is to examine the impact of a revised scoring system for the EQ-5D (UK) TTO on the utility estimates and in the case of rheumatoid arthritis, to explore the impact of using different utility metrics on the incremental cost-effectiveness ratio (ICER) results of an economic model. METHODS: A total of 504 patients with inflammatory arthritis were rescored using revised EQ-5D scoring, which uses an episodic random utility model to deal with negative TTO values. Differences in utility scores were compared and the new mapping coefficients were obtained. These were then used in an economic model to examine the impact on the ICER. RESULTS: In rheumatoid arthritis, the overall change is less for the revised EQ-5D scoring than with the original EQ-5D (TTO) but greater than the SF-6D: EQ-5D UK -0.22 (95% confidence interval [CI] -0.30 to -0.15), revised EQ-5D UK -0.16 (95% CI -0.21 to -0.10) and SF-6D -0.08 (95% CI -0.11 to -0.05). A similar trend is seen in the psoriatic arthritis group. The economic model produced different ICERs when different utility measures were used; EQ-5D (TTO) €42,402, SF-6D €111,788, and revised EQ-5D (TTO) €57,747. CONCLUSION: In the context of inflammatory arthritis, this article demonstrates that a revised scoring for EQ-5D may have a significant impact on utility estimates and on the output of the economic model.

  12. Workflow Scheduling Using Hybrid GA-PSO Algorithm in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Ahmad M. Manasrah

    2018-01-01

    Cloud computing environments provide several on-demand services and resource sharing for clients. Business processes are managed using workflow technology over the cloud, and the dependencies between tasks make efficient resource use one of the main challenges. In this paper, a hybrid GA-PSO algorithm is proposed to allocate tasks to the resources efficiently. The hybrid GA-PSO algorithm aims to reduce the makespan and the cost and to balance the load of the dependent tasks over the heterogeneous resources in cloud computing environments. The experimental results show that the GA-PSO algorithm decreases the total execution time of the workflow tasks in comparison with the GA, PSO, HSGA, WSGA, and MTCT algorithms. Furthermore, it reduces the execution cost. In addition, it improves the load balancing of the workflow application over the available resources. Finally, the results also show that the proposed algorithm converges to optimal solutions faster and with higher quality than the other algorithms.
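
    A sketch of the kind of fitness such a hybrid scheduler evaluates for each candidate assignment: makespan, cost, and load balance over heterogeneous resources. Task dependencies are omitted, and all numbers and weights are illustrative.

```python
# Sketch: evaluating one candidate task-to-VM assignment (a particle/chromosome).
import numpy as np

lengths = np.array([8, 3, 5, 9, 2, 7])        # task sizes (e.g. MI)
speed = np.array([2.0, 1.0, 4.0])             # VM speeds (e.g. MIPS)
price = np.array([0.4, 0.1, 0.9])             # VM cost per unit time
assign = np.array([0, 1, 2, 0, 1, 2])         # candidate solution

load = np.array([lengths[assign == v].sum() / speed[v] for v in range(3)])
makespan = load.max()                         # finish time of the busiest VM
cost = (load * price).sum()                   # total rental cost
balance = load.std()                          # lower = better balanced
fitness = makespan + 0.3 * cost + 0.2 * balance   # weights are assumptions
print(round(makespan, 2), round(cost, 2), round(fitness, 2))
```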

  13. Recurrence risk of low Apgar score among term singletons: a population-based cohort study

    NARCIS (Netherlands)

    Ensing, Sabine; Schaaf, Jelle M.; Abu-Hanna, Ameen; Mol, Ben W. J.; Ravelli, Anita C. J.

    2014-01-01

    To examine the risk of recurrence of low Apgar score in a subsequent term singleton pregnancy. Population-based cohort study. The Netherlands. A total of 190,725 women with two subsequent singleton term live births between 1999 and 2007. We calculated the recurrence risk of low Apgar score after

  14. Fast stochastic algorithm for simulating evolutionary population dynamics

    Science.gov (United States)

    Tsimring, Lev; Hasty, Jeff; Mather, William

    2012-02-01

    Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.
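
    For reference, the direct stochastic simulation baseline that the new algorithm accelerates can be sketched as a Gillespie loop over birth, death, and mutation events for two genotypes; the rates are illustrative.

```python
# Sketch: direct stochastic simulation of birth/death/mutation dynamics.
import random

def ssa(t_end=10.0, b=1.0, d=0.9, mu=1e-3):
    n = [100, 0]                                # counts of genotypes 0 and 1
    t = 0.0
    while t < t_end:
        rates = [b * n[0], d * n[0], b * n[1], d * n[1], mu * n[0]]
        total = sum(rates)
        if total == 0:
            break                               # population extinct
        t += random.expovariate(total)          # exponential waiting time
        r, pick = random.uniform(0, total), 0
        while pick < len(rates) - 1 and r > rates[pick]:
            r -= rates[pick]
            pick += 1
        if pick == 0:
            n[0] += 1                           # birth, genotype 0
        elif pick == 1:
            n[0] -= 1                           # death, genotype 0
        elif pick == 2:
            n[1] += 1                           # birth, genotype 1
        elif pick == 3:
            n[1] -= 1                           # death, genotype 1
        else:
            n[0] -= 1; n[1] += 1                # mutation 0 -> 1
    return n

print(ssa())
```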

  15. The Impact of Misspelled Words on Automated Computer Scoring: A Case Study of Scientific Explanations

    Science.gov (United States)

    Ha, Minsu; Nehm, Ross H.

    2016-06-01

    Automated computerized scoring systems (ACSSs) are being increasingly used to analyze text in many educational settings. Nevertheless, the impact of misspelled words (MSW) on scoring accuracy remains to be investigated in many domains, particularly jargon-rich disciplines such as the life sciences. Empirical studies confirm that MSW are a pervasive feature of human-generated text and that despite improvements, spell-check and auto-replace programs continue to be characterized by significant errors. Our study explored four research questions relating to MSW and text-based computer assessments: (1) Do English language learners (ELLs) produce equivalent magnitudes and types of spelling errors as non-ELLs? (2) To what degree do MSW impact concept-specific computer scoring rules? (3) What impact do MSW have on computer scoring accuracy? and (4) Are MSW more likely to impact false-positive or false-negative feedback to students? We found that although ELLs produced twice as many MSW as non-ELLs, MSW were relatively uncommon in our corpora. The MSW in the corpora were found to be important features of the computer scoring models. Although MSW did not significantly or meaningfully impact computer scoring efficacy across nine different computer scoring models, MSW had a greater impact on the scoring algorithms for naïve ideas than key concepts. Linguistic and concept redundancy in student responses explains the weak connection between MSW and scoring accuracy. Lastly, we found that MSW tend to have a greater impact on false-positive feedback. We discuss the implications of these findings for the development of next-generation science assessments.

  16. Minimizing Total Completion Time For Preemptive Scheduling With Release Dates And Deadline Constraints

    Directory of Open Access Journals (Sweden)

    He Cheng

    2014-02-01

    It is known that the single-machine preemptive scheduling problem of minimizing total completion time with release date and deadline constraints is NP-hard. Du and Leung solved some special cases by the generalized Baker's algorithm and the generalized Smith's algorithm in O(n²) time. In this paper we give an O(n²) algorithm for the special case where the processing times and deadlines are agreeable. Moreover, for the case where the processing times and deadlines are disagreeable, we present two properties which enable us to reduce the range of the enumeration algorithm.
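
    For context, when the deadline constraints are dropped, the classical preemptive shortest-remaining-processing-time (SRPT) rule minimizes total completion time with release dates; below is a sketch of that baseline, not the paper's algorithm.

```python
# Sketch: event-driven SRPT for 1 | r_j, pmtn | sum C_j (no deadlines).
import heapq

def srpt(jobs):                          # jobs: list of (release, processing)
    jobs = sorted(jobs)                  # order by release date
    t, i, done, ready, total = 0.0, 0, 0, [], 0.0
    while done < len(jobs):
        if not ready:                    # machine idle: jump to next release
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(ready, jobs[i][1])   # remaining work as priority
            i += 1
        rem = heapq.heappop(ready)       # shortest remaining work runs
        next_rel = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(rem, next_rel - t)     # run until done or next arrival
        t += run
        if run < rem:
            heapq.heappush(ready, rem - run)    # preempted by an arrival
        else:
            done += 1
            total += t                   # job completes at time t
    return total

print(srpt([(0, 5), (1, 2), (2, 1)]))    # total completion time 3 + 4 + 8 = 15
```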

  17. A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.

    Science.gov (United States)

    Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J

    2009-11-28

    In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method.
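
    The recursive time-stepping procedure described above can be sketched as follows; the single-level update, refinement ratio, and synchronization are stubs standing in for the solver's actual operations.

```python
# Sketch: recursive subcycling on a nested grid hierarchy.
def advance(level, dt, finest):
    step_single_level(level, dt)              # advance this level by dt
    if level < finest:
        r = refinement_ratio(level)
        for _ in range(r):                    # fine grids take r smaller steps
            advance(level + 1, dt / r, finest)
        synchronize(level, level + 1)         # match data at the interface

def step_single_level(level, dt):             # stub: pressure solve + hyperbolic update
    pass

def refinement_ratio(level):                  # stub: fixed ratio of 2 here
    return 2

def synchronize(coarse, fine):                # stub: flux/data synchronization
    pass

advance(level=0, dt=1.0, finest=2)
```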

  18. Subspace Correction Methods for Total Variation and $\\ell_1$-Minimization

    KAUST Repository

    Fornasier, Massimo

    2009-01-01

    This paper is concerned with the numerical minimization of energy functionals in Hilbert spaces involving convex constraints coinciding with a seminorm for a subspace. The optimization is realized by alternating minimizations of the functional on a sequence of orthogonal subspaces. On each subspace an iterative proximity-map algorithm is implemented via oblique thresholding, which is the main new tool introduced in this work. We provide convergence conditions for the algorithm in order to compute minimizers of the target energy. Analogous results are derived for a parallel variant of the algorithm. Applications are presented in domain decomposition methods for degenerate elliptic PDEs arising in total variation minimization and in accelerated sparse recovery algorithms based on ℓ1-minimization. We include numerical examples which show efficient solutions to classical problems in signal and image processing. © 2009 Society for Industrial and Applied Mathematics.
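
    The basic building block generalized by the oblique-thresholding iterations is the proximity map of the ℓ1 seminorm, i.e. soft-thresholding; below is a standard ISTA sketch of that primitive, not the paper's subspace-decomposed variant.

```python
# Sketch: soft-thresholding and one ISTA iteration for
# min ||Ax - b||^2 / 2 + lam * ||x||_1.
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista_step(x, A, b, lam, step):
    grad = A.T @ (A @ x - b)                    # gradient of the smooth part
    return soft_threshold(x - step * grad, lam * step)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17]] = [1.0, -2.0]                   # sparse ground truth
b = A @ x_true

x = np.zeros(50)
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant
for _ in range(200):
    x = ista_step(x, A, b, lam=0.05, step=step)
print(np.flatnonzero(np.abs(x) > 0.1))          # should recover indices 3, 17
```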

  19. Z-score linear discriminant analysis for EEG based brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    Linear discriminant analysis (LDA) is one of the most popular classification algorithms for brain-computer interfaces (BCI). LDA assumes a Gaussian distribution of the data with equal covariance matrices for the concerned classes; however, this assumption does not usually hold in actual BCI applications, where heteroscedastic class distributions are typically observed. This paper proposes an enhanced version of LDA, namely z-score linear discriminant analysis (Z-LDA), which introduces a new decision boundary definition strategy to handle heteroscedastic class distributions. Z-LDA defines the decision boundary through z-scores utilizing both the mean and standard deviation information of the projected data, which can adaptively adjust the decision boundary to fit heteroscedastic distributions. Results derived from both a simulation dataset and two actual BCI datasets consistently show that Z-LDA achieves significantly higher average classification accuracies than conventional LDA, indicating the superiority of the newly proposed decision boundary definition strategy.
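
    A sketch of the decision rule as described: project onto the LDA direction, standardize against each class's own projected mean and standard deviation, and assign to the class with the smallest absolute z-score. The fitting and data below are a minimal synthetic setup.

```python
# Sketch: Z-LDA decision rule for a two-class, heteroscedastic problem.
import numpy as np

rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 1.0, (200, 2))          # class 0: tight cluster
X1 = rng.normal(3.0, 3.0, (200, 2))          # class 1: widely spread

# Standard LDA direction w = Sw^{-1} (m1 - m0)
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0.T) + np.cov(X1.T)
w = np.linalg.solve(Sw, m1 - m0)

p0, p1 = X0 @ w, X1 @ w                      # projected training data
stats = [(p0.mean(), p0.std()), (p1.mean(), p1.std())]

def z_lda_predict(x):
    p = x @ w
    z = [abs((p - mu) / sd) for mu, sd in stats]
    return int(np.argmin(z))                 # class with the smallest |z|

print(z_lda_predict(np.array([0.5, 0.5])),   # likely class 0
      z_lda_predict(np.array([4.0, 5.0])))   # likely class 1
```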

  20. Pregabalin Improves Pain Scores in Patients with Fibromyalgia Irrespective of Comorbid Osteoarthritis.

    Science.gov (United States)

    Argoff, Charles E; Emir, Birol; Whalen, Ed; Ortiz, Marie; Pauer, Lynne; Clair, Andrew

    2016-11-01

    Fibromyalgia (FM) is a chronic pain disorder with patients frequently suffering from comorbid conditions, including osteoarthritis (OA). Data on how FM patients with comorbid OA respond to recommended therapies (such as pregabalin) could help their treatment. This was a pooled exploratory analysis of three randomized placebo-controlled clinical trials of pregabalin in FM patients to assess the impact of comorbid OA on the response to pregabalin. Patients were divided into those with and without comorbid OA. The difference in change in least squares (LS) mean pain score at endpoint (assessed by a 0-10 numeric rating scale, controlled for baseline pain score) with pregabalin (300 mg/day and 450 mg/day) versus placebo was assessed. Changes in Patient Global Impression of Change (PGIC) responders and Fibromyalgia Impact Questionnaire (FIQ) total score were also assessed. There were 1665 patients in the analysis set (558, placebo; 552, pregabalin 300 mg/day; 555, pregabalin 450 mg/day), including 296 with comorbid OA. Pregabalin 450 mg/day significantly improved the LS mean (95% confidence interval) difference in pain score versus placebo in patients with comorbid OA (0.99 [0.44, 1.55]). Improvements in PGIC and FIQ total score were observed in patients with and without comorbid OA. FM patients with or without comorbid OA respond to treatment with pregabalin 450 mg/day with significant improvements in pain intensity scores. These data could provide guidance to healthcare professionals treating these patients. © 2016 American Academy of Pain Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.