WorldWideScience

Sample records for total scoring algorithms

  1. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  2. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  3. Developing Scoring Algorithms

    Science.gov (United States)

    We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.

  4. Developing Scoring Algorithms (Earlier Methods)

    Science.gov (United States)

    We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.

  5. Algorithm improvement program nuclide identification algorithm scoring criteria and scoring application.

    Energy Technology Data Exchange (ETDEWEB)

    Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  6. A locally adapted functional outcome measurement score for total ...

    African Journals Online (AJOL)

    Results and success of total hip arthroplasty are often measured using a functional outcome scoring system. Most current scores were developed in Europe and North America (1-3). During the evaluation of a Total Hip Replacement (THR) project in Ouagadougou, Burkina Faso (4), it was felt that these scores were not ...

  7. Chambolle's Projection Algorithm for Total Variation Denoising

    Directory of Open Access Journals (Sweden)

    Joan Duran

    2013-12-01

    Denoising is the problem of removing the inherent noise from an image. The standard noise model is additive white Gaussian noise, where the observed image f is related to the underlying true image u by the degradation model f=u+n, and n is supposed to be at each pixel independently and identically distributed as a zero-mean Gaussian random variable. Since this is an ill-posed problem, Rudin, Osher and Fatemi introduced the total variation as a regularizing term. It has proved to be quite efficient for regularizing images without smoothing the boundaries of the objects. This paper focuses on the simple description of the theory and on the implementation of Chambolle's projection algorithm for minimizing the total variation of a grayscale image. Furthermore, we adapt the algorithm to the vectorial total variation for color images. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation.
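
    As a point of reference, the following minimal NumPy sketch implements the dual fixed-point iteration Chambolle proposed for the grayscale ROF model (the denoised image is recovered as u = f - lam * div p, with the dual field p updated by a projected step of size tau <= 1/8). The step size, regularization weight, iteration count, and toy image are illustrative choices, not the parameters analyzed in the paper.

```python
import numpy as np

def gradient(u):
    """Forward-difference gradient with zero flux at the far boundary."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def divergence(px, py):
    """Backward-difference divergence, the negative adjoint of `gradient`."""
    div = np.zeros_like(px)
    div[0, :] += px[0, :]
    div[1:-1, :] += px[1:-1, :] - px[:-2, :]
    div[-1, :] += -px[-2, :]
    div[:, 0] += py[:, 0]
    div[:, 1:-1] += py[:, 1:-1] - py[:, :-2]
    div[:, -1] += -py[:, -2]
    return div

def chambolle_tv_denoise(f, lam=0.2, tau=0.125, n_iter=200):
    """Minimize TV(u) + 1/(2*lam) * ||u - f||^2 via Chambolle's dual projection."""
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = gradient(divergence(px, py) - f / lam)
        norm = np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / (1.0 + tau * norm)
        py = (py + tau * gy) / (1.0 + tau * norm)
    return f - lam * divergence(px, py)

# toy usage: denoise a noisy square
noisy = np.zeros((64, 64))
noisy[16:48, 16:48] = 1.0
noisy += 0.1 * np.random.default_rng(0).normal(size=noisy.shape)
denoised = chambolle_tv_denoise(noisy)
```

    For color images, as the abstract notes, the same projection can be applied to a vector-valued gradient whose norm couples the channels.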

  8. Computerized scoring algorithms for the Autobiographical Memory Test.

    Science.gov (United States)

    Takano, Keisuke; Gutenbrunner, Charlotte; Martens, Kris; Salmon, Karen; Raes, Filip

    2018-02-01

    Reduced specificity of autobiographical memories is a hallmark of depressive cognition. Autobiographical memory (AM) specificity is typically measured by the Autobiographical Memory Test (AMT), in which respondents are asked to describe personal memories in response to emotional cue words. Due to this free descriptive responding format, the AMT relies on experts' hand scoring for subsequent statistical analyses. This manual coding potentially impedes research activities in big data analytics such as large epidemiological studies. Here, we propose computerized algorithms to automatically score AM specificity for the Dutch (adult participants) and English (youth participants) versions of the AMT by using natural language processing and machine learning techniques. The algorithms showed reliable performances in discriminating specific and nonspecific (e.g., overgeneralized) autobiographical memories in independent testing data sets (area under the receiver operating characteristic curve > .90). Furthermore, outcome values of the algorithms (i.e., decision values of support vector machines) showed a gradient across similar (e.g., specific and extended memories) and different (e.g., specific memory and semantic associates) categories of AMT responses, suggesting that, for both adults and youth, the algorithms well capture the extent to which a memory has features of specific memories. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
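
    The published models and feature sets are not reproduced here; the sketch below only illustrates the general approach the abstract describes (text features feeding a support vector machine whose decision value serves as a graded specificity score), using scikit-learn and a few hypothetical labeled responses.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical AMT responses labeled specific (1) vs. nonspecific (0).
responses = [
    "the day I graduated and celebrated with my family afterwards",
    "whenever I go for a walk in the park",
    "my trip to the coast last July when it rained all afternoon",
    "times I felt happy with friends",
]
labels = [1, 0, 1, 0]

# Bag-of-words / TF-IDF features feeding a linear support vector machine.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(responses, labels)

# The SVM decision value plays the role of the graded specificity measure
# discussed in the abstract (larger = more "specific-memory-like").
print(model.decision_function(["the morning my daughter was born in May 2010"]))
```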

  9. Validity and Reliability of the Achilles Tendon Total Rupture Score

    DEFF Research Database (Denmark)

    Ganestam, Ann; Barfod, Kristoffer; Klit, Jakob

    2013-01-01

    The best treatment of acute Achilles tendon rupture remains debated. Patient-reported outcome measures have become cornerstones in treatment evaluations. The Achilles tendon total rupture score (ATRS) has been developed for this purpose but requires additional validation. The purpose of the present study was to validate a Danish translation of the ATRS. The ATRS was translated into Danish according to internationally adopted standards. Of 142 patients, 90 with previous rupture of the Achilles tendon participated in the validity study and 52 in the reliability study. The ATRS showed moderately ... = .07). The limits of agreement were ±18.53. A strong correlation was found between test and retest (intercorrelation coefficient .908); the standard error of measurement was 6.7, and the minimal detectable change was 18.5. The Danish version of the ATRS showed moderately strong criterion validity ...

  10. Hospital Value-Based Purchasing (HVBP) – Total Performance Score

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of hospitals participating in the Hospital VBP Program and their Clinical Process of Care domain scores, Patient Experience of Care dimension scores, and...

  11. A locally adapted functional outcome measurement score for total ...

    African Journals Online (AJOL)

    ... in Europe or North America and seem not optimally suited for a general West ... We introduce a cross-cultural adaptation of the Lequesne index as a new score. ... Keywords: THR, Hip, Africa, Functional score, Hip replacement, Arthroscopy ...

  12. A Newton Algorithm for Multivariate Total Least Squares Problems

    Directory of Open Access Journals (Sweden)

    WANG Leyang

    2016-04-01

    In order to improve the calculation efficiency of parameter estimation, an algorithm for multivariate weighted total least squares adjustment based on the Newton method is derived. The relationship between the solution of this algorithm and that of multivariate weighted total least squares adjustment based on the Lagrange multipliers method is analyzed. According to the propagation of cofactors, 16 computational formulae for the cofactor matrices of multivariate total least squares adjustment are also listed. The new algorithm can solve adjustment problems containing correlation between the observation matrix and the coefficient matrix, and it can deal with their stochastic and deterministic elements with only one cofactor matrix. The results illustrate that the Newton algorithm for multivariate total least squares problems is practical and has a higher convergence rate.
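
    The paper's weighted Newton scheme is not reproduced here; for orientation, the sketch below solves the classical unweighted multivariate total least squares problem via the SVD, the baseline formulation that the weighted adjustment generalizes. The toy data are hypothetical.

```python
import numpy as np

def tls(A, B):
    """Classical (unweighted) multivariate total least squares for A @ X ~ B.

    Both A and B are treated as noisy; the solution comes from the right
    singular vectors of the augmented matrix [A B].
    """
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.hstack([A, B]), full_matrices=False)
    V = Vt.T
    V12 = V[:n, n:]   # top-right block
    V22 = V[n:, n:]   # bottom-right block
    return -V12 @ np.linalg.inv(V22)

# toy example with noise in both the coefficient and observation matrices
rng = np.random.default_rng(0)
A_true = rng.normal(size=(50, 3))
X_true = rng.normal(size=(3, 2))
A = A_true + 0.01 * rng.normal(size=A_true.shape)
B = A_true @ X_true + 0.01 * rng.normal(size=(50, 2))
print(tls(A, B))
```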

  13. Validity and reliability of the Achilles tendon total rupture score.

    Science.gov (United States)

    Ganestam, Ann; Barfod, Kristoffer; Klit, Jakob; Troelsen, Anders

    2013-01-01

    The best treatment of acute Achilles tendon rupture remains debated. Patient-reported outcome measures have become cornerstones in treatment evaluations. The Achilles tendon total rupture score (ATRS) has been developed for this purpose but requires additional validation. The purpose of the present study was to validate a Danish translation of the ATRS. The ATRS was translated into Danish according to internationally adopted standards. Of 142 patients, 90 with previous rupture of the Achilles tendon participated in the validity study and 52 in the reliability study. The ATRS showed moderately strong correlations with the physical subscores of the Medical Outcomes Study 36-item Short-Form Health Survey (r = .70 to .75; p < …) and with the … questionnaire (r = .71; p < …), supporting its criterion validity. For study and follow-up purposes, the ATRS seems reliable for comparisons of groups of patients. Its usability is limited for repeated assessment of individual patients. The development of analysis guidelines would be desirable. Copyright © 2013 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  14. Scoring the full extent of periodontal disease in the dog: development of a total mouth periodontal score (TMPS) system.

    Science.gov (United States)

    Harvey, Colin E; Laster, Larry; Shofer, Frances; Miller, Bonnie

    2008-09-01

    The development of a total mouth periodontal scoring system is described. This system uses methods to score the full extent of gingivitis and periodontitis of all tooth surfaces, weighted by size of teeth, and adjusted by size of dog.

  15. Highlights of TOMS Version 9 Total Ozone Algorithm

    Science.gov (United States)

    Bhartia, Pawan; Haffner, David

    2012-01-01

    The fundamental basis of the TOMS total ozone algorithm was developed some 45 years ago by Dave and Mateer. It was designed to estimate total ozone from satellite measurements of the backscattered UV radiances at a few discrete wavelengths in the Huggins ozone absorption band (310-340 nm). Over the years, as the need for higher accuracy in measuring total ozone from space has increased, several improvements to the basic algorithm have been made. They include: better correction for the effects of aerosols and clouds, an improved method to account for the variation in shape of ozone profiles with season, latitude, and total ozone, and a multi-wavelength correction for remaining profile shape errors. These improvements have made it possible to retrieve total ozone with just 3 spectral channels of moderate spectral resolution (approx. 1 nm) with accuracy comparable to state-of-the-art spectral fitting algorithms like DOAS that require high spectral resolution measurements at a large number of wavelengths. One of the deficiencies of the TOMS algorithm has been that it does not provide an error estimate. This is a particular problem at high latitudes, where the profile shape errors become significant and vary with latitude, season, total ozone, and instrument viewing geometry. The primary objective of the TOMS V9 algorithm is to account for these effects in estimating the error bars. This is done by a straightforward implementation of the Rodgers optimum estimation method using a priori ozone profiles and their error covariance matrices constructed using Aura MLS and ozonesonde data. The algorithm produces a vertical ozone profile that contains 1-2.5 pieces of information (degrees of freedom of signal) depending upon solar zenith angle (SZA). The profile is integrated to obtain the total column. We provide information that shows the altitude range in which the profile is best determined by the measurements. One can use this information in data assimilation and analysis. A side ...
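
    A minimal sketch of the linear Rodgers-type optimal estimation update described above, assuming a known Jacobian K, a priori covariance S_a, and measurement-error covariance S_e; the trace of the averaging kernel gives the "degrees of freedom of signal" the abstract mentions. The dimensions and inputs are placeholders, not the V9 climatologies.

```python
import numpy as np

def optimal_estimation(y, K, x_a, S_a, S_e):
    """Linear Rodgers-type retrieval combining measurement y with a priori x_a.

    K   : Jacobian / weighting functions, shape (m, n)
    S_a : a priori covariance, shape (n, n)
    S_e : measurement-error covariance, shape (m, m)
    Returns the retrieved state, its covariance, the averaging kernel,
    and the degrees of freedom of signal (trace of the averaging kernel).
    """
    S_e_inv = np.linalg.inv(S_e)
    S_hat = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a))
    G = S_hat @ K.T @ S_e_inv          # gain matrix
    x_hat = x_a + G @ (y - K @ x_a)    # retrieved profile
    A = G @ K                          # averaging kernel
    return x_hat, S_hat, A, np.trace(A)

# placeholder dimensions: 6 wavelengths, 4 ozone layers
rng = np.random.default_rng(0)
K = rng.normal(size=(6, 4))
x_a, S_a, S_e = np.zeros(4), np.eye(4), 0.01 * np.eye(6)
y = K @ (x_a + 0.5) + 0.1 * rng.normal(size=6)
x_hat, S_hat, A, dofs = optimal_estimation(y, K, x_a, S_a, S_e)
print(f"degrees of freedom of signal: {dofs:.2f}")
```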

  16. Analysing relations between specific and total liking scores

    DEFF Research Database (Denmark)

    Menichelli, Elena; Kraggerud, Hilde; Olsen, Nina Veflen

    2013-01-01

    The objective of this article is to present a new statistical approach for the study of consumer liking. Total liking data are extended by incorporating liking for specific sensory properties. The approach combines different analyses for the purpose of investigating the most important aspects...... of liking and indicating which products are similarly or differently perceived by which consumers. A method based on the differences between total liking and the specific liking variables is proposed for studying both relative differences among products and individual consumer differences. Segmentation...... is also tested out in order to distinguish consumers with the strongest differences in their liking values. The approach is illustrated by a case study, based on cheese data. In the consumer test consumers were asked to evaluate their total liking, the liking for texture and the liking for odour/taste. (C...

  17. Adaptive Proximal Point Algorithms for Total Variation Image Restoration

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2015-02-01

    Image restoration is a fundamental problem in various areas of imaging sciences. This paper presents a class of adaptive proximal point algorithms (APPA) with a contraction strategy for total variation image restoration. In each iteration, the proposed methods choose an adaptive proximal parameter matrix which is not necessarily symmetric. In fact, there is an inner extrapolation in the prediction step, which is followed by a correction step for contraction. And the inner extrapolation is implemented by an adaptive scheme. By using the framework of the contraction method, a global convergence result and a convergence rate of O(1/N) can be established for the proposed methods. Numerical results are reported to illustrate the efficiency of the APPA methods for solving total variation image restoration problems. Comparisons with state-of-the-art algorithms demonstrate that the proposed methods are comparable and promising.

  18. Solar Backscatter UV (SBUV) total ozone and profile algorithm

    Directory of Open Access Journals (Sweden)

    P. K. Bhartia

    2013-10-01

    We describe the algorithm that has been applied to develop a 42 yr record of total ozone and ozone profiles from eight Solar Backscatter UV (SBUV) instruments launched on NASA and NOAA satellites since April 1970. The Version 8 (V8) algorithm was released more than a decade ago and has been in use since then at NOAA to produce their operational ozone products. The current algorithm (V8.6) is basically the same as V8, except for updates to instrument calibration, incorporation of new ozone absorption cross-sections, and new ozone and cloud height climatologies. Since the V8 algorithm has been optimized for deriving monthly zonal mean (MZM) anomalies for ozone assessment and model comparisons, our emphasis in this paper is primarily on characterizing the sources of errors that are relevant for such studies. When data are analyzed this way the effect of some errors, such as vertical smoothing of short-term variability, and noise due to clouds and aerosols diminish in importance, while the importance of others, such as errors due to vertical smoothing of the quasi-biennial oscillation (QBO) and other periodic and aperiodic variations, become more important. With V8.6 zonal mean data we now provide smoothing kernels that can be used to compare anomalies in SBUV profile and partial ozone columns with models. In this paper we show how to use these kernels to compare SBUV data with Microwave Limb Sounder (MLS) ozone profiles. These kernels are particularly useful for comparisons in the lower stratosphere where SBUV profiles have poor vertical resolution but partial column ozone values have high accuracy. We also provide our best estimate of the smoothing errors associated with SBUV MZM profiles. Since smoothing errors are the largest source of uncertainty in these profiles, they can be treated as error bars in deriving interannual variability and trends using SBUV data and for comparing with other measurements. In the V8 and V8.6 algorithms we derive total ...

  19. Acute Radiation Syndrome Severity Score System in Mouse Total-Body Irradiation Model.

    Science.gov (United States)

    Ossetrova, Natalia I; Ney, Patrick H; Condliffe, Donald P; Krasnopolsky, Katya; Hieber, Kevin P

    2016-08-01

    Radiation accidents or terrorist attacks can result in serious consequences for the civilian population and for military personnel responding to such emergencies. The early medical management situation requires quantitative indications for early initiation of cytokine therapy in individuals exposed to life-threatening radiation doses and effective triage tools for first responders in mass-casualty radiological incidents. Previously established animal (Mus musculus, Macaca mulatta) total-body irradiation (γ-exposure) models have evaluated a panel of radiation-responsive proteins that, together with peripheral blood cell counts, create a multiparametric dose-predictive algorithm with a detection threshold of ~1 Gy from 1 to 7 d after exposure, and have demonstrated acute radiation syndrome severity score systems similar to the Medical Treatment Protocols for Radiation Accident Victims developed by Fliedner and colleagues. The authors present a further demonstration of the acute radiation sickness severity score system in a mouse (CD2F1, males) TBI model (1-14 Gy, 60Co γ-rays at 0.6 Gy/min) based on multiple biodosimetric endpoints. This includes the acute radiation sickness severity Observational Grading System, survival rate, weight changes, temperature, peripheral blood cell counts, and a radiation-responsive protein expression profile: Flt-3 ligand, interleukin 6, granulocyte colony-stimulating factor, thrombopoietin, erythropoietin, and serum amyloid A. Results show that use of the multiple-parameter severity score system facilitates identification of animals requiring enhanced monitoring after irradiation and that proteomics are a complementary approach to conventional biodosimetry for early assessment of radiation exposure, enhancing accuracy and the discrimination index for acute radiation sickness response categories and early prediction of outcome.

  20. Towards a contemporary, comprehensive scoring system for determining technical outcomes of hybrid percutaneous chronic total occlusion treatment: The RECHARGE score.

    Science.gov (United States)

    Maeremans, Joren; Spratt, James C; Knaapen, Paul; Walsh, Simon; Agostoni, Pierfrancesco; Wilson, William; Avran, Alexandre; Faurie, Benjamin; Bressollette, Erwan; Kayaert, Peter; Bagnall, Alan J; Smith, Dave; McEntegart, Margaret B; Smith, William H T; Kelly, Paul; Irving, John; Smith, Elliot J; Strange, Julian W; Dens, Jo

    2018-02-01

    This study sought to create a contemporary scoring tool to predict technical outcomes of chronic total occlusion (CTO) percutaneous coronary intervention (PCI) from patients treated by hybrid operators with differing experience levels. Current scoring systems need regular updating to cope with the positive evolutions regarding materials, techniques, and outcomes, while at the same time being applicable for a broad range of operators. Clinical and angiographic characteristics from 880 CTO-PCIs included in the REgistry of CrossBoss and Hybrid procedures in FrAnce, the NetheRlands, BelGium and UnitEd Kingdom (RECHARGE) were analyzed by using a derivation and validation set (2:1 ratio). Variables significantly associated with technical failure in the multivariable analysis were incorporated in the score. Subsequently, the discriminatory capacity was assessed and the validation set was used to compare with the J-CTO and PROGRESS scores. Technical success in the derivation and validation sets was 83% and 85%, respectively. Multivariate analysis identified six parameters associated with technical failure: blunt stump (beta coefficient (b) = 1.014); calcification (b = 0.908); tortuosity ≥45° (b = 0.964); lesion length ≥20 mm (b = 0.556); diseased distal landing zone (b = 0.794), and previous bypass graft on the CTO vessel (b = 0.833). Score variables remained significant after bootstrapping. The RECHARGE score showed better discriminatory capacity in both sets (area-under-the-curve (AUC) = 0.783 and 0.711), compared to the J-CTO (AUC = 0.676) and PROGRESS (AUC = 0.608) scores. The RECHARGE score is a novel, easy-to-use tool for assessing the risk of technical failure in hybrid CTO-PCI and has the potential to perform well for a broad community of operators. © 2017 Wiley Periodicals, Inc.
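
    As a reading aid for the coefficients listed above, the sketch below simply sums the reported beta coefficients for the risk factors that are present; the published RECHARGE score may translate these coefficients into simplified points, so this is an assumption-laden illustration rather than the validated scoring rule.

```python
# Beta coefficients exactly as reported in the abstract (one per risk factor).
RECHARGE_BETAS = {
    "blunt_stump": 1.014,
    "calcification": 0.908,
    "tortuosity_ge_45_deg": 0.964,
    "lesion_length_ge_20_mm": 0.556,
    "diseased_distal_landing_zone": 0.794,
    "previous_bypass_graft_on_cto_vessel": 0.833,
}

def recharge_linear_predictor(factors):
    """Sum the reported coefficients of the risk factors that are present (1/0)."""
    return sum(beta for name, beta in RECHARGE_BETAS.items() if factors.get(name))

example_lesion = {"blunt_stump": 1, "tortuosity_ge_45_deg": 1}
print(recharge_linear_predictor(example_lesion))  # 1.014 + 0.964 = 1.978
```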

  1. BitPAl: a bit-parallel, general integer-scoring sequence alignment algorithm.

    Science.gov (United States)

    Loving, Joshua; Hernandez, Yozen; Benson, Gary

    2014-11-15

    Mapping of high-throughput sequencing data and other bulk sequence comparison applications have motivated a search for high-efficiency sequence alignment algorithms. The bit-parallel approach represents individual cells in an alignment scoring matrix as bits in computer words and emulates the calculation of scores by a series of logic operations composed of AND, OR, XOR, complement, shift and addition. Bit-parallelism has been successfully applied to the longest common subsequence (LCS) and edit-distance problems, producing fast algorithms in practice. We have developed BitPAl, a bit-parallel algorithm for general, integer-scoring global alignment. Integer-scoring schemes assign integer weights for match, mismatch and insertion/deletion. The BitPAl method uses structural properties in the relationship between adjacent scores in the scoring matrix to construct classes of efficient algorithms, each designed for a particular set of weights. In timed tests, we show that BitPAl runs 7-25 times faster than a standard iterative algorithm. Source code is freely available for download at http://lobstah.bu.edu/BitPAl/BitPAl.html. BitPAl is implemented in C and runs on all major operating systems. jloving@bu.edu or yhernand@bu.edu or gbenson@bu.edu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
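
    BitPAl's bit-parallel machinery is not reproduced here; for comparison, the sketch below is the standard iterative dynamic-programming algorithm (the baseline BitPAl is benchmarked against) under an integer scoring scheme, with illustrative match/mismatch/indel weights.

```python
def global_alignment_score(s, t, match=1, mismatch=-1, indel=-2):
    """Standard iterative (Needleman-Wunsch style) global alignment score
    under an integer scoring scheme: one weight each for match, mismatch
    and insertion/deletion. Runs in O(len(s) * len(t)) time."""
    prev = [j * indel for j in range(len(t) + 1)]
    for i, a in enumerate(s, start=1):
        curr = [i * indel]
        for j, b in enumerate(t, start=1):
            diag = prev[j - 1] + (match if a == b else mismatch)
            curr.append(max(diag, prev[j] + indel, curr[j - 1] + indel))
        prev = curr
    return prev[-1]

print(global_alignment_score("GATTACA", "GCATGCU"))
```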

  2. CERAD Neuropsychological Total Scores Reflect Cortical Thinning in Prodromal Alzheimer's Disease

    Directory of Open Access Journals (Sweden)

    T. Paajanen

    2013-11-01

    Background: Sensitive cognitive global scores are beneficial in screening and monitoring for prodromal Alzheimer's disease (AD). Early cortical changes provide a novel opportunity for validating established cognitive total scores against the biological disease markers. Methods: We examined how two different total scores of the Consortium to Establish a Registry for Alzheimer's Disease (CERAD) battery and the Mini-Mental State Examination (MMSE) are associated with cortical thickness (CTH) in mild cognitive impairment (MCI) and prodromal AD. Cognitive and magnetic resonance imaging (MRI) data of 22 progressive MCI, 78 stable MCI, and 98 control subjects, and MRI data of 103 AD patients of the prospective multicenter study were analyzed. Results: CERAD total scores correlated with mean CTH more strongly (r = 0.34-0.38, p < …) … Conclusion: CERAD total scores are sensitive to the CTH signature of prodromal AD, which supports their biological validity in detecting early disease-related cognitive changes.

  3. Systematic Analysis of Painful Total Knee Prosthesis, a Diagnostic Algorithm

    Directory of Open Access Journals (Sweden)

    Oliver Djahani

    2013-12-01

    Remaining pain after total knee arthroplasty (TKA) is a common observation in about 20% of postoperative patients, and about 60% of these knees require early revision surgery within five years. Obvious causes of this pain can be identified with clinical examination and standard radiographs. However, unexplained painful TKA still remains a challenge for the surgeon. Management should include a multidisciplinary approach to the patient's pain as well as addressing the underlying etiology. There are a number of extrinsic (tendinopathy, hip, ankle, spine, CRPS, and so on) and intrinsic (infection, instability, malalignment, wear, and so on) causes of painful knee replacement. On average, diagnosis takes more than 12 months, and patients become very dissatisfied and some of them even develop psychological problems. Hence, a systematic diagnostic algorithm might be helpful. This review article aims to act as a guide to the evaluation of patients with painful TKA, described in 10 different steps. Furthermore, the preliminary results of a series of 100 consecutive cases will be discussed. Revision surgery was performed only in those cases with a clear failure mechanism.

  4. Generalization of the Lord-Wingersky Algorithm to Computing the Distribution of Summed Test Scores Based on Real-Number Item Scores

    Science.gov (United States)

    Kim, Seonghoon

    2013-01-01

    With known item response theory (IRT) item parameters, Lord and Wingersky provided a recursive algorithm for computing the conditional frequency distribution of number-correct test scores, given proficiency. This article presents a generalized algorithm for computing the conditional distribution of summed test scores involving real-number item…
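
    A minimal sketch of the original Lord-Wingersky recursion for dichotomous items at a fixed proficiency; the generalization described in the article, which handles real-number item scores, would replace the integer score lattice with a finer grid. The item probabilities below are placeholders.

```python
def lord_wingersky(p):
    """Conditional distribution of the number-correct score at a fixed proficiency.

    p : probabilities of a correct response for each dichotomous item.
    Returns a list f where f[x] = P(summed score == x | proficiency).
    """
    f = [1.0]                             # distribution before any item is added
    for p_i in p:
        g = [0.0] * (len(f) + 1)
        for x, prob in enumerate(f):
            g[x] += prob * (1.0 - p_i)    # item answered incorrectly
            g[x + 1] += prob * p_i        # item answered correctly
        f = g
    return f

# placeholder item probabilities at one value of proficiency
print(lord_wingersky([0.8, 0.6, 0.4]))    # probabilities for scores 0..3, sums to 1
```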

  5. Specific algorithm method of scoring the Clock Drawing Test applied in cognitively normal elderly

    Directory of Open Access Journals (Sweden)

    Liana Chaves Mendes-Santos

    The Clock Drawing Test (CDT) is an inexpensive, fast and easily administered measure of cognitive function, especially in the elderly. This instrument is a popular clinical tool widely used in screening for cognitive disorders and dementia. The CDT can be applied in different ways and scoring procedures also vary. OBJECTIVE: The aims of this study were to analyze the performance of elderly on the CDT and evaluate inter-rater reliability of the CDT scored by using a specific algorithm method adapted from Sunderland et al. (1989). METHODS: We analyzed the CDT of 100 cognitively normal elderly aged 60 years or older. The CDT ("free-drawn") and the Mini-Mental State Examination (MMSE) were administered to all participants. Six independent examiners scored the CDT of 30 participants to evaluate inter-rater reliability. RESULTS AND CONCLUSION: A score of 5 on the proposed algorithm ("Numbers in reverse order or concentrated"), equivalent to 5 points on the original Sunderland scale, was the most frequent (53.5%). The CDT specific algorithm method used had high inter-rater reliability (p<0.01), and the mean score ranged from 5.06 to 5.96. The high frequency of an overall score of 5 points may suggest the need to create more nuanced evaluation criteria, which are sensitive to differences in levels of impairment in visuoconstructive and executive abilities during aging.

  6. Automatic Algorithm for the Determination of the Anderson-Wilkins Acuteness Score in Patients With ST Elevation Myocardial Infarction

    DEFF Research Database (Denmark)

    Fakhri, Yama; Sejersten, Maria; Schoos, Mikkel Malby

    2016-01-01

    using 50 ECGs. Each ECG lead (except aVR) was manually scored according to AW-score by two independent experts (Exp1 and Exp2) and automatically by our designed algorithm (auto-score). An adjudicated manual score (Adj-score) was determined between Exp1 and Exp2. The inter-rater reliabilities (IRRs...

  7. Empirical validation of the S-Score algorithm in the analysis of gene expression data

    Directory of Open Access Journals (Sweden)

    Archer Kellie J

    2006-03-01

    Background: Current methods of analyzing Affymetrix GeneChip® microarray data require the estimation of probe set expression summaries, followed by application of statistical tests to determine which genes are differentially expressed. The S-Score algorithm described by Zhang and colleagues is an alternative method that allows tests of hypotheses directly from probe level data. It is based on an error model in which the detected signal is proportional to the probe pair signal for highly expressed genes, but approaches a background level (rather than 0) for genes with low levels of expression. This model is used to calculate relative change in probe pair intensities that converts probe signals into multiple measurements with equalized errors, which are summed over a probe set to form the S-Score. Assuming no expression differences between chips, the S-Score follows a standard normal distribution, allowing direct tests of hypotheses to be made. Using spike-in and dilution datasets, we validated the S-Score method against comparisons of gene expression utilizing the more recently developed methods RMA, dChip, and MAS5. Results: The S-Score showed excellent sensitivity and specificity in detecting low-level gene expression changes. Rank ordering of S-Score values more accurately reflected known fold-change values compared to other algorithms. Conclusion: The S-Score method, utilizing probe level data directly, offers significant advantages over comparisons using only probe set expression summaries.

  8. An application of locally linear model tree algorithm with combination of feature selection in credit scoring

    Science.gov (United States)

    Siami, Mohammad; Gholamian, Mohammad Reza; Basiri, Javad

    2014-10-01

    Nowadays, credit scoring is one of the most important topics in the banking sector. Credit scoring models have been widely used to facilitate the process of credit assessment. In this paper, an application of the locally linear model tree algorithm (LOLIMOT) was evaluated to assess the superiority of its performance in predicting the customer's credit status. The algorithm is adapted to the credit scoring domain by means of data fusion and feature selection techniques. Two real-world credit data sets, Australian and German, from the UCI machine learning database were selected to demonstrate the performance of our new classifier. The analytical results indicate that the improved LOLIMOT significantly increases the prediction accuracy.

  9. Totally opportunistic routing algorithm (TORA) for underwater wireless sensor network.

    Science.gov (United States)

    Rahman, Ziaur; Hashim, Fazirulhisyam; Rasid, Mohd Fadlee A; Othman, Mohamed

    2018-01-01

    Underwater Wireless Sensor Network (UWSN) has emerged as a promising networking technique to monitor and explore oceans. Research on acoustic communication has been conducted for decades, but has focused mostly on issues related to the physical layer such as high latency, low bandwidth, and high bit error rates. However, the data gathering process is still severely limited in UWSN due to channel impairment. One way to improve data collection in UWSN is the design of the routing protocol. Opportunistic Routing (OR) is an emerging technique that has the ability to improve the performance of wireless networks, notably acoustic networks. In this paper, we propose an anycast, geographical and totally opportunistic routing algorithm for UWSN, called TORA. Our proposed scheme is designed to avoid horizontal transmission, reduce end-to-end delay, overcome the problem of void nodes, and maximize throughput and energy efficiency. We use TOA (Time of Arrival) and a range-based equation to localize nodes recursively within a network. Once nodes are localized, their location coordinates and residual energy are used as a metric to select the best available forwarder. All data packets may or may not be acknowledged based on the status of the sender and receiver. Thus, the number of acknowledgments for a particular data packet may vary from zero to two hops. Extensive simulations were performed to evaluate the performance of the proposed scheme for high network traffic load under very sparse and very dense network scenarios. Simulation results show that TORA significantly improves the network performance when compared to some relevant existing routing protocols, such as VBF, HHVBF, VAPR, and H2DAB, for energy consumption, packet delivery ratio, average end-to-end delay, average hop-count and propagation deviation factor. TORA reduces energy consumption by an average of 35% relative to VBF, 40% relative to HH-VBF, 15% relative to VAPR, and 29% relative to H2DAB, whereas the packet delivery ratio has been improved by an average of 43% relative to VBF, 26
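
    TORA's full protocol (recursive TOA localization, opportunistic acknowledgements) is not reproduced here; the sketch below is only a hypothetical forwarder-selection helper illustrating the "location plus residual energy" criterion and the avoidance of horizontal transmission described above. The weighting, function names, and data are assumptions, not the paper's actual metric.

```python
import math

def select_forwarder(sender, sink, neighbors, w_progress=0.7, w_energy=0.3):
    """Pick the neighbor that best trades off progress toward the sink against
    residual energy. `neighbors` maps node id -> (x, y, z, residual_energy in [0, 1]).
    The weights and scoring rule are hypothetical, not the paper's actual metric."""
    d_sender = math.dist(sender, sink)
    best_id, best_score = None, -math.inf
    for node_id, (x, y, z, energy) in neighbors.items():
        progress = d_sender - math.dist((x, y, z), sink)
        if progress <= 0:
            continue                       # skip horizontal/backward candidates
        score = w_progress * (progress / d_sender) + w_energy * energy
        if score > best_score:
            best_id, best_score = node_id, score
    return best_id                         # None signals a void region

sender, sink = (0.0, 0.0, 500.0), (0.0, 0.0, 0.0)
neighbors = {"a": (5.0, 0.0, 420.0, 0.8), "b": (0.0, 3.0, 460.0, 0.9)}
print(select_forwarder(sender, sink, neighbors))
```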

  10. Totally opportunistic routing algorithm (TORA) for underwater wireless sensor network

    Science.gov (United States)

    Hashim, Fazirulhisyam; Rasid, Mohd Fadlee A.; Othman, Mohamed

    2018-01-01

    Underwater Wireless Sensor Network (UWSN) has emerged as a promising networking technique to monitor and explore oceans. Research on acoustic communication has been conducted for decades, but has focused mostly on issues related to the physical layer such as high latency, low bandwidth, and high bit error rates. However, the data gathering process is still severely limited in UWSN due to channel impairment. One way to improve data collection in UWSN is the design of the routing protocol. Opportunistic Routing (OR) is an emerging technique that has the ability to improve the performance of wireless networks, notably acoustic networks. In this paper, we propose an anycast, geographical and totally opportunistic routing algorithm for UWSN, called TORA. Our proposed scheme is designed to avoid horizontal transmission, reduce end-to-end delay, overcome the problem of void nodes, and maximize throughput and energy efficiency. We use TOA (Time of Arrival) and a range-based equation to localize nodes recursively within a network. Once nodes are localized, their location coordinates and residual energy are used as a metric to select the best available forwarder. All data packets may or may not be acknowledged based on the status of the sender and receiver. Thus, the number of acknowledgments for a particular data packet may vary from zero to two hops. Extensive simulations were performed to evaluate the performance of the proposed scheme for high network traffic load under very sparse and very dense network scenarios. Simulation results show that TORA significantly improves the network performance when compared to some relevant existing routing protocols, such as VBF, HHVBF, VAPR, and H2DAB, for energy consumption, packet delivery ratio, average end-to-end delay, average hop-count and propagation deviation factor. TORA reduces energy consumption by an average of 35% relative to VBF, 40% relative to HH-VBF, 15% relative to VAPR, and 29% relative to H2DAB, whereas the packet delivery ratio has been improved by an average of 43% relative to VBF, 26

  11. Total hip arthroplasty outcomes assessment using functional and radiographic scores to compare canine systems.

    Science.gov (United States)

    Iwata, D; Broun, H C; Black, A P; Preston, C A; Anderson, G I

    2008-01-01

    A retrospective multi-centre study was carried out in order to compare outcomes between cemented and uncemented total hip arthroplasties (THA). A quantitative orthopaedic outcome assessment scoring system was devised in order to relate functional outcome to a numerical score, to allow comparison between treatments and amongst centres. The system combined a radiographic score and a clinical score. Lower scores reflect better outcomes than higher scores. Consecutive cases of THA were included from two specialist practices between July 2002 and December 2005. The study included 46 THA patients (22 uncemented THA followed for 8.3 +/- 4.7M and 24 cemented THA for 26.0 +/- 15.7M) with a mean age of 4.4 +/- 3.3 years at surgery. Multi-variable linear and logistical regression analyses were performed with adjustments for age at surgery, surgeon, follow-up time, uni- versus bilateral disease, gender and body weight. The differences between treatment groups in terms of functional scores or total scores were not significant (p > 0.05). Radiographic scores were different between treatment groups. However, these scores were usually assessed within two months of surgery and proved unreliable predictors of functional outcome (p > 0.05). The findings reflect relatively short-term follow-up, especially for the uncemented group, and do not include clinician-derived measures, such as goniometry and thigh circumference. Longer-term follow-up for the radiographic assessments is essential. A prospective study including the clinician-derived outcomes needs to be performed in order to validate the outcome instrument in its modified form.

  12. Can We Train Machine Learning Methods to Outperform the High-dimensional Propensity Score Algorithm?

    Science.gov (United States)

    Karim, Mohammad Ehsanul; Pang, Menglan; Platt, Robert W

    2018-03-01

    The use of retrospective health care claims datasets is frequently criticized for the lack of complete information on potential confounders. Utilizing patient's health status-related information from claims datasets as surrogates or proxies for mismeasured and unobserved confounders, the high-dimensional propensity score algorithm enables us to reduce bias. Using a previously published cohort study of postmyocardial infarction statin use (1998-2012), we compare the performance of the algorithm with a number of popular machine learning approaches for confounder selection in high-dimensional covariate spaces: random forest, least absolute shrinkage and selection operator, and elastic net. Our results suggest that, when the data analysis is done with epidemiologic principles in mind, machine learning methods perform as well as the high-dimensional propensity score algorithm. Using a plasmode framework that mimicked the empirical data, we also showed that a hybrid of machine learning and high-dimensional propensity score algorithms generally perform slightly better than both in terms of mean squared error, when a bias-based analysis is used.
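
    A minimal scikit-learn sketch of one of the compared strategies, an L1-penalized (LASSO-type) logistic regression used as a propensity score model in a high-dimensional covariate space. The data, variable names, and penalty strength are hypothetical, not the statin cohort; the high-dimensional propensity score algorithm itself selects proxy covariates by prevalence and empirical association rather than by shrinkage.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, p = 1000, 200                      # hypothetical claims-derived covariates
X = rng.normal(size=(n, p))
treatment = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X[:, 0] + 0.5 * X[:, 1]))))

# The L1 penalty performs confounder selection by shrinking most coefficients
# to zero; the fitted probabilities serve as propensity scores.
ps_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
ps_model.fit(X, treatment)
propensity_scores = ps_model.predict_proba(X)[:, 1]

print("covariates retained:", int((ps_model.coef_ != 0).sum()))
```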

  13. Coping strategies related to total stress score among post graduate medical students and residents

    Directory of Open Access Journals (Sweden)

    R. Irawati Ismail

    2013-05-01

    Objective: to identify several dominant coping strategies related to total stress score levels. Methods: A cross-sectional study with purposive sampling among postgraduate medical students of the Faculty of Medicine, Universitas Indonesia was done in April-July 2011. We used a coping strategies questionnaire and the WHO SRQ-20. Linear regression was used to identify dominant coping strategies related to stress levels. Results: This study had 272 subjects, aged 23-47 years. Four items decreased the total stress score (accepting the reality of the fact, talking to someone who could do something, seeking God's help, and laughing about the situation). However, three factors increased the total stress score (taking one step at a time, talking to someone to find out more about the situation, and admitting being unable to deal with the situation). One point of accepting the reality of the situation reduced the total stress score by 0.493 points (regression coefficient (β) = -0.493; P = 0.002), while one point of seeking God's help reduced the total stress score by 0.307 points (β = -0.307; P = 0.056). However, one point of taking one step at a time increased the total stress score by 0.54 points (β = 0.540; P = 0.005). Conclusions: Accepting the reality of the situation, talking to someone who could do something, seeking God's help, and laughing about the situation decreased the stress level. However, taking one step at a time, talking to someone to find out more about the situation, and admitting being unable to deal with the situation increased the total stress score. Key words: stress level, coping strategies, age, seeking God's help

  14. Interobserver Reliability of the Total Body Score System for Quantifying Human Decomposition.

    Science.gov (United States)

    Dabbs, Gretchen R; Connor, Melissa; Bytheway, Joan A

    2016-03-01

    Several authors have tested the accuracy of the Total Body Score (TBS) method for quantifying decomposition, but none have examined the reliability of the method as a scoring system by testing interobserver error rates. Sixteen participants used the TBS system to score 59 observation packets including photographs and written descriptions of 13 human cadavers in different stages of decomposition (postmortem interval: 2-186 days). Data analysis used a two-way random model intraclass correlation in SPSS (v. 17.0). The TBS method showed "almost perfect" agreement between observers, with average absolute correlation coefficients of 0.990 and average consistency correlation coefficients of 0.991. While the TBS method may have sources of error, scoring reliability is not one of them. Individual component scores were examined, and the influences of education and experience levels were investigated. Overall, the trunk component scores were the least concordant. Suggestions are made to improve the reliability of the TBS method. © 2016 American Academy of Forensic Sciences.

  15. Two-step calibration method for multi-algorithm score-based face recognition systems by minimizing discrimination loss

    NARCIS (Netherlands)

    Susyanto, N.; Veldhuis, R.N.J.; Spreeuwers, L.J.; Klaassen, C.A.J.; Fierrez, J.; Li, S.Z.; Ross, A.; Veldhuis, R.; Alonso-Fernandez, F.; Bigun, J.

    2016-01-01

    We propose a new method for combining multi-algorithm score-based face recognition systems, which we call the two-step calibration method. Typically, algorithms for face recognition systems produce dependent scores. The two-step method is based on parametric copulas to handle this dependence. Its

  16. Cross-cultural adaptation and validation of Persian Achilles tendon Total Rupture Score.

    Science.gov (United States)

    Ansari, Noureddin Nakhostin; Naghdi, Soofia; Hasanvand, Sahar; Fakhari, Zahra; Kordi, Ramin; Nilsson-Helander, Katarina

    2016-04-01

    To cross-culturally adapt the Achilles tendon Total Rupture Score (ATRS) to the Persian language and to preliminarily evaluate the reliability and validity of a Persian ATRS. A cross-sectional and prospective cohort study was conducted to translate and cross-culturally adapt the ATRS to the Persian language (ATRS-Persian) following steps described in guidelines. Thirty patients with total Achilles tendon rupture and 30 healthy subjects participated in this study. Psychometric properties of floor/ceiling effects (responsiveness), internal consistency reliability, test-retest reliability, standard error of measurement (SEM), smallest detectable change (SDC), construct validity, and discriminant validity were tested. Factor analysis was performed to determine the ATRS-Persian structure. There were no floor or ceiling effects, which indicates the content validity and responsiveness of the ATRS-Persian. Internal consistency was high (Cronbach's α 0.95). Item-total correlations exceeded the acceptable standard of 0.3 for all items (0.58-0.95). The test-retest reliability was excellent (ICC agreement = 0.98). SEM and SDC were 3.57 and 9.9, respectively. Construct validity was supported by a significant correlation between the ATRS-Persian total score and the Persian Foot and Ankle Outcome Score (PFAOS) total score and PFAOS subscales (r = 0.55-0.83). The ATRS-Persian significantly discriminated between patients and healthy subjects. Exploratory factor analysis revealed one component. The ATRS was cross-culturally adapted to Persian and demonstrated to be a reliable and valid instrument to measure functional outcomes in Persian patients with Achilles tendon rupture. Level of evidence: II.

  17. An Overview of the Total Lightning Jump Algorithm: Past, Present and Future Work

    Science.gov (United States)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.; Deierling, Wiebke; Kessinger, Cathy

    2011-01-01

    Rapid increases in total lightning prior to the onset of severe and hazardous weather have been observed for several decades. These rapid increases are known as lightning jumps and can precede the occurrence of severe weather by tens of minutes. Over the past decade, a significant effort has been made to quantify lightning jump behavior in relation to its utility as a predictor of severe and hazardous weather. Based on a study of 34 thunderstorms that occurred in the Tennessee Valley, early work conducted in our group at Huntsville determined that it was indeed possible to create a reasonable operational lightning jump algorithm (LJA) based on a statistical framework relying on the variance behavior of the lightning trending signal. We then expanded this framework and tested several variance-related LJA configurations on a much larger sample of 87 severe and nonsevere thunderstorms. This study determined that a configuration named the "2(sigma)" algorithm had the most promise for development of the operational LJA, with a probability of detection (POD) of 87%, a false alarm rate (FAR) of 33%, and a Heidke Skill Score (HSS) of 0.75. The 2(sigma) algorithm was then tested on an even larger sample of 711 thunderstorms of all types from four regions of the country where total lightning measurement capability existed. The result was very encouraging. Despite the larger number of storms and the inclusion of different regions of the country, the POD remained high (79%), the FAR was low (36%) and the HSS was solid (0.71). Average lead time from jump to severe weather occurrence was 20.65 minutes, with a standard deviation of +/- 15 minutes. Also, trends in total lightning were compared to cloud-to-ground (CG) lightning trends, and it was determined that total lightning trends had a higher POD (79% vs 66%), lower FAR (36% vs 54%) and a better HSS (0.71 vs 0.55). From the 711-storm case study it was determined that a majority of missed events were due to severe weather producing ...
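
    A sketch of the "2(sigma)" idea as described above: flag a jump when the current time rate of change of the total flash rate (DFRDT) exceeds the recent DFRDT trend by two standard deviations. The window length, analysis period, and synthetic flash rates are illustrative; the operational configuration differs in detail.

```python
import numpy as np

def lightning_jumps(flash_rates, window=6, sigma_mult=2.0):
    """Flag '2-sigma' style lightning jumps in a series of total flash rates.

    flash_rates : total flash rate per analysis period (e.g., per 2 minutes).
    A jump is flagged when the current rate of change (DFRDT) exceeds the mean
    plus `sigma_mult` standard deviations of the preceding `window` DFRDT values.
    """
    dfrdt = np.diff(np.asarray(flash_rates, dtype=float))
    jumps = []
    for t in range(window, len(dfrdt)):
        history = dfrdt[t - window:t]
        if dfrdt[t] > history.mean() + sigma_mult * history.std():
            jumps.append(t + 1)            # index into flash_rates
    return jumps

rates = [2, 3, 3, 4, 4, 5, 5, 6, 18, 30, 28, 25]   # synthetic ramp-up
print(lightning_jumps(rates))
```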

  18. An algorithm for total variation regularized photoacoustic imaging

    DEFF Research Database (Denmark)

    Dong, Yiqiu; Görner, Torsten; Kunis, Stefan

    2014-01-01

    Recovery of image data from photoacoustic measurements asks for the inversion of the spherical mean value operator. In contrast to direct inversion methods for specific geometries, we consider a semismooth Newton scheme to solve a total variation regularized least squares problem. During the iteration, each matrix-vector multiplication is realized in an efficient way using a recently proposed spectral discretization of the spherical mean value operator. All theoretical results are illustrated by numerical experiments.

  19. A constrained optimization algorithm for total energy minimization in electronic structure calculations

    International Nuclear Information System (INIS)

    Yang Chao; Meza, Juan C.; Wang Linwang

    2006-01-01

    A new direct constrained optimization algorithm for minimizing the Kohn-Sham (KS) total energy functional is presented in this paper. The key ingredients of this algorithm involve projecting the total energy functional into a sequence of subspaces of small dimensions and seeking the minimizer of total energy functional within each subspace. The minimizer of a subspace energy functional not only provides a search direction along which the KS total energy functional decreases but also gives an optimal 'step-length' to move along this search direction. Numerical examples are provided to demonstrate that this new direct constrained optimization algorithm can be more efficient than the self-consistent field (SCF) iteration

  20. Fall Risk Score at the Time of Discharge Predicts Readmission Following Total Joint Arthroplasty.

    Science.gov (United States)

    Ravi, Bheeshma; Nan, Zhang; Schwartz, Adam J; Clarke, Henry D

    2017-07-01

    Readmission among Medicare recipients is a leading driver of healthcare expenditure. To date, most predictive tools are too coarse for direct clinical application. Our objective in this study is to determine if a pre-existing tool to identify patients at increased risk for inpatient falls, the Hendrich Fall Risk Score, could be used to accurately identify Medicare patients at increased risk for readmission following arthroplasty, regardless of whether the readmission was due to a fall. This study is a retrospective cohort study. We identified 2437 Medicare patients who underwent a primary elective total joint arthroplasty (TJA) of the hip or knee for osteoarthritis between 2011 and 2014. The Hendrich Fall Risk score was recorded for each patient preoperatively and postoperatively. Our main outcome measure was hospital readmission within 30 days of discharge. Of 2437 eligible TJA recipients, there were 226 (9.3%) patients who had a score ≥6. These patients were more likely to have an unplanned readmission (unadjusted odds ratio 2.84, 95% confidence interval 1.70-4.76, P < …), more likely to have a length of stay >3 days (49.6% vs 36.6%, P = .0001), and were less likely to be sent home after discharge (20.8% vs 35.8%, P < …). An elevated fall risk score after TJA is strongly associated with unplanned readmission. Application of this tool will allow hospitals to identify these patients and plan their discharge. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Actigraphy-based sleep estimation in adolescents and adults: a comparison with polysomnography using two scoring algorithms

    Directory of Open Access Journals (Sweden)

    Quante M

    2018-01-01

    … sensitive (0.88–0.96) to detect sleep, but less specific (0.35–0.64) to detect wake than the Sadeh algorithm (sensitivity: 0.82–0.91, specificity: 0.47–0.68). Total sleep time measured using the GT3X+ with both algorithms was similar to that obtained by PSG (ICC=0.64–0.88). In contrast, agreement between the GT3X+ and PSG wake after sleep onset was poor (ICC=0.00–0.10). In adults, the GT3X+ using the Cole–Kripke algorithm provided data comparable to the AWS (mean bias=3.7±19.7 minutes for total sleep time and 8.0±14.2 minutes for wake after sleep onset). Conclusion: The two actigraphs provided comparable and accurate data compared to PSG, although both poorly identified wake episodes (i.e., had low specificity). Use of the actigraphy scoring algorithm influenced the mean bias and level of agreement in sleep–wake time estimates. The GT3X+, when analyzed by the Cole–Kripke, but not the Sadeh algorithm, provided comparable data to the AWS. Keywords: validation, actigraphy, polysomnography, scoring algorithm

  2. Research on compressive sensing reconstruction algorithm based on total variation model

    Science.gov (United States)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing breaks through the Nyquist sampling theorem and provides a strong theoretical basis for sampling and compressing image signals simultaneously. In imaging procedures based on compressed sensing theory, both the storage space and the demand on detector resolution can be reduced greatly. By exploiting the sparsity of the image signal and solving an inverse reconstruction model, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressive sensing and to a large extent determines the accuracy of the reconstructed image. The reconstruction algorithm based on the total variation (TV) model is well suited to compressive reconstruction of two-dimensional images, and better edge information can be obtained. In order to verify the performance of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analyzed under different coding modes to assess the stability of the algorithm. This paper also compares and analyzes typical reconstruction algorithms under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with traditional classical TV-based algorithms, the proposed reconstruction algorithm has clear advantages and can quickly and accurately recover the target image at low measurement rates.

  3. Total Mini-Mental State Examination score and regional cerebral blood flow using Z score imaging and automated ROI analysis software in subjects with memory impairment

    International Nuclear Information System (INIS)

    Ikeda, Eiji; Shiozaki, Kazumasa; Takahashi, Nobukazu; Togo, Takashi; Odawara, Toshinari; Oka, Takashi; Inoue, Tomio; Hirayasu, Yoshio

    2008-01-01

    The Mini-Mental State Examination (MMSE) is considered a useful supplementary method to diagnose dementia and evaluate the severity of cognitive disturbance. However, the region of the cerebrum that correlates with the MMSE score is not clear. Recently, a new method was developed to analyze regional cerebral blood flow (rCBF) using a Z score imaging system (eZIS). This system shows changes of rCBF when compared with a normal database. In addition, a three-dimensional stereotaxic region of interest (ROI) template (3DSRT), fully automated ROI analysis software was developed. The objective of this study was to investigate the correlation between rCBF changes and total MMSE score using these new methods. The association between total MMSE score and rCBF changes was investigated in 24 patients (mean age±standard deviation (SD) 71.5±9.2 years; 6 men and 18 women) with memory impairment using eZIS and 3DSRT. Step-wise multiple regression analysis was used for multivariate analysis, with the total MMSE score as the dependent variable and rCBF change in 24 areas as the independent variable. Total MMSE score was significantly correlated only with the reduction of left hippocampal perfusion but not with right (P<0.01). Total MMSE score is an important indicator of left hippocampal function. (author)

  4. Do Press Ganey Scores Correlate With Total Knee Arthroplasty-Specific Outcome Questionnaires in Postsurgical Patients?

    Science.gov (United States)

    Chughtai, Morad; Patel, Nirav K; Gwam, Chukwuweike U; Khlopas, Anton; Bonutti, Peter M; Delanois, Ronald E; Mont, Michael A

    2017-09-01

    The purpose of this study was to assess whether Centers for Medicare and Medicaid Services-implemented satisfaction (Press Ganey [PG]) survey results correlate with established total knee arthroplasty (TKA) assessment tools. Data from 736 patients who underwent TKA and received a PG survey between November 2009 and January 2015 were analyzed. The PG survey overall hospital rating scores were correlated with standardized validated outcome assessment tools for TKA (Short form-12 and 36 Health Survey; Knee Society Score; Western Ontario and McMaster Universities Arthritis Index; University of California, Los Angeles; and visual analog scale) at a mean follow-up of 1154 days post-TKA. There was no correlation between PG survey overall hospital rating score and the above-mentioned outcome assessment tools. Our study shows that there is no statistically significant relationship between established arthroplasty assessment tools and the PG overall hospital rating. Therefore, PG surveys may not be an appropriate tool to determine reimbursement for orthopedists performing TKAs. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. A polynomial time algorithm for checking regularity of totally normed process algebra

    NARCIS (Netherlands)

    Yang, F.; Huang, H.

    2015-01-01

    A polynomial algorithm for the regularity problem of weak and branching bisimilarity on totally normed process algebra (PA) processes is given. Its time complexity is O(n^3 + mn), where n is the number of transition rules and m is the maximal length of the rules. The algorithm works for

  6. Fractional-Order Total Variation Image Restoration Based on Primal-Dual Algorithm

    OpenAIRE

    Chen, Dali; Chen, YangQuan; Xue, Dingyu

    2013-01-01

    This paper proposes a fractional-order total variation image denoising algorithm based on the primal-dual method, which provides a much more elegant and effective way of treating problems of the algorithm implementation, ill-posed inverse, convergence rate, and blocky effect. The fractional-order total variation model is introduced by generalizing the first-order model, and the corresponding saddle-point and dual formulation are constructed in theory. In order to guarantee O(1/N^2) conv...

  7. Reliability and validation of the Dutch Achilles tendon Total Rupture Score.

    Science.gov (United States)

    Opdam, K T M; Zwiers, R; Wiegerinck, J I; Kleipool, A E B; Haverlag, R; Goslings, J C; van Dijk, C N

    2018-03-01

    Patient-reported outcome measures (PROMs) have become a cornerstone for the evaluation of the effectiveness of treatment. The Achilles tendon Total Rupture Score (ATRS) is a PROM for outcome and assessment of an Achilles tendon rupture. The aim of this study was to translate the ATRS to Dutch and evaluate its reliability and validity in the Dutch population. A forward-backward translation procedure was performed according to the guidelines of cross-cultural adaptation process. The Dutch ATRS was evaluated for reliability and validity in patients treated for a total Achilles tendon rupture from 1 January 2012 to 31 December 2014 in one teaching hospital and one academic hospital. Reliability was assessed by the intraclass correlation coefficients (ICC), Cronbach's alpha and minimal detectable change (MDC). We assessed construct validity by calculation of Spearman's rho correlation coefficient with domains of the Foot and Ankle Outcome Score (FAOS), Victorian Institute of Sports Assessment-Achilles questionnaire (VISA-A) and Numeric Rating Scale (NRS) for pain in rest and during running. The Dutch ATRS had a good test-retest reliability (ICC = 0.852) and a high internal consistency (Cronbach's alpha = 0.96). MDC was 30.2 at individual level and 3.5 at group level. Construct validity was supported by 75 % of the hypothesized correlations. The Dutch ATRS had a strong correlation with NRS for pain during running (r = -0.746) and all the five subscales of the Dutch FAOS (r = 0.724-0.867). There was a moderate correlation with the VISA-A-NL (r = 0.691) and NRS for pain in rest (r = -0.580). The Dutch ATRS shows an adequate reliability and validity and can be used in the Dutch population for measuring the outcome of treatment of a total Achilles tendon rupture and for research purposes. Diagnostic study, Level I.
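    The record reports reliability via the ICC, Cronbach's alpha and the minimal detectable change (MDC) at individual and group level. As a hedged illustration of how an MDC is commonly derived from test-retest data (the exact formula used by the authors is not stated in the record), the snippet below computes the standard error of measurement from the ICC and the sample standard deviation and converts it to a 95%-level MDC; the SD and sample size in the example are illustrative, not taken from the study.

      import math

      def mdc95_individual(sd: float, icc: float) -> float:
          """Individual-level minimal detectable change at the 95% level, using
          SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM.
          (A common psychometric convention; assumed here, not quoted from the study.)"""
          sem = sd * math.sqrt(1.0 - icc)
          return 1.96 * math.sqrt(2.0) * sem

      def mdc95_group(mdc_individual: float, n: int) -> float:
          """Group-level MDC, dividing the individual MDC by sqrt(n)."""
          return mdc_individual / math.sqrt(n)

      if __name__ == "__main__":
          mdc_ind = mdc95_individual(sd=28.0, icc=0.852)   # illustrative SD
          print(round(mdc_ind, 1), round(mdc95_group(mdc_ind, n=75), 1))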

  8. The novel EuroSCORE II algorithm predicts the hospital mortality of thoracic aortic surgery in 461 consecutive Japanese patients better than both the original additive and logistic EuroSCORE algorithms.

    Science.gov (United States)

    Nishida, Takahiro; Sonoda, Hiromichi; Oishi, Yasuhisa; Tanoue, Yoshihisa; Nakashima, Atsuhiro; Shiokawa, Yuichi; Tominaga, Ryuji

    2014-04-01

    The European System for Cardiac Operative Risk Evaluation (EuroSCORE) II was developed to improve the overestimation of surgical risk associated with the original (additive and logistic) EuroSCOREs. The purpose of this study was to evaluate the significance of the EuroSCORE II by comparing its performance with that of the original EuroSCOREs in Japanese patients undergoing surgery on the thoracic aorta. We have calculated the predicted mortalities according to the additive EuroSCORE, logistic EuroSCORE and EuroSCORE II algorithms in 461 patients who underwent surgery on the thoracic aorta during a period of 20 years (1993-2013). The actual in-hospital mortality rates in the low- (additive EuroSCORE of 3-6), moderate- (7-11) and high-risk (≥11) groups (followed by overall mortality) were 1.3, 6.2 and 14.4% (7.2% overall), respectively. Among the three different risk groups, the expected mortality rates were 5.5 ± 0.6, 9.1 ± 0.7 and 13.5 ± 0.2% (9.5 ± 0.1% overall) by the additive EuroSCORE algorithm, 5.3 ± 0.1, 16 ± 0.4 and 42.4 ± 1.3% (19.9 ± 0.7% overall) by the logistic EuroSCORE algorithm and 1.6 ± 0.1, 5.2 ± 0.2 and 18.5 ± 1.3% (7.4 ± 0.4% overall) by the EuroSCORE II algorithm, indicating poor prediction by the original algorithms. The areas under the receiver operating characteristic curves for the additive EuroSCORE, logistic EuroSCORE and EuroSCORE II algorithms were 0.6937, 0.7169 and 0.7697, respectively. Thus, the mortality expected by the EuroSCORE II more closely matched the actual mortality in all three risk groups. In contrast, the mortality expected by the logistic EuroSCORE overestimated the risks in the moderate- (P = 0.0002) and high-risk (P < 0.0001) patient groups. Although all of the original EuroSCOREs and EuroSCORE II appreciably predicted the surgical mortality for thoracic aortic surgery in Japanese patients, the EuroSCORE II best predicted the mortalities in all risk groups.

  9. MRI quantitative assessment of brain maturation and prognosis in premature infants using total maturation score

    International Nuclear Information System (INIS)

    Qi Ying; Wang Xiaoming

    2009-01-01

    Objective: To quantitatively assess brain maturation and prognosis in premature infants on conventional MRI using the total maturation score (TMS). Methods: Nineteen cases of sequelae of white matter damage (WMD group) and 21 matched controls (control group) among premature infants confirmed by MRI examination were included in the study. All cases underwent conventional MR imaging approximately during the perinatal period after birth. Brain development was quantitatively assessed using Childs' validated TMS scoring system by two experienced radiologists. Interobserver agreement and reliability were evaluated using the intraclass correlation coefficient (ICC). Linear regression analysis between TMS and postmenstrual age (PMA) was performed (Y: TMS, X: PMA). An independent-sample t test of the two groups' TMS was performed. Results: Sixteen of 19 cases revealed MRI abnormalities. Lesions showing T1 and T2 shortening tended to occur in clusters or a linear pattern in the deep white matter of the centrum semiovale and the periventricular white matter. Diffusion-weighted MR imaging (DWI) showed 3 cases with larger lesions and 4 cases with new lesions in the corpus callosum. There was no abnormality in the control group on MRI and DWI. The average TMS values for the two observers were 7.13±2.27 and 7.13±2.21. Interobserver agreement was high (ICC=0.990), and TMS increased linearly with PMA in both groups (R2=0.6401 and 0.5156, respectively). Conclusion: Conventional MRI is able to quantify the brain maturation and prognosis of premature infants using TMS. (authors)

  10. Fast index based algorithms and software for matching position specific scoring matrices

    Directory of Open Access Journals (Sweden)

    Homann Robert

    2006-08-01

    Full Text Available Abstract Background In biological sequence analysis, position specific scoring matrices (PSSMs) are widely used to represent sequence motifs in nucleotide as well as amino acid sequences. Searching with PSSMs in complete genomes or large sequence databases is a common, but computationally expensive task. Results We present a new non-heuristic algorithm, called ESAsearch, to efficiently find matches of PSSMs in large databases. Our approach preprocesses the search space, e.g., a complete genome or a set of protein sequences, and builds an enhanced suffix array that is stored on file. This allows the searching of a database with a PSSM in sublinear expected time. Since ESAsearch benefits from small alphabets, we present a variant operating on sequences recoded according to a reduced alphabet. We also address the problem of non-comparable PSSM scores by developing a method which allows the efficient computation of a matrix similarity threshold for a PSSM, given an E-value or a p-value. Our method is based on dynamic programming and, in contrast to other methods, it employs lazy evaluation of the dynamic programming matrix. We evaluated algorithm ESAsearch with nucleotide PSSMs and with amino acid PSSMs. Compared to the best previous methods, ESAsearch shows speedups of a factor between 17 and 275 for nucleotide PSSMs, and speedups up to factor 1.8 for amino acid PSSMs. Comparisons with the most widely used programs even show speedups by a factor of at least 3.8. Alphabet reduction yields an additional speedup factor of 2 on amino acid sequences compared to results achieved with the 20 symbol standard alphabet. The lazy evaluation method is also much faster than previous methods, with speedups of a factor between 3 and 330. Conclusion Our analysis of ESAsearch reveals sublinear runtime in the expected case, and linear runtime in the worst case.

  11. A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video

    Directory of Open Access Journals (Sweden)

    Zhang Liangpei

    2007-01-01

    Full Text Available Super-resolution (SR) reconstruction technique is capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To effectively deal with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, the total variation (TV) regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem in SR. The proposed algorithm has been tested in several cases of motion and degradation. It is also compared with the Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results are presented to illustrate the effectiveness of the proposed algorithm.

  12. Negative emotions affect postoperative scores for evaluating functional knee recovery and quality of life after total knee replacement

    Directory of Open Access Journals (Sweden)

    A. Qi

    2016-01-01

    Full Text Available This study aimed to determine whether psychological factors affect health-related quality of life (HRQL) and recovery of knee function in total knee replacement (TKR) patients. A total of 119 TKR patients (male: 38; female: 81) completed the Beck Anxiety Inventory (BAI), Beck Depression Inventory (BDI), State Trait Anxiety Inventory (STAI), Eysenck Personality Questionnaire-revised (EPQR-S), Knee Society Score (KSS), and HRQL (SF-36). At 1 and 6 months after surgery, anxiety, depression, and KSS scores in TKR patients were significantly better compared with those preoperatively (P<0.05). SF-36 scores at the sixth month after surgery were significantly improved compared with preoperative scores (P<0.001). Preoperative Physical Component Summary Scale (PCS) and Mental Component Summary Scale (MCS) scores were negatively associated with extraversion (E) score (B=-0.986 and -0.967, respectively, both P<0.05). Postoperative PCS and State Anxiety Inventory (SAI) scores were negatively associated with neuroticism (N) score (B=-0.137 and -0.991, respectively, both P<0.05). Postoperative MCS, SAI, Trait Anxiety Inventory (TAI), and BAI scores were also negatively associated with the N score (B=-0.367, -0.107, -0.281, and -0.851, respectively, all P<0.05). The KSS function score at the sixth month after surgery was negatively associated with TAI and N scores (B=-0.315 and -0.532, respectively, both P<0.05), but positively associated with the E score (B=0.215, P<0.05). The postoperative KSS joint score was positively associated with postoperative PCS (B=0.356, P<0.05). In conclusion, for TKR patients, the scores used for evaluating recovery of knee function and HRQL after 6 months are inversely associated with the presence of negative emotions.

  13. Optical Algorithms at Satellite Wavelengths for Total Suspended Matter in Tropical Coastal Waters

    Science.gov (United States)

    Ouillon, Sylvain; Douillet, Pascal; Petrenko, Anne; Neveux, Jacques; Dupouy, Cécile; Froidefond, Jean-Marie; Andréfouët, Serge; Muñoz-Caravaca, Alain

    2008-01-01

    Is it possible to derive accurately Total Suspended Matter concentration or its proxy, turbidity, from remote sensing data in tropical coastal lagoon waters? To investigate this question, hyperspectral remote sensing reflectance, turbidity and chlorophyll pigment concentration were measured in three coral reef lagoons. The three sites enabled us to get data over very diverse environments: oligotrophic and sediment-poor waters in the southwest lagoon of New Caledonia, eutrophic waters in the Cienfuegos Bay (Cuba), and sediment-rich waters in the Laucala Bay (Fiji). In this paper, optical algorithms for turbidity are presented per site based on 113 stations in New Caledonia, 24 stations in Cuba and 56 stations in Fiji. Empirical algorithms are tested at satellite wavebands useful to coastal applications. Global algorithms are also derived for the merged data set (193 stations). The performances of global and local regression algorithms are compared. The best one-band algorithms on all the measurements are obtained at 681 nm using either a polynomial or a power model. The best two-band algorithms are obtained with R412/R620, R443/R670 and R510/R681. Two three-band algorithms based on Rrs620.Rrs681/Rrs412 and Rrs620.Rrs681/Rrs510 also give fair regression statistics. Finally, we propose a global algorithm based on one or three bands: turbidity is first calculated from Rrs681 and then, if < 1 FTU, it is recalculated using an algorithm based on Rrs620.Rrs681/Rrs412. On our data set, this algorithm is suitable for the 0.2-25 FTU turbidity range and for the three sites sampled (mean bias: 3.6 %, rms: 35%, mean quadratic error: 1.4 FTU). This shows that defining global empirical turbidity algorithms in tropical coastal waters is at reach. PMID:27879929
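    The proposed global algorithm is a two-step switching rule: turbidity is first estimated from Rrs(681), and if that estimate is below 1 FTU it is re-estimated from the band combination Rrs(620)·Rrs(681)/Rrs(412). A minimal sketch of that logic is given below; the power-law coefficients a1, b1, a2, b2 are hypothetical placeholders, since the fitted values are not reproduced in the record.

      def turbidity_global(rrs412: float, rrs620: float, rrs681: float,
                           a1: float = 1.0, b1: float = 1.0,
                           a2: float = 1.0, b2: float = 1.0) -> float:
          """Two-step global turbidity estimate (FTU) following the switching
          scheme described in the abstract. Coefficients are placeholders;
          real values would come from regression on the in situ data set."""
          # Step 1: one-band power-law estimate from Rrs(681)
          turbidity = a1 * rrs681 ** b1
          # Step 2: below 1 FTU, switch to the three-band ratio estimate
          if turbidity < 1.0:
              ratio = rrs620 * rrs681 / rrs412
              turbidity = a2 * ratio ** b2
          return turbidity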

  14. Optical Algorithms at Satellite Wavelengths for Total Suspended Matter in Tropical Coastal Waters

    Directory of Open Access Journals (Sweden)

    Alain Muñoz-Caravaca

    2008-07-01

    Full Text Available Is it possible to derive accurately Total Suspended Matter concentration or its proxy, turbidity, from remote sensing data in tropical coastal lagoon waters? To investigate this question, hyperspectral remote sensing reflectance, turbidity and chlorophyll pigment concentration were measured in three coral reef lagoons. The three sites enabled us to get data over very diverse environments: oligotrophic and sediment-poor waters in the southwest lagoon of New Caledonia, eutrophic waters in the Cienfuegos Bay (Cuba), and sediment-rich waters in the Laucala Bay (Fiji). In this paper, optical algorithms for turbidity are presented per site based on 113 stations in New Caledonia, 24 stations in Cuba and 56 stations in Fiji. Empirical algorithms are tested at satellite wavebands useful to coastal applications. Global algorithms are also derived for the merged data set (193 stations). The performances of global and local regression algorithms are compared. The best one-band algorithms on all the measurements are obtained at 681 nm using either a polynomial or a power model. The best two-band algorithms are obtained with R412/R620, R443/R670 and R510/R681. Two three-band algorithms based on Rrs620.Rrs681/Rrs412 and Rrs620.Rrs681/Rrs510 also give fair regression statistics. Finally, we propose a global algorithm based on one or three bands: turbidity is first calculated from Rrs681 and then, if < 1 FTU, it is recalculated using an algorithm based on Rrs620.Rrs681/Rrs412. On our data set, this algorithm is suitable for the 0.2-25 FTU turbidity range and for the three sites sampled (mean bias: 3.6 %, rms: 35%, mean quadratic error: 1.4 FTU). This shows that defining global empirical turbidity algorithms in tropical coastal waters is at reach.

  15. A robust background regression based score estimation algorithm for hyperspectral anomaly detection

    Science.gov (United States)

    Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei

    2016-12-01

    Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is the background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to formulate the foundation of the regression. Furthermore, a manifold regularization term which explores the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is undertaken on the robust background and potential anomaly datasets, to implement the detection output. The experimental results show that RBRSE achieves superior ROC curves, AUC values, and background-anomaly separation than some of the other state-of-the-art anomaly detection methods, and is easy to implement

  16. Can an arthroplasty risk score predict bundled care events after total joint arthroplasty?

    Directory of Open Access Journals (Sweden)

    Blair S. Ashley, MD

    2018-03-01

    Full Text Available Background: The validated Arthroplasty Risk Score (ARS) predicts the need for postoperative triage to an intensive care setting. We hypothesized that the ARS may also predict hospital length of stay (LOS), discharge disposition, and episode-of-care cost (EOCC). Methods: We retrospectively reviewed a series of 704 patients undergoing primary total hip and knee arthroplasty over 17 months. Patient characteristics, 90-day EOCC, LOS, and readmission rates were compared before and after ARS implementation. Results: ARS implementation was associated with fewer patients going to a skilled nursing or rehabilitation facility after discharge (63% vs 74%, P = .002). There was no difference in LOS, EOCC, readmission rates, or complications. While the adoption of the ARS did not change the mean EOCC, ARS >3 was predictive of high EOCC outlier (odds ratio 2.65, 95% confidence interval 1.40-5.01, P = .003). Increased ARS correlated with increased EOCC (P = .003). Conclusions: Implementation of the ARS was associated with increased disposition to home. It was predictive of high EOCC and should be considered in risk adjustment variables in alternative payment models. Keywords: Bundled payments, Risk stratification, Arthroplasty

  17. Comparison between the Harris- and Oxford Hip Score to evaluate outcomes one-year after total hip arthroplasty

    NARCIS (Netherlands)

    Weel, Hanneke; Lindeboom, Robert; Kuipers, Sander E.; Vervest, Ton M. J. S.

    2017-01-01

    Harris Hip Score (HHS) is a surgeon-administered measurement for assessing hip function before and after total hip arthroplasties (THA). Patient-reported outcome measurements (PROMs) such as the Oxford Hip Score (OHS) are increasingly used. The HHS was compared with the OHS to assess whether the HHS can

  18. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms

    International Nuclear Information System (INIS)

    Tang Jie; Nett, Brian E; Chen Guanghong

    2009-01-01

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms at a constant undersampling factor across several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.

  19. Sorting variables for each case: a new algorithm to calculate injury severity score (ISS) using SPSS-PC.

    Science.gov (United States)

    Linn, S

    One of the more often used measures of multiple injuries is the injury severity score (ISS). Determination of the ISS is based on the abbreviated injury scale (AIS). This paper suggests a new algorithm to sort the AISs for each case and calculate ISS. The program uses unsorted abbreviated injury scale (AIS) levels for each case and rearranges them in descending order. The first three sorted AISs representing the three most severe injuries of a person are then used to calculate injury severity score (ISS). This algorithm should be useful for analyses of clusters of injuries especially when more patients have multiple injuries.
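    The record describes the core of the algorithm: sort each case's AIS values in descending order and use the three highest to compute the ISS. A minimal sketch outside SPSS is shown below; note that the conventional ISS definition takes the highest AIS from each of the three most severely injured body regions and assigns ISS = 75 when any AIS equals 6, refinements the record does not spell out.

      def injury_severity_score(ais_levels):
          """ISS as described in the record: sum of squares of the three
          highest AIS values for a case (missing values ignored)."""
          top_three = sorted((a for a in ais_levels if a is not None), reverse=True)[:3]
          return sum(a * a for a in top_three)

      # Example: a case with unsorted AIS codes for multiple injuries
      print(injury_severity_score([2, 5, 1, 3, None, 4]))  # 5^2 + 4^2 + 3^2 = 50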

  20. Sensitivity and Specificity of the Coma Recovery Scale--Revised Total Score in Detection of Conscious Awareness.

    Science.gov (United States)

    Bodien, Yelena G; Carlowicz, Cecilia A; Chatelle, Camille; Giacino, Joseph T

    2016-03-01

    To describe the sensitivity and specificity of Coma Recovery Scale-Revised (CRS-R) total scores in detecting conscious awareness. Data were retrospectively extracted from the medical records of patients enrolled in a specialized disorders of consciousness (DOC) program. Sensitivity and specificity analyses were completed using CRS-R-derived diagnoses of minimally conscious state (MCS) or emerged from minimally conscious state (EMCS) as the reference standard for conscious awareness and the total CRS-R score as the test criterion. A receiver operating characteristic curve was constructed to demonstrate the optimal CRS-R total cutoff score for maximizing sensitivity and specificity. Specialized DOC program. Patients enrolled in the DOC program (N=252, 157 men; mean age, 49y; mean time from injury, 48d; traumatic etiology, n=127; nontraumatic etiology, n=125; diagnosis of coma or vegetative state, n=70; diagnosis of MCS or EMCS, n=182). Not applicable. Sensitivity and specificity of CRS-R total scores in detecting conscious awareness. A CRS-R total score of 10 or higher yielded a sensitivity of .78 for correct identification of patients in MCS or EMCS, and a specificity of 1.00 for correct identification of patients who did not meet criteria for either of these diagnoses (ie, were diagnosed with vegetative state or coma). The area under the curve in the receiver operating characteristic curve analysis is .98. A total CRS-R score of 10 or higher provides strong evidence of conscious awareness but resulted in a false-negative diagnostic error in 22% of patients who demonstrated conscious awareness based on CRS-R diagnostic criteria. A cutoff score of 8 provides the best balance between sensitivity and specificity, accurately classifying 93% of cases. The optimal total score cutoff will vary depending on the user's objective. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
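    The analysis treats the CRS-R total score as the test criterion and the CRS-R-derived diagnosis (MCS or EMCS versus vegetative state or coma) as the reference standard for conscious awareness. A small sketch of computing sensitivity and specificity for a given cutoff (for example, a total score of 10 or of 8) is shown below; the data are illustrative, not the study's.

      def sens_spec(scores, conscious, cutoff):
          """Sensitivity/specificity of classifying 'conscious' (MCS or EMCS)
          when the CRS-R total score is >= cutoff."""
          tp = sum(1 for s, c in zip(scores, conscious) if c and s >= cutoff)
          fn = sum(1 for s, c in zip(scores, conscious) if c and s < cutoff)
          tn = sum(1 for s, c in zip(scores, conscious) if not c and s < cutoff)
          fp = sum(1 for s, c in zip(scores, conscious) if not c and s >= cutoff)
          return tp / (tp + fn), tn / (tn + fp)

      # Illustrative data only
      scores = [3, 6, 9, 10, 12, 15, 7, 5, 11, 8]
      conscious = [False, False, True, True, True, True, True, False, True, True]
      print(sens_spec(scores, conscious, cutoff=10))
      print(sens_spec(scores, conscious, cutoff=8))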

  1. SF-36 total score as a single measure of health-related quality of life: Scoping review.

    Science.gov (United States)

    Lins, Liliane; Carvalho, Fernando Martins

    2016-01-01

    According to the 36-Item Short Form Health Survey questionnaire developers, a global measure of health-related quality of life such as the "SF-36 Total/Global/Overall Score" cannot be generated from the questionnaire. However, studies keep on reporting such measure. This study aimed to evaluate the frequency and to describe some characteristics of articles reporting the SF-36 Total/Global/Overall Score in the scientific literature. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses method was adapted to a scoping review. We performed searches in PubMed, Web of Science, SCOPUS, BVS, and Cochrane Library databases for articles using such scores. We found 172 articles published between 1997 and 2015; 110 (64.0%) of them were published from 2010 onwards; 30.0% appeared in journals with Impact Factor 3.00 or greater. Overall, 129 (75.0%) out of the 172 studies did not specify the method for calculating the "SF-36 Total Score"; 13 studies did not specify their methods but referred to the SF-36 developers' studies or others; and 30 articles used different strategies for calculating such score, the most frequent being arithmetic averaging of the eight SF-36 domain scores. We concluded that the "SF-36 Total/Global/Overall Score" has been increasingly reported in the scientific literature. Researchers should be aware of this procedure and of its possible impacts upon human health.
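    The most frequent strategy found by the review, arithmetic averaging of the eight SF-36 domain scores, is trivial to reproduce, which may partly explain its popularity even though the questionnaire developers do not endorse such a total score. A hedged sketch of that (non-sanctioned) calculation:

      SF36_DOMAINS = [
          "physical_functioning", "role_physical", "bodily_pain", "general_health",
          "vitality", "social_functioning", "role_emotional", "mental_health",
      ]

      def sf36_total_by_averaging(domain_scores: dict) -> float:
          """'SF-36 total score' computed as the plain mean of the eight 0-100
          domain scores -- the most common strategy reported by the review,
          not a developer-sanctioned measure."""
          return sum(domain_scores[d] for d in SF36_DOMAINS) / len(SF36_DOMAINS)

      print(sf36_total_by_averaging({d: 75.0 for d in SF36_DOMAINS}))  # 75.0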

  2. Optical Algorithms at Satellite Wavelengths for Total Suspended Matter in Tropical Coastal Waters.

    Science.gov (United States)

    Ouillon, Sylvain; Douillet, Pascal; Petrenko, Anne; Neveux, Jacques; Dupouy, Cécile; Froidefond, Jean-Marie; Andréfouët, Serge; Muñoz-Caravaca, Alain

    2008-07-10

    Is it possible to derive accurately Total Suspended Matter concentration or its proxy, turbidity, from remote sensing data in tropical coastal lagoon waters? To investigate this question, hyperspectral remote sensing reflectance, turbidity and chlorophyll pigment concentration were measured in three coral reef lagoons. The three sites enabled us to get data over very diverse environments: oligotrophic and sediment-poor waters in the southwest lagoon of New Caledonia, eutrophic waters in the Cienfuegos Bay (Cuba), and sediment-rich waters in the Laucala Bay (Fiji). In this paper, optical algorithms for turbidity are presented per site based on 113 stations in New Caledonia, 24 stations in Cuba and 56 stations in Fiji. Empirical algorithms are tested at satellite wavebands useful to coastal applications. Global algorithms are also derived for the merged data set (193 stations). The performances of global and local regression algorithms are compared. The best one-band algorithms on all the measurements are obtained at 681 nm using either a polynomial or a power model. The best two-band algorithms are obtained with R412/R620, R443/R670 and R510/R681. Two three-band algorithms based on Rrs620.Rrs681/Rrs412 and Rrs620.Rrs681/Rrs510 also give fair regression statistics. Finally, we propose a global algorithm based on one or three bands: turbidity is first calculated from Rrs681 and then, if < 1 FTU, it is recalculated using an algorithm based on Rrs620.Rrs681/Rrs412. On our data set, this algorithm is suitable for the 0.2-25 FTU turbidity range and for the three sites sampled (mean bias: 3.6 %, rms: 35%, mean quadratic error: 1.4 FTU). This shows that defining global empirical turbidity algorithms in tropical coastal waters is at reach.

  3. Can the pre-operative Western Ontario and McMaster score predict patient satisfaction following total hip arthroplasty?

    Science.gov (United States)

    Rogers, B A; Alolabi, B; Carrothers, A D; Kreder, H J; Jenkinson, R J

    2015-02-01

    In this study we evaluated whether pre-operative Western Ontario and McMaster Universities (WOMAC) osteoarthritis scores can predict satisfaction following total hip arthroplasty (THA). Prospective data for a cohort of patients undergoing THA from two large academic centres were collected, and pre-operative and one-year post-operative WOMAC scores and a 25-point satisfaction questionnaire were obtained for 446 patients. Satisfaction scores were dichotomised into either improvement or deterioration. Scatter plots and Spearman's rank correlation coefficient were used to describe the association between pre-operative WOMAC and one-year post-operative WOMAC scores and patient satisfaction. Satisfaction was compared using receiver operating characteristic (ROC) analysis against pre-operative, post-operative and δ WOMAC scores. We found no relationship between pre-operative WOMAC scores and one-year post-operative WOMAC or satisfaction scores, with Spearman's rank correlation coefficients of 0.16 and -0.05, respectively. The ROC analysis showed areas under the curve (AUC) of 0.54 (pre-operative WOMAC), 0.67 (post-operative WOMAC) and 0.43 (δ WOMAC), respectively, for an improvement in satisfaction. We conclude that the pre-operative WOMAC score does not predict the post-operative WOMAC score or patient satisfaction after THA, and that WOMAC scores can therefore not be used to prioritise patient care. ©2015 The British Editorial Society of Bone & Joint Surgery.

  4. Validation of use of subsets of teeth when applying the total mouth periodontal score (TMPS) system in dogs.

    Science.gov (United States)

    Harvey, Colin E; Laster, Larry; Shofer, Frances S

    2012-01-01

    A total mouth periodontal score (TMPS) system in dogs has been described previously. Use of the buccal and palatal/lingual surfaces of all teeth requires observation and recording of 120 gingivitis scores and 120 periodontitis scores. Although the result is a reliable, repeatable assessment of the extent of periodontal disease in the mouth, observing and recording 240 data points is time-consuming. Using data from a previously reported study of periodontal disease in dogs, correlation analysis was used to determine whether use of any of seven different subsets of teeth can generate TMPS subset gingivitis and periodontitis scores that are highly correlated with TMPS all-site, all-teeth scores. Overall, gingivitis scores were less highly correlated than periodontitis scores. The minimal tooth set with a significant intra-class correlation (≥0.9 of the means of right and left sides) for both gingivitis scores and attachment loss measurements consisted of the buccal surfaces of the maxillary third incisor, canine, third premolar, fourth premolar and first molar teeth, and the mandibular canine, third premolar, fourth premolar and first molar teeth on one side (9 teeth, 15 root sites). Use of this subset of teeth, which reduces the number of data points per dog from 240 to 30 for gingivitis and periodontitis at each scoring episode, is recommended when calculating the gingivitis and periodontitis scores using the TMPS system.

  5. A Fast Alternating Minimization Algorithm for Nonlocal Vectorial Total Variational Multichannel Image Denoising

    Directory of Open Access Journals (Sweden)

    Rubing Xi

    2014-01-01

    Full Text Available The variational models with nonlocal regularization offer superior image restoration quality over traditional methods, but the processing speed remains a bottleneck due to the computational load of recent iterative algorithms. In this paper, a fast algorithm is proposed to restore the multichannel image in the presence of additive Gaussian noise by minimizing an energy function consisting of an l2-norm fidelity term and a nonlocal vectorial total variational regularization term. This algorithm is based on the variable splitting and penalty techniques in optimization. Following our previous work on the proof of the existence and the uniqueness of the solution of the model, we establish and prove the convergence properties of this algorithm, which are the finite convergence for some variables and the q-linear convergence for the rest. Experiments show that this model has a fabulous texture-preserving property in restoring color images. Both the theoretical derivation of the computation complexity analysis and the experimental results show that the proposed algorithm performs favorably in comparison to the widely used fixed point algorithm.

  6. Using Electromagnetic Algorithm for Total Costs of Sub-contractor Optimization in the Cellular Manufacturing Problem

    Directory of Open Access Journals (Sweden)

    Mohammad Reza Shahriari

    2016-12-01

    Full Text Available In this paper, we present a non-linear binary programming model for optimizing a specific cost in a cellular manufacturing system under controlled production conditions. The system parameters are determined by continuous distribution functions. The aim of the presented model is to optimize the total cost imposed by sub-contractors on the manufacturing system by determining how to allocate the machines and parts to each seller. In this system, the decision maker (DM) can control the occupation level of each machine in the system. To solve the presented model, we used the electromagnetic meta-heuristic algorithm, with the Taguchi method for determining the optimal algorithm parameters.

  7. Observations on muscle activity in REM sleep behavior disorder assessed with a semi-automated scoring algorithm

    DEFF Research Database (Denmark)

    Jeppesen, Jesper; Otto, Marit; Frederiksen, Yoon

    2018-01-01

    OBJECTIVES: Rapid eye movement (REM) sleep behavior disorder (RBD) is defined by dream enactment due to a failure of normal muscle atonia. Visual assessment of this muscle activity is time consuming and rater-dependent. METHODS: An EMG computer algorithm for scoring 'tonic', 'phasic' and 'any......' submental muscle activity during REM sleep was evaluated compared with human visual ratings. Subsequently, 52 subjects were analyzed with the algorithm. Duration and maximal amplitude of muscle activity, and self-awareness of RBD symptoms were assessed. RESULTS: The computer algorithm showed high congruency...... sleep without atonia. CONCLUSIONS: Our proposed algorithm was able to detect and rate REM sleep without atonia allowing identification of RBD. Increased duration and amplitude of muscle activity bouts were characteristics of RBD. Quantification of REM sleep without atonia represents a marker of RBD...

  8. A Hybrid Genetic Algorithm to Minimize Total Tardiness for Unrelated Parallel Machine Scheduling with Precedence Constraints

    Directory of Open Access Journals (Sweden)

    Chunfeng Liu

    2013-01-01

    Full Text Available The paper presents a novel hybrid genetic algorithm (HGA) for a deterministic scheduling problem where multiple jobs with arbitrary precedence constraints are processed on multiple unrelated parallel machines. The objective is to minimize total tardiness, since delays of the jobs may lead to punishment cost or cancellation of orders by the clients in many situations. A priority rule-based heuristic algorithm, which schedules a prior job on a prior machine according to the priority rule at each iteration, is suggested and embedded in the HGA to generate initial feasible schedules that can be improved in further stages. Computational experiments are conducted to show that the proposed HGA performs well with respect to accuracy and efficiency of solution for small-sized problems and gets better results than the conventional genetic algorithm within the same runtime for large-sized problems.
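    The initial schedules in the HGA come from a priority rule-based heuristic that repeatedly assigns a "prior" job to a "prior" machine. The exact rules are not given in the record, so the sketch below assumes an earliest-due-date priority among precedence-ready jobs and, for the chosen job, picks the unrelated machine on which it would finish earliest; total tardiness is then evaluated.

      def priority_rule_schedule(proc, due, preds):
          """List-scheduling heuristic for unrelated parallel machines with
          precedence constraints (EDD priority assumed; the paper's rule may differ).
          proc[j][m]: processing time of job j on machine m
          due[j]:    due date of job j
          preds[j]:  set of jobs that must finish before job j starts
          Returns (total_tardiness, completion_times)."""
          n, m = len(proc), len(proc[0])
          machine_free = [0.0] * m
          finish = [None] * n
          scheduled = set()
          while len(scheduled) < n:
              ready = [j for j in range(n) if j not in scheduled
                       and all(p in scheduled for p in preds[j])]
              j = min(ready, key=lambda k: due[k])          # EDD priority
              release = max((finish[p] for p in preds[j]), default=0.0)
              best = min(range(m),
                         key=lambda mm: max(machine_free[mm], release) + proc[j][mm])
              start = max(machine_free[best], release)
              finish[j] = start + proc[j][best]
              machine_free[best] = finish[j]
              scheduled.add(j)
          tardiness = sum(max(0.0, finish[j] - due[j]) for j in range(n))
          return tardiness, finish

      # Tiny illustrative instance: 3 jobs, 2 unrelated machines, job 2 after job 0
      proc = [[4, 6], [3, 2], [5, 3]]
      due = [5, 4, 10]
      preds = [set(), set(), {0}]
      print(priority_rule_schedule(proc, due, preds))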

  9. Distribution of Total Depressive Symptoms Scores and Each Depressive Symptom Item in a Sample of Japanese Employees.

    Science.gov (United States)

    Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Yamada, Hiroshi; Miyake, Hirotsugu; Furukawa, Toshiaki A

    2016-01-01

    In a previous study, we reported that the distribution of total depressive symptoms scores according to the Center for Epidemiologic Studies Depression Scale (CES-D) in a general population is stable throughout middle adulthood and follows an exponential pattern except for at the lowest end of the symptom score. Furthermore, the individual distributions of 16 negative symptom items of the CES-D exhibit a common mathematical pattern. To confirm the reproducibility of these findings, we investigated the distribution of total depressive symptoms scores and 16 negative symptom items in a sample of Japanese employees. We analyzed 7624 employees aged 20-59 years who had participated in the Northern Japan Occupational Health Promotion Centers Collaboration Study for Mental Health. Depressive symptoms were assessed using the CES-D. The CES-D contains 20 items, each of which is scored in four grades: "rarely," "some," "much," and "most of the time." The descriptive statistics and frequency curves of the distributions were then compared according to age group. The distribution of total depressive symptoms scores appeared to be stable from 30-59 years. The right tail of the distribution for ages 30-59 years exhibited a linear pattern with a log-normal scale. The distributions of the 16 individual negative symptom items of the CES-D exhibited a common mathematical pattern which displayed different distributions with a boundary at "some." The distributions of the 16 negative symptom items from "some" to "most" followed a linear pattern with a log-normal scale. The distributions of the total depressive symptoms scores and individual negative symptom items in a Japanese occupational setting show the same patterns as those observed in a general population. These results show that the specific mathematical patterns of the distributions of total depressive symptoms scores and individual negative symptom items can be reproduced in an occupational population.

  10. Total Cerebral Small Vessel Disease MRI Score Is Associated With Cognitive Decline In Executive Function In Patients With Hypertension

    Directory of Open Access Journals (Sweden)

    Renske Uiterwijk

    2016-12-01

    Full Text Available Objectives: Hypertension is a major risk factor for white matter hyperintensities, lacunes, cerebral microbleeds and perivascular spaces, which are MRI markers of cerebral small vessel disease (SVD). Studies have shown associations between these individual MRI markers and cognitive functioning and decline. Recently, a total SVD score was proposed in which the different MRI markers were combined into one measure of SVD, to capture total SVD-related brain damage. We investigated if this SVD score was associated with cognitive decline over 4 years in patients with hypertension. Methods: In this longitudinal cohort study, 130 hypertensive patients (91 patients with uncomplicated hypertension and 39 hypertensive patients with a lacunar stroke) were included. They underwent a neuropsychological assessment at baseline and after 4 years. The presence of white matter hyperintensities, lacunes, cerebral microbleeds, and perivascular spaces was rated on baseline MRI. Presence of each individual marker was added to calculate the total SVD score (range 0-4) in each patient. Results: Uncorrected linear regression analyses showed associations between SVD score and decline in overall cognition (p=0.017), executive functioning (p<0.001) and information processing speed (p=0.037), but not with memory (p=0.911). The association between SVD score and decline in overall cognition and executive function remained significant after adjustment for age, sex, education, anxiety and depression score, potential vascular risk factors, patient group and baseline cognitive performance. Conclusions: Our study shows that a total SVD score can predict cognitive decline, specifically in executive function, over 4 years in hypertensive patients. This emphasizes the importance of considering total brain damage due to SVD.
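    The total SVD score described above simply adds one point for the presence of each of the four MRI markers, giving a 0-4 burden measure per patient. A trivial sketch is shown below; how "presence" of each marker is judged (the severity thresholds applied when rating the baseline scan) follows the original rating criteria, which the record does not detail.

      def total_svd_score(wmh: bool, lacunes: bool,
                          microbleeds: bool, perivascular_spaces: bool) -> int:
          """Total cerebral small vessel disease score (0-4): one point for the
          presence of each MRI marker rated on the baseline scan."""
          return int(wmh) + int(lacunes) + int(microbleeds) + int(perivascular_spaces)

      print(total_svd_score(wmh=True, lacunes=False, microbleeds=True,
                            perivascular_spaces=True))  # 3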

  11. A Novel Risk Score in Predicting Failure or Success for Antegrade Approach to Percutaneous Coronary Intervention of Chronic Total Occlusion: Antegrade CTO Score.

    Science.gov (United States)

    Namazi, Mohammad Hasan; Serati, Ali Reza; Vakili, Hosein; Safi, Morteza; Parsa, Saeed Ali Pour; Saadat, Habibollah; Taherkhani, Maryam; Emami, Sepideh; Pedari, Shamseddin; Vatanparast, Masoomeh; Movahed, Mohammad Reza

    2017-06-01

    Total occlusion of a coronary artery for more than 3 months is defined as chronic total occlusion (CTO). The goal of this study was to develop a risk score in predicting failure or success during attempted percutaneous coronary intervention (PCI) of CTO lesions using antegrade approach. This study was based on retrospective analyses of clinical and angiographic characteristics of CTO lesions that were assessed between February 2012 and February 2014. Success rate was defined as passing through occlusion with successful stent deployment using an antegrade approach. A total of 188 patients were studied. Mean ± SD age was 59 ± 9 years. Failure rate was 33%. In a stepwise multivariate regression analysis, bridging collaterals (OR = 6.7, CI = 1.97-23.17, score = 2), absence of stump (OR = 5.8, CI = 1.95-17.9, score = 2), presence of calcification (OR = 3.21, CI = 1.46-7.07, score = 1), presence of bending (OR = 2.8, CI = 1.28-6.10, score = 1), presence of near side branch (OR = 2.7, CI = 1.08-6.57, score = 1), and absence of retrograde filling (OR = 2.5, CI = 1.03-6.17, score = 1) were independent predictors of PCI failure. A score of 7 or more was associated with 100% failure rate whereas a score of 2 or less was associated with over 80% success rate. Most factors associated with failure of CTO-PCI are related to lesion characteristics. A new risk score (range 0-8) is developed to predict CTO-PCI success or failure rate during antegrade approach as a guide before attempting PCI of CTO lesions.
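    The abstract lists six lesion characteristics and their point weights, giving a 0-8 score in which 7 or more predicted universal failure and 2 or less predicted a success rate above 80%. A hedged sketch of that scoring rule:

      def antegrade_cto_score(bridging_collaterals: bool, absent_stump: bool,
                              calcification: bool, bending: bool,
                              near_side_branch: bool, absent_retrograde_filling: bool) -> int:
          """Antegrade CTO score (0-8) built from the weights reported in the
          abstract: 2 points each for bridging collaterals and absence of a stump,
          1 point each for the remaining four characteristics."""
          return (2 * bridging_collaterals + 2 * absent_stump
                  + calcification + bending + near_side_branch
                  + absent_retrograde_filling)

      score = antegrade_cto_score(True, False, True, False, True, True)
      print(score)  # 2 + 0 + 1 + 0 + 1 + 1 = 5
      # Per the study: a score >= 7 was associated with 100% failure,
      # while a score <= 2 was associated with > 80% success.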

  12. Research on prediction of agricultural machinery total power based on grey model optimized by genetic algorithm

    Science.gov (United States)

    Xie, Yan; Li, Mu; Zhou, Jin; Zheng, Chang-zheng

    2009-07-01

    Agricultural machinery total power is an important index for reflecting and evaluating the level of agricultural mechanization. It is the power source of agricultural production and one of the main factors in enhancing comprehensive agricultural production capacity, expanding production scale and increasing farmers' income. Its demand is affected by natural, economic, technological, social and other "grey" factors. Therefore, grey system theory can be used to analyze the development of agricultural machinery total power. A method based on a genetic algorithm for optimizing the grey modeling process is introduced in this paper. The method makes full use of the advantages of the grey prediction model and of the genetic algorithm's ability to find global optima, so the prediction model is more accurate. Using data from a province, a GM (1, 1) model for predicting agricultural machinery total power was built based on grey system theory and a genetic algorithm. The result indicates that the model can be used as an effective tool for predicting agricultural machinery total power.
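    The prediction rests on a GM(1,1) grey model whose modeling process the paper tunes with a genetic algorithm. A minimal GM(1,1) sketch is shown below, using ordinary least squares for the development coefficient a and grey input b and omitting the GA step (which in the paper would further optimize the modeling process); the input series is illustrative, not the province's data.

      import numpy as np

      def gm11_fit_predict(x0, horizon=1):
          """Classical GM(1,1): accumulate the series (1-AGO), estimate a and b
          by least squares, and forecast. A plain sketch of the grey model
          underlying the paper; the genetic-algorithm optimization is omitted."""
          x0 = np.asarray(x0, dtype=float)
          x1 = np.cumsum(x0)                               # accumulated series
          z1 = 0.5 * (x1[1:] + x1[:-1])                    # background values
          B = np.column_stack((-z1, np.ones(len(z1))))
          Y = x0[1:]
          a, b = np.linalg.lstsq(B, Y, rcond=None)[0]      # development coeff., grey input
          n = len(x0)
          k = np.arange(n + horizon)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
          x0_hat = np.diff(x1_hat, prepend=x1_hat[0])      # inverse accumulation
          x0_hat[0] = x0[0]
          return x0_hat[:n], x0_hat[n:]

      # Illustrative yearly series (e.g., total power figures), not real data
      fitted, forecast = gm11_fit_predict([22.1, 23.4, 24.9, 26.3, 27.6], horizon=2)
      print(np.round(forecast, 2))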

  13. Hip disability and osteoarthritis outcome score (HOOS)--validity and responsiveness in total hip replacement

    DEFF Research Database (Denmark)

    Nilsdotter, Anna K; Lohmander, L Stefan; Klässbo, Maria

    2003-01-01

    The aim of the study was to evaluate if physical functions usually associated with a younger population were of importance for an older population, and to construct an outcome measure for hip osteoarthritis with improved responsiveness compared to the Western Ontario McMaster osteoarthritis score...

  14. Can computer assistance improve the clinical and functional scores in total knee arthroplasty?

    Science.gov (United States)

    Hernández-Vaquero, Daniel; Suarez-Vazquez, Abelardo; Iglesias-Fernandez, Susana

    2011-12-01

    Surgical navigation in TKA facilitates better alignment; however, it is unclear whether improved alignment alters clinical evolution and midterm and long-term complication rates. We determined the alignment differences between patients with standard, manual, jig-based TKAs and patients with navigation-based TKAs, and whether any differences would modify function, implant survival, and/or complications. We retrospectively reviewed 97 patients (100 TKAs) undergoing TKAs for minimal preoperative deformities. Fifty TKAs were performed with an image-free surgical navigation system and the other 50 with a standard technique. We compared femoral angle (FA), tibial angle (TA), and femorotibial angle (FTA) and determined whether any differences altered clinical or functional scores, as measured by the Knee Society Score (KSS), or complications. Seventy-three patients (75 TKAs) had a minimum followup of 8 years (mean, 8.3 years; range, 8-9.1 years). All patients included in the surgical navigation group had a FTA between 177° and 182º. We found no differences in the KSS or implant survival between the two groups and no differences in complication rates, although more complications occurred in the standard technique group (seven compared with two in the surgical navigation group). In the midterm, we found no difference in functional and clinical scores or implant survival between TKAs performed with and without the assistance of a navigation system. Level II, therapeutic study. See the Guidelines online for a complete description of levels of evidence.

  15. ACCURATUM: improved calcium volume scoring using a mesh-based algorithm - a phantom study

    International Nuclear Information System (INIS)

    Saur, Stefan C.; Szekely, Gabor; Alkadhi, Hatem; Desbiolles, Lotus; Cattin, Philippe C.

    2009-01-01

    To overcome the limitations of the classical volume scoring method for quantifying coronary calcifications, including accuracy, variability between examinations, and dependency on plaque density and acquisition parameters, a mesh-based volume measurement method has been developed. It was evaluated and compared with the classical volume scoring method for accuracy, i.e., the normalized volume (measured volume/ground-truthed volume), and for variability between examinations (standard deviation of accuracy). A cardiac computed-tomography (CT) phantom containing various cylindrical calcifications was scanned using different tube voltages and reconstruction kernels, at various positions and orientations on the CT table and using different slice thicknesses. Mean accuracy for all plaques was significantly higher (p<0.0001) for the proposed method (1.220±0.507) than for the classical volume score (1.896±1.095). In contrast to the classical volume score, plaque density (p=0.84), reconstruction kernel (p=0.19), and tube voltage (p=0.27) had no impact on the accuracy of the developed method. In conclusion, the method presented herein is more accurate than classical calcium scoring and is less dependent on tube voltage, reconstruction kernel, and plaque density. (orig.)

  16. Optimizing multiple sequence alignments using a genetic algorithm based on three objectives: structural information, non-gaps percentage and totally conserved columns.

    Science.gov (United States)

    Ortuño, Francisco M; Valenzuela, Olga; Rojas, Fernando; Pomares, Hector; Florido, Javier P; Urquiza, Jose M; Rojas, Ignacio

    2013-09-01

    Multiple sequence alignments (MSAs) are widely used approaches in bioinformatics to carry out other tasks such as structure predictions, biological function analyses or phylogenetic modeling. However, current tools usually provide partially optimal alignments, as each one is focused on specific biological features. Thus, the same set of sequences can produce different alignments, above all when sequences are less similar. Consequently, researchers and biologists do not agree about which is the most suitable way to evaluate MSAs. Recent evaluations tend to use more complex scores including further biological features. Among them, 3D structures are increasingly being used to evaluate alignments. Because structures are more conserved in proteins than sequences, scores with structural information are better suited to evaluate more distant relationships between sequences. The proposed multiobjective algorithm, based on the non-dominated sorting genetic algorithm, aims to jointly optimize three objectives: STRIKE score, non-gaps percentage and totally conserved columns. It was significantly assessed on the BAliBASE benchmark according to the Kruskal-Wallis test (P < 0.01). The algorithm also outperforms other aligners, such as ClustalW, Multiple Sequence Alignment Genetic Algorithm (MSA-GA), PRRP, DIALIGN, Hidden Markov Model Training (HMMT), Pattern-Induced Multi-sequence Alignment (PIMA), MULTIALIGN, Sequence Alignment Genetic Algorithm (SAGA), PILEUP, Rubber Band Technique Genetic Algorithm (RBT-GA) and Vertical Decomposition Genetic Algorithm (VDGA), according to the Wilcoxon signed-rank test (P < 0.05), with the advantage of being able to use fewer structures. Structural information is included within the objective function to evaluate more accurately the obtained alignments. The source code is available at http://www.ugr.es/~fortuno/MOSAStrE/MO-SAStrE.zip.

  17. GPU Based N-Gram String Matching Algorithm with Score Table Approach for String Searching in Many Documents

    Science.gov (United States)

    Srinivasa, K. G.; Shree Devi, B. N.

    2017-10-01

    String searching in documents has become a tedious task with the evolution of Big Data. Generation of large data sets demand for a high performance search algorithm in areas such as text mining, information retrieval and many others. The popularity of GPU's for general purpose computing has been increasing for various applications. Therefore it is of great interest to exploit the thread feature of a GPU to provide a high performance search algorithm. This paper proposes an optimized new approach to N-gram model for string search in a number of lengthy documents and its GPU implementation. The algorithm exploits GPGPUs for searching strings in many documents employing character level N-gram matching with parallel Score Table approach and search using CUDA API. The new approach of Score table used for frequency storage of N-grams in a document, makes the search independent of the document's length and allows faster access to the frequency values, thus decreasing the search complexity. The extensive thread feature in a GPU has been exploited to enable parallel pre-processing of trigrams in a document for Score Table creation and parallel search in huge number of documents, thus speeding up the whole search process even for a large pattern size. Experiments were carried out for many documents of varied length and search strings from the standard Lorem Ipsum text on NVIDIA's GeForce GT 540M GPU with 96 cores. Results prove that the parallel approach for Score Table creation and searching gives a good speed up than the same approach executed serially.
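    The key data structure is a per-document score table holding character-level N-gram frequencies, so a query is matched against fixed-size tables rather than against the documents themselves. A CPU-only sketch of that idea is shown below (the paper's contribution is the CUDA parallelization of table creation and search on a GPU, which is not reproduced here); the documents and query are placeholders standing in for the Lorem Ipsum corpus used in the experiments.

      from collections import Counter

      def build_score_table(text: str, n: int = 3) -> Counter:
          """Pre-processing step: frequency table of character n-grams
          (trigrams by default) for one document."""
          return Counter(text[i:i + n] for i in range(len(text) - n + 1))

      def match_score(table: Counter, query: str, n: int = 3) -> int:
          """Score a query against a document's table: sum of stored frequencies
          of the query's n-grams, independent of the document's length."""
          return sum(table[query[i:i + n]] for i in range(len(query) - n + 1))

      # Illustrative documents and query
      docs = {"doc1": "lorem ipsum dolor sit amet", "doc2": "consectetur adipiscing elit"}
      tables = {name: build_score_table(text) for name, text in docs.items()}
      query = "ipsum"
      ranking = sorted(tables, key=lambda d: match_score(tables[d], query), reverse=True)
      print(ranking)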

  18. Robust total energy demand estimation with a hybrid Variable Neighborhood Search – Extreme Learning Machine algorithm

    International Nuclear Information System (INIS)

    Sánchez-Oro, J.; Duarte, A.; Salcedo-Sanz, S.

    2016-01-01

    Highlights: • The total energy demand in Spain is estimated with a Variable Neighborhood algorithm. • Socio-economic variables are used, and one year ahead prediction horizon is considered. • Improvement of the prediction with an Extreme Learning Machine network is considered. • Experiments are carried out in real data for the case of Spain. - Abstract: Energy demand prediction is an important problem whose solution is evaluated by policy makers in order to take key decisions affecting the economy of a country. A number of previous approaches to improve the quality of this estimation have been proposed in the last decade, the majority of them applying different machine learning techniques. In this paper, the performance of a robust hybrid approach, composed of a Variable Neighborhood Search algorithm and a new class of neural network called Extreme Learning Machine, is discussed. The Variable Neighborhood Search algorithm is focused on obtaining the most relevant features among the set of initial ones, by including an exponential prediction model. While previous approaches consider that the number of macroeconomic variables used for prediction is a parameter of the algorithm (i.e., it is fixed a priori), the proposed Variable Neighborhood Search method optimizes both: the number of variables and the best ones. After this first step of feature selection, an Extreme Learning Machine network is applied to obtain the final energy demand prediction. Experiments in a real case of energy demand estimation in Spain show the excellent performance of the proposed approach. In particular, the whole method obtains an estimation of the energy demand with an error lower than 2%, even when considering the crisis years, which are a real challenge.

  19. Newton-Gauss Algorithm of Robust Weighted Total Least Squares Model

    Directory of Open Access Journals (Sweden)

    WANG Bin

    2015-06-01

    Full Text Available Based on the Newton-Gauss iterative algorithm of weighted total least squares (WTLS), a robust WTLS (RWTLS) model is presented. The model utilizes the standardized residuals to construct the weight factor function and the square root of the variance component estimator with robustness is obtained by introducing the median method. Therefore, the robustness in both the observation and structure spaces can be simultaneously achieved. To obtain standardized residuals, the linearly approximate cofactor propagation law is employed to derive the expression of the cofactor matrix of WTLS residuals. The iterative calculation steps for RWTLS are also described. The experiment indicates that the model proposed in this paper exhibits satisfactory robustness for gross errors handling problem of WTLS, the obtained parameters have no significant difference with the results of WTLS without gross errors. Therefore, it is superior to the robust weighted total least squares model directly constructed with residuals.

  20. The impact of CT radiation dose reduction and iterative reconstruction algorithms from four different vendors on coronary calcium scoring

    Energy Technology Data Exchange (ETDEWEB)

    Willemink, Martin J.; Takx, Richard A.P.; Jong, Pim A. de; Budde, Ricardo P.J.; Schilham, Arnold M.R.; Leiner, Tim [Utrecht University Medical Center, Department of Radiology, Utrecht (Netherlands); Bleys, Ronald L.A.W. [Utrecht University Medical Center, Department of Anatomy, Utrecht (Netherlands); Das, Marco; Wildberger, Joachim E. [Maastricht University Medical Center, Department of Radiology, Maastricht (Netherlands); Prokop, Mathias [Radboud University Nijmegen Medical Center, Department of Radiology, Nijmegen (Netherlands); Buls, Nico; Mey, Johan de [UZ Brussel, Department of Radiology, Brussels (Belgium)

    2014-09-15

    To analyse the effects of radiation dose reduction and iterative reconstruction (IR) algorithms on coronary calcium scoring (CCS). Fifteen ex vivo human hearts were examined in an anthropomorphic chest phantom on computed tomography (CT) systems from four vendors at four dose levels using unenhanced, prospectively ECG-triggered protocols. Tube voltage was 120 kV and tube current differed between protocols. CT data were reconstructed with filtered back projection (FBP) and reduced dose CT data with IR. CCS was quantified with Agatston scores, calcification mass and calcification volume. Differences were analysed with the Friedman test. Fourteen hearts showed coronary calcifications. Dose reduction with FBP did not significantly change Agatston scores, calcification volumes or calcification masses (P > 0.05). Maximum differences in Agatston scores were 76, 26, 51 and 161 units, in calcification volume 97, 27, 42 and 162 mm{sup 3}, and in calcification mass 23, 23, 20 and 48 mg, respectively. IR resulted in a trend towards lower Agatston scores and calcification volumes, with significant differences for one vendor (P < 0.05). Median relative differences between reference FBP and reduced dose IR for Agatston scores remained within 2.0-4.6 %, 1.0-5.3 %, 1.2-7.7 % and 2.6-4.5 %, for calcification volumes within 2.4-3.9 %, 1.0-5.6 %, 1.1-6.4 % and 3.7-4.7 %, and for calcification masses within 1.9-4.1 %, 0.9-7.8 %, 2.9-4.7 % and 2.5-3.9 %, respectively. IR resulted in increased, decreased or similar calcification masses. CCS derived from standard FBP acquisitions was not affected by radiation dose reductions of up to 80 %. IR resulted in a trend towards lower Agatston scores and calcification volumes. (orig.)

  1. Use of the Liverpool Elbow Score as a postal questionnaire for the assessment of outcome after total elbow arthroplasty.

    Science.gov (United States)

    Ashmore, Alexander M; Gozzard, Charles; Blewitt, Neil

    2007-01-01

    The Liverpool Elbow Score (LES) is a newly developed, validated elbow-specific score. It consists of a patient-answered questionnaire (PAQ) and a clinical assessment. The purpose of this study was to determine whether the PAQ portion of the LES could be used independently as a postal questionnaire for the assessment of outcome after total elbow arthroplasty and to correlate the LES and the Mayo Elbow Performance Score (MEPS). A series of 51 total elbow replacements were reviewed by postal questionnaire. Patients then attended the clinic for assessment by use of both the LES and the MEPS. There was an excellent response rate to the postal questionnaire (98%), and 44 elbows were available for clinical review. Good correlation was shown between the LES and the MEPS (Spearman correlation coefficient, 0.84) and between the PAQ portion of the LES and the MEPS (Spearman correlation coefficient, 0.76), suggesting that outcome assessment is possible by postal questionnaire.

  2. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population

    OpenAIRE

    Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A.; Ono, Yutaka

    2016-01-01

    [Background] Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold ...

  3. Parallel algorithm of real-time infrared image restoration based on total variation theory

    Science.gov (United States)

    Zhu, Ran; Li, Miao; Long, Yunli; Zeng, Yaoyuan; An, Wei

    2015-10-01

    Image restoration is a necessary preprocessing step for infrared remote sensing applications. Traditional methods remove the noise but overly penalize the gradients corresponding to edges. Image restoration techniques based on variational approaches can solve this over-smoothing problem owing to their well-defined mathematical modeling of the restoration procedure. The total variation (TV) of the infrared image is introduced as an L1 regularization term added to the objective energy functional. This converts the restoration process into the optimization of a functional involving a fidelity term to the image data plus a regularization term. Infrared image restoration with the TV-L1 model fully exploits the remote sensing data and preserves edge information caused by clouds. The numerical implementation algorithm is presented in detail. Analysis indicates that the structure of this algorithm is easily parallelized. Therefore, a parallel implementation of the TV-L1 filter based on a shared-memory multicore architecture is proposed for infrared real-time remote sensing systems. Massive computation of image data is performed in parallel by cooperating threads running simultaneously on multiple cores. Several groups of synthetic infrared image data are used to validate the feasibility and effectiveness of the proposed parallel algorithm. A quantitative analysis of restored image quality relative to the input image is presented. Experimental results show that the TV-L1 filter can restore the varying background image reasonably and that its performance meets the requirements of real-time image processing.
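
    The serial core of such a TV-regularized restoration can be sketched as gradient descent on a smoothed TV energy; this is only a single-threaded illustration of the underlying model, not the parallel multicore implementation the record describes, and all parameter values are illustrative.

```python
import numpy as np

def tv_denoise(f, lam=0.15, eps=1e-3, step=0.2, n_iter=200):
    """Gradient descent on 0.5*||u - f||^2 + lam*TV_eps(u) with a smoothed TV term."""
    u = f.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u                      # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps**2)                # smoothed gradient magnitude
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))  # divergence
        u -= step * ((u - f) - lam * div)                    # gradient step on the energy
    return u

# Toy usage: a noisy step edge, which TV should preserve while removing noise.
img = np.zeros((64, 64)); img[:, 32:] = 1.0
noisy = img + 0.2 * np.random.randn(*img.shape)
restored = tv_denoise(noisy)
print("residual noise std before/after:", np.std(noisy - img), np.std(restored - img))
```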

  4. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    Science.gov (United States)

    Zand, Toktam; Gholami, Ali

    2018-04-01

    Many problems in applied geophysics can be formulated as a linear inverse problem. The associated problems, however, are large-scale and ill-conditioned. Therefore, regularization techniques need to be employed to solve them and generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. To tackle the ill-conditioning of the problem, we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the resulting problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: (1) velocity inversion from (synthetic) seismic data based on the Born approximation, and (2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
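
    To make the two ingredients concrete, the sketch below pairs an outer Bregman iteration (the "add the residual back" form) with an inner proximal forward-backward (ISTA) solver and a discrepancy stopping rule. It uses a plain L1 penalty instead of total variation to stay short, so it is an analogue of BOS rather than the authors' exact algorithm; problem sizes and constants are illustrative. Note that, as in the abstract, no matrix inversion is needed.

```python
import numpy as np

def ista(A, b, lam, x0, n_inner=200):
    """Proximal forward-backward splitting for 0.5*||Ax - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2                        # 1 / Lipschitz constant
    x = x0.copy()
    for _ in range(n_inner):
        g = x - step * A.T @ (A @ x - b)                          # forward (gradient) step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # backward (prox) step
    return x

def bregman_l1(A, b, lam=0.05, noise_level=0.01, n_outer=50):
    """Bregman iteration with an ISTA inner solver and a discrepancy stopping criterion."""
    x = np.zeros(A.shape[1])
    bk = b.copy()
    for _ in range(n_outer):
        x = ista(A, bk, lam, x)
        r = b - A @ x
        if np.linalg.norm(r) <= noise_level * np.sqrt(len(b)):    # discrepancy principle
            break
        bk = bk + r                                               # add the residual back
    return x

# Toy usage: recover a sparse vector from an underdetermined noisy system.
rng = np.random.default_rng(1)
A = rng.normal(size=(60, 120))
x_true = np.zeros(120); x_true[rng.choice(120, 6, replace=False)] = rng.normal(size=6)
b = A @ x_true + 0.01 * rng.normal(size=60)
x_rec = bregman_l1(A, b)
print("relative recovery error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```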

  5. A new algorithm for histopathological diagnosis of periprosthetic infection using CD15 focus score and computer program CD15 Quantifier

    Directory of Open Access Journals (Sweden)

    V. Krenn

    2015-01-01

    Full Text Available Introduction. A simple microscopic diagnostic quantification system for neutrophil granulocytes (NG) was developed that evaluates a single focal point (CD15 focus score) and enables the detection of bacterial infection in SLIM (synovial-like interface membrane). Additionally, a diagnostic algorithm is proposed for using the CD15 focus score and the quantification software (CD15 Quantifier). Methods. 91 SLIM specimens removed during revision surgery for histopathological diagnosis (hip, n=59; knee, n=32) underwent histopathological classification according to the SLIM-consensus classification. NG were identified immunohistochemically by means of a CD15-specific monoclonal antibody exhibiting an intense granular cytoplasmic staining pattern. This pattern differs from CD15 expression in macrophages, which show pale and homogeneous expression in mononuclear cells. The quantitative evaluation of CD15-positive neutrophil granulocytes (CD15NG) used the principle of maximum focal infiltration (focus) together with an assessment of a single focal point (approximately 0.3 mm2). These immunohistochemical data made it possible to develop the CD15 Quantifier software, which automatically quantifies CD15NG. Results. SLIM cases with a positive microbiological diagnosis (n=47) had significantly (p<0.001, Mann-Whitney U test) more CD15NG/focal point than cases with a negative microbiological diagnosis (n=44). 50 CD15NG/focal point were identified as the optimum threshold when diagnosing infection of periprosthetic joints using the CD15 focus score. If the microbiological findings are used as a ‘gold standard’, the diagnostic sensitivity is 0.83 and the specificity is 0.864 (PPV: 0.87; NPV: 0.83; accuracy: 0.846; AUC: 0.878). The evaluation findings for the preparations using the CD15 Quantifier (n=31) deviated on average by 12 cells from the histopathological evaluation findings (CD15 focus score). From a cell count greater than 62, the CD15 Quantifier needs on average 32 seconds less than the

  6. On the multivariate total least-squares approach to empirical coordinate transformations. Three algorithms

    Science.gov (United States)

    Schaffrin, Burkhard; Felus, Yaron A.

    2008-06-01

    The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model (Y − E_Y = (X − E_X) · Ξ) that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E_Y and E_X. Two special cases of the MTLS approach include the standard multivariate least-squares approach where only the observation matrix, Y, is perturbed by random errors and, on the other hand, the data least-squares approach where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler-Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new ‘closed form’ solution that is based on the singular-value decomposition. For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix are investigated. This case study illuminates the issue of “symmetry” in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335-342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
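
    A minimal version of the SVD-based 'closed form' solution mentioned in this record, for the simplest unweighted case (no column scaling and no fixed columns), might look as follows; the toy transformation and noise levels are illustrative.

```python
import numpy as np

def mtls(X, Y):
    """Multivariate total least squares via the SVD of the stacked data matrix [X Y].
    Both X and Y are treated as noisy; returns Xi such that Y is approximately X @ Xi."""
    m, d = X.shape[1], Y.shape[1]
    _, _, Vt = np.linalg.svd(np.hstack([X, Y]))
    V = Vt.T
    V12 = V[:m, m:m + d]            # top-right block of V
    V22 = V[m:m + d, m:m + d]       # bottom-right block of V
    return -V12 @ np.linalg.inv(V22)

# Toy usage: a near-identity transformation with errors in both coordinate sets.
rng = np.random.default_rng(0)
Xi_true = np.array([[1.01, 0.02], [-0.02, 0.99]])
P = rng.normal(size=(100, 2))                        # error-free point field
X = P + 0.01 * rng.normal(size=P.shape)              # noisy "old datum" coordinates
Y = P @ Xi_true + 0.01 * rng.normal(size=P.shape)    # noisy "new datum" coordinates
print(mtls(X, Y))                                    # close to Xi_true
```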

  7. Propensity scores-potential outcomes framework to incorporate severity probabilities in the highway safety manual crash prediction algorithm.

    Science.gov (United States)

    Sasidharan, Lekshmi; Donnell, Eric T

    2014-10-01

    Accurate estimation of the expected number of crashes at different severity levels for entities with and without countermeasures plays a vital role in selecting countermeasures in the framework of the safety management process. The current practice is to use the American Association of State Highway and Transportation Officials' Highway Safety Manual crash prediction algorithms, which combine safety performance functions and crash modification factors, to estimate the effects of safety countermeasures on different highway and street facility types. Many of these crash prediction algorithms are based solely on crash frequency, or assume that severity outcomes are unchanged when planning for, or implementing, safety countermeasures. Failing to account for the uncertainty associated with crash severity outcomes, and assuming crash severity distributions remain unchanged in safety performance evaluations, limits the utility of the Highway Safety Manual crash prediction algorithms in assessing the effect of safety countermeasures on crash severity. This study demonstrates the application of a propensity scores-potential outcomes framework to estimate the probability distribution for the occurrence of different crash severity levels by accounting for the uncertainties associated with them. The probability of fatal and severe injury crash occurrence at lighted and unlighted intersections is estimated in this paper using data from Minnesota. The results show that the expected probability of a fatal or severe injury crash at a lighted intersection was 1 in 35 crashes, and the estimated risk ratio indicates that the corresponding probability at an unlighted intersection was 1.14 times higher than at lighted intersections. The results from the potential outcomes-propensity scores framework are compared to results obtained from traditional binary logit models, without application of propensity scores matching. Traditional binary logit analysis suggests that
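
    A bare-bones sketch of the propensity-score step (a logistic propensity model followed by 1:1 nearest-neighbour matching, then a risk ratio on the matched pairs) is given below; the covariates, the "lighting" treatment and the severity outcome are simulated stand-ins, not the Minnesota data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ps_match(X, treated):
    """1:1 nearest-neighbour matching on the estimated propensity score, without replacement."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    controls = list(np.where(treated == 0)[0])
    pairs = []
    for i in np.where(treated == 1)[0]:
        if not controls:
            break
        j = min(controls, key=lambda k: abs(ps[k] - ps[i]))   # closest control on the score
        pairs.append((i, j))
        controls.remove(j)
    return pairs

# Toy usage: "lighting" as the treatment, a severe-crash indicator as the outcome.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))                                 # intersection covariates
treated = (rng.random(500) < 1.0 / (1.0 + np.exp(-X[:, 0]))).astype(int)
severe = (rng.random(500) < 0.05 + 0.03 * (1 - treated)).astype(int)
pairs = ps_match(X, treated)
p_lit = np.mean([severe[i] for i, _ in pairs])
p_unlit = np.mean([severe[j] for _, j in pairs])
print("risk ratio, unlighted vs lighted:", p_unlit / max(p_lit, 1e-9))
```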

  8. Diagnosis of bacterial vaginosis in a rural setup: Comparison of clinical algorithm, smear scoring and culture by semiquantitative technique

    OpenAIRE

    Rao P; Devi S; Shriyan A; Rajaram M; Jagdishchandra K

    2004-01-01

    This study was undertaken to estimate the prevalence of bacterial vaginosis (BV) and other sexually transmitted infections (STIs) in a rural set up and compare the smear scoring system to that of culture by semiquantitative technique. A total of 505 married women, who were in the sexually active age group of 15-44 years, were selected from three different villages. High vaginal swabs, endocervical swabs, vaginal discharge and blood were collected and processed in the laboratory. Overall prevalence of 29% reproductive tract infection was detected ...

  9. Coronary collateral circulation in patients with chronic coronary total occlusion; its relationship with cardiac risk markers and SYNTAX score.

    Science.gov (United States)

    Börekçi, A; Gür, M; Şeker, T; Baykan, A O; Özaltun, B; Karakoyun, S; Karakurt, A; Türkoğlu, C; Makça, I; Çaylı, M

    2015-09-01

    Compared to patients without a collateral supply, long-term cardiac mortality is reduced in patients with well-developed coronary collateral circulation (CCC). Cardiovascular risk markers, such as N-terminal pro-brain natriuretic peptide (NT-proBNP), high-sensitive C-reactive protein (hs-CRP) and high-sensitive cardiac troponin T (hs-cTnT), are independent predictors for cardiovascular mortality. The main goal of this study was to examine the relationship between CCC and cardiovascular risk markers. We prospectively enrolled 427 stable coronary artery disease patients with chronic total occlusion (mean age: 57.5±11.1 years). The patients were divided into two groups, according to their Rentrop scores: (a) poorly developed CCC group (Rentrop 0 and 1) and (b) well-developed CCC group (Rentrop 2 and 3). NT-proBNP, hs-CRP, hs-cTnT, uric acid and other biochemical markers were also measured. The SYNTAX score was calculated for all patients. The patients in the poorly developed CCC group had higher frequencies of diabetes and hypertension. Cardiovascular risk markers such as NT-proBNP, hs-cTnT and hs-CRP are independently associated with CCC in stable coronary artery disease with chronic total occlusion. © The Author(s) 2014.

  10. Knee injury and Osteoarthritis Outcome Score (KOOS – validation and comparison to the WOMAC in total knee replacement

    Directory of Open Access Journals (Sweden)

    Roos Ewa M

    2003-05-01

    Full Text Available Abstract Background The Knee injury and Osteoarthritis Outcome Score (KOOS) is an extension of the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), the most commonly used outcome instrument for assessment of patient-relevant treatment effects in osteoarthritis. KOOS was developed for younger and/or more active patients with knee injury and knee osteoarthritis and, in previous studies on these groups, has been the more responsive instrument compared with the WOMAC. Some patients eligible for total knee replacement have expectations of more demanding physical functions than required for daily living. This encouraged us to study the use of the Knee injury and Osteoarthritis Outcome Score (KOOS) to assess the outcome of total knee replacement. Methods We studied the test-retest reliability, validity and responsiveness of the Swedish version LK 1.0 of the KOOS when used to prospectively evaluate the outcome of 105 patients (mean age 71.3 years, 66 women) after total knee replacement. The follow-up rates at 6 and 12 months were 92% and 86%, respectively. Results The intraclass correlation coefficients were over 0.75 for all subscales, indicating sufficient test-retest reliability. Bland-Altman plots confirmed this finding. Over 90% of the patients regarded improvement in the subscales Pain, Symptoms, Activities of Daily Living, and knee-related Quality of Life to be extremely or very important when deciding to have their knee operated on, indicating good content validity. The correlations found in comparison to the SF-36 indicated that the KOOS measured the expected constructs. The most responsive subscale was knee-related Quality of Life. The effect sizes of the five KOOS subscales at 12 months ranged from 1.08 to 3.54, and for the WOMAC from 1.65 to 2.56. Conclusion The Knee injury and Osteoarthritis Outcome Score (KOOS) is a valid, reliable, and responsive outcome measure in total joint replacement. In comparison to the WOMAC, the KOOS improved validity

  11. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  12. The TOMS V9 Algorithm for OMPS Nadir Mapper Total Ozone: An Enhanced Design That Ensures Data Continuity

    Science.gov (United States)

    Haffner, D. P.; McPeters, R. D.; Bhartia, P. K.; Labow, G. J.

    2015-12-01

    The TOMS V9 total ozone algorithm will be applied to the OMPS Nadir Mapper instrument to supersede the existing V8.6 data product in operational processing and re-processing for public release. Because the quality of the V8.6 data is already quite high, enhancements in V9 come mainly from information provided by the retrieval and from simplifications to the algorithm. The design of the V9 algorithm has been influenced by improvements both in our knowledge of atmospheric effects, such as those of clouds made possible by studies with OMI, and also by limitations in the V8 algorithms applied to both OMI and OMPS. But the namesake instruments of the TOMS algorithm are substantially more limited in their spectral and noise characteristics, and a requirement of our algorithm is that it also apply to these discrete-band spectrometers, which date back to 1978. To achieve continuity for all these instruments, the TOMS V9 algorithm continues to use radiances in discrete bands, but now uses Rodgers optimal estimation to retrieve a coarse profile and provide uncertainties for each retrieval. The algorithm remains capable of achieving high accuracy results with a small number of discrete wavelengths, and in extreme cases, such as unusual profile shapes and high solar zenith angles, the quality of the retrievals is improved. Despite the intended design to use limited wavelengths, the algorithm can also utilize additional wavelengths from hyperspectral sensors like OMPS to augment the retrieval's error detection and information content; for example, SO2 detection and correction of the Ring effect on atmospheric radiances. We discuss these and other aspects of the V9 algorithm as it will be applied to OMPS, and mention potential improvements which aim to take advantage of a synergy between the OMPS Limb Profiler and Nadir Mapper to further improve the quality of total ozone from the OMPS instrument.

  13. Electrical Resistance Tomography for Visualization of Moving Objects Using a Spatiotemporal Total Variation Regularization Algorithm

    Directory of Open Access Journals (Sweden)

    Bo Chen

    2018-05-01

    Full Text Available Electrical resistance tomography (ERT) has been considered as a data collection and image reconstruction method in many multi-phase flow application areas due to its advantages of high speed, low cost and non-invasiveness. In order to improve the quality of the reconstructed images, the Total Variation algorithm attracts abundant attention due to its ability to handle large piecewise and discontinuous conductivity distributions. In industrial processing tomography (IPT), techniques such as ERT have been used to extract important flow measurement information. For a moving object inside a pipe, a velocity profile can be calculated from the cross correlation between signals generated from ERT sensors. Many previous studies have used two sets of 2D ERT measurements based on pixel-pixel cross correlation, which requires two ERT systems. In this paper, a method for carrying out flow velocity measurement using a single ERT system is proposed. A novel spatiotemporal total variation regularization approach is utilised to exploit sparsity both in space and time in 4D, and a voxel-voxel cross correlation method is adopted for measurement of the flow profile. Results show that the velocity profile can be calculated with a single ERT system and that the volume fraction and movement can be monitored using the proposed method. Both semi-dynamic experimental and static simulation studies verify the suitability of the proposed method. For the in-plane velocity profile, a 3D image based on temporal 2D images produces a velocity profile with an error of less than 1%, and a 4D image for 3D velocity profiling shows an error of 4%.
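
    The cross-correlation step behind the velocity measurement can be illustrated with two one-dimensional signals: the transit time is taken from the lag that maximises their cross-correlation, and velocity follows as sensor-plane spacing divided by transit time. The signal shape, spacing and sampling interval below are illustrative assumptions.

```python
import numpy as np

def transit_time(sig_up, sig_down, dt):
    """Transit time of a disturbance between two planes from the peak of the cross-correlation."""
    xc = np.correlate(sig_down - sig_down.mean(), sig_up - sig_up.mean(), mode="full")
    lag = np.argmax(xc) - (len(sig_up) - 1)       # samples by which sig_down trails sig_up
    return lag * dt

# Toy usage: the downstream signal is a delayed copy of the upstream one.
dt, delay_samples, plane_spacing = 1e-3, 40, 0.1  # s, samples, metres (illustrative)
t = np.arange(2000)
sig_up = np.exp(-((t - 800) / 50.0) ** 2) + 0.02 * np.random.randn(len(t))
sig_down = np.roll(sig_up, delay_samples)
tau = transit_time(sig_up, sig_down, dt)
print("estimated velocity:", plane_spacing / tau, "m/s")      # spacing / transit time
```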

  14. Waste Load Allocation Based on Total Maximum Daily Load Approach Using the Charged System Search (CSS Algorithm

    Directory of Open Access Journals (Sweden)

    Elham Faraji

    2016-03-01

    Full Text Available In this research, the capability of a charged system search algorithm (CSS) in handling water management optimization problems is investigated. First, two complex mathematical problems are solved by CSS and the results are compared with those obtained from other metaheuristic algorithms. In the last step, the optimization model developed by the CSS algorithm is applied to the waste load allocation in rivers based on the total maximum daily load (TMDL) concept. The results are presented in Tables and Figures for easy comparison. The study indicates the superiority of the CSS algorithm in terms of its speed and performance over the other metaheuristic algorithms while its precision in water management optimization problems is verified.

  15. Parameter selection in limited data cone-beam CT reconstruction using edge-preserving total variation algorithms

    Science.gov (United States)

    Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr

    2017-12-01

    There are a number of powerful total variation (TV) regularization methods that have great promise in limited data cone-beam CT reconstruction with an enhancement of image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms. An appropriate way of selecting the values for each individual parameter has been suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements the edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm is able to preserve the edges of the reconstructed images better with fewer sensitive parameters to tune.

  16. Anesthesia Technique and Mortality after Total Hip or Knee Arthroplasty: A Retrospective, Propensity Score-matched Cohort Study.

    Science.gov (United States)

    Perlas, Anahi; Chan, Vincent W S; Beattie, Scott

    2016-10-01

    This propensity score-matched cohort study evaluates the effect of anesthetic technique on 30-day mortality after total hip or knee arthroplasty. All patients who had hip or knee arthroplasty between January 1, 2003, and December 31, 2014, were evaluated. The principal exposure was spinal versus general anesthesia. The primary outcome was 30-day mortality. Secondary outcomes were (1) perioperative myocardial infarction; (2) a composite of major adverse cardiac events that includes cardiac arrest, myocardial infarction, or newly diagnosed arrhythmia; (3) pulmonary embolism; (4) major blood loss; (5) hospital length of stay; and (6) operating room procedure time. A propensity score-matched-pair analysis was performed using a nonparsimonious logistic regression model of regional anesthetic use. We identified 10,868 patients, of whom 8,553 had spinal anesthesia and 2,315 had general anesthesia. Ninety-two percent (n = 2,135) of the patients who had general anesthesia were matched to similar patients who did not have general anesthesia. In the matched cohort, the 30-day mortality rate was 0.19% (n = 4) in the spinal anesthesia group and 0.8% (n = 17) in the general anesthesia group (risk ratio, 0.42; 95% CI, 0.21 to 0.83; P = 0.0045). Spinal anesthesia was also associated with a shorter hospital length of stay (5.7 vs. 6.6 days). These findings suggest an association between spinal anesthesia and lower 30-day mortality, as well as a shorter hospital length of stay, after elective joint replacement surgery.

  17. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population.

    Science.gov (United States)

    Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka

    2016-01-01

    Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and the boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.
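
    The curve-fitting comparison reported here (an exponential fit, judged against linear and quadratic fits by the coefficient of determination, with the exponential fit appearing linear on a log scale) can be sketched on synthetic counts as follows; the decay rate and noise level are illustrative, not the CES-D data.

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination."""
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)

# Toy "boundary curve": counts that decay roughly exponentially with the total score.
score = np.arange(0, 30)
counts = 5000.0 * np.exp(-0.18 * score) * np.exp(0.05 * np.random.randn(len(score)))

# An exponential fit is a straight line on a log scale.
fit_exp = np.exp(np.polyval(np.polyfit(score, np.log(counts), 1), score))
fit_lin = np.polyval(np.polyfit(score, counts, 1), score)     # linear fit, raw scale
fit_quad = np.polyval(np.polyfit(score, counts, 2), score)    # quadratic fit, raw scale

print("R2 exponential:", r_squared(counts, fit_exp))
print("R2 linear:     ", r_squared(counts, fit_lin))
print("R2 quadratic:  ", r_squared(counts, fit_quad))
```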

  18. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population

    Directory of Open Access Journals (Sweden)

    Shinichiro Tomitaka

    2016-10-01

    Full Text Available Background Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Methods Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. Results The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and the boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. Discussion The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.

  19. Algorithm for the automatic computation of the modified Anderson-Wilkins acuteness score of ischemia from the pre-hospital ECG in ST-segment elevation myocardial infarction

    DEFF Research Database (Denmark)

    Fakhri, Yama; Sejersten-Ripa, Maria; Schoos, Mikkel Malby

    2017-01-01

    BACKGROUND: The acuteness score (based on the modified Anderson-Wilkins score) estimates the acuteness of ischemia based on ST-segment, Q-wave and T-wave measurements obtained from the electrocardiogram (ECG) in patients with ST Elevation Myocardial Infarction (STEMI). The score (range 1 (least acute) ... the acuteness score. METHODS: We scored 50 pre-hospital ECGs from STEMI patients, manually and by the automated algorithm. We assessed the reliability between the manual and automated algorithm by the interclass correlation coefficient (ICC) and a Bland-Altman plot. RESULTS: The ICC was 0.84 (95% CI 0.72-0.91), and the differences between manual and automated scores for the ECGs were all within the upper (1.46) and lower (-1.12) limits...
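
    The two reliability measures used in this record can be computed as in the sketch below: a textbook two-way random, single-measures ICC and Bland-Altman limits of agreement. The simulated manual and automated scores are illustrative stand-ins for the 50 ECGs.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, single measures, for an (n_subjects x n_raters) matrix."""
    n, k = scores.shape
    grand = scores.mean()
    msr = k * np.sum((scores.mean(axis=1) - grand) ** 2) / (n - 1)   # between-subject MS
    msc = n * np.sum((scores.mean(axis=0) - grand) ** 2) / (k - 1)   # between-rater MS
    sse = np.sum((scores - grand) ** 2) - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))                                  # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def bland_altman_limits(a, b):
    d = a - b
    return d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1)

# Toy usage: "manual" vs "automated" acuteness scores for 50 ECGs.
rng = np.random.default_rng(3)
manual = rng.uniform(1, 4, 50)
automated = manual + rng.normal(0, 0.3, 50)
print("ICC(2,1):", icc_2_1(np.column_stack([manual, automated])))
print("Bland-Altman limits of agreement:", bland_altman_limits(automated, manual))
```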

  20. Enhancing Accuracy of Sediment Total Load Prediction Using Evolutionary Algorithms (Case Study: Gotoorchay River

    Directory of Open Access Journals (Sweden)

    K. Roshangar

    2016-09-01

    Full Text Available Introduction: Exact prediction of the sediment rate transported by rivers is of utmost importance in water resources projects. Erosion and sediment transport is one of the most complex hydrodynamic processes. Although different studies have applied intelligent models based on neural networks, they are not widely used because of their lack of explicitness and the complexity of choosing and architecting a proper network. In this study, a Genetic expression programming model (an important branch of evolutionary algorithms) for predicting sediment load is selected and investigated as an intelligent approach, along with other known classical and empirical methods such as Larsen´s equation, the Engelund-Hansen equation and Bagnold´s equation. Materials and Methods: In order to improve explicit prediction of the sediment load of the Gotoorchay, located in the Aras catchment, Northwestern Iran (latitude: 38°24´33.3˝, longitude: 44°46´13.2˝), genetic programming (GP) and a Genetic Algorithm (GA) were applied. Moreover, semi-empirical models for predicting total sediment load and the rating curve were used. Finally, all the methods were compared and the best ones were identified. Two statistical measures were used to compare the performance of the different models, namely the root mean square error (RMSE) and the determination coefficient (DC), which indicate the discrepancy between the observed and computed values. Results and Discussion: The statistical results obtained from the analysis of the genetic programming method for both selected model groups indicated that model 4, which includes only the discharge of the river, had the highest DC and the lowest RMSE in the testing stage relative to the other studied models (DC = 0.907, RMSE = 0.067). Although several parameters were applied in the other models, these models were complicated and gave weak predictions. Our results showed that the model 9
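
    The two evaluation measures named in this record are straightforward to compute; a short sketch with the usual definitions (RMSE, and a determination coefficient in its Nash-Sutcliffe-type form) is given below on illustrative values, since the exact formula used by the authors is not spelled out here.

```python
import numpy as np

def rmse(obs, sim):
    return np.sqrt(np.mean((obs - sim) ** 2))

def dc(obs, sim):
    """Determination coefficient: 1 - SSE / variance of the observations (Nash-Sutcliffe form)."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

# Toy usage: observed vs. predicted sediment loads (illustrative numbers).
obs = np.array([0.12, 0.30, 0.55, 0.80, 1.10, 1.60])
sim = np.array([0.15, 0.28, 0.60, 0.75, 1.20, 1.50])
print("RMSE:", rmse(obs, sim), " DC:", dc(obs, sim))
```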

  1. Alternative Payment Models Should Risk-Adjust for Conversion Total Hip Arthroplasty: A Propensity Score-Matched Study.

    Science.gov (United States)

    McLawhorn, Alexander S; Schairer, William W; Schwarzkopf, Ran; Halsey, David A; Iorio, Richard; Padgett, Douglas E

    2017-12-06

    For Medicare beneficiaries, hospital reimbursement for nonrevision hip arthroplasty is anchored to either diagnosis-related group code 469 or 470. Under alternative payment models, reimbursement for care episodes is not further risk-adjusted. This study's purpose was to compare outcomes of primary total hip arthroplasty (THA) vs conversion THA to explore the rationale for risk adjustment for conversion procedures. All primary and conversion THAs from 2007 to 2014, excluding acute hip fractures and cancer patients, were identified in the National Surgical Quality Improvement Program database. Conversion and primary THA patients were matched 1:1 using propensity scores, based on preoperative covariates. Multivariable logistic regressions evaluated associations between conversion THA and 30-day outcomes. A total of 2018 conversions were matched to 2018 primaries. There were no differences in preoperative covariates. Conversions had longer operative times (148 vs 95 minutes). As reimbursement models shift toward bundled payment paradigms, conversion THA appears to be a procedure for which risk adjustment is appropriate. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. A statistical mechanical interpretation of algorithmic information theory: Total statistical mechanical interpretation based on physical argument

    International Nuclear Information System (INIS)

    Tadaki, Kohtaro

    2010-01-01

    The statistical mechanical interpretation of algorithmic information theory (AIT, for short) was introduced and developed by our former works [K. Tadaki, Local Proceedings of CiE 2008, pp. 425-434, 2008] and [K. Tadaki, Proceedings of LFCS'09, Springer's LNCS, vol. 5407, pp. 422-440, 2009], where we introduced the notion of thermodynamic quantities, such as partition function Z(T), free energy F(T), energy E(T), statistical mechanical entropy S(T), and specific heat C(T), into AIT. We then discovered that, in the interpretation, the temperature T equals the partial randomness of the values of all these thermodynamic quantities, where the notion of partial randomness is a stronger representation of the compression rate by means of program-size complexity. Furthermore, we showed that this situation holds for the temperature T itself, which is one of the most typical thermodynamic quantities. Namely, we showed that, for each of the thermodynamic quantities Z(T), F(T), E(T), and S(T) above, the computability of its value at temperature T gives a sufficient condition for T ∈ (0,1) to satisfy the condition that the partial randomness of T equals T. In this paper, based on a physical argument on the same level of mathematical strictness as normal statistical mechanics in physics, we develop a total statistical mechanical interpretation of AIT which actualizes a perfect correspondence to normal statistical mechanics. We do this by identifying a microcanonical ensemble in the framework of AIT. As a result, we clarify the statistical mechanical meaning of the thermodynamic quantities of AIT.
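
    For orientation, the thermodynamic quantities referred to in this record are usually defined along the following lines in Tadaki's formulation, with U an optimal prefix-free machine and |p| the length of a program p; this is a sketch of the commonly stated definitions, not a quotation from the paper.

```latex
% Thermodynamic quantities of AIT (commonly stated forms; sketch, not a quotation)
\[
  Z(T) = \sum_{p \,\in\, \operatorname{dom} U} 2^{-|p|/T}, \qquad
  F(T) = -\,T \log_2 Z(T), \qquad
  E(T) = \frac{1}{Z(T)} \sum_{p \,\in\, \operatorname{dom} U} |p|\, 2^{-|p|/T}, \qquad
  S(T) = \frac{E(T) - F(T)}{T},
\]
% where the temperature T ranges over (0,1) and Z(1) recovers Chaitin's halting probability Omega.
```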

  3. A new algorithm to determine the total radiated power at ASDEX upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Gloeggler, Stephan; Bernert, Matthias; Eich, Thomas [Max Planck Institute for Plasma Physics, Boltzmannstr. 2, 85748 Garching (Germany); Collaboration: The ASDEX Upgrade Team

    2016-07-01

    Radiation is an essential part of the power balance in a fusion plasma. In future fusion devices about 90% of the power will have to be dissipated, mainly by radiation. For the development of an appropriate operational scenario, information about the absolute level of plasma radiation (P{sub rad,tot}) is crucial. Bolometers are used to measure the radiated power, however, an algorithm is required to derive the absolute power out of many line-integrated measurements. The currently used algorithm (BPD) was developed for the main chamber radiation. It underestimates the divertor radiation as its basic assumptions are not satisfied in this region. Therefore, a new P{sub rad,tot} algorithm is presented. It applies an Abel inversion on the main chamber and uses empirically based assumptions for poloidal asymmetries and the divertor radiation. To benchmark the new algorithm, synthetic emissivity profiles are used. On average, the new Abel inversion based algorithm deviates by only 10% from the nominal synthetic value while BPD is about 25% too low. With both codes time traces of ASDEX Upgrade discharges are calculated. The analysis of these time traces shows that the underestimation of the divertor radiation can have significant consequences on the accuracy of BPD while the new algorithm is shown to be stable.

  4. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  5. Noise reduction technology reduces radiation dose in chronic total occlusions percutaneous coronary intervention: a propensity score-matched analysis.

    Science.gov (United States)

    Maccagni, Davide; Benincasa, Susanna; Bellini, Barbara; Candilio, Luciano; Poletti, Enrico; Carlino, Mauro; Colombo, Antonio; Azzalini, Lorenzo

    2018-03-23

    Chronic total occlusions (CTO) percutaneous coronary intervention (PCI) is associated with high radiation dose. Our study aim was to evaluate the impact of the implementation of a noise reduction technology (NRT) on patient radiation dose during CTO PCI. A total of 187 CTO PCIs performed between February 2016 and May 2017 were analyzed according to the angiographic systems utilized: Standard (n = 60) versus NRT (n = 127). Propensity score matching (PSM) was performed to control for differences in baseline characteristics. Primary endpoints were Cumulative Air Kerma at Interventional Reference Point (AK at IRP), which correlates with patient's tissue reactions; and Kerma Area Product (KAP), a surrogate measure of patient's risk of stochastic radiation effects. An Efficiency Index (defined as fluoroscopy time/AK at IRP) was calculated for each procedure. Image quality was evaluated using a 5-grade Likert-like scale. After PSM, n = 55 pairs were identified. Baseline and angiographic characteristics were well matched between groups. Compared to the Standard system, NRT was associated with lower AK at IRP [2.38 (1.80-3.66) vs. 3.24 (2.04-5.09) Gy, p = 0.035], a trend towards reduction for KAP [161 (93-244) vs. 203 (136-363) Gy·cm², p = 0.069], and a better Efficiency Index [16.75 (12.73-26.27) vs. 13.58 (9.92-17.63) min/Gy, p = 0.003]. Image quality was similar between the two groups (4.39 ± 0.53 Standard vs. 4.34 ± 0.47 NRT, p = 0.571). In conclusion, compared with a Standard system, the use of NRT in CTO PCI is associated with lower patient radiation dose and similar image quality.

  6. Lord-Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models with Applications in Test Scoring, Scale Alignment, and Model Fit Testing.

    Science.gov (United States)

    Cai, Li

    2015-06-01

    Lord and Wingersky's (Appl Psychol Meas 8:453-461, 1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined on a grid formed by direct products of quadrature points. However, the increase in computational burden remains exponential in the number of dimensions, making the implementation of the recursive algorithm cumbersome for truly high-dimensional models. In this paper, a dimension reduction method that is specific to the Lord-Wingersky recursions is developed. This method can take advantage of the restrictions implied by hierarchical item factor models, e.g., the bifactor model, the testlet model, or the two-tier model, such that a version of the Lord-Wingersky recursive algorithm can operate on a dramatically reduced set of quadrature points. For instance, in a bifactor model, the dimension of integration is always equal to 2, regardless of the number of factors. The new algorithm not only provides an effective mechanism to produce summed score to IRT scaled score translation tables properly adjusted for residual dependence, but leads to new applications in test scoring, linking, and model fit checking as well. Simulated and empirical examples are used to illustrate the new applications.
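
    The classic unidimensional recursion that this paper generalises can be sketched for dichotomous items as follows; the item parameters and latent-trait value are illustrative. Running the recursion at each quadrature point and weighting by the prior yields marginal summed-score probabilities.

```python
import numpy as np

def summed_score_likelihood(p_correct):
    """Lord-Wingersky recursion: distribution of the summed score given per-item
    probabilities of a correct response at a fixed latent-trait value (dichotomous items)."""
    dist = np.array([1.0])                        # score distribution after zero items
    for p in p_correct:
        new = np.zeros(len(dist) + 1)
        new[:-1] += dist * (1.0 - p)              # incorrect response: score unchanged
        new[1:] += dist * p                       # correct response: score increases by 1
        dist = new
    return dist                                   # entry s = probability of summed score s

# Toy usage: five 2PL items evaluated at theta = 0.5 (parameters are illustrative).
a = np.array([1.0, 1.2, 0.8, 1.5, 1.1])          # discriminations
b = np.array([-1.0, -0.3, 0.0, 0.4, 1.2])        # difficulties
p = 1.0 / (1.0 + np.exp(-a * (0.5 - b)))
lik = summed_score_likelihood(p)
print(lik, lik.sum())                             # probabilities over scores 0..5, summing to 1
```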

  7. Diagnosis of bacterial vaginosis in a rural setup: comparison of clinical algorithm, smear scoring and culture by semiquantitative technique.

    Science.gov (United States)

    Rao, P S; Devi, S; Shriyan, A; Rajaram, M; Jagdishchandra, K

    2004-01-01

    This study was undertaken to estimate the prevalence of bacterial vaginosis (BV) and other sexually transmitted infections (STIs) in a rural set up and compare the smear scoring system to that of culture by semiquantitative technique. A total of 505 married women, who were in sexually active age group of 15-44 years, were selected from three different villages. High vaginal swabs, endocervical swabs, vaginal discharge and blood were collected and processed in the laboratory. Overall prevalence of 29% reproductive tract infection was detected. Endogenous infection was commonly observed (27.92%), and very low prevalence of STIs (Trichomonas 1.18%, Syphilis 0%, Gonorrhea 0%) was detected. Diagnosis of BV was possible in 104 (20.5%) women by smear alone and 88 (17.42%) women by semiquantitative culture.

  8. Diagnosis of bacterial vaginosis in a rural setup: Comparison of clinical algorithm, smear scoring and culture by semiquantitative technique

    Directory of Open Access Journals (Sweden)

    Rao P

    2004-01-01

    Full Text Available This study was undertaken to estimate the prevalence of bacterial vaginosis (BV) and other sexually transmitted infections (STIs) in a rural set up and compare the smear scoring system to that of culture by semiquantitative technique. A total of 505 married women, who were in sexually active age group of 15-44 years, were selected from three different villages. High vaginal swabs, endocervical swabs, vaginal discharge and blood were collected and processed in the laboratory. Overall prevalence of 29% reproductive tract infection was detected. Endogenous infection was commonly observed (27.92%), and very low prevalence of STIs (Trichomonas 1.18%, Syphilis 0%, Gonorrhea 0%) was detected. Diagnosis of BV was possible in 104 (20.5%) women by smear alone and 88 (17.42%) women by semiquantitative culture.

  9. Association between Diet-Quality Scores, Adiposity, Total Cholesterol and Markers of Nutritional Status in European Adults: Findings from the Food4Me Study

    Directory of Open Access Journals (Sweden)

    Rosalind Fallaize

    2018-01-01

    Full Text Available Diet-quality scores (DQS), which are developed across the globe, are used to define adherence to specific eating patterns and have been associated with risk of coronary heart disease and type-II diabetes. We explored the association between five diet-quality scores (Healthy Eating Index, HEI; Alternate Healthy Eating Index, AHEI; MedDietScore, MDS; PREDIMED Mediterranean Diet Score, P-MDS; Dutch Healthy Diet-Index, DHDI) and markers of metabolic health (anthropometry, objective physical activity levels (PAL), and dried blood spot total cholesterol (TC), total carotenoids, and omega-3 index) in the Food4Me cohort, using regression analysis. Dietary intake was assessed using a validated Food Frequency Questionnaire. Participants (n = 1480) were adults recruited from seven European Union (EU) countries. Overall, women had higher HEI and AHEI than men (p < 0.05), and scores varied significantly between countries. For all DQS, higher scores were associated with lower body mass index, lower waist-to-height ratio and waist circumference, and higher total carotenoids and omega-3-index (p trends < 0.05). Higher HEI, AHEI, DHDI, and P-MDS scores were associated with increased daily PAL, moderate and vigorous activity, and reduced sedentary behaviour (p trend < 0.05). We observed no association between DQS and TC. To conclude, higher DQS, which reflect better dietary patterns, were associated with markers of better nutritional status and metabolic health.

  10. Impact of the Occlusion Duration on the Performance of J-CTO Score in Predicting Failure of Percutaneous Coronary Intervention for Chronic Total Occlusion.

    Science.gov (United States)

    de Castro-Filho, Antonio; Lamas, Edgar Stroppa; Meneguz-Moreno, Rafael A; Staico, Rodolfo; Siqueira, Dimytri; Costa, Ricardo A; Braga, Sergio N; Costa, J Ribamar; Chamié, Daniel; Abizaid, Alexandre

    2017-06-01

    The present study examined the performance of the Multicenter CTO Registry in Japan (J-CTO) score in predicting failure of percutaneous coronary intervention (PCI), in relation to the estimated duration of chronic total occlusion (CTO). The J-CTO score does not incorporate the estimated duration of the occlusion. This was an observational retrospective study that involved all consecutive procedures performed at a single tertiary-care cardiology center between January 2009 and December 2014. A total of 174 patients, median age 59.5 years (interquartile range [IQR], 53-65 years), undergoing CTO-PCI were included. The median estimated occlusion duration was 7.5 months (IQR, 4.0-12.0 months). The lesions were classified as easy (score = 0), intermediate (score = 1), difficult (score = 2), and very difficult (score ≥3) in 51.1%, 33.9%, 9.2%, and 5.7% of the patients, respectively. Failure rate significantly increased with higher J-CTO score (7.9%, 20.3%, 50.0%, and 70.0% in groups with J-CTO scores of 0, 1, 2, and ≥3, respectively). The J-CTO score predicted failure of CTO-PCI independently of the estimated occlusion duration (P=.24). Areas under receiver-operating characteristic curves were computed and it was observed that, for each occlusion time period, the discriminatory capacity of the J-CTO score in predicting CTO-PCI failure was good, with a C-statistic >0.70. The estimated duration of occlusion had no influence on the J-CTO score performance in predicting failure of PCI in CTO lesions. The probability of failure was mainly determined by grade of lesion complexity.

  11. SENSITIVITY AND SPECIFICITY OF INDIVIDUAL BERG BALANCE ITEMS COMPARED WITH THE TOTAL SCORE TO PREDICT FALLS IN COMMUNITY DWELLING ELDERLY INDIVIDUALS

    Directory of Open Access Journals (Sweden)

    Hazel Denzil Dias

    2014-09-01

    Full Text Available Background: Falls are a major problem in the elderly, leading to increased morbidity and mortality in this population. Scores from objective clinical measures of balance have frequently been associated with falls in older adults. The Berg Balance Score (BBS), a frequently used scale to test balance impairments in the elderly, takes time to perform and has been found to have scoring inconsistencies. The purpose was to determine if individual items or a group of BBS items would have better accuracy than the total BBS in classifying community dwelling elderly individuals according to fall history. Method: 60 community dwelling elderly individuals were chosen based on a history of falls in this cross sectional study. Each BBS item was dichotomized at three points along the scoring scale of 0 – 4: between scores of 1 and 2, 2 and 3, and 3 and 4. Sensitivity (Sn), specificity (Sp), and positive (+LR) and negative (-LR) likelihood ratios were calculated for all items for each scoring dichotomy based on their accuracy in classifying subjects with a history of multiple falls. These findings were compared with the total BBS score, where the cut-off score was derived from receiver operating characteristic curve analysis. Results: On analysing combinations of BBS items, B9 and B11 were found to have the best sensitivity and specificity when considered together. However, the area under the curve for these items was 0.799, which did not match that of the total score (AUC = 0.837). A combination of 4 BBS items (B9, B11, B12 and B13) also had good Sn and Sp, but the AUC was 0.815. The combination with the AUC closest to that of the total score was the combination of items B11 and B13 (AUC = 0.824); hence these two items can be used as the best predictor of falls, with a cut-off of 6.5. The ROC curve of the total Berg Balance Scale scores revealed a cut-off score of 48.5. Conclusion: This study showed that the combination of items B11 and B13 may be the best predictor of falls in
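
    The per-item diagnostic quantities used in this study follow directly from a 2x2 classification of a dichotomised item score against fall history, as sketched below; the simulated item scores and fall indicator are illustrative only.

```python
import numpy as np

def diagnostic_stats(test_positive, faller):
    """Sensitivity, specificity and likelihood ratios of a dichotomised item vs. fall history."""
    tp = np.sum(test_positive & faller)
    fp = np.sum(test_positive & ~faller)
    fn = np.sum(~test_positive & faller)
    tn = np.sum(~test_positive & ~faller)
    sn = tp / (tp + fn)
    sp = tn / (tn + fp)
    return sn, sp, sn / (1 - sp), (1 - sn) / sp   # Sn, Sp, +LR, -LR

# Toy usage: an item scored 0-4, dichotomised between 2 and 3.
rng = np.random.default_rng(4)
item = rng.integers(0, 5, 60)
faller = rng.random(60) < np.where(item <= 2, 0.6, 0.2)   # lower item score -> more likely faller
print(diagnostic_stats(item <= 2, faller))
```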

  12. SENSITIVITY AND SPECIFICITY OF INDIVIDUAL BERG BALANCE ITEMS COMPARED WITH THE TOTAL SCORE TO PREDICT FALLS IN COMMUNITY DWELLING ELDERLY INDIVIDUALS

    Directory of Open Access Journals (Sweden)

    Hazel Denzil Dias

    2014-06-01

    Full Text Available Background: Falls are a major problem in the elderly, leading to increased morbidity and mortality in this population. Scores from objective clinical measures of balance have frequently been associated with falls in older adults. The Berg Balance Score (BBS), a frequently used scale to test balance impairments in the elderly, takes time to perform and has been found to have scoring inconsistencies. The purpose was to determine if individual items or a group of BBS items would have better accuracy than the total BBS in classifying community dwelling elderly individuals according to fall history. Method: 60 community dwelling elderly individuals were chosen based on a history of falls in this cross sectional study. Each BBS item was dichotomized at three points along the scoring scale of 0 – 4: between scores of 1 and 2, 2 and 3, and 3 and 4. Sensitivity (Sn), specificity (Sp), and positive (+LR) and negative (-LR) likelihood ratios were calculated for all items for each scoring dichotomy based on their accuracy in classifying subjects with a history of multiple falls. These findings were compared with the total BBS score, where the cut-off score was derived from receiver operating characteristic curve analysis. Results: On analysing combinations of BBS items, B9 and B11 were found to have the best sensitivity and specificity when considered together. However, the area under the curve for these items was 0.799, which did not match that of the total score (AUC = 0.837). A combination of 4 BBS items (B9, B11, B12 and B13) also had good Sn and Sp, but the AUC was 0.815. The combination with the AUC closest to that of the total score was the combination of items B11 and B13 (AUC = 0.824); hence these two items can be used as the best predictor of falls, with a cut-off of 6.5. The ROC curve of the total Berg Balance Scale scores revealed a cut-off score of 48.5. Conclusion: This study showed that the combination of items B11 and B13 may be the best predictor of falls in

  13. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  14. Good validity and reliability of the forgotten joint score in evaluating the outcome of total knee arthroplasty

    DEFF Research Database (Denmark)

    Thomsen, Morten G; Latifi, Roshan; Kallemose, Thomas

    2016-01-01

    We investigated the validity and reliability of the FJS. Patients and methods - A Danish version of the FJS questionnaire was created according to internationally accepted standards. 360 participants who underwent primary TKA were invited to participate in the study. Of these, 315 were included in a validity study and 150 in a reliability study. Correlation between the Oxford knee score (OKS) and the FJS was examined and a test-retest evaluation was performed. A ceiling effect was defined as participants reaching a score within 15% of the maximum achievable score. Results - The validity study revealed ... of the FJS (ICC ≥ 0.79). We found a high level of internal consistency (Cronbach's α = 0.96). The ceiling effect for the FJS was 16%, as compared to 37% for the OKS. Interpretation - The FJS showed good construct validity and test-retest reliability. It had a lower ceiling effect than the OKS. The FJS appears...

  15. Total ozone column derived from GOME and SCIAMACHY using KNMI retrieval algorithms: Validation against Brewer measurements at the Iberian Peninsula

    Science.gov (United States)

    Antón, M.; Kroon, M.; López, M.; Vilaplana, J. M.; Bañón, M.; van der A, R.; Veefkind, J. P.; Stammes, P.; Alados-Arboledas, L.

    2011-11-01

    This article focuses on the validation of the total ozone column (TOC) data set acquired by the Global Ozone Monitoring Experiment (GOME) and the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) satellite remote sensing instruments using the Total Ozone Retrieval Scheme for the GOME Instrument Based on the Ozone Monitoring Instrument (TOGOMI) and Total Ozone Retrieval Scheme for the SCIAMACHY Instrument Based on the Ozone Monitoring Instrument (TOSOMI) retrieval algorithms developed by the Royal Netherlands Meteorological Institute. In this analysis, spatially colocated, daily averaged ground-based observations performed by five well-calibrated Brewer spectrophotometers at the Iberian Peninsula are used. The period of study runs from January 2004 to December 2009. The agreement between satellite and ground-based TOC data is excellent (R2 higher than 0.94). Nevertheless, the TOC data derived from both satellite instruments underestimate the ground-based data. On average, this underestimation is 1.1% for GOME and 1.3% for SCIAMACHY. The SCIAMACHY-Brewer TOC differences show a significant solar zenith angle (SZA) dependence which causes a systematic seasonal dependence. By contrast, GOME-Brewer TOC differences show no significant SZA dependence and hence no seasonality although processed with exactly the same algorithm. The satellite-Brewer TOC differences for the two satellite instruments show a clear and similar dependence on the viewing zenith angle under cloudy conditions. In addition, both the GOME-Brewer and SCIAMACHY-Brewer TOC differences reveal a very similar behavior with respect to the satellite cloud properties, being cloud fraction and cloud top pressure, which originate from the same cloud algorithm (Fast Retrieval Scheme for Clouds from the Oxygen A-Band (FRESCO+)) in both the TOSOMI and TOGOMI retrieval algorithms.

  16. External validation of the DHAKA score and comparison with the current IMCI algorithm for the assessment of dehydration in children with diarrhoea: a prospective cohort study.

    Science.gov (United States)

    Levine, Adam C; Glavis-Bloom, Justin; Modi, Payal; Nasrin, Sabiha; Atika, Bita; Rege, Soham; Robertson, Sarah; Schmid, Christopher H; Alam, Nur H

    2016-10-01

    Dehydration due to diarrhoea is a leading cause of child death worldwide, yet no clinical tools for assessing dehydration have been validated in resource-limited settings. The Dehydration: Assessing Kids Accurately (DHAKA) score was derived for assessing dehydration in children with diarrhoea in a low-income country setting. In this study, we aimed to externally validate the DHAKA score in a new population of children and compare its accuracy and reliability to the current Integrated Management of Childhood Illness (IMCI) algorithm. DHAKA was a prospective cohort study done in children younger than 60 months presenting to the International Centre for Diarrhoeal Disease Research, Bangladesh, with acute diarrhoea (defined by WHO as three or more loose stools per day for less than 14 days). Local nurses assessed children and classified their dehydration status using both the DHAKA score and the IMCI algorithm. Serial weights were obtained and dehydration status was established by percentage weight change with rehydration. We did regression analyses to validate the DHAKA score and compared the accuracy and reliability of the DHAKA score and IMCI algorithm with receiver operator characteristic (ROC) curves and the weighted κ statistic. This study was registered with ClinicalTrials.gov, number NCT02007733. Between March 22, 2015, and May 15, 2015, 496 patients were included in our primary analyses. On the basis of our criterion standard, 242 (49%) of 496 children had no dehydration, 184 (37%) of 496 had some dehydration, and 70 (14%) of 496 had severe dehydration. In multivariable regression analyses, each 1-point increase in the DHAKA score predicted an increase of 0·6% in the percentage dehydration of the child and increased the odds of both some and severe dehydration by a factor of 1·4. Both the accuracy and reliability of the DHAKA score were significantly greater than those of the IMCI algorithm. The DHAKA score is the first clinical tool for assessing
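    The abstract reports that each 1-point increase in the DHAKA score predicts a 0.6% increase in percentage dehydration and multiplies the odds of some or severe dehydration by 1.4. A small sketch applying those published coefficients; the intercept and the example scores are hypothetical placeholders, not values from the study:

      def predicted_dehydration_pct(dhaka_score, intercept=0.0):
          """Percentage dehydration predicted from the DHAKA score,
          using the 0.6% per point slope reported in the abstract.
          The intercept is a hypothetical placeholder."""
          return intercept + 0.6 * dhaka_score

      def odds_multiplier(dhaka_score):
          """Multiplicative change in the odds of (some or severe) dehydration
          relative to a score of 0, using the reported odds ratio of 1.4 per point."""
          return 1.4 ** dhaka_score

      for score in (0, 2, 4):
          print(score, predicted_dehydration_pct(score), round(odds_multiplier(score), 2))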

  17. An Algorithmic Approach to Total Breast Reconstruction with Free Tissue Transfer

    Directory of Open Access Journals (Sweden)

    Seong Cheol Yu

    2013-05-01

    Full Text Available As microvascular techniques continue to improve, perforator flap free tissue transfer is now the gold standard for autologous breast reconstruction. Various options are available for breast reconstruction with autologous tissue. These include the free transverse rectus abdominis myocutaneous (TRAM flap, deep inferior epigastric perforator flap, superficial inferior epigastric artery flap, superior gluteal artery perforator flap, and transverse/vertical upper gracilis flap. In addition, pedicled flaps can be very successful in the right hands and the right patient, such as the pedicled TRAM flap, latissimus dorsi flap, and thoracodorsal artery perforator. Each flap comes with its own advantages and disadvantages related to tissue properties and donor-site morbidity. Currently, the problem is how to determine the most appropriate flap for a particular patient among those potential candidates. Based on a thorough review of the literature and accumulated experiences in the author’s institution, this article provides a logical approach to autologous breast reconstruction. The algorithms presented here can be helpful to customize breast reconstruction to individual patient needs.

  18. Pattern analysis of total item score and item response of the Kessler Screening Scale for Psychological Distress (K6) in a nationally representative sample of US adults

    Directory of Open Access Journals (Sweden)

    Shinichiro Tomitaka

    2017-02-01

    Full Text Available Background Several recent studies have shown that total scores on depressive symptom measures in a general population approximate an exponential pattern except for the lower end of the distribution. Furthermore, we confirmed that the exponential pattern is present for the individual item responses on the Center for Epidemiologic Studies Depression Scale (CES-D). To confirm the reproducibility of such findings, we investigated the total score distribution and item responses of the Kessler Screening Scale for Psychological Distress (K6) in a nationally representative study. Methods Data were drawn from the National Survey of Midlife Development in the United States (MIDUS), which comprises four subsamples: (1) a national random digit dialing (RDD) sample, (2) oversamples from five metropolitan areas, (3) siblings of individuals from the RDD sample, and (4) a national RDD sample of twin pairs. K6 items are scored using a 5-point scale: “none of the time,” “a little of the time,” “some of the time,” “most of the time,” and “all of the time.” The pattern of the total score distribution and item responses was analyzed using graphical analysis and an exponential regression model. Results The total score distributions of the four subsamples exhibited an exponential pattern with similar rate parameters. The item responses of the K6 approximated a linear pattern from “a little of the time” to “all of the time” on log-normal scales, while the “none of the time” response was not related to this exponential pattern. Discussion The total score distribution and item responses of the K6 showed exponential patterns, consistent with other depressive symptom scales.
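    The exponential-regression idea described above can be illustrated with a short sketch: fit the logarithm of the frequency of each total score against the score and read off the rate parameter. The counts below are synthetic stand-ins, not MIDUS data:

      import numpy as np

      # Synthetic frequency of each K6 total score (lower end of the scale excluded),
      # generated from an exponential decay for illustration only.
      scores = np.arange(1, 25)
      counts = 5000 * np.exp(-0.25 * scores)

      # Exponential model N(s) = N0 * exp(-lam * s) is linear on a log scale:
      # log N(s) = log N0 - lam * s, so ordinary least squares recovers lam.
      slope, intercept = np.polyfit(scores, np.log(counts), 1)
      print("estimated rate parameter lambda = %.3f" % -slope)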

  19. Optical Algorithms at Satellite Wavelengths for Total Suspended Matter in Tropical Coastal Waters

    OpenAIRE

    Ouillon, Sylvain; Douillet, Pascal; Petrenko, Anne; Neveux, Jacques; Dupouy, Cécile; Froidefond, Jean-Marie; Andréfouët, Serge; Muñoz-Caravaca, Alain

    2008-01-01

    Is it possible to derive accurately Total Suspended Matter concentration or its proxy, turbidity, from remote sensing data in tropical coastal lagoon waters? To investigate this question, hyperspectral remote sensing reflectance, turbidity and chlorophyll pigment concentration were measured in three coral reef lagoons. The three sites enabled us to get data over very diverse environments: oligotrophic and sediment-poor waters in the southwest lagoon of New Caledonia, eutrophic waters in the C...

  20. Evaluating Carbonate System Algorithms in a Nearshore System: Does Total Alkalinity Matter?

    Science.gov (United States)

    Jones, Jonathan M; Sweet, Julia; Brzezinski, Mark A; McNair, Heather M; Passow, Uta

    2016-01-01

    Ocean acidification is a threat to many marine organisms, especially those that use calcium carbonate to form their shells and skeletons. The ability to accurately measure the carbonate system is the first step in characterizing the drivers behind this threat. Due to logistical realities, regular carbonate system sampling is not possible in many nearshore ocean habitats, particularly in remote, difficult-to-access locations. The ability to autonomously measure the carbonate system in situ relieves many of the logistical challenges; however, it is not always possible to measure the two required carbonate parameters autonomously. Observed relationships between sea surface salinity and total alkalinity can frequently provide a second carbonate parameter thus allowing for the calculation of the entire carbonate system. Here, we assessed the rigor of estimating total alkalinity from salinity at a depth sampling water from a pier in southern California for several carbonate system parameters. Carbonate system parameters based on measured values were compared with those based on estimated TA values. Total alkalinity was not predictable from salinity or from a combination of salinity and temperature at this site. However, dissolved inorganic carbon and the calcium carbonate saturation state of these nearshore surface waters could both be estimated within on average 5% of measured values using measured pH and salinity-derived or regionally averaged total alkalinity. Thus we find that the autonomous measurement of pH and salinity can be used to monitor trends in coastal changes in DIC and saturation state and be a useful method for high-frequency, long-term monitoring of ocean acidification.
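    A sketch of the salinity-to-alkalinity regression test described above, with hypothetical paired surface measurements; a low R² would indicate, as the authors found at this site, that total alkalinity is not predictable from salinity alone:

      import numpy as np

      # Hypothetical paired surface measurements (real data: pier samples).
      salinity = np.array([33.2, 33.4, 33.3, 33.5, 33.1, 33.6])
      total_alkalinity = np.array([2215.0, 2230.0, 2221.0, 2218.0, 2228.0, 2224.0])  # umol/kg

      slope, intercept = np.polyfit(salinity, total_alkalinity, 1)
      ta_hat = slope * salinity + intercept

      ss_res = np.sum((total_alkalinity - ta_hat) ** 2)
      ss_tot = np.sum((total_alkalinity - total_alkalinity.mean()) ** 2)
      r_squared = 1.0 - ss_res / ss_tot
      print("TA = %.1f * S + %.1f, R^2 = %.2f" % (slope, intercept, r_squared))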

  1. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  2. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...

  3. Validation of the total dysphagia risk score (TDRS) in head and neck cancer patients in a conventional and a partially accelerated radiotherapy scheme

    NARCIS (Netherlands)

    Nevens, Daan; Deschuymer, Sarah; Langendijk, Johannes A.; Daisne, Jean -Francois; Duprez, Frederic; De Neve, Wilfried; Nuyts, Sandra

    Background and purpose: A risk model, the total dysphagia risk score (TDRS), was developed to predict which patients are most at risk to develop grade >= 2 dysphagia at 6 months following radiotherapy (RT) for head and neck cancer. The purpose of this study was to validate this model at 6 months and

  4. Pharmacokinetic-pharmacodynamic modeling of antipsychotic drugs in patients with schizophrenia Part I : The use of PANSS total score and clinical utility

    NARCIS (Netherlands)

    Reddy, Venkatesh Pilla; Kozielska, Magdalena; Suleiman, Ahmed Abbas; Johnson, Martin; Vermeulen, An; Liu, Jing; de Greef, Rik; Groothuis, Geny M. M.; Danhof, Meindert; Proost, Johannes H.

    Background: To develop a pharmacokinetic-pharmacodynamic (PK-PD) model using individual-level data of Positive and Negative Syndrome Scale (PANSS) total score to characterize the antipsychotic drug effect taking into account the placebo effect and dropout rate. In addition, a clinical utility (CU)

  5. Achilles tendon Total Rupture Score at 3 months can predict patients' ability to return to sport 1 year after injury

    DEFF Research Database (Denmark)

    Hansen, Maria Swennergren; Christensen, Marianne; Budolfsen, Thomas

    2016-01-01

    PURPOSE: To investigate how the Achilles tendon Total Rupture Score (ATRS) at 3 months and 1 year after injury is associated with a patient's ability to return to work and sports as well as to investigate whether sex and age influence ATRS after 3 months and 1 year. METHOD: This is a retrospectiv...

  6. Complex versus Simple Modeling for DIF Detection: When the Intraclass Correlation Coefficient (ρ) of the Studied Item Is Less Than the ρ of the Total Score

    Science.gov (United States)

    Jin, Ying; Myers, Nicholas D.; Ahn, Soyeon

    2014-01-01

    Previous research has demonstrated that differential item functioning (DIF) methods that do not account for multilevel data structure could result in too frequent rejection of the null hypothesis (i.e., no DIF) when the intraclass correlation coefficient (ρ) of the studied item was the same as the ρ of the total score. The current study extended…

  7. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to Busing C as auxiliary rod. • move _disk (A, C);. (No + l)th disk is moved from A to C directly ...

  8. Dual Kidney Allocation Score: A Novel Algorithm Utilizing Expanded Donor Criteria for the Allocation of Dual Kidneys in Adults.

    Science.gov (United States)

    Johnson, Adam P; Price, Thea P; Lieby, Benjamin; Doria, Cataldo

    2016-09-08

    BACKGROUND Dual kidney transplantation (DKT) of expanded-criteria donors is a cost-intensive procedure that aims to increase the pool of available deceased organ donors and has demonstrated equivalent outcomes to expanded-criteria single kidney transplantation (eSKT). The objective of this study was to develop an allocation score based on predicted graft survival from historical dual and single kidney donors. MATERIAL AND METHODS We analyzed United Network for Organ Sharing (UNOS) data for 1547 DKT and 26 381 eSKT performed between January 1994 and September 2013. We utilized multivariable Cox regression to identify variables independently associated with graft survival in dual and single kidney transplantations. We then derived a weighted multivariable product score from calculated hazard ratios to model the benefit of transplantation as dual kidneys. RESULTS Of 36 donor variables known at the time of listing, 13 were significantly associated with graft survival. The derived dual allocation score demonstrated good internal validity with strong correlation to improved survival in dual kidney transplants. Donors with scores less than 2.1 transplanted as dual kidneys had a worsened median survival of 594 days (24%, p-value 0.031) and donors with scores greater than 3.9 had improved median survival of 1107 days (71%, p-value 0.002). There were 17 733 eSKT (67%) and 1051 DKT (67%) with scores in between these values and no differences in survival (p-values 0.676 and 0.185). CONCLUSIONS We have derived a dual kidney allocation score (DKAS) with good internal validity. Future prospective studies will be required to demonstrate external validity, but this score may help to standardize organ allocation for dual kidney transplantation.
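    The abstract describes deriving a weighted multivariable product score from the calculated hazard ratios. A minimal sketch of that construction; the hazard ratios and donor variables below are hypothetical placeholders rather than the 13 published predictors:

      # Hypothetical hazard ratios for a few donor characteristics
      # (placeholders; the published score uses 13 UNOS donor variables).
      hazard_ratios = {"age_over_60": 1.35, "diabetes": 1.25, "high_terminal_creatinine": 1.20}

      def product_score(donor):
          """Multiply the hazard ratios of the risk factors the donor carries,
          i.e. a weighted multivariable product score (exp of the summed log-HRs)."""
          score = 1.0
          for factor, hr in hazard_ratios.items():
              if donor.get(factor, False):
                  score *= hr
          return score

      donor = {"age_over_60": True, "diabetes": True}
      print("allocation score:", round(product_score(donor), 2))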

  9. Multi-objective ACO algorithms to minimise the makespan and the total rejection cost on BPMs with arbitrary job weights

    Science.gov (United States)

    Jia, Zhao-hong; Pei, Ming-li; Leung, Joseph Y.-T.

    2017-12-01

    In this paper, we investigate the batch-scheduling problem with rejection on parallel machines with non-identical job sizes and arbitrary job-rejected weights. If a job is rejected, the corresponding penalty has to be paid. Our objective is to minimise the makespan of the processed jobs and the total rejection cost of the rejected jobs. Based on the selected multi-objective optimisation approaches, two problems, P1 and P2, are considered. In P1, the two objectives are linearly combined into one single objective. In P2, the two objectives are simultaneously minimised and the Pareto non-dominated solution set is to be found. Based on the ant colony optimisation (ACO), two algorithms, called LACO and PACO, are proposed to address the two problems, respectively. Two different objective-oriented pheromone matrices and heuristic information are designed. Additionally, a local optimisation algorithm is adopted to improve the solution quality. Finally, simulated experiments are conducted, and the comparative results verify the effectiveness and efficiency of the proposed algorithms, especially on large-scale instances.
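    For problem P2 the two objectives are minimised simultaneously and a Pareto non-dominated set is returned. A short sketch of the non-dominance filter any such algorithm needs, with hypothetical (makespan, rejection cost) candidates:

      def pareto_front(solutions):
          """Return the non-dominated subset of (makespan, rejection_cost) pairs,
          where both objectives are to be minimised."""
          front = []
          for s in solutions:
              dominated = any(
                  o[0] <= s[0] and o[1] <= s[1] and o != s
                  for o in solutions
              )
              if not dominated:
                  front.append(s)
          return front

      candidates = [(120, 300), (110, 350), (130, 280), (125, 290), (110, 340)]
      print(pareto_front(candidates))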

  10. A Novel Ant Colony Algorithm for the Single-Machine Total Weighted Tardiness Problem with Sequence Dependent Setup Times

    Directory of Open Access Journals (Sweden)

    Fardin Ahmadizar

    2011-08-01

    Full Text Available This paper deals with the NP-hard single-machine total weighted tardiness problem with sequence dependent setup times. Incorporating fuzzy sets and genetic operators, a novel ant colony optimization algorithm is developed for the problem. In the proposed algorithm, artificial ants construct solutions as orders of jobs based on the heuristic information as well as pheromone trails. To calculate the heuristic information, three well-known priority rules are adopted as fuzzy sets and then aggregated. When all artificial ants have terminated their constructions, genetic operators such as crossover and mutation are applied to generate new regions of the solution space. A local search is then performed to improve the performance quality of some of the solutions found. Moreover, at run-time the pheromone trails are locally as well as globally updated, and limited between lower and upper bounds. The proposed algorithm is experimented on a set of benchmark problems from the literature and compared with other metaheuristics.

  11. Effect of Antihypertensive Therapy on SCORE-Estimated Total Cardiovascular Risk: Results from an Open-Label, Multinational Investigation—The POWER Survey

    Directory of Open Access Journals (Sweden)

    Guy De Backer

    2013-01-01

    Full Text Available Background. High blood pressure is a substantial risk factor for cardiovascular disease. Design & Methods. The Physicians' Observational Work on patient Education according to their vascular Risk (POWER survey was an open-label investigation of eprosartan-based therapy (EBT for control of high blood pressure in primary care centers in 16 countries. A prespecified element of this research was appraisal of the impact of EBT on estimated 10-year risk of a fatal cardiovascular event as determined by the Systematic Coronary Risk Evaluation (SCORE model. Results. SCORE estimates of CVD risk were obtained at baseline from 12,718 patients in 15 countries (6504 men and from 9577 patients at 6 months. During EBT mean (±SD systolic/diastolic blood pressures declined from 160.2 ± 13.7/94.1 ± 9.1 mmHg to 134.5 ± 11.2/81.4 ± 7.4 mmHg. This was accompanied by a 38% reduction in mean SCORE-estimated CVD risk and an improvement in SCORE risk classification of one category or more in 3506 patients (36.6%. Conclusion. Experience in POWER affirms that (a effective pharmacological control of blood pressure is feasible in the primary care setting and is accompanied by a reduction in total CVD risk and (b the SCORE instrument is effective in this setting for the monitoring of total CVD risk.

  12. Effect of Antihypertensive Therapy on SCORE-Estimated Total Cardiovascular Risk: Results from an Open-Label, Multinational Investigation—The POWER Survey

    Science.gov (United States)

    De Backer, Guy; Petrella, Robert J.; Goudev, Assen R.; Radaideh, Ghazi Ahmad; Rynkiewicz, Andrzej; Pathak, Atul

    2013-01-01

    Background. High blood pressure is a substantial risk factor for cardiovascular disease. Design & Methods. The Physicians' Observational Work on patient Education according to their vascular Risk (POWER) survey was an open-label investigation of eprosartan-based therapy (EBT) for control of high blood pressure in primary care centers in 16 countries. A prespecified element of this research was appraisal of the impact of EBT on estimated 10-year risk of a fatal cardiovascular event as determined by the Systematic Coronary Risk Evaluation (SCORE) model. Results. SCORE estimates of CVD risk were obtained at baseline from 12,718 patients in 15 countries (6504 men) and from 9577 patients at 6 months. During EBT mean (±SD) systolic/diastolic blood pressures declined from 160.2 ± 13.7/94.1 ± 9.1 mmHg to 134.5 ± 11.2/81.4 ± 7.4 mmHg. This was accompanied by a 38% reduction in mean SCORE-estimated CVD risk and an improvement in SCORE risk classification of one category or more in 3506 patients (36.6%). Conclusion. Experience in POWER affirms that (a) effective pharmacological control of blood pressure is feasible in the primary care setting and is accompanied by a reduction in total CVD risk and (b) the SCORE instrument is effective in this setting for the monitoring of total CVD risk. PMID:23997946

  13. Low-dose dual-energy cone-beam CT using a total-variation minimization algorithm

    International Nuclear Information System (INIS)

    Min, Jong Hwan

    2011-02-01

    Dual-energy cone-beam CT is an important imaging modality in diagnostic applications, and may also find use in other applications such as therapeutic image guidance. Despite its clinical value, the relatively high radiation dose of a dual-energy scan may pose a challenge to its wide use. In this work, we investigated a low-dose, pre-reconstruction type of dual-energy cone-beam CT (CBCT) using a total-variation minimization algorithm for image reconstruction. An empirical dual-energy calibration method was used to prepare material-specific projection data. Raw data at high and low tube voltages are converted into a set of basis functions which can be linearly combined to produce material-specific data using the coefficients obtained through the calibration process. From much fewer views than are conventionally used, material-specific images are reconstructed by use of the total-variation minimization algorithm. An experimental study was performed to demonstrate the feasibility of the proposed method using a micro-CT system. We have reconstructed images of the phantoms from only 90 projections acquired at tube voltages of 40 kVp and 90 kVp each. Aluminum-only and acryl-only images were successfully decomposed. We evaluated the quality of the reconstructed images by use of contrast-to-noise ratio and detectability. A low-dose dual-energy CBCT can be realized via the proposed method by greatly reducing the number of projections.
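    A toy sketch of the total-variation minimization step such reconstructions rely on: gradient descent on a smoothed isotropic TV term for a 2-D array. The projection/data-fidelity part of the CBCT reconstruction is omitted, and the image is random stand-in data:

      import numpy as np

      def tv_gradient(img, eps=1e-8):
          """Gradient of a smoothed isotropic total-variation term for a 2-D image."""
          dx = np.diff(img, axis=1, append=img[:, -1:])
          dy = np.diff(img, axis=0, append=img[-1:, :])
          norm = np.sqrt(dx ** 2 + dy ** 2 + eps)
          # TV gradient = -div(grad u / |grad u|); use backward differences for div.
          px, py = dx / norm, dy / norm
          div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
          return -div

      # Random stand-in for an intermediate reconstruction; plain TV-descent steps.
      img = np.random.default_rng(0).normal(size=(64, 64))
      for _ in range(50):
          img -= 0.1 * tv_gradient(img)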

  14. Standardized Total Average Toxicity Score: A Scale- and Grade-Independent Measure of Late Radiotherapy Toxicity to Facilitate Pooling of Data From Different Studies

    Energy Technology Data Exchange (ETDEWEB)

    Barnett, Gillian C., E-mail: gillbarnett@doctors.org.uk [University of Cambridge Department of Oncology, Oncology Centre, Cambridge (United Kingdom); Cancer Research-UK Centre for Genetic Epidemiology and Department of Oncology, Strangeways Research Laboratories, Cambridge (United Kingdom); West, Catharine M.L. [School of Cancer and Enabling Sciences, Manchester Academic Health Science Centre, University of Manchester, Christie Hospital, Manchester (United Kingdom); Coles, Charlotte E. [University of Cambridge Department of Oncology, Oncology Centre, Cambridge (United Kingdom); Pharoah, Paul D.P. [Cancer Research-UK Centre for Genetic Epidemiology and Department of Oncology, Strangeways Research Laboratories, Cambridge (United Kingdom); Talbot, Christopher J. [Department of Genetics, University of Leicester, Leicester (United Kingdom); Elliott, Rebecca M. [School of Cancer and Enabling Sciences, Manchester Academic Health Science Centre, University of Manchester, Christie Hospital, Manchester (United Kingdom); Tanteles, George A. [Department of Clinical Genetics, University Hospitals of Leicester, Leicester (United Kingdom); Symonds, R. Paul [Department of Cancer Studies and Molecular Medicine, University Hospitals of Leicester, Leicester (United Kingdom); Wilkinson, Jennifer S. [University of Cambridge Department of Oncology, Oncology Centre, Cambridge (United Kingdom); Dunning, Alison M. [Cancer Research-UK Centre for Genetic Epidemiology and Department of Oncology, Strangeways Research Laboratories, Cambridge (United Kingdom); Burnet, Neil G. [University of Cambridge Department of Oncology, Oncology Centre, Cambridge (United Kingdom); Bentzen, Soren M. [University of Wisconsin, School of Medicine and Public Health, Department of Human Oncology, Madison, WI (United States)

    2012-03-01

    Purpose: The search for clinical and biologic biomarkers associated with late radiotherapy toxicity is hindered by the use of multiple and different endpoints from a variety of scoring systems, hampering comparisons across studies and pooling of data. We propose a novel metric, the Standardized Total Average Toxicity (STAT) score, to try to overcome these difficulties. Methods and Materials: STAT scores were derived for 1010 patients from the Cambridge breast intensity-modulated radiotherapy trial and 493 women from University Hospitals of Leicester. The sensitivity of the STAT score to detect differences between patient groups, stratified by factors known to influence late toxicity, was compared with that of individual endpoints. Analysis of residuals was used to quantify the effect of these covariates. Results: In the Cambridge cohort, STAT scores detected differences (p < 0.00005) between patients attributable to breast volume, surgical specimen weight, dosimetry, acute toxicity, radiation boost to tumor bed, postoperative infection, and smoking (p < 0.0002), with no loss of sensitivity over individual toxicity endpoints. Diabetes (p = 0.017), poor postoperative surgical cosmesis (p = 0.0036), use of chemotherapy (p = 0.0054), and increasing age (p = 0.041) were also associated with increased STAT score. When the Cambridge and Leicester datasets were combined, STAT was associated with smoking status (p < 0.00005), diabetes (p = 0.041), chemotherapy (p = 0.0008), and radiotherapy boost (p = 0.0001). STAT was independent of the toxicity scale used and was able to deal with missing data. There were correlations between residuals of the STAT score obtained using different toxicity scales (r > 0.86, p < 0.00005 for both datasets). Conclusions: The STAT score may be used to facilitate the analysis of overall late radiation toxicity, from multiple trials or centers, in studies of possible genetic and nongenetic determinants of radiotherapy toxicity.
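    STAT is described as a scale-independent standardization of individual late-toxicity endpoints that can then be averaged per patient. A sketch of that general idea (z-score each endpoint across patients, then average the endpoints available for each patient); this illustrates the concept only and is not the published formula:

      import numpy as np

      # Rows = patients, columns = late-toxicity endpoints on different scales;
      # np.nan marks missing data. All values are hypothetical.
      grades = np.array([
          [1.0, 2.0, np.nan],
          [0.0, 1.0, 1.0],
          [2.0, 3.0, 2.0],
          [1.0, np.nan, 0.0],
      ])

      # Standardise each endpoint across patients (ignoring missing values) ...
      mean = np.nanmean(grades, axis=0)
      std = np.nanstd(grades, axis=0)
      z = (grades - mean) / std

      # ... then average each patient's available standardised endpoints.
      stat_like_score = np.nanmean(z, axis=1)
      print(stat_like_score)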

  15. An Enhanced Discrete Artificial Bee Colony Algorithm to Minimize the Total Flow Time in Permutation Flow Shop Scheduling with Limited Buffers

    Directory of Open Access Journals (Sweden)

    Guanlong Deng

    2016-01-01

    Full Text Available This paper presents an enhanced discrete artificial bee colony algorithm for minimizing the total flow time in the flow shop scheduling problem with buffer capacity. First, the solution in the algorithm is represented as discrete job permutation to directly convert to active schedule. Then, we present a simple and effective scheme called best insertion for the employed bee and onlooker bee and introduce a combined local search exploring both insertion and swap neighborhood. To validate the performance of the presented algorithm, a computational campaign is carried out on the Taillard benchmark instances, and computations and comparisons show that the proposed algorithm is not only capable of solving the benchmark set better than the existing discrete differential evolution algorithm and iterated greedy algorithm, but also capable of performing better than two recently proposed discrete artificial bee colony algorithms.

  16. Near infrared system coupled chemometric algorithms for enumeration of total fungi count in cocoa beans neat solution.

    Science.gov (United States)

    Kutsanedzie, Felix Y H; Chen, Quansheng; Hassan, Md Mehedi; Yang, Mingxiu; Sun, Hao; Rahman, Md Hafizur

    2018-02-01

    Total fungi count (TFC) is a quality indicator of cocoa beans which, when unmonitored, leads to quality and safety problems. Fourier transform near infrared spectroscopy (FT-NIRS) combined with chemometric algorithms like partial least squares (PLS); synergy interval-PLS (Si-PLS); synergy interval-genetic algorithm-PLS (Si-GAPLS); ant colony optimization-PLS (ACO-PLS) and competitive-adaptive reweighted sampling-PLS (CARS-PLS) was employed to predict TFC in cocoa beans neat solution. Model results were evaluated using the correlation coefficients of the prediction (Rp) and calibration (Rc); root mean square error of prediction (RMSEP), and the ratio of sample standard deviation to RMSEP (RPD). The developed models' performance yielded 0.951≤Rp≤0.975; and 3.15≤RPD≤4.32. The models' prediction stability improved in the order of PLS
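    The model statistics quoted above (Rp, RMSEP, and RPD, the ratio of the sample standard deviation to RMSEP) follow directly from predicted and measured values. A small sketch with hypothetical TFC data:

      import numpy as np

      # Hypothetical measured vs predicted total fungi counts (e.g. log CFU/g).
      measured  = np.array([3.1, 3.8, 4.2, 4.9, 5.5, 6.0])
      predicted = np.array([3.0, 3.9, 4.1, 5.0, 5.4, 6.2])

      rp    = np.corrcoef(measured, predicted)[0, 1]          # correlation of prediction
      rmsep = np.sqrt(np.mean((measured - predicted) ** 2))   # root mean square error of prediction
      rpd   = np.std(measured, ddof=1) / rmsep                # ratio of sample SD to RMSEP
      print("Rp=%.3f RMSEP=%.3f RPD=%.2f" % (rp, rmsep, rpd))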

  17. Normed kernel function-based fuzzy possibilistic C-means (NKFPCM) algorithm for high-dimensional breast cancer database classification with feature selection is based on Laplacian Score

    Science.gov (United States)

    Lestari, A. W.; Rustam, Z.

    2017-07-01

    In the last decade, breast cancer has become the focus of world attention, as this disease is one of the primary leading causes of death for women. Therefore, it is necessary to have the correct precautions and treatment. In previous studies, the Fuzzy Kernel K-Medoid algorithm has been used for multi-class data. This paper proposes an algorithm to classify the high-dimensional data of breast cancer using Fuzzy Possibilistic C-means (FPCM) and a new method based on clustering analysis using Normed Kernel Function-Based Fuzzy Possibilistic C-Means (NKFPCM). The objective of this paper is to obtain the best accuracy in classification of breast cancer data. In order to improve the accuracy of the two methods, the candidate features are evaluated using feature selection, where the Laplacian Score is used. The results compare the accuracy and running time of FPCM and NKFPCM with and without feature selection.
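    Features are screened with the Laplacian Score before clustering. A compact sketch of the standard Laplacian Score computation on a heat-kernel nearest-neighbour graph (lower scores indicate features that better preserve local structure); the data matrix is random stand-in data and k and t are illustrative settings:

      import numpy as np

      def laplacian_score(X, k=5, t=1.0):
          """Laplacian Score of each column (feature) of X (n_samples x n_features)."""
          n = X.shape[0]
          d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)   # pairwise squared distances
          # k-nearest-neighbour graph with heat-kernel weights.
          S = np.zeros((n, n))
          for i in range(n):
              nn = np.argsort(d2[i])[1:k + 1]
              S[i, nn] = np.exp(-d2[i, nn] / t)
          S = np.maximum(S, S.T)                                      # symmetrise
          D = np.diag(S.sum(axis=1))
          L = D - S
          d = np.diag(D)
          scores = np.empty(X.shape[1])
          for r in range(X.shape[1]):
              f = X[:, r]
              f_tilde = f - (f @ d) / d.sum()                         # remove the trivial component
              scores[r] = (f_tilde @ L @ f_tilde) / (f_tilde @ D @ f_tilde)
          return scores

      X = np.random.default_rng(1).normal(size=(40, 6))
      print(laplacian_score(X))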

  18. Quantifying the impact of using Coronary Artery Calcium Score for risk categorization instead of Framingham Score or European Heart SCORE in lipid lowering algorithms in a Middle Eastern population.

    Science.gov (United States)

    Isma'eel, Hussain A; Almedawar, Mohamad M; Harbieh, Bernard; Alajaji, Wissam; Al-Shaar, Laila; Hourani, Mukbil; El-Merhi, Fadi; Alam, Samir; Abchee, Antoine

    2015-10-01

    The use of the Coronary Artery Calcium Score (CACS) for risk categorization instead of the Framingham Risk Score (FRS) or European Heart SCORE (EHS) to improve classification of individuals is well documented. However, the impact of reclassifying individuals using CACS on initiating lipid lowering therapy is not well understood. We aimed to determine the percentage of individuals not requiring lipid lowering therapy as per the FRS and EHS models but are found to require it using CACS and vice versa; and to determine the level of agreement between CACS, FRS and EHS based models. Data was collected for 500 consecutive patients who had already undergone CACS. However, only 242 patients met the inclusion criteria and were included in the analysis. Risk stratification comparisons were conducted according to CACS, FRS, and EHS, and the agreement (Kappa) between them was calculated. In accordance with the models, 79.7% to 81.5% of high-risk individuals were down-classified by CACS, while 6.8% to 7.6% of individuals at intermediate risk were up-classified to high risk by CACS, with slight to moderate agreement. Moreover, CACS recommended treatment to 5.7% and 5.8% of subjects untreated according to European and Canadian guidelines, respectively; whereas 75.2% to 81.2% of those treated in line with the guidelines would not be treated based on CACS. In this simulation, using CACS for risk categorization warrants lipid lowering treatment for 5-6% and spares 70-80% from treatment in accordance with the guidelines. Current strong evidence from double randomized clinical trials is in support of guideline recommendations. Our results call for a prospective trial to explore the benefits/risks of a CACS-based approach before any recommendations can be made.
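    The agreement between risk categorisations is summarised with kappa. A minimal sketch of unweighted Cohen's kappa for two categorical classifications; the example assignments are hypothetical:

      import numpy as np

      def cohens_kappa(rater_a, rater_b, categories):
          """Unweighted Cohen's kappa between two categorical ratings."""
          n = len(rater_a)
          idx = {c: i for i, c in enumerate(categories)}
          table = np.zeros((len(categories), len(categories)))
          for a, b in zip(rater_a, rater_b):
              table[idx[a], idx[b]] += 1
          po = np.trace(table) / n                                    # observed agreement
          pe = np.sum(table.sum(axis=0) * table.sum(axis=1)) / n**2   # chance agreement
          return (po - pe) / (1 - pe)

      # Hypothetical risk categories assigned by FRS and by CACS for the same patients.
      frs  = ["low", "intermediate", "high", "high", "low", "intermediate"]
      cacs = ["low", "low", "intermediate", "high", "low", "intermediate"]
      print(round(cohens_kappa(frs, cacs, ["low", "intermediate", "high"]), 2))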

  19. Intra- and inter-rater reliability of the Knee Society Knee Score when used by two physiotherapists in patients post total knee arthroplasty

    Directory of Open Access Journals (Sweden)

    S. Gopal

    2010-01-01

    Full Text Available Background and Purpose: It has yet to be shown whether routine physiotherapy plays a role in the rehabilitation of patients post total knee arthroplasty (Rajan et al 2004). Physiotherapists should be using valid outcome measures to provide evidence of the benefit of their intervention. The aim of this study was to establish the intra- and inter-rater reliability of the Knee Society Knee Score, a scoring system developed by Insall et al (1989). The Knee Society Knee Score can be used to assess the integrity of the knee joint of patients undergoing total knee arthroplasty. Since the score involves clinical testing, the intra-rater reliability of the clinician should be established prior to using the scores as data in clinical research. Where multiple clinicians are involved, inter-rater reliability should also be established. Design: This was a correlation study. Subjects: A sample of thirty patients post total knee arthroplasty attending the arthroplasty clinic at Johannesburg Hospital between six weeks and twelve months postoperatively. Method: Recruited patients were evaluated twice with a time interval of one hour between each assessment. Statistical Analysis: The intra- and inter-rater reliability were estimated using the Intraclass Correlation Coefficient (ICC). Results: The intra-rater reliability was excellent (ICC = 0.95) for Examiner A and good (ICC = 0.71) for Examiner B. The inter-rater reliability was moderate (ICC = 0.67) during test one and (ICC = 0.66) during test two. Conclusion: The KSKS has good intra-rater reliability when tested within a period of one hour. The KSKS demonstrated moderate agreement for inter-rater reliability.

  20. A 15-year review of midface reconstruction after total and subtotal maxillectomy: part I. Algorithm and outcomes.

    Science.gov (United States)

    Cordeiro, Peter G; Chen, Constance M

    2012-01-01

    Reconstruction of complex midfacial defects is best approached with a clear algorithm. The goals of reconstruction are functional and aesthetic. Over a 15-year period (1992 to 2006), a single surgeon (P.G.C.) performed 100 flaps to reconstruct the following midfacial defects: type I, limited maxillectomy (n = 20); type IIA, subtotal maxillectomy with resection of less than 50 percent of the palate (n = 8); type IIB, subtotal maxillectomy with resection of greater than 50 percent of the palate (n = 8); type IIIA, total maxillectomy with preservation of the orbital contents (n = 22); type IIIB, total maxillectomy with orbital exenteration (n = 23); and type IV, orbitomaxillectomy (n = 19). Free flaps were used in 94 cases (94 percent), and pedicled flaps were used in six (6 percent). One hundred flaps were performed in 96 patients (69 males, 72 percent; 27 females, 28 percent); four patients underwent a second flap reconstruction due to recurrent disease (n = 4, 4 percent). Average patient age was 49.2 years (range, 13 to 81 years). Free-flap survival was 100 percent, with one partial flap loss (1 percent). Five patients suffered systemic complications (5.2 percent), and four died within 30 days of hospitalization (4.2 percent). Over 50 percent of patients returned to normal diet and speech. Almost 60 percent were judged to have an excellent aesthetic result. Free-tissue transfer offers the most effective and reliable form of reconstruction for complex maxillectomy defects. Rectus abdominis and radial forearm free flaps in combination with immediate bone grafting or as osteocutaneous flaps consistently provide the best functional and aesthetic results. Therapeutic, IV.

  1. An empirical study using range of motion and pain score as determinants for continuous passive motion: outcomes following total knee replacement surgery in an adult population.

    Science.gov (United States)

    Tabor, Danielle

    2013-01-01

    The continuous passive motion (CPM) machine is one means by which to rehabilitate the knee after total knee replacement surgery. This study sought to determine which total knee replacement patients, if any, benefit from the use of the CPM machine. For the study period, most patients received active physical therapy. Patients were placed in the CPM machine if, on postoperative day 1, they had a range of motion less than or equal to 45° and/or pain score of 8 or greater on a numeric rating scale of 0-10, 0 being no pain and 10 being the worst pain. Both groups of patients healed at similar rates. The incidence of adverse events, length of stay, and functional outcomes was comparable between groups. Given the demonstrated lack of relative benefit to the patient and the cost of the CPM, this study supported discontinuing the routine use of the CPM.

  2. Test-retest reliability at the item level and total score level of the Norwegian version of the Spinal Cord Injury Falls Concern Scale (SCI-FCS).

    Science.gov (United States)

    Roaldsen, Kirsti Skavberg; Måøy, Åsa Blad; Jørgensen, Vivien; Stanghelle, Johan Kvalvik

    2016-05-01

    Translation of the Spinal Cord Injury Falls Concern Scale (SCI-FCS), and investigation of test-retest reliability on item-level and total-score-level. Translation, adaptation and test-retest study. A specialized rehabilitation setting in Norway. Fifty-four wheelchair users with a spinal cord injury. The median age of the cohort was 49 years, and the median number of years after injury was 13. Interventions/measurements: The SCI-FCS was translated and back-translated according to guidelines. Individuals answered the SCI-FCS twice over the course of one week. We investigated item-level test-retest reliability using Svensson's rank-based statistical method for disagreement analysis of paired ordinal data. For relative reliability, we analyzed the total-score-level test-retest reliability with intraclass correlation coefficients (ICC2.1), the standard error of measurement (SEM), and the smallest detectable change (SDC) for absolute reliability/measurement-error assessment and Cronbach's alpha for internal consistency. All items showed satisfactory percentage agreement (≥69%) between test and retest. There were small but non-negligible systematic disagreements among three items; we recovered an 11-13% higher chance for a lower second score. There was no disagreement due to random variance. The test-retest agreement (ICC2.1) was excellent (0.83). The SEM was 2.6 (12%), and the SDC was 7.1 (32%). The Cronbach's alpha was high (0.88). The Norwegian SCI-FCS is highly reliable for wheelchair users with chronic spinal cord injuries.
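    The absolute-reliability figures above follow from the usual formulas SEM = SD * sqrt(1 - ICC) and SDC = 1.96 * sqrt(2) * SEM. A small worked check against the reported SEM of 2.6 points:

      import math

      def sem_from_sd(sd, icc):
          """Standard error of measurement from the between-subject SD and the ICC."""
          return sd * math.sqrt(1.0 - icc)

      def sdc_from_sem(sem):
          """Smallest detectable change at the 95% level for a test-retest design."""
          return 1.96 * math.sqrt(2.0) * sem

      # Check against the values reported above: SEM = 2.6 points gives SDC ~ 7.2,
      # close to the reported SDC of 7.1.
      print(round(sdc_from_sem(2.6), 1))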

  3. Cross-cultural adaptation and validation of the Japanese version of the new Knee Society Scoring System for osteoarthritic knee with total knee arthroplasty.

    Science.gov (United States)

    Hamamoto, Yosuke; Ito, Hiromu; Furu, Moritoshi; Ishikawa, Masahiro; Azukizawa, Masayuki; Kuriyama, Shinichi; Nakamura, Shinichiro; Matsuda, Shuichi

    2015-09-01

    The purposes of this study were to translate the new Knee Society Score (KSS) into Japanese and to evaluate the construct and content validity, test-retest reliability, and internal consistency of the Japanese version of the new KSS. The Japanese version of the KSS was developed according to cross-cultural guidelines by using the "translation-back translation" method to ensure content validity. KSS data were then obtained from patients who had undergone total knee arthroplasty (TKA). The psychometric properties evaluated were as follows: for feasibility, response rate, and floor and ceiling effects; for construct validity, internal consistency using Cronbach's alpha, and correlations with quality of life. Construct validity was evaluated by using Spearman's correlation coefficient to quantify the correlation between the KSS and the Japanese version of the Oxford 12-item Knee Score or Short Form 36 Health Survey (SF-36) questionnaires. The Japanese version of the KSS was sent to 93 consecutive osteoarthritic patients who underwent primary TKA in our institution. Fifty-five patients completed the questionnaires and were included in this study. Neither a floor nor ceiling effect was observed. The reliability proved excellent in the majority of domains, with intraclass correlation coefficients of 0.65-0.88. Internal consistency, assessed by Cronbach's alpha, was good to excellent for all domains (0.78-0.94). All of the four domains of the KSS correlated significantly with the Oxford 12-item Knee Score. The activity and satisfaction domains of the KSS correlated significantly with all and the majority of subscales of the SF-36, respectively, whereas symptoms and expectation domains showed significant correlations only with bodily pain and vitality subscales and with the physical function, bodily pain, and vitality subscales, respectively. The Japanese version of the new KSS is a valid, reliable, and responsive instrument to capture subjective aspects of the functional

  4. Comparisons of American, Israeli, Italian and Mexican physicians and nurses on the total and factor scores of the Jefferson scale of attitudes toward physician-nurse collaborative relationships.

    Science.gov (United States)

    Hojat, Mohammadreza; Gonnella, Joseph S; Nasca, Thomas J; Fields, Sylvia K; Cicchetti, Americo; Lo Scalzo, Alessandra; Taroni, Francesco; Amicosante, Anna Maria Vincenza; Macinati, Manuela; Tangucci, Massimo; Liva, Carlo; Ricciardi, Gualtiero; Eidelman, Shmuel; Admi, Hanna; Geva, Hana; Mashiach, Tanya; Alroy, Gideon; Alcorta-Gonzalez, Adelina; Ibarra, David; Torres-Ruiz, Antonio

    2003-05-01

    This cross-cultural study was designed to compare the attitudes of physicians and nurses toward physician-nurse collaboration in the United States, Israel, Italy and Mexico. Total participants were 2522 physicians and nurses who completed the Jefferson Scale of Attitudes Toward Physician-Nurse Collaboration (15 Likert-type items; Hojat et al., Evaluation and the Health Professions 22 (1999a) 208; Nursing Research 50 (2001) 123). They were compared on the total scores and four factors of the Jefferson Scale (shared education and team work, caring as opposed to curing, nurses' autonomy, physicians' dominance). Results showed inter- and intra-cultural similarities and differences among the study groups, providing support for the social role theory (Hardy and Conway, Role Theory: Perspectives for Health Professionals, Appelton-Century-Crofts, New York, 1978) and the principle of least interest (Waller and Hill, The Family: A Dynamic Interpretation, Dryden, New York, 1951) in inter-professional relationships. Implications for promoting physician-nurse education and inter-professional collaboration are discussed.

  5. Associations between preoperative Oxford hip and knee scores and costs and quality of life of patients undergoing primary total joint replacement in the NHS England: an observational study.

    Science.gov (United States)

    Eibich, Peter; Dakin, Helen A; Price, Andrew James; Beard, David; Arden, Nigel K; Gray, Alastair M

    2018-04-10

    To assess how costs and quality of life (measured by EuroQoL-5 Dimensions (EQ-5D)) before and after total hip replacement (THR) and total knee replacement (TKR) vary with age, gender and preoperative Oxford hip score (OHS) and Oxford knee score (OKS). Regression analyses using prospectively collected data from clinical trials, cohort studies and administrative data bases. UK secondary care. Men and women undergoing primary THR or TKR. The Hospital Episode Statistics data linked to patient-reported outcome measures included 602 176 patients undergoing hip or knee replacement who were followed up for up to 6 years. The Knee Arthroplasty Trial included 2217 patients undergoing TKR who were followed up for 12 years. The Clinical Outcomes in Arthroplasty Study cohort included 806 patients undergoing THR and 484 patients undergoing TKR who were observed for 1 year. EQ-5D-3L quality of life before and after surgery, costs of primary arthroplasty, costs of revision arthroplasty and the costs of hospital readmissions and ambulatory costs in the year before and up to 12 years after joint replacement. Average postoperative utility for patients at the 5th percentile of the OHS/OKS distribution was 0.61/0.5 for THR/TKR and 0.89/0.85 for patients at the 95th percentile. The difference between postoperative and preoperative EQ-5D utility was highest for patients with preoperative OHS/OKS lower than 10. However, postoperative EQ-5D utility was higher than preoperative utility for all patients with OHS≤46 and those with OKS≤44. In contrast, costs were generally higher for patients with low preoperative OHS/OKS than those with high OHS/OKS. For example, costs of hospital readmissions within 12 months after primary THR/TKR were £740/£888 for patients at the 5th percentile compared with £314/£404 at the 95th percentile of the OHS/OKS distribution. Our findings suggest that costs and quality of life associated with total joint replacement vary systematically with

  6. Effects of aggregation of drug and diagnostic codes on the performance of the high-dimensional propensity score algorithm: an empirical example.

    Science.gov (United States)

    Le, Hoa V; Poole, Charles; Brookhart, M Alan; Schoenbach, Victor J; Beach, Kathleen J; Layton, J Bradley; Stürmer, Til

    2013-11-19

    The High-Dimensional Propensity Score (hd-PS) algorithm can select and adjust for baseline confounders of treatment-outcome associations in pharmacoepidemiologic studies that use healthcare claims data. How hd-PS performance is affected by aggregating medications or medical diagnoses has not been assessed. We evaluated the effects of aggregating medications or diagnoses on hd-PS performance in an empirical example using resampled cohorts with small sample size, rare outcome incidence, or low exposure prevalence. In a cohort study comparing the risk of upper gastrointestinal complications in celecoxib or traditional NSAIDs (diclofenac, ibuprofen) initiators with rheumatoid arthritis and osteoarthritis, we (1) aggregated medications and International Classification of Diseases-9 (ICD-9) diagnoses into hierarchies of the Anatomical Therapeutic Chemical classification (ATC) and the Clinical Classification Software (CCS), respectively, and (2) sampled the full cohort using techniques validated by simulations to create 9,600 samples to compare 16 aggregation scenarios across 50% and 20% samples with varying outcome incidence and exposure prevalence. We applied hd-PS to estimate relative risks (RR) using 5 dimensions, predefined confounders, ≤ 500 hd-PS covariates, and propensity score deciles. For each scenario, we calculated: (1) the geometric mean RR; (2) the difference between the scenario mean ln(RR) and the ln(RR) from published randomized controlled trials (RCT); and (3) the proportional difference in the degree of estimated confounding between that scenario and the base scenario (no aggregation). Compared with the base scenario, aggregations of medications into ATC level 4 alone or in combination with aggregation of diagnoses into CCS level 1 improved the hd-PS confounding adjustment in most scenarios, reducing residual confounding compared with the RCT findings by up to 19%. Aggregation of codes using hierarchical coding systems may improve the performance of
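    A sketch of the aggregation step described above: truncating full ATC codes to level 4 so that related drug codes collapse into a single hd-PS covariate. It assumes the usual convention that ATC level 4 corresponds to the first five characters; the claim codes are illustrative:

      def to_atc_level4(atc_code):
          """Collapse a full (level 5) ATC code to its level 4 class,
          i.e. the first five characters (e.g. 'M01AB05' -> 'M01AB')."""
          return atc_code[:5]

      # Illustrative NSAID codes (diclofenac, ibuprofen, celecoxib); aggregation
      # reduces granularity by grouping drugs within their chemical subgroup.
      claims = ["M01AB05", "M01AE01", "M01AH01", "M01AB05"]
      aggregated = sorted({to_atc_level4(code) for code in claims})
      print(aggregated)   # ['M01AB', 'M01AE', 'M01AH']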

  7. Prosthetic alignment after total knee replacement is not associated with dissatisfaction or change in Oxford Knee Score: A multivariable regression analysis.

    Science.gov (United States)

    Huijbregts, Henricus J T A M; Khan, Riaz J K; Fick, Daniel P; Jarrett, Olivia M; Haebich, Samantha

    2016-06-01

    Approximately 18% of the patients are dissatisfied with the result of total knee replacement. However, the relation between dissatisfaction and prosthetic alignment has not been investigated before. We retrospectively analysed prospectively gathered data of all patients who had a primary TKR, preoperative and one-year postoperative Oxford Knee Scores (OKS) and postoperative computed tomography (CT). The CT protocol measures hip-knee-ankle (HKA) angle, and coronal, sagittal and axial component alignment. Satisfaction was defined using a five-item Likert scale. We dichotomised dissatisfaction by combining '(very) dissatisfied' and 'neutral/not sure'. Associations with dissatisfaction and change in OKS were calculated using multivariable logistic and linear regression models. 230 TKRs were implanted in 105 men and 106 women. At one year, 12% were (very) dissatisfied and 10% neutral. Coronal alignment of the femoral component was 0.5 degrees more accurate in patients who were satisfied at one year. The other alignment measurements were not different between satisfied and dissatisfied patients. All radiographic measurements had a P-value>0.10 on univariate analyses. At one year, dissatisfaction was associated with the three-months OKS. Change in OKS was associated with three-months OKS, preoperative physical SF-12, preoperative pain and cruciate retaining design. Neither mechanical axis, nor component alignment, is associated with dissatisfaction at one year following TKR. Patients get the best outcome when pain reduction and function improvement are optimal during the first three months and when the indication to embark on surgery is based on physical limitations rather than on a high pain score. 2. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Continued Inpatient Care After Primary Total Knee Arthroplasty Increases 30-Day Post-Discharge Complications: A Propensity Score-Adjusted Analysis.

    Science.gov (United States)

    McLawhorn, Alexander S; Fu, Michael C; Schairer, William W; Sculco, Peter K; MacLean, Catherine H; Padgett, Douglas E

    2017-09-01

    Discharge destination, either home or skilled care facility, after total knee arthroplasty (TKA) may be associated with significant variation in postacute care outcomes. The purpose of this study was to characterize the 30-day postdischarge outcomes after primary TKA relative to discharge destination. All primary unilateral TKAs performed for osteoarthritis from 2011-2014 were identified in the National Surgical Quality Improvement Program database. Propensity scores based on predischarge characteristics were used to adjust for selection bias in discharge destination. Propensity-adjusted multivariable logistic regressions were used to examine associations between discharge destination and postdischarge complications. Among 101,256 primary TKAs identified, 70,628 were discharged home and 30,628 to skilled care facilities. Patients discharged to facilities were more frequently were female, older, higher body mass index class, higher Charlson comorbidity index and American Society of Anesthesiologists scores, had predischarge complications, received general anesthesia, and classified as nonindependent preoperatively. Propensity adjustment accounted for this selection bias. Patients discharged to skilled care facilities after TKA had higher odds of any major complication (odds ratio = 1.25; 95% confidence interval, 1.13-1.37) and readmission (odds ratio = 1.81; 95% confidence interval, 1.50-2.18). Skilled care was associated with increased odds for respiratory, septic, thromboembolic, and urinary complications. Associations with death, cardiac, and wound complications were not significant. After controlling for predischarge characteristics, discharge to skilled care facilities vs home after primary TKA is associated with higher odds of numerous complications and unplanned readmission. These results support coordination of care pathways to facilitate home discharge after hospitalization for TKA whenever possible. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Preliminary application of a novel algorithm to monitor changes in pre-flight total peripheral resistance for prediction of post-flight orthostatic intolerance in astronauts

    Science.gov (United States)

    Arai, Tatsuya; Lee, Kichang; Stenger, Michael B.; Platts, Steven H.; Meck, Janice V.; Cohen, Richard J.

    2011-04-01

    Orthostatic intolerance (OI) is a significant challenge for astronauts after long-duration spaceflight. Depending on flight duration, 20-80% of astronauts suffer from post-flight OI, which is associated with reduced vascular resistance. This paper introduces a novel algorithm for continuously monitoring changes in total peripheral resistance (TPR) by processing the peripheral arterial blood pressure (ABP). To validate, we applied our novel mathematical algorithm to the pre-flight ABP data previously recorded from twelve astronauts ten days before launch. The TPR changes were calculated by our algorithm and compared with the TPR value estimated using cardiac output/heart rate before and after phenylephrine administration. The astronauts in the post-flight presyncopal group had lower pre-flight TPR changes (1.66 times) than those in the non-presyncopal group (2.15 times). The trend in TPR changes calculated with our algorithm agreed with the TPR trend calculated using measured cardiac output in the previous study. Further data collection and algorithm refinement are needed for pre-flight detection of OI and monitoring of continuous TPR by analysis of peripheral arterial blood pressure.
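    The paper's own algorithm estimates TPR changes from the peripheral arterial pressure waveform alone; as a point of reference, the conventional estimate it is compared against is TPR ≈ mean arterial pressure / cardiac output. A sketch of that reference calculation with hypothetical values (central venous pressure neglected):

      def mean_arterial_pressure(systolic, diastolic):
          """Common approximation: MAP = DBP + (SBP - DBP) / 3."""
          return diastolic + (systolic - diastolic) / 3.0

      def total_peripheral_resistance(map_mmhg, cardiac_output_l_min):
          """TPR in mmHg*min/L, neglecting central venous pressure."""
          return map_mmhg / cardiac_output_l_min

      co = 70 * 0.07            # heart rate (bpm) x stroke volume (L) -> ~4.9 L/min
      map_ = mean_arterial_pressure(120, 80)
      print(round(total_peripheral_resistance(map_, co), 1))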

  10. Association between diet-quality scores, adiposity, total cholesterol and markers of nutritional status in european adults: Findings from the Food4Me study

    NARCIS (Netherlands)

    Fallaize, R.; Livingstone, K.M.; Celis-Morales, C.; Macready, A.L.; San-Cristobal, R.; Navas-Carretero, S.; Marsaux, C.F.M.; O’Donovan, C.B.; Kolossa, S.; Moschonis, G.; Walsh, M.C.; Gibney, E.R.; Brennan, L.; Bouwman, J.; Manios, Y.; Jarosz, M.; Martinez, J.A.; Daniel, H.; Saris, W.H.M.; Gundersen, T.E.; Drevon, C.A.; Gibney, M.J.; Mathers, J.C.; Lovegrove, J.A.

    2018-01-01

    Diet-quality scores (DQS), which are developed across the globe, are used to define adherence to specific eating patterns and have been associated with risk of coronary heart disease and type-II diabetes. We explored the association between five diet-quality scores (Healthy Eating Index, HEI;

  11. Prospective associations of C-reactive protein (CRP) levels and CRP genetic risk scores with risk of total knee and hip replacement for osteoarthritis in a diverse cohort.

    Science.gov (United States)

    Shadyab, A H; Terkeltaub, R; Kooperberg, C; Reiner, A; Eaton, C B; Jackson, R D; Krok-Schoen, J L; Salem, R M; LaCroix, A Z

    2018-05-22

    To examine associations of high-sensitivity C-reactive protein (CRP) levels and polygenic CRP genetic risk scores (GRS) with risk of end-stage hip or knee osteoarthritis (OA), defined as incident total hip (THR) or knee replacement (TKR) for OA. This study included a cohort of postmenopausal white, African American, and Hispanic women from the Women's Health Initiative. Women were followed from baseline to date of THR or TKR, death, or December 31, 2014. Medicare claims data identified THR and TKR. Hs-CRP and genotyping data were collected at baseline. Three CRP GRS were constructed: 1) a 4-SNP GRS comprised of genetic variants representing variation in the CRP gene among European populations; 2) a multilocus 18-SNP GRS of genetic variants significantly associated with CRP levels in a meta-analysis of genome-wide association studies; and 3) a 5-SNP GRS of genetic variants significantly associated with CRP levels among African American women. In analyses conducted separately among each race and ethnic group, there were no significant associations of ln hs-CRP with risk of THR or TKR, after adjusting for age, body mass index, lifestyle characteristics, chronic diseases, hormone therapy use, and non-steroidal anti-inflammatory drug use. CRP GRS were not associated with risk of THR or TKR in any ethnic group. Serum levels of ln hs-CRP and genetically-predicted CRP levels were not associated with risk of THR or TKR for OA among a diverse cohort of women. Copyright © 2018 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

  12. Order Batching in Warehouses by Minimizing Total Tardiness: A Hybrid Approach of Weighted Association Rule Mining and Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Amir Hossein Azadnia

    2013-01-01

    Full Text Available One of the cost-intensive issues in managing warehouses is the order picking problem which deals with the retrieval of items from their storage locations in order to meet customer requests. Many solution approaches have been proposed in order to minimize traveling distance in the process of order picking. However, in practice, customer orders have to be completed by certain due dates in order to avoid tardiness which is neglected in most of the related scientific papers. Consequently, we proposed a novel solution approach in order to minimize tardiness which consists of four phases. First of all, weighted association rule mining has been used to calculate associations between orders with respect to their due date. Next, a batching model based on binary integer programming has been formulated to maximize the associations between orders within each batch. Subsequently, the order picking phase will come up which used a Genetic Algorithm integrated with the Traveling Salesman Problem in order to identify the most suitable travel path. Finally, the Genetic Algorithm has been applied for sequencing the constructed batches in order to minimize tardiness. Illustrative examples and comparisons are presented to demonstrate the proficiency and solution quality of the proposed approach.

  13. Comparison of turnaround time and total cost of HIV testing before and after implementation of the 2014 CDC/APHL Laboratory Testing Algorithm for diagnosis of HIV infection.

    Science.gov (United States)

    Chen, Derrick J; Yao, Joseph D

    2017-06-01

    Updated recommendations for HIV diagnostic laboratory testing published by the Centers for Disease Control and Prevention and the Association of Public Health Laboratories incorporate 4th generation HIV immunoassays, which are capable of identifying HIV infection prior to seroconversion. The purpose of this study was to compare turnaround time and cost between 3rd and 4th generation HIV immunoassay-based testing algorithms for initially reactive results. The clinical microbiology laboratory database at Mayo Clinic, Rochester, MN was queried for 3rd generation (from November 2012 to May 2014) and 4th generation (from May 2014 to November 2015) HIV immunoassay results. All results from downstream supplemental testing were recorded. Turnaround time (defined as the time of initial sample receipt in the laboratory to the time the final supplemental test in the algorithm was resulted) and cost (based on 2016 Medicare reimbursement rates) were assessed. A total of 76,454 and 78,998 initial tests were performed during the study period using the 3rd generation and 4th generation HIV immunoassays, respectively. There were 516 (0.7%) and 581 (0.7%) total initially reactive results, respectively. Of these, 304 (58.9%) and 457 (78.7%) were positive by supplemental testing. There were 10 (0.01%) cases of acute HIV infection identified with the 4th generation algorithm. The most frequent tests performed to confirm an HIV-positive case using the 3rd generation algorithm, which were reactive initial immunoassay and positive HIV-1 Western blot, took a median time of 1.1 days to complete at a cost of $45.00. In contrast, the most frequent tests performed to confirm an HIV-positive case using the 4th generation algorithm, which included a reactive initial immunoassay and positive HIV-1/-2 antibody differentiation immunoassay for HIV-1, took a median time of 0.4 days and cost $63.25. Overall median turnaround time was 2.2 and 1.5 days, and overall median cost was $63.90 and $72.50 for

  14. Strong Vertex-distinguishing Total Coloring Algorithm of Complete Graph

    Institute of Scientific and Technical Information of China (English)

    赵焕平; 刘平; 李敬文

    2012-01-01

    According to the definition of strong vertex-distinguishing total coloring, and exploiting the symmetry of the complete graph, this paper proposes a new strong vertex-distinguishing total coloring algorithm. The algorithm divides the colors to be assigned into two parts, extra colors and regular colors, and, once the number of colors and the number of coloring steps have been determined, assigns the extra colors first to improve convergence. Experimental results show that the algorithm has low time complexity.

  15. LOCAL ALGORITHM FOR MONITORING TOTAL SUSPENDED SEDIMENTS IN MICRO-WATERSHEDS USING DRONES AND REMOTE SENSING APPLICATIONS. CASE STUDY: TEUSACÁ RIVER, LA CALERA, COLOMBIA

    Directory of Open Access Journals (Sweden)

    N. A. Sáenz

    2015-08-01

    Full Text Available Establishing an empirical relationship between Total Suspended Sediment (TSS) concentrations and reflectance values obtained from drone aerial photos processed with remote sensing tools was the main objective of this research. A local mathematical algorithm for the micro-watershed of the Teusacá River at La Calera, Colombia, was developed based on computing four band components from consumer-grade cameras, deriving their corresponding reflectance values through procedures for correcting digital camera imagery, and using statistical analysis to study the fit and RMSE of 25 regressions. The assessment compared reflectance values with 34 in-situ concentration measurements between 1.6 and 33 mg L−1 taken from the superficial layer of the river in two campaigns. A large set of empirical and published algorithms from the literature was used to evaluate the accuracy and precision of the relationship. For estimation of TSS, the highest accuracy was achieved using Tassan's algorithm with the BAND X/BAND X ratio. The correlation coefficient, with R2 = X, demonstrates the feasibility of using remotely sensed data from consumer-grade cameras as an effective tool for frequent monitoring and control of water quality parameters such as Total Suspended Solids in watersheds, which are among the most vulnerable and least compliant with environmental regulations.

  16. A procalcitonin-based algorithm to guide antibiotic therapy in secondary peritonitis following emergency surgery: a prospective study with propensity score matching analysis.

    Science.gov (United States)

    Huang, Ting-Shuo; Huang, Shie-Shian; Shyu, Yu-Chiau; Lee, Chun-Hui; Jwo, Shyh-Chuan; Chen, Pei-Jer; Chen, Huang-Yang

    2014-01-01

    Procalcitonin (PCT)-based algorithms have been used to guide antibiotic therapy in several clinical settings. However, evidence supporting PCT-based algorithms for secondary peritonitis after emergency surgery is scarce. In this study, we aimed to investigate whether a PCT-based algorithm could safely reduce antibiotic exposure in this population. From April 2012 to March 2013, patients who had secondary peritonitis diagnosed at the emergency department and underwent emergency surgery were screened for eligibility. PCT levels were obtained pre-operatively, on post-operative days 1, 3, 5, and 7, and on subsequent days if needed. Antibiotics were discontinued once PCT fell below the algorithm's cut-off value. Advanced age, coexisting pulmonary diseases, and higher severity of illness were significantly associated with longer durations of antibiotic use. The PCT-based algorithm safely reduced antibiotic exposure in this study. Further randomized trials are needed to confirm our findings and incorporate cost-effectiveness analysis. Australian New Zealand Clinical Trials Registry ACTRN12612000601831.

  17. Prospective analysis of a first MTP total joint replacement. Evaluation by bone mineral densitometry, pedobarography, and visual analogue score for pain

    DEFF Research Database (Denmark)

    Wetke, Eva; Zerahn, Bo; Kofoed, Hakon

    2012-01-01

    We hypothesized that a total replacement of the first metatarsophalangeal joint (MTP-1) would alter the walking pattern with medialisation of the ground reaction force (GRF) of the foot and subsequently cause an increase in bone mineral density (BMD) in the medial metatarsal bones and a decline o...

  18. Design of Compressed Sensing Algorithm for Coal Mine IoT Moving Measurement Data Based on a Multi-Hop Network and Total Variation

    Directory of Open Access Journals (Sweden)

    Gang Wang

    2018-05-01

    Full Text Available With the application of the coal mine Internet of Things (IoT), mobile measurement devices such as intelligent mine lamps generate increasing amounts of moving measurement data. How to transmit these large amounts of mobile measurement data effectively has become an urgent problem. This paper presents a compressed sensing algorithm for the large amount of coal mine IoT moving measurement data based on a multi-hop network and total variation. By taking gas data in mobile measurement data as an example, two network models for the transmission of gas data flow, namely single-hop and multi-hop transmission modes, are investigated in depth, and a gas data compressed sensing collection model is built based on a multi-hop network. To utilize the sparse characteristics of gas data, the concept of total variation is introduced and a high-efficiency gas data compression and reconstruction method based on Total Variation Sparsity based on Multi-Hop (TVS-MH) is proposed. According to the simulation results, by using the proposed method, the moving measurement data flow from an underground distributed mobile network can be acquired and transmitted efficiently.

  19. Design of Compressed Sensing Algorithm for Coal Mine IoT Moving Measurement Data Based on a Multi-Hop Network and Total Variation.

    Science.gov (United States)

    Wang, Gang; Zhao, Zhikai; Ning, Yongjie

    2018-05-28

    With the application of the coal mine Internet of Things (IoT), mobile measurement devices such as intelligent mine lamps generate increasing amounts of moving measurement data. How to transmit these large amounts of mobile measurement data effectively has become an urgent problem. This paper presents a compressed sensing algorithm for the large amount of coal mine IoT moving measurement data based on a multi-hop network and total variation. By taking gas data in mobile measurement data as an example, two network models for the transmission of gas data flow, namely single-hop and multi-hop transmission modes, are investigated in depth, and a gas data compressed sensing collection model is built based on a multi-hop network. To utilize the sparse characteristics of gas data, the concept of total variation is introduced and a high-efficiency gas data compression and reconstruction method based on Total Variation Sparsity based on Multi-Hop (TVS-MH) is proposed. According to the simulation results, by using the proposed method, the moving measurement data flow from an underground distributed mobile network can be acquired and transmitted efficiently.
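
    The TVS-MH method itself is not spelled out in the abstract, but the core idea of recovering a piecewise-smooth signal (such as a gas-concentration series) from a reduced number of linear measurements by penalizing total variation can be sketched generically. The sketch below uses plain gradient descent on a smoothed total-variation penalty; the measurement matrix, step size, and penalty weight are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def tv_reconstruct(A, y, lam=0.1, eps=1e-6, lr=2e-3, iters=20000):
    """Minimize 0.5*||A x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps)
    by gradient descent (eps smooths the absolute value so the TV term is differentiable)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)              # data-fidelity gradient
        dx = np.diff(x)
        w = dx / np.sqrt(dx ** 2 + eps)       # derivative of the smoothed |x[i+1]-x[i]|
        grad_tv = np.zeros_like(x)
        grad_tv[:-1] -= w
        grad_tv[1:] += w
        x -= lr * (grad + lam * grad_tv)
    return x

rng = np.random.default_rng(0)
n, m = 100, 40                                 # 100 samples, 40 random measurements
x_true = np.concatenate([np.full(50, 1.0), np.full(50, 3.0)])  # piecewise-constant "gas" signal
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true                                 # compressed measurements
x_hat = tv_reconstruct(A, y)
print(np.abs(x_hat - x_true).mean())           # reconstruction error vs. the true signal
```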

  20. The relation of putamen nucleus 6-[18F]fluoro-L-m-tyrosine uptake to total Unified Parkinson's Disease Rating Scale scores

    International Nuclear Information System (INIS)

    Buchy, R.

    2002-01-01

    The contribution of dopaminergic deficiency in the striatum to the severity of locomotor disability in Parkinson's disease has been consistently shown with 6-[18F]fluoro-L-DOPA in positron emission tomography. Recently, 6-[18F]fluoro-L-m-tyrosine, an alternative tracer with similar distribution kinetics, has been used to facilitate data analysis. Locomotor disability in Parkinson's disease can be measured using the Unified Parkinson's Disease Rating Scale. The Unified Parkinson's Disease Rating Scale was used in conjunction with 6-[18F]fluoro-L-m-tyrosine PET to clinically examine a group of five Parkinson's disease patients. An inverse relation similar to that previously demonstrated with 6-[18F]fluoro-L-DOPA was found between the putamen nucleus 6-[18F]fluoro-L-m-tyrosine influx constant and the Unified Parkinson's Disease Rating Scale score. This finding suggests that, like 6-[18F]fluoro-L-DOPA, 6-[18F]fluoro-L-m-tyrosine can be used to accurately measure the degree of locomotor disability caused by Parkinson's disease. (author)

  1. Category fluency test: effects of age, gender and education on total scores, clustering and switching in Brazilian Portuguese-speaking subjects

    Directory of Open Access Journals (Sweden)

    Brucki S.M.D.

    2004-01-01

    Full Text Available Verbal fluency tests are used as a measure of executive functions and language, and can also be used to evaluate semantic memory. We analyzed the influence of education, gender and age on scores in a verbal fluency test using the animal category, and on number of categories, clustering and switching. We examined 257 healthy participants (152 females and 105 males) with a mean age of 49.42 years (SD = 15.75) and a mean educational level of 5.58 years (SD = 4.25). We asked them to name as many animals as they could. Analysis of variance was performed to determine the effect of demographic variables. No significant effect of gender was observed for any of the measures. However, age seemed to influence the number of category changes, as expected for a sensitive frontal measure, after being controlled for the effect of education. Educational level had a statistically significant effect on all measures, except for clustering. Subject performance (mean number of animals named) according to schooling was: illiterates, 12.1; 1 to 4 years, 12.3; 5 to 8 years, 14.0; 9 to 11 years, 16.7, and more than 11 years, 17.8. We observed a decrease in performance in these five educational groups over time (more items recalled during the first 15 s, followed by a progressive reduction until the fourth interval). We conclude that education had the greatest effect on the category fluency test in this Brazilian sample. Therefore, we must take care in evaluating performance in lower educational subjects.

  2. Iterated greedy algorithms to minimize the total family flow time for job-shop scheduling with job families and sequence-dependent set-ups

    Science.gov (United States)

    Kim, Ji-Su; Park, Jung-Hyeon; Lee, Dong-Ho

    2017-10-01

    This study addresses a variant of job-shop scheduling in which jobs are grouped into job families but are processed individually. The problem can be found in various industrial systems, especially in reprocessing shops of remanufacturing systems. If the reprocessing shop is a job-shop type and has component-matching requirements, it can be regarded as a job shop with job families, since the components of a product constitute a job family. In particular, sequence-dependent set-ups, in which the set-up time depends on the job just completed and the next job to be processed, are also considered. The objective is to minimize the total family flow time, where the flow time of a family is the maximum among the completion times of the jobs within that family. A mixed-integer programming model is developed and two iterated greedy algorithms with different local search methods are proposed. Computational experiments were conducted on modified benchmark instances and the results are reported.
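
    As a rough illustration of the iterated greedy idea used above (repeatedly destroying part of a sequence and greedily rebuilding it), here is a generic, problem-agnostic skeleton. Evaluating the total family flow time of a real job shop with sequence-dependent set-ups is problem-specific, so the sketch takes an arbitrary cost function; the toy objective at the end (single-machine total completion time) is only a stand-in, not the paper's evaluator.

```python
import random

def iterated_greedy(initial_seq, cost, d=3, iters=200, seed=0):
    """Destroy d elements at random, greedily reinsert each at its best position,
    and keep the new sequence only if it does not worsen the best cost found so far."""
    rng = random.Random(seed)
    best = list(initial_seq)
    best_cost = cost(best)
    current = list(best)
    for _ in range(iters):
        removed = rng.sample(current, min(d, len(current)))      # destruction
        partial = [j for j in current if j not in removed]
        for job in removed:                                      # greedy construction
            best_pos, best_val = 0, float("inf")
            for pos in range(len(partial) + 1):
                val = cost(partial[:pos] + [job] + partial[pos:])
                if val < best_val:
                    best_pos, best_val = pos, val
            partial.insert(best_pos, job)
        if cost(partial) <= best_cost:                           # acceptance
            best, best_cost = list(partial), cost(partial)
            current = list(partial)
    return best, best_cost

# toy stand-in objective: single-machine total completion time
times = {1: 4, 2: 2, 3: 5, 4: 1}
def total_completion(seq):
    t, total = 0, 0
    for j in seq:
        t += times[j]
        total += t
    return total

print(iterated_greedy([1, 2, 3, 4], total_completion))  # shortest-processing-time order is optimal here
```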

  3. Matching score based face recognition

    NARCIS (Netherlands)

    Boom, B.J.; Beumer, G.M.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2006-01-01

    Accurate face registration is of vital importance to the performance of a face recognition algorithm. We propose a new method: matching score based face registration, which searches for optimal alignment by maximizing the matching score output of a classifier as a function of the different

  4. DEVELOPMENT OF WATER QUALITY PARAMETER RETRIEVAL ALGORITHMS FOR ESTIMATING TOTAL SUSPENDED SOLIDS AND CHLOROPHYLL-A CONCENTRATION USING LANDSAT-8 IMAGERY AT POTERAN ISLAND WATER

    Directory of Open Access Journals (Sweden)

    N. Laili

    2015-10-01

    Full Text Available Landsat-8 satellite imagery is now considerably more capable than that of earlier Landsat missions. Both land and water areas can be mapped using this sensor. Considerable effort has been devoted to obtaining more accurate methods for extracting information about water areas from the images. It is difficult to generate accurate water quality information from Landsat images using the existing algorithms provided by researchers. Even though those algorithms have been validated in some waters, the dynamic changes and specific characteristics of each area make it necessary to evaluate and validate them over other waters. This paper aims to build new algorithms by correlating measured and estimated TSS and Chl-a concentrations. We collected in-situ remote sensing reflectance, TSS and Chl-a concentrations at 9 stations surrounding the Poteran islands, as well as Landsat-8 data for the same acquisition time of April 22, 2015. The regression model for estimating TSS produced high accuracy, with a determination coefficient (R2), NMAE and RMSE of 0.709, 9.67% and 1.705 g/m3, respectively. The Chl-a retrieval algorithm produced an R2 of 0.579, NMAE of 10.40% and RMSE of 51.946 mg/m3. By applying these algorithms to the Landsat-8 image, the estimated water quality parameters over Poteran island waters ranged from 9.480 to 15.801 g/m3 for TSS and from 238.546 to 346.627 mg/m3 for Chl-a.
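
    The retrieval models above are simple empirical regressions between in-situ concentrations and image-derived reflectance. A minimal sketch of that workflow follows, with made-up numbers standing in for the in-situ TSS and band-ratio values, and with the R2, NMAE and RMSE statistics computed as in the abstract.

```python
import numpy as np

# hypothetical in-situ TSS (g/m3) and a Landsat-8 band ratio at the same stations
tss = np.array([9.8, 11.2, 12.5, 13.1, 14.0, 14.8, 15.3, 15.9, 16.4])
band_ratio = np.array([0.41, 0.45, 0.50, 0.52, 0.55, 0.58, 0.60, 0.63, 0.65])

# least-squares fit of a simple linear retrieval model: TSS = a * ratio + b
a, b = np.polyfit(band_ratio, tss, 1)
tss_hat = a * band_ratio + b

ss_res = np.sum((tss - tss_hat) ** 2)
ss_tot = np.sum((tss - tss.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                                   # determination coefficient
rmse = np.sqrt(np.mean((tss - tss_hat) ** 2))              # root mean square error
nmae = np.mean(np.abs(tss - tss_hat)) / tss.mean() * 100   # normalised MAE, percent

print(f"TSS = {a:.2f} * ratio + {b:.2f}, R2={r2:.3f}, RMSE={rmse:.2f} g/m3, NMAE={nmae:.1f}%")
```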

  5. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions to optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  6. Achieving Minimum Clinically Important Difference in Oxford Knee Score and Short Form-36 Physical Component Summary Is Less Likely with Single-Radius Compared with Multiradius Total Knee Arthroplasty in Asians.

    Science.gov (United States)

    Lee, Wu Chean; Bin Abd Razak, Hamid Rahmatullah; Allen, John Carson; Chong, Hwei Chi; Tan, Hwee Chye Andrew

    2018-04-10

    Single-radius (SR) and multiradius (MR) total knee arthroplasties (TKAs) have produced similar outcomes, albeit most studies originate from Western nations. There are known knee kinematic differences between Western and Asian patients after TKA. The aim of this study is to compare the short-term patient-reported outcome measures (PROMs) of SR-TKA versus MR-TKA in Asians. Registry data of 133 SR-TKA versus 363 MR-TKA by a single surgeon were analyzed. Preoperative and 2-year postoperative range of motion (ROM) and PROMs were compared with Student's t-test and Mann-Whitney U-test. A logistic regression model was used to evaluate the odds of SR-TKA or MR-TKA achieving the minimum clinically important difference (MCID) of studied outcomes. Patients in both groups had similar age (65.7 ± 7.6 vs. 65.8 ± 8.2 years; p = 0.317), gender proportion (71% females vs. 79% females; p = 0.119), and ethnic distribution (80% Chinese vs. 84% Chinese; p = 0.258). Preoperatively, there were no statistically significant differences between both groups for ROM, Knee Society Score (KSS), Oxford Knee Score (OKS), and Short Form (SF)-36 scores. At 2 years, all outcomes were statistically similar or failed to achieve a difference of MCID. Controlling for all preoperative variables, SR-TKA has significantly lower odds of achieving MCID for OKS (odds ratio [OR]: 0.275, 95% confidence interval [CI]: 0.114-0.663; p = 0.004) and SF-36 Physical Component Summary (PCS) (OR: 0.547; 95% CI: 0.316-0.946; p = 0.031) compared with MR-TKA. In conclusion, there are no significant differences in the absolute PROMs between SR-TKA and MR-TKA at 2 years following TKA in Asians. However, SR-TKA has significantly lower odds of achieving the MCID for OKS and SF-36 PCS.

  7. Effects of levan-type fructan on growth performance, nutrient digestibility, diarrhoea scores, faecal shedding of total lactic acid bacteria and coliform bacteria, and faecal gas emission in weaning pigs.

    Science.gov (United States)

    Lei, Xin Jian; Kim, Yong Min; Park, Jae Hong; Baek, Dong Heon; Nyachoti, Charles Martin; Kim, In Ho

    2018-03-01

    The use of antibiotics as growth promoters in feed has been fully or partially banned in several countries. The objective of this study was to evaluate the effects of levan-type fructan on growth performance, nutrient digestibility, faecal shedding of lactic acid bacteria and coliform bacteria, diarrhoea scores, and faecal gas emission in weaning pigs. A total of 144 weaning pigs [(Yorkshire × Landrace) × Duroc] were randomly allocated to four diets: corn-soybean meal-based diets supplemented with 0, 0.1, 0.5, or 1.0 g kg-1 levan-type fructan during this 42-day experiment. During days 0 to 21 and 0 to 42, average daily gain and average daily feed intake increased linearly, and faecal lactic acid bacteria counts increased linearly (P = 0.001). The results indicate that dietary supplementation with increasing levan-type fructan linearly enhanced growth performance, improved nutrient digestibility, and increased faecal lactic acid bacteria counts in weaning pigs.

  8. Semiparametric score level fusion: Gaussian copula approach

    NARCIS (Netherlands)

    Susyanyo, N.; Klaassen, C.A.J.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2015-01-01

    Score level fusion is an appealing method for combining multi-algorithm, multi-representation, and multi-modality biometrics due to its simplicity. Often, scores are assumed to be independent, but even for dependent scores, according to the Neyman-Pearson lemma, the likelihood ratio is the

  9. Evaluation of a semi-automated computer algorithm for measuring total fat and visceral fat content in lambs undergoing in vivo whole body computed tomography.

    Science.gov (United States)

    Rosenblatt, Alana J; Scrivani, Peter V; Boisclair, Yves R; Reeves, Anthony P; Ramos-Nieves, Jose M; Xie, Yiting; Erb, Hollis N

    2017-10-01

    Computed tomography (CT) is a suitable tool for measuring body fat, since it is non-destructive and can be used to differentiate metabolically active visceral fat from total body fat. Whole body analysis of body fat is likely to be more accurate than single CT slice estimates of body fat. The aim of this study was to assess the agreement between semi-automated computer analysis of whole body volumetric CT data and conventional proximate (chemical) analysis of body fat in lambs. Data were collected prospectively from 12 lambs that underwent duplicate whole body CT, followed by slaughter and carcass analysis by dissection and chemical analysis. Agreement between methods for quantification of total and visceral fat was assessed by Bland-Altman plot analysis. The repeatability of CT was assessed for these measures using the mean difference of duplicated measures. When compared to chemical analysis, CT systematically underestimated total and visceral fat contents by more than 10% of the mean fat weight. Therefore, carcass analysis and semi-automated CT computer measurements were not interchangeable for quantifying body fat content without the use of a correction factor. CT acquisition was repeatable, with a mean difference of repeated measures being close to zero. Therefore, uncorrected whole body CT might have an application for assessment of relative changes in fat content, especially in growing lambs.

  10. Computer-Assisted Automated Scoring of Polysomnograms Using the Somnolyzer System.

    Science.gov (United States)

    Punjabi, Naresh M; Shifa, Naima; Dorffner, Georg; Patil, Susheel; Pien, Grace; Aurora, Rashmi N

    2015-10-01

    Manual scoring of polysomnograms is a time-consuming and tedious process. To expedite the scoring of polysomnograms, several computerized algorithms for automated scoring have been developed. The overarching goal of this study was to determine the validity of the Somnolyzer system, an automated system for scoring polysomnograms. The analysis sample comprised 97 sleep studies. Each polysomnogram was manually scored by certified technologists from four sleep laboratories and concurrently subjected to automated scoring by the Somnolyzer system. Agreement between manual and automated scoring was examined. Sleep staging and scoring of disordered breathing events were conducted using the 2007 American Academy of Sleep Medicine criteria. The setting was clinical sleep laboratories. A high degree of agreement was noted between manual and automated scoring of the apnea-hypopnea index (AHI). The average correlation between the manually scored AHI across the four clinical sites was 0.92 (95% confidence interval: 0.90-0.93). Similarly, the average correlation between the manual and Somnolyzer-scored AHI values was 0.93 (95% confidence interval: 0.91-0.96). Thus, interscorer correlation between the manually scored results was no different than that derived from manual and automated scoring. Substantial concordance in the arousal index, total sleep time, and sleep efficiency between manual and automated scoring was also observed. In contrast, differences were noted between manually and automated scored percentages of sleep stages N1, N2, and N3. Automated analysis of polysomnograms using the Somnolyzer system provides results that are comparable to manual scoring for commonly used metrics in sleep medicine. Although differences exist between manual versus automated scoring for specific sleep stages, the level of agreement between manual and automated scoring is not significantly different than that between any two human scorers. In light of the burden associated with manual scoring, automated

  11. Recent Advancements in Lightning Jump Algorithm Work

    Science.gov (United States)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2010-01-01

    In the past year, the primary objectives were to show the usefulness of total lightning as compared to traditional cloud-to-ground (CG) networks, test the lightning jump algorithm configurations in other regions of the country, increase the number of thunderstorms within our thunderstorm database, and pinpoint environments that could prove difficult for any lightning jump configuration. A total of 561 thunderstorms have been examined in the past year (409 non-severe, 152 severe) from four regions of the country (North Alabama, Washington D.C., High Plains of CO/KS, and Oklahoma). Results continue to indicate that the 2σ lightning jump algorithm configuration holds the most promise in terms of prospective operational lightning jump algorithms, with a probability of detection (POD) of 81%, a false alarm rate (FAR) of 45%, a critical success index (CSI) of 49% and a Heidke Skill Score (HSS) of 0.66. The second best performing algorithm configuration was the Threshold 4 algorithm, which had a POD of 72%, FAR of 51%, a CSI of 41% and an HSS of 0.58. Because a more complex algorithm configuration shows the most promise in terms of prospective operational lightning jump algorithms, accurate thunderstorm cell tracking work must be undertaken to track lightning trends on an individual thunderstorm basis over time. While these numbers for the 2σ configuration are impressive, the algorithm does have its weaknesses. Specifically, low-topped and tropical cyclone thunderstorm environments present issues for the 2σ lightning jump algorithm because of the suppressed vertical depth impact on overall flash counts (i.e., a relative dearth in lightning). For example, in a sample of 120 thunderstorms from northern Alabama that contained 72 missed events by the 2σ algorithm, 36% of the misses were associated with these two environments (17 storms).
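
    The POD, FAR, CSI and HSS values quoted above are standard 2x2 contingency-table verification metrics. A small sketch of their computation follows; the counts used at the end are illustrative only, not the study's verification data.

```python
def skill_scores(hits, misses, false_alarms, correct_nulls):
    """Standard 2x2 contingency-table verification metrics:
    probability of detection, false alarm rate, critical success index,
    and Heidke Skill Score."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    n = hits + misses + false_alarms + correct_nulls
    expected = ((hits + misses) * (hits + false_alarms) +
                (correct_nulls + misses) * (correct_nulls + false_alarms)) / n
    hss = (hits + correct_nulls - expected) / (n - expected)
    return pod, far, csi, hss

# illustrative counts only
print(skill_scores(hits=80, misses=20, false_alarms=40, correct_nulls=400))
```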

  12. Total mass difference statistics algorithm: a new approach to identification of high-mass building blocks in electrospray ionization Fourier transform ion cyclotron mass spectrometry data of natural organic matter.

    Science.gov (United States)

    Kunenkov, Erast V; Kononikhin, Alexey S; Perminova, Irina V; Hertkorn, Norbert; Gaspar, Andras; Schmitt-Kopplin, Philippe; Popov, Igor A; Garmash, Andrew V; Nikolaev, Evgeniy N

    2009-12-15

    The ultrahigh-resolution Fourier transform ion cyclotron resonance (FTICR) mass spectrum of natural organic matter (NOM) contains several thousand peaks, with dozens of molecules matching the same nominal mass. Such complexity poses a significant challenge for automatic data interpretation, in which the most difficult task is molecular formula assignment, especially in the case of heavy and/or multielement ions. In this study, a new universal algorithm for automatic treatment of FTICR mass spectra of NOM and humic substances based on total mass difference statistics (TMDS) has been developed and implemented. The algorithm enables a blind search for unknown building blocks (instead of a priori known ones) by revealing repetitive patterns present in spectra. In this respect, it differs from all previously developed approaches. This algorithm was implemented in the FIRAN software for fully automated analysis of mass data with high peak density. The specific feature of FIRAN is its ability to assign formulas to heavy and/or multielement molecules using a "virtual elements" approach. To verify the approach, it was used for processing mass spectra of sodium polystyrene sulfonate (PSS, M(w) = 2200 Da) and polymethacrylate (PMA, M(w) = 3290 Da), which produce heavy multielement and multiply-charged ions. Application of TMDS identified unambiguously the monomers present in the polymers, consistent with their structure: C(8)H(7)SO(3)Na for PSS and C(4)H(6)O(2) for PMA. It also allowed unambiguous formula assignment to all multiply-charged peaks, including the heaviest peak in the PMA spectrum at mass 4025.6625 with charge state 6- (mass bias -0.33 ppm). Application of the TMDS algorithm to processing data on the Suwannee River FA has proven its unique capacities in analysis of spectra with high peak density: it has not only identified the known small building blocks in the structure of FA, such as CH(2), H(2), C(2)H(2)O and O, but also the heavier unit at 154.027 amu. The latter was
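
    The core of the TMDS idea, counting how often each mass difference recurs between peak pairs so that frequently repeated differences emerge as candidate building blocks, can be sketched in a few lines. Real FTICR processing works at ppm-level mass accuracy and with far larger peak lists; the rounding tolerance and the toy peak list below are illustrative assumptions, not the published implementation.

```python
from collections import Counter
from itertools import combinations

def mass_difference_statistics(masses, decimals=4):
    """Count how often each (rounded) mass difference occurs between all peak pairs.
    Frequently repeated differences are candidate building blocks."""
    diffs = Counter()
    for m1, m2 in combinations(sorted(masses), 2):
        diffs[round(m2 - m1, decimals)] += 1
    return diffs.most_common()

# toy peak list built from repeated CH2 (14.0157 Da) and H2O-related (18.0106 Da) spacings
peaks = [200.0, 214.0157, 228.0314, 218.0106, 232.0263]
print(mass_difference_statistics(peaks)[:3])   # most frequent differences first
```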

  13. Differences in the accuracy of the Wells, Caprini and Padua scores in deep vein thrombosis diagnosis

    Science.gov (United States)

    Gatot, D.; Mardia, A. I.

    2018-03-01

    Deep vein thrombosis (DVT) is venous thrombosis in the lower limbs. Diagnosis is by venography or compression ultrasound. However, these examinations are not yet available in some health facilities. Therefore, many scoring systems have been developed for the diagnosis of DVT. The scoring method is practical and safe to use, in addition to being efficacious and effective in terms of treatment and costs. The existing scoring systems are the Wells, Caprini and Padua scores. There have been many studies comparing the accuracy of these scores, but not in Medan. Therefore, we were interested in comparative research on the Wells, Caprini and Padua scores in Medan. An observational, analytical, case-control study was conducted to perform diagnostic tests on the Wells, Caprini and Padua scores to predict the risk of DVT. The study was conducted at H. Adam Malik Hospital in Medan. Of a total of 72 subjects, 39 (54.2%) were men and the mean age was 53.14 years. The Wells, Caprini and Padua scores had sensitivities of 80.6%, 61.1% and 50%, respectively; specificities of 80.6%, 66.7% and 75%, respectively; and accuracies of 87.5%, 64.3% and 65.7%, respectively. The Wells score has better sensitivity, specificity and accuracy than the Caprini and Padua scores in diagnosing DVT.

  14. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  15. Android Malware Classification Using K-Means Clustering Algorithm

    Science.gov (United States)

    Hamid, Isredza Rahmi A.; Syafiqah Khalid, Nur; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Chai Wen, Chuah

    2017-08-01

    Malware is designed to gain access to or damage a computer system without the user's knowledge. Attackers also exploit malware to commit crime or fraud. This paper proposes an Android malware classification approach based on the K-Means clustering algorithm. We evaluate the proposed model in terms of accuracy using machine learning algorithms. Two datasets, Virus Total and Malgenome, were selected to demonstrate the application of the K-Means clustering algorithm. We classify the Android samples into three clusters: ransomware, scareware and goodware. Nine features were considered for each dataset: Lock Detected, Text Detected, Text Score, Encryption Detected, Threat, Porn, Law, Copyright and Moneypak. IBM SPSS Statistics software was used for data classification and WEKA tools to evaluate the resulting clusters. The proposed K-Means clustering approach shows promising results with high accuracy when tested using the Random Forest algorithm.
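
    A minimal sketch of the clustering step described above, assuming the nine features have already been extracted into a numeric matrix; the feature values below are invented, and scikit-learn's KMeans stands in for the SPSS workflow used in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

# toy feature rows (values are made up): one row per app, columns follow the paper's nine
# features: Lock, Text, Text Score, Encryption, Threat, Porn, Law, Copyright, Moneypak
X = np.array([
    [1, 1, 0.9, 1, 1, 0, 1, 0, 1],   # ransomware-like
    [1, 1, 0.8, 1, 1, 0, 1, 1, 1],
    [0, 1, 0.6, 0, 1, 1, 0, 0, 0],   # scareware-like
    [0, 1, 0.5, 0, 1, 0, 1, 0, 0],
    [0, 0, 0.0, 0, 0, 0, 0, 0, 0],   # goodware-like
    [0, 0, 0.1, 0, 0, 0, 0, 0, 0],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)                   # cluster assignment per app
print(kmeans.cluster_centers_.round(2)) # centroid of each cluster
```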

  16. Calculation of electromagnetic fields in electric machines by means of the finite element. Algorithms for the solution of problems with known total densities. Pt. 2; Calculo de campos electromagneticos en maquinas electricas mediante elemento finito. Algoritmos para la solucion de problemas con densidades totales conocidas. Pt. 2

    Energy Technology Data Exchange (ETDEWEB)

    Rosales, Mauricio F [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)

    1988-12-31

    This article builds on the electromagnetic modeling presented in the first part. Only magnetic or electric systems in closed regions with translational or axial symmetry, whose total current density or total electric charge density is known, are considered here. The algorithms implemented in the CLIIE-2D software of the Instituto de Investigaciones Electricas (IIE) are developed in order to obtain numerical solutions for these problems. The basic systems of algebraic equations are obtained by applying the Galerkin method in a finite element discretization with first-order triangular elements.

  18. Sound algorithms

    OpenAIRE

    De Götzen, Amalia; Mion, Luca; Tache, Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  19. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  20. Allegheny County Walk Scores

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Walk Score measures the walkability of any address using a patented system developed by the Walk Score company. For each 2010 Census Tract centroid, Walk Score...

  1. Perioperative and mid-term oncologic outcomes of robotic assisted radical cystectomy with totally intracorporeal neobladder: Results of a propensity score matched comparison with open cohort from a single-centre series.

    Science.gov (United States)

    Simone, Giuseppe; Tuderti, Gabriele; Misuraca, Leonardo; Anceschi, Umberto; Ferriero, Mariaconsiglia; Minisola, Francesco; Guaglianone, Salvatore; Gallucci, Michele

    2018-04-17

    In this study, we compared perioperative and oncologic outcomes of patients treated with either open or robot-assisted radical cystectomy and intracorporeal neobladder at a tertiary care center. The institutional prospective bladder cancer database was queried for "cystectomy with curative intent" and "neobladder". All patients who underwent robot-assisted radical cystectomy and intracorporeal neobladder or open radical cystectomy and orthotopic neobladder for high-grade non-muscle invasive bladder cancer or muscle invasive bladder cancer, with a follow-up length ≥2 years, were included. A 1:1 propensity score matching analysis was used. The Kaplan-Meier method was performed to compare oncologic outcomes of the selected cohorts. Survival rates were computed at 1, 2, 3 and 4 years after surgery and the log-rank test was applied to assess statistical significance between the matched groups. Overall, 363 patients (299 open and 64 robotic) were included. Open radical cystectomy patients were more frequently male (p = 0.08), with higher pT stages (p = 0.003), lower incidence of urothelial histologies (p = 0.05) and less frequent adoption of neoadjuvant chemotherapy; after propensity score matching, the robotic cohort was comparable to the open radical cystectomy cases (all p ≥ 0.22). The open cohort showed a higher rate of perioperative overall complications (91.3% vs 42.2%, p < 0.001). At Kaplan-Meier analysis, the robotic and open cohorts displayed comparable disease-free survival (log-rank p = 0.746), cancer-specific survival (p = 0.753) and overall survival rates (p = 0.909). Robot-assisted radical cystectomy with intracorporeal neobladder provides oncologic outcomes comparable to those of open radical cystectomy with orthotopic neobladder at intermediate-term survival analysis.

  2. Effects of memantine on cognition in patients with moderate to severe Alzheimer's disease: post-hoc analyses of ADAS-cog and SIB total and single-item scores from six randomized, double-blind, placebo-controlled studies.

    Science.gov (United States)

    Mecocci, Patrizia; Bladström, Anna; Stender, Karina

    2009-05-01

    The post-hoc analyses reported here evaluate the specific effects of memantine treatment on ADAS-cog single items or SIB subscales for patients with moderate to severe AD. Data from six multicentre, randomised, placebo-controlled, parallel-group, double-blind, 6-month studies were used as the basis for these post-hoc analyses. All patients with a Mini-Mental State Examination (MMSE) score of less than 20 were included. Analyses of patients with moderate AD (MMSE: 10-19), evaluated with the Alzheimer's Disease Assessment Scale (ADAS-cog), and analyses of patients with moderate to severe AD (MMSE: 3-14), evaluated using the Severe Impairment Battery (SIB), were performed separately. The mean change from baseline showed a significant benefit of memantine treatment on both the ADAS-cog and the SIB. ADAS-cog single-item analyses showed significant benefits of memantine treatment, compared to placebo, for mean change from baseline for commands (p < 0.001), ideational praxis (p < 0.05), orientation (p < 0.01), comprehension (p < 0.05), and remembering test instructions (p < 0.05) for observed cases (OC). The SIB subscale analyses showed significant benefits of memantine, compared to placebo, for mean change from baseline for language (p < 0.05), memory (p < 0.05), orientation (p < 0.01), praxis (p < 0.001), and visuospatial ability (p < 0.01) for OC. Memantine shows significant benefits on overall cognitive abilities as well as on specific key cognitive domains for patients with moderate to severe AD.

  3. Optimizing Scoring and Sampling Methods for Assessing Built Neighborhood Environment Quality in Residential Areas

    Science.gov (United States)

    Adu-Brimpong, Joel; Coffey, Nathan; Ayers, Colby; Berrigan, David; Yingling, Leah R.; Thomas, Samantha; Mitchell, Valerie; Ahuja, Chaarushi; Rivers, Joshua; Hartz, Jacob; Powell-Wiley, Tiffany M.

    2017-01-01

    Optimization of existing measurement tools is necessary to explore links between aspects of the neighborhood built environment and health behaviors or outcomes. We evaluate a scoring method for virtual neighborhood audits utilizing the Active Neighborhood Checklist (the Checklist), a neighborhood audit measure, and assess street segment representativeness in low-income neighborhoods. Eighty-two home neighborhoods of Washington, D.C. Cardiovascular Health/Needs Assessment (NCT01927783) participants were audited using Google Street View imagery and the Checklist (five sections with 89 total questions). Twelve street segments per home address were assessed for (1) Land-Use Type; (2) Public Transportation Availability; (3) Street Characteristics; (4) Environment Quality and (5) Sidewalks/Walking/Biking features. Checklist items were scored 0–2 points/question. A combinations algorithm was developed to assess street segments’ representativeness. Spearman correlations were calculated between built environment quality scores and Walk Score®, a validated neighborhood walkability measure. Street segment quality scores ranged 10–47 (Mean = 29.4 ± 6.9) and overall neighborhood quality scores, 172–475 (Mean = 352.3 ± 63.6). Walk Scores® ranged 0–91 (Mean = 46.7 ± 26.3). Street segment combinations’ correlation coefficients ranged 0.75–1.0. Significant positive correlations were found between overall neighborhood quality scores, four of the five Checklist subsection scores, and Walk Scores® (r = 0.62, p < 0.001). This scoring method adequately captures neighborhood features in low-income, residential areas and may aid in delineating the impact of specific built environment features on health behaviors and outcomes. PMID:28282878

  4. Adaptive testing with equated number-correct scoring

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1999-01-01

    A constrained CAT algorithm is presented that automatically equates the number-correct scores on adaptive tests. The algorithm can be used to equate number-correct scores across different administrations of the same adaptive test as well as to an external reference test. The constraints are derived

  5. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  6. Total protein

    Science.gov (United States)

    The total protein test measures the total amount of two classes ...

  7. WebScore: An Effective Page Scoring Approach for Uncertain Web Social Networks

    Directory of Open Access Journals (Sweden)

    Shaojie Qiao

    2011-10-01

    Full Text Available To effectively score pages with uncertainty in web social networks, we first propose a new concept called the transition probability matrix and formally define the uncertainty in web social networks. Second, we propose a hybrid page scoring algorithm, called WebScore, based on the PageRank algorithm and three centrality measures including degree, betweenness, and closeness. In particular, WebScore takes full account of the uncertainty of web social networks by computing the transition probability from one page to another. The basic idea of WebScore is to (1) integrate uncertainty into PageRank in order to accurately rank pages, and (2) apply the centrality measures to calculate the importance of pages in web social networks. In order to verify the performance of WebScore, we developed a web social network analysis system which can partition web pages into distinct groups and score them in an effective fashion. Finally, we conducted extensive experiments on real data, and the results show that WebScore is effective at scoring uncertain pages with lower time cost than PageRank and centrality-measure-based page scoring algorithms.
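
    The full WebScore combination of PageRank with degree, betweenness, and closeness is not reproduced here; the sketch below shows only the ingredient the abstract emphasizes, namely running PageRank-style power iteration directly on an explicit (possibly uncertainty-weighted) transition probability matrix. The damping factor and the toy matrix are illustrative assumptions.

```python
import numpy as np

def pagerank_from_transitions(P, damping=0.85, tol=1e-10, max_iter=1000):
    """Power iteration on a row-stochastic transition probability matrix P,
    where P[i, j] is the (possibly uncertainty-weighted) probability of
    moving from page i to page j."""
    n = P.shape[0]
    r = np.full(n, 1.0 / n)
    teleport = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = damping * (P.T @ r) + (1 - damping) * teleport
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
    return r

# 3-page toy network with uncertain links encoded as transition probabilities
P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [1.0, 0.0, 0.0]])
print(pagerank_from_transitions(P).round(3))
```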

  8. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  9. An Objective Fluctuation Score for Parkinson's Disease

    Science.gov (United States)

    Horne, Malcolm K.; McGregor, Sarah; Bergquist, Filip

    2015-01-01

    Introduction Establishing the presence and severity of fluctuations is important in managing Parkinson's Disease, yet there is no reliable, objective means of doing this. In this study we have evaluated a Fluctuation Score derived from variations in dyskinesia and bradykinesia scores produced by an accelerometry-based system. Methods The Fluctuation Score was produced by summing the interquartile ranges of bradykinesia scores and dyskinesia scores produced every 2 minutes between 0900-1800 for at least 6 days by the accelerometry-based system and expressing it as an algorithm. Results This score could distinguish between fluctuating and non-fluctuating patients with high sensitivity and selectivity and was significantly lower following activation of deep brain stimulators. The scores following deep brain stimulation lay in a band just above the score separating fluctuators from non-fluctuators, suggesting a range representing adequate motor control. When compared with control subjects, the scores of newly diagnosed patients show a loss of fluctuation with onset of PD. The score was calculated in subjects whose duration of disease was known and this showed that newly diagnosed patients soon develop higher scores which either fall under or within the range representing adequate motor control or instead go on to develop more severe fluctuations. Conclusion The Fluctuation Score described here promises to be a useful tool for identifying patients whose fluctuations are progressing and may require therapeutic changes. It also shows promise as a useful research tool. Further studies are required to more accurately identify therapeutic targets and ranges. PMID:25928634
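
    A minimal sketch of the score as described in the abstract, i.e. the sum of the interquartile ranges of the 2-minute bradykinesia and dyskinesia scores over the monitoring window; the synthetic data and the absence of any further scaling are illustrative assumptions, not the published implementation.

```python
import numpy as np

def fluctuation_score(bradykinesia, dyskinesia):
    """Sum of the interquartile ranges of the 2-minute bradykinesia and
    dyskinesia score series collected between 0900 and 1800."""
    def iqr(x):
        q75, q25 = np.percentile(x, [75, 25])
        return q75 - q25
    return iqr(bradykinesia) + iqr(dyskinesia)

rng = np.random.default_rng(1)
bk = rng.normal(25, 8, size=270)   # ~9 h of 2-minute bradykinesia scores (synthetic)
dk = rng.normal(5, 3, size=270)    # synthetic dyskinesia scores
print(round(fluctuation_score(bk, dk), 1))
```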

  10. Lower bounds to the reliabilities of factor score estimators

    NARCIS (Netherlands)

    Hessen, D.J.

    2017-01-01

    Under the general common factor model, the reliabilities of factor score estimators might be of more interest than the reliability of the total score (the unweighted sum of item scores). In this paper, lower bounds to the reliabilities of Thurstone’s factor score estimators, Bartlett’s factor score

  11. The Zhongshan Score

    Science.gov (United States)

    Zhou, Lin; Guo, Jianming; Wang, Hang; Wang, Guomin

    2015-01-01

    In the zero ischemia era of nephron-sparing surgery (NSS), a new anatomic classification system (ACS) is needed to adjust to these new surgical techniques. We devised a novel and simple ACS, and compared it with the RENAL and PADUA scores to predict the risk of NSS outcomes. We retrospectively evaluated 789 patients who underwent NSS with available imaging between January 2007 and July 2014. Demographic and clinical data were assessed. The Zhongshan (ZS) score consisted of three parameters. The RENAL, PADUA, and ZS scores were each divided into three groups, that is, high, moderate, and low scores. For operative time (OT), significant differences were seen between any two groups of the ZS score and of the PADUA score, whereas RENAL showed no significant difference between moderate and high complexity in OT, WIT, estimated blood loss, and increase in SCr. Compared with patients with a low ZS score, those with a high or moderate score had 8.1-fold or 3.3-fold higher risk of surgical complications, respectively. For the RENAL score, patients with a high or moderate score had 5.7-fold or 1.9-fold higher risk of surgical complications, respectively. The ZS score could be used to reflect the surgical complexity and predict the risk of surgical complications in patients undergoing NSS. PMID:25654399

  12. Combination of scoring schemes for protein docking

    Directory of Open Access Journals (Sweden)

    Schomburg Dietmar

    2007-08-01

    Full Text Available Abstract Background Docking algorithms are developed to predict in which orientation two proteins are likely to bind under natural conditions. The currently used methods usually consist of a sampling step followed by a scoring step. We developed a weighted geometric correlation based on optimised atom specific weighting factors and combined them with our previously published amino acid specific scoring and with a comprehensive SVM-based scoring function. Results The scoring with the atom specific weighting factors yields better results than the amino acid specific scoring. In combination with SVM-based scoring functions the percentage of complexes for which a near native structure can be predicted within the top 100 ranks increased from 14% with the geometric scoring to 54% with the combination of all scoring functions. Especially for the enzyme-inhibitor complexes the results of the ranking are excellent. For half of these complexes a near-native structure can be predicted within the first 10 proposed structures and for more than 86% of all enzyme-inhibitor complexes within the first 50 predicted structures. Conclusion We were able to develop a combination of different scoring schemes which considers a series of previously described and some new scoring criteria yielding a remarkable improvement of prediction quality.

  13. Prognostic validation of a 17-segment score derived from a 20-segment score for myocardial perfusion SPECT interpretation.

    Science.gov (United States)

    Berman, Daniel S; Abidov, Aiden; Kang, Xingping; Hayes, Sean W; Friedman, John D; Sciammarella, Maria G; Cohen, Ishac; Gerlach, James; Waechter, Parker B; Germano, Guido; Hachamovitch, Rory

    2004-01-01

    Recently, a 17-segment model of the left ventricle has been recommended as an optimally weighted approach for interpreting myocardial perfusion single photon emission computed tomography (SPECT). Methods to convert databases from previous 20- to new 17-segment data and criteria for abnormality for the 17-segment scores are needed. Initially, for derivation of the conversion algorithm, 65 patients were studied (algorithm population) (pilot group, n = 28; validation group, n = 37). Three conversion algorithms were derived: algorithm 1, which used mid, distal, and apical scores; algorithm 2, which used distal and apical scores alone; and algorithm 3, which used maximal scores of the distal septal, lateral, and apical segments in the 20-segment model for 3 corresponding segments of the 17-segment model. The prognosis population comprised 16,020 consecutive patients (mean age, 65 +/- 12 years; 41% women) who had exercise or vasodilator stress technetium 99m sestamibi myocardial perfusion SPECT and were followed up for 2.1 +/- 0.8 years. In this population, 17-segment scores were derived from 20-segment scores by use of algorithm 2, which demonstrated the best agreement with expert 17-segment reading in the algorithm population. The prognostic value of the 20- and 17-segment scores was compared by converting the respective summed scores into percent myocardium abnormal. Conversion algorithm 2 was found to be highly concordant with expert visual analysis by the 17-segment model (r = 0.982; kappa = 0.866) in the algorithm population. In the prognosis population, 456 cardiac deaths occurred during follow-up. When the conversion algorithm was applied, extent and severity of perfusion defects were nearly identical by 20- and derived 17-segment scores. The receiver operating characteristic curve areas by 20- and 17-segment perfusion scores were identical for predicting cardiac death (both 0.77 +/- 0.02, P = not significant). The optimal prognostic cutoff value for either 20

  14. Totally James

    Science.gov (United States)

    Owens, Tom

    2006-01-01

    This article presents an interview with James Howe, author of "The Misfits" and "Totally Joe". In this interview, Howe discusses tolerance, diversity and the parallels between his own life and his literature. Howe's four books in addition to "The Misfits" and "Totally Joe" and his list of recommended books with lesbian, gay, bisexual, transgender,…

  15. How to score questionnaires

    NARCIS (Netherlands)

    Hofstee, W.K.B.; Ten Berge, J.M.F.; Hendriks, A.A.J.

    The standard practice in scoring questionnaires consists of adding item scores and standardizing these sums. We present a set of alternative procedures, consisting of (a) correcting for the acquiescence variance that disturbs the structure of the questionnaire; (b) establishing item weights through
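
    The "standard practice" referred to above, adding item scores per respondent and standardizing the resulting sums, is straightforward to sketch; the Likert-style data below are made up.

```python
import numpy as np

def standardized_sum_scores(item_scores):
    """Add item scores per respondent and z-standardize the sums across respondents."""
    sums = item_scores.sum(axis=1)
    return (sums - sums.mean()) / sums.std(ddof=1)

# rows = respondents, columns = items (toy 5-point Likert data)
items = np.array([[4, 5, 3, 4],
                  [2, 1, 2, 3],
                  [5, 5, 4, 5]])
print(standardized_sum_scores(items).round(2))
```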

  16. SCORE - A DESCRIPTION.

    Science.gov (United States)

    SLACK, CHARLES W.

    REINFORCEMENT AND ROLE-REVERSAL TECHNIQUES ARE USED IN THE SCORE PROJECT, A LOW-COST PROGRAM OF DELINQUENCY PREVENTION FOR HARD-CORE TEENAGE STREET CORNER BOYS. COMMITTED TO THE BELIEF THAT THE BOYS HAVE THE POTENTIAL FOR ETHICAL BEHAVIOR, THE SCORE WORKER FOLLOWS B.F. SKINNER'S THEORY OF OPERANT CONDITIONING AND REINFORCES THE DELINQUENT'S GOOD…

  17. Optimizing Scoring and Sampling Methods for Assessing Built Neighborhood Environment Quality in Residential Areas

    Directory of Open Access Journals (Sweden)

    Joel Adu-Brimpong

    2017-03-01

    Full Text Available Optimization of existing measurement tools is necessary to explore links between aspects of the neighborhood built environment and health behaviors or outcomes. We evaluate a scoring method for virtual neighborhood audits utilizing the Active Neighborhood Checklist (the Checklist), a neighborhood audit measure, and assess street segment representativeness in low-income neighborhoods. Eighty-two home neighborhoods of Washington, D.C. Cardiovascular Health/Needs Assessment (NCT01927783) participants were audited using Google Street View imagery and the Checklist (five sections with 89 total questions). Twelve street segments per home address were assessed for (1) Land-Use Type; (2) Public Transportation Availability; (3) Street Characteristics; (4) Environment Quality and (5) Sidewalks/Walking/Biking features. Checklist items were scored 0–2 points/question. A combinations algorithm was developed to assess street segments’ representativeness. Spearman correlations were calculated between built environment quality scores and Walk Score®, a validated neighborhood walkability measure. Street segment quality scores ranged 10–47 (Mean = 29.4 ± 6.9) and overall neighborhood quality scores, 172–475 (Mean = 352.3 ± 63.6). Walk Scores® ranged 0–91 (Mean = 46.7 ± 26.3). Street segment combinations’ correlation coefficients ranged 0.75–1.0. Significant positive correlations were found between overall neighborhood quality scores, four of the five Checklist subsection scores, and Walk Scores® (r = 0.62, p < 0.001). This scoring method adequately captures neighborhood features in low-income, residential areas and may aid in delineating impact of specific built environment features on health behaviors and outcomes.

  18. Benchmarking monthly homogenization algorithms

    Science.gov (United States)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
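
    As an illustration of the first performance metric listed above, a small sketch of a centered root-mean-square error (both series have their means removed before differencing); the data are synthetic and the averaging scales used in the benchmark are not reproduced.

```python
import numpy as np

def centered_rmse(homogenized, truth):
    """Centered RMSE: RMS of the anomaly difference between the two series."""
    h = np.asarray(homogenized, dtype=float)
    t = np.asarray(truth, dtype=float)
    diff = (h - h.mean()) - (t - t.mean())
    return np.sqrt(np.mean(diff ** 2))

# Hypothetical monthly anomalies
truth = np.sin(np.linspace(0, 6, 120))
homog = truth + np.random.default_rng(0).normal(0, 0.2, size=truth.size)
print(centered_rmse(homog, truth))
```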

  19. Calcium scoring with dual-energy CT in men and women: an anthropomorphic phantom study

    Science.gov (United States)

    Li, Qin; Liu, Songtao; Myers, Kyle; Gavrielides, Marios A.; Zeng, Rongping; Sahiner, Berkman; Petrick, Nicholas

    2016-03-01

    This work aimed to quantify and compare the potential impact of gender differences on coronary artery calcium scoring with dual-energy CT. An anthropomorphic thorax phantom with four synthetic heart vessels (diameter 3-4.5 mm: female/male left main and left circumflex artery) was scanned with and without female breast plates. Ten repeat scans were acquired in both single- and dual-energy modes and reconstructed at six reconstruction settings: two slice thicknesses (3 mm, 0.6 mm) and three reconstruction algorithms (FBP, IR3, IR5). Agatston and calcium volume scores were estimated from the reconstructed data using a segmentation-based approach. Total calcium score (summation of four vessels), and male/female calcium scores (summation of male/female vessels scanned in phantom without/with breast plates) were calculated accordingly. Both Agatston and calcium volume scores were found comparable between single- and dual-energy scans (Pearson r = 0.99, p …) … women and men in calcium scoring, and for standardizing imaging protocols for improved gender-specific calcium scoring.

  20. The Bandim tuberculosis score

    DEFF Research Database (Denmark)

    Rudolf, Frauke; Joaquim, Luis Carlos; Vieira, Cesaltina

    2013-01-01

    Background: This study was carried out in Guinea-Bissau's capital Bissau among inpatients and outpatients attending for tuberculosis (TB) treatment within the study area of the Bandim Health Project, a Health and Demographic Surveillance Site. Our aim was to assess the variability between 2...... physicians in performing the Bandim tuberculosis score (TBscore), a clinical severity score for pulmonary TB (PTB), and to compare it to the Karnofsky performance score (KPS). Method: From December 2008 to July 2009 we assessed the TBscore and the KPS of 100 PTB patients at inclusion in the TB cohort and...

  1. Heart valve surgery: EuroSCORE vs. EuroSCORE II vs. Society of Thoracic Surgeons score

    Directory of Open Access Journals (Sweden)

    Muhammad Sharoz Rabbani

    2014-12-01

    Full Text Available Background This is a validation study comparing the European System for Cardiac Operative Risk Evaluation (EuroSCORE) II with the previous additive (AES) and logistic EuroSCORE (LES) and the Society of Thoracic Surgeons’ (STS) risk prediction algorithm, for patients undergoing valve replacement with or without bypass in Pakistan. Patients and Methods Clinical data of 576 patients undergoing valve replacement surgery between 2006 and 2013 were retrospectively collected, and individual expected risks of death were calculated by all four risk prediction algorithms. Performance of these risk algorithms was evaluated in terms of discrimination and calibration. Results There were 28 deaths (4.8%) among 576 patients, which was lower than the predicted mortality of 5.16%, 6.96% and 4.94% by AES, LES and EuroSCORE II but was higher than the 2.13% predicted by the STS scoring system. For single and double valve replacement procedures, EuroSCORE II was the best predictor of mortality, with the highest Hosmer-Lemeshow test (H-L) p value (0.346 to 0.689) and area under the receiver operating characteristic (ROC) curve (0.637 to 0.898). For valve plus concomitant coronary artery bypass grafting (CABG) patients, actual mortality was 1.88%. The STS calculator was the best predictor of mortality for this subgroup, with H-L p value (0.480 to 0.884) and ROC (0.657 to 0.775). Conclusions For the Pakistani population, EuroSCORE II is an accurate predictor of individual operative risk in patients undergoing isolated valve surgery, whereas STS performs better in the valve plus CABG group.

  2. Reverse-total shoulder arthroplasty cost-effectiveness: A quality-adjusted life years comparison with total hip arthroplasty.

    Science.gov (United States)

    Bachman, Daniel; Nyland, John; Krupp, Ryan

    2016-02-18

    To compare reverse-total shoulder arthroplasty (RSA) cost-effectiveness with total hip arthroplasty cost-effectiveness. This study used a stochastic model and decision-making algorithm to compare the cost-effectiveness of RSA and total hip arthroplasty. Fifteen patients underwent pre-operative, and 3, 6, and 12 mo post-operative clinical examinations and Short Form-36 Health Survey completion. Short form-36 Health Survey subscale scores were converted to EuroQual Group Five Dimension Health Outcome scores and compared with historical data from age-matched patients who had undergone total hip arthroplasty. Quality-adjusted life year (QALY) improvements based on life expectancies were calculated. The cost/QALY was $3900 for total hip arthroplasty and $11100 for RSA. After adjusting the model to only include shoulder-specific physical function subscale items, the RSA QALY improved to 2.8 years, and its cost/QALY decreased to $8100. Based on industry accepted standards, cost/QALY estimates supported both RSA and total hip arthroplasty cost-effectiveness. Although total hip arthroplasty remains the quality of life improvement "gold standard" among arthroplasty procedures, cost/QALY estimates identified in this study support the growing use of RSA to improve patient quality of life.
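
    The cost-effectiveness arithmetic underlying the figures above is simply cost divided by QALY gain; a toy sketch with hypothetical inputs (not the article's raw cost data).

```python
def cost_per_qaly(procedure_cost_usd, qaly_gain_years):
    """Cost-effectiveness ratio: dollars spent per quality-adjusted life year gained."""
    return procedure_cost_usd / qaly_gain_years

# Hypothetical inputs: a $22,700 procedure yielding 2.8 QALYs
print(round(cost_per_qaly(22700, 2.8)))  # ~8107 $/QALY, in the range reported for RSA
```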

  3. Volleyball Scoring Systems.

    Science.gov (United States)

    Calhoun, William; Dargahi-Noubary, G. R.; Shi, Yixun

    2002-01-01

    The widespread interest in sports in our culture provides an excellent opportunity to catch students' attention in mathematics and statistics classes. One mathematically interesting aspect of volleyball, which can be used to motivate students, is the scoring system. (MM)

  4. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^(4/3)

  5. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  6. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    Full Text Available The autodriver algorithm is an intelligent method to eliminate the need of steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on coinciding the actual vehicle center of rotation and road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed prior information for a given road, while the dynamic center of rotation is the output of dynamic equations of motion of the vehicle using steering angle and velocity measurements as inputs. We use kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increase of forward speed the road and tire characteristics, along with the motion dynamics of the vehicle cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.

  7. Analysing Music with Point-Set Compression Algorithms

    DEFF Research Database (Denmark)

    Meredith, David

    2016-01-01

    Several point-set pattern-discovery and compression algorithms designed for analysing music are reviewed and evaluated. Each algorithm takes as input a point-set representation of a score in which each note is represented as a point in pitch-time space. Each algorithm computes the maximal...... and sections in pieces of classical music. On the first task, the best-performing algorithms achieved success rates of around 84%. In the second task, the best algorithms achieved mean F1 scores of around 0.49, with scores for individual pieces rising as high as 0.71....

  8. Instant MuseScore

    CERN Document Server

    Shinn, Maxwell

    2013-01-01

    Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. Instant MuseScore is written in an easy-to-follow format, packed with illustrations that will help you get started with this music composition software. This book is for musicians who would like to learn how to notate music digitally with MuseScore. Readers should already have some knowledge about musical terminology; however, no prior experience with music notation software is necessary.

  9. A diagnostic scoring system for myxedema coma.

    Science.gov (United States)

    Popoveniuc, Geanina; Chandra, Tanu; Sud, Anchal; Sharma, Meeta; Blackman, Marc R; Burman, Kenneth D; Mete, Mihriye; Desale, Sameer; Wartofsky, Leonard

    2014-08-01

    To develop diagnostic criteria for myxedema coma (MC), a decompensated state of extreme hypothyroidism with a high mortality rate if untreated, in order to facilitate its early recognition and treatment. The frequencies of characteristics associated with MC were assessed retrospectively in patients from our institutions in order to derive a semiquantitative diagnostic point scale that was further applied on selected patients whose data were retrieved from the literature. Logistic regression analysis was used to test the predictive power of the score. Receiver operating characteristic (ROC) curve analysis was performed to test the discriminative power of the score. Of the 21 patients examined, 7 were reclassified as not having MC (non-MC), and they were used as controls. The scoring system included a composite of alterations of thermoregulatory, central nervous, cardiovascular, gastrointestinal, and metabolic systems, and presence or absence of a precipitating event. All 14 of our MC patients had a score of ≥60, whereas 6 of 7 non-MC patients had scores of 25 to 50. A total of 16 of 22 MC patients whose data were retrieved from the literature had a score ≥60, and 6 of 22 of these patients scored between 45 and 55. The odds ratio per each score unit increase as a continuum was 1.09 (95% confidence interval [CI], 1.01 to 1.16; P = .019); a score of 60 identified coma, with an odds ratio of 1.22. The area under the ROC curve was 0.88 (95% CI, 0.65 to 1.00), and the score of 60 had 100% sensitivity and 85.71% specificity. A score ≥60 in the proposed scoring system is potentially diagnostic for MC, whereas scores between 45 and 59 could classify patients at risk for MC.
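
    A hedged sketch of how the proposed cut-offs could be applied once the component points have been tallied from the published scale; only the ≥60 and 45-59 thresholds are taken from the record.

```python
def interpret_mc_score(total_score):
    """Map a myxedema coma (MC) total score to the interpretation proposed above."""
    if total_score >= 60:
        return "potentially diagnostic for myxedema coma"
    if 45 <= total_score <= 59:
        return "at risk for myxedema coma"
    return "myxedema coma unlikely"

print(interpret_mc_score(65))  # potentially diagnostic
print(interpret_mc_score(50))  # at risk
```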

  10. Total Thyroidectomy

    Directory of Open Access Journals (Sweden)

    Lopez Moris E

    2016-06-01

    Full Text Available Total thyroidectomy is a surgery that removes all the thyroid tissue from the patient. Suspicion of cancer in a thyroid nodule is the most frequent indication and is presumed when a previous fine needle puncture is positive or a goiter shows significant volume increase or symptoms. Less frequent indications are hyperthyroidism, when it is refractory to treatment with iodine-131 or such treatment is contraindicated, and cases of symptomatic thyroiditis. The thyroid gland has an important anatomic relation with the inferior laryngeal nerve and the parathyroid glands; for this reason it is imperative to perform extremely meticulous dissection to recognize each of these elements and ensure their preservation. It is also essential to maintain strict hemostasis, in order to avoid any postoperative bleeding that could lead to a suffocating neck hematoma, a feared complication that represents a surgical emergency and endangers the patient’s life. It is essential to follow a formal technique, without skipping steps, and to maintain the prudence and patience that should rule any surgical act.

  11. Comparison of five actigraphy scoring methods with bipolar disorder.

    Science.gov (United States)

    Boudebesse, Carole; Leboyer, Marion; Begley, Amy; Wood, Annette; Miewald, Jean; Hall, Martica; Frank, Ellen; Kupfer, David; Germain, Anne

    2013-01-01

    The goal of this study was to compare 5 actigraphy scoring methods in a sample of 18 remitted patients with bipolar disorder. Actigraphy records were processed using five different scoring methods relying on the sleep diary; the event-marker; the software-provided automatic algorithm; the automatic algorithm supplemented by the event-marker; visual inspection (VI) only. The algorithm and the VI methods differed from the other methods for many actigraphy parameters of interest. Particularly, the algorithm method yielded longer sleep duration, and the VI method yielded shorter sleep latency compared to the other methods. The present findings provide guidance for the selection of signal processing method based on sleep parameters of interest, time-cue sources and availability, and related scoring time costs for the study.

  12. The lod score method.

    Science.gov (United States)

    Rice, J P; Saccone, N L; Corbett, J

    2001-01-01

    The lod score method originated in a seminal article by Newton Morton in 1955. The method is broadly concerned with issues of power and the posterior probability of linkage, ensuring that a reported linkage has a high probability of being a true linkage. In addition, the method is sequential, so that pedigrees or lod curves may be combined from published reports to pool data for analysis. This approach has been remarkably successful for 50 years in identifying disease genes for Mendelian disorders. After discussing these issues, we consider the situation for complex disorders, where the maximum lod score (MLS) statistic shares some of the advantages of the traditional lod score approach but is limited by unknown power and the lack of sharing of the primary data needed to optimally combine analytic results. We may still learn from the lod score method as we explore new methods in molecular biology and genetic analysis to utilize the complete human DNA sequence and the cataloging of all human genes.
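
    For readers unfamiliar with the statistic itself, a minimal two-point LOD sketch; the simple binomial recombination model below is a textbook illustration, not the full pedigree likelihood used in practice.

```python
import math

def lod_score(recombinants, nonrecombinants, theta):
    """Textbook two-point LOD: log10 of the likelihood of the data at
    recombination fraction theta versus free recombination (theta = 0.5)."""
    n = recombinants + nonrecombinants
    loglik_theta = recombinants * math.log10(theta) + nonrecombinants * math.log10(1 - theta)
    loglik_null = n * math.log10(0.5)
    return loglik_theta - loglik_null

# e.g. 2 recombinants out of 20 informative meioses, evaluated at theta = 0.1
print(round(lod_score(2, 18, 0.1), 2))  # ~3.20, above the classical threshold of 3
```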

  13. The Bayesian Score Statistic

    NARCIS (Netherlands)

    Kleibergen, F.R.; Kleijn, R.; Paap, R.

    2000-01-01

    We propose a novel Bayesian test under a (noninformative) Jeffreys' prior specification. We check whether the fixed scalar value of the so-called Bayesian Score Statistic (BSS) under the null hypothesis is a plausible realization from its known and standardized distribution under the alternative. Unlike

  14. South African Scoring System

    African Journals Online (AJOL)

    2014-11-18

    Nov 18, 2014 ... for 80% (SASS score) and 75% (NOT) of the variation in the regression model. Consequently, SASS ... further investigation: spatial analyses of macroinvertebrate assemblages; and the use of structural and functional metrics. Keywords: .... conductivity levels was assessed using multiple linear regres- sion.

  15. Methods and statistics for combining motif match scores.

    Science.gov (United States)

    Bailey, T L; Gribskov, M

    1998-01-01

    Position-specific scoring matrices are useful for representing and searching for protein sequence motifs. A sequence family can often be described by a group of one or more motifs, and an effective search must combine the scores for matching a sequence to each of the motifs in the group. We describe three methods for combining match scores and estimating the statistical significance of the combined scores and evaluate the search quality (classification accuracy) and the accuracy of the estimate of statistical significance of each. The three methods are: 1) sum of scores, 2) sum of reduced variates, 3) product of score p-values. We show that method 3) is superior to the other two methods in both regards, and that combining motif scores indeed gives better search accuracy. The MAST sequence homology search algorithm utilizing the product of p-values scoring method is available for interactive use and downloading at URL http://www.sdsc.edu/MEME.
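
    A small sketch of method 3 (product of p-values); the closed-form significance of a product of independent uniform p-values shown below is the standard result and is offered as an illustration, not as MAST's exact implementation.

```python
import math

def combined_p_from_product(p_values):
    """Significance of the product of n independent p-values:
    P(prod U_i <= p) = p * sum_{k=0}^{n-1} (-ln p)^k / k!."""
    p = math.prod(p_values)
    n = len(p_values)
    log_term = -math.log(p)
    return p * sum(log_term ** k / math.factorial(k) for k in range(n))

# e.g. combining match p-values for three motifs of a family
print(combined_p_from_product([0.01, 0.20, 0.35]))
```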

  16. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly...... layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also...... contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows....

  17. Your Criminal Fico Score

    Science.gov (United States)

    2016-09-01

    ...Amendment jurisprudence. First, more information will be collected on individuals in police databases, and those individuals may not receive notice of...

  18. Prospective validation of a near real-time EHR-integrated automated SOFA score calculator.

    Science.gov (United States)

    Aakre, Christopher; Franco, Pablo Moreno; Ferreyra, Micaela; Kitson, Jaben; Li, Man; Herasevich, Vitaly

    2017-07-01

    We created an algorithm for automated Sequential Organ Failure Assessment (SOFA) score calculation within the Electronic Health Record (EHR) to facilitate detection of sepsis based on the Third International Consensus Definitions for Sepsis and Septic Shock (SEPSIS-3) clinical definition. We evaluated the accuracy of near real-time and daily automated SOFA score calculation compared with manual score calculation. Automated SOFA scoring computer programs were developed using available EHR data sources and integrated into a critical care focused patient care dashboard at Mayo Clinic in Rochester, Minnesota. We prospectively compared the accuracy of automated versus manual calculation for a sample of patients admitted to the medical intensive care unit at Mayo Clinic Hospitals in Rochester, Minnesota and Jacksonville, Florida. Agreement was calculated with Cohen's kappa statistic. Reason for discrepancy was tabulated during manual review. Random spot check comparisons were performed 134 times on 27 unique patients, and daily SOFA score comparisons were performed for 215 patients over a total of 1206 patient days. Agreement between automatically scored and manually scored SOFA components for both random spot checks (696 pairs, κ=0.89) and daily calculation (5972 pairs, κ=0.89) was high. The most common discrepancies were in the respiratory component (inaccurate fraction of inspired oxygen retrieval; 200/1206) and creatinine (normal creatinine in patients with no urine output on dialysis; 128/1094). 147 patients were at risk of developing sepsis after intensive care unit admission, 10 later developed sepsis confirmed by chart review. All were identified before onset of sepsis with the ΔSOFA≥2 point criterion and 46 patients were false-positives. Near real-time automated SOFA scoring was found to have strong agreement with manual score calculation and may be useful for the detection of sepsis utilizing the new SEPSIS-3 definition. Copyright © 2017 Elsevier B.V. All
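
    A minimal sketch of the SEPSIS-3 trigger referred to above (a rise of ≥2 SOFA points); the data structure and baseline convention are assumptions, and the component scoring rules are not reproduced here.

```python
def delta_sofa_alert(sofa_series, baseline=None, threshold=2):
    """Return the index of the first SOFA total that rose by >= threshold
    from the baseline (first value if no explicit baseline is given)."""
    if not sofa_series:
        return None
    base = sofa_series[0] if baseline is None else baseline
    for i, score in enumerate(sofa_series):
        if score - base >= threshold:
            return i
    return None

print(delta_sofa_alert([2, 2, 3, 5, 6]))  # alert at index 3 (score rose from 2 to 5)
```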

  19. Automatic Sleep Scoring in Normals and in Individuals with Neurodegenerative Disorders According to New International Sleep Scoring Criteria

    DEFF Research Database (Denmark)

    Jensen, Peter S.; Sørensen, Helge Bjarup Dissing; Leonthin, Helle

    2010-01-01

    The aim of this study was to develop a fully automatic sleep scoring algorithm on the basis of a reproduction of new international sleep scoring criteria from the American Academy of Sleep Medicine. A biomedical signal processing algorithm was developed, allowing for automatic sleep depth....... Based on an observed reliability of the manual scorer of 92.5% (Cohen's Kappa: 0.87) in the normal group and 85.3% (Cohen's Kappa: 0.73) in the abnormal group, this study concluded that although the developed algorithm was capable of scoring normal sleep with an accuracy around the manual interscorer...... reliability, it failed in accurately scoring abnormal sleep as encountered for the Parkinson disease/multiple system atrophy patients....

  20. Automatic sleep scoring in normals and in individuals with neurodegenerative disorders according to new international sleep scoring criteria

    DEFF Research Database (Denmark)

    Jensen, Peter S.; Sørensen, Helge Bjarup Dissing; Jennum, P. J.

    2010-01-01

    Medicine (AASM). Methods: A biomedical signal processing algorithm was developed, allowing for automatic sleep depth quantification of routine polysomnographic (PSG) recordings through feature extraction, supervised probabilistic Bayesian classification, and heuristic rule-based smoothing. The performance......Introduction: Reliable polysomnographic classification is the basis for evaluation of sleep disorders in neurological diseases. Aim: To develop a fully automatic sleep scoring algorithm on the basis of a reproduction of new international sleep scoring criteria from the American Academy of Sleep....... Conclusion: The developed algorithm was capable of scoring normal sleep with an accuracy around the manual inter-scorer reliability, it failed in accurately scoring abnormal sleep as encountered for the PD/MSA patients, which is due to the abnormal micro- and macrostructure pattern in these patients....

  1. Automatic sleep scoring in normals and in individuals with neurodegenerative disorders according to new international sleep scoring criteria

    DEFF Research Database (Denmark)

    Jensen, Peter S; Sorensen, Helge B D; Jennum, Poul

    2010-01-01

    The aim of this study was to develop a fully automatic sleep scoring algorithm on the basis of a reproduction of new international sleep scoring criteria from the American Academy of Sleep Medicine. A biomedical signal processing algorithm was developed, allowing for automatic sleep depth....... Based on an observed reliability of the manual scorer of 92.5% (Cohen's Kappa: 0.87) in the normal group and 85.3% (Cohen's Kappa: 0.73) in the abnormal group, this study concluded that although the developed algorithm was capable of scoring normal sleep with an accuracy around the manual interscorer...... reliability, it failed in accurately scoring abnormal sleep as encountered for the Parkinson disease/multiple system atrophy patients....

  2. Fast compact algorithms and software for spline smoothing

    CERN Document Server

    Weinert, Howard L

    2012-01-01

    Fast Compact Algorithms and Software for Spline Smoothing investigates algorithmic alternatives for computing cubic smoothing splines when the amount of smoothing is determined automatically by minimizing the generalized cross-validation score. These algorithms are based on Cholesky factorization, QR factorization, or the fast Fourier transform. All algorithms are implemented in MATLAB and are compared based on speed, memory use, and accuracy. An overall best algorithm is identified, which allows very large data sets to be processed quickly on a personal computer.

  3. Credit scoring methods

    Czech Academy of Sciences Publication Activity Database

    Vojtek, Martin; Kočenda, Evžen

    2006-01-01

    Roč. 56, 3-4 (2006), s. 152-167 ISSN 0015-1920 R&D Projects: GA ČR GA402/05/0931 Institutional research plan: CEZ:AV0Z70850503 Keywords : banking sector * credit scoring * discrimination analysis Subject RIV: AH - Economics Impact factor: 0.190, year: 2006 http://journal.fsv.cuni.cz/storage/1050_s_152_167.pdf

  4. Credit scoring for individuals

    Directory of Open Access Journals (Sweden)

    Maria DIMITRIU

    2010-12-01

    Full Text Available Lending money to different borrowers is profitable, but risky. The profits come from the interest rate and the fees earned on the loans. Banks do not want to make loans to borrowers who cannot repay them. Even if the banks do not intend to make bad loans, over time, some of them can become bad. For instance, as a result of the recent financial crisis, the capability of many borrowers to repay their loans was affected, with many of them going into default. That’s why it is important for the bank to monitor the loans. The purpose of this paper is to focus on the main issues of credit scoring. To that end, we present the scoring model of an important Romanian bank. Based on this credit scoring model, and taking into account the latest lending requirements of the National Bank of Romania, we developed an assessment tool in Excel for retail loans, which is presented in the case study.

  5. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  6. Algorithm 865

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy

    2007-01-01

    We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real...... variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n2 variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel...
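
    To illustrate the packed-storage idea (n(n+1)/2 values for one triangle), a plain column-packed Cholesky sketch; this is not the block hybrid format of the article, just the underlying storage scheme.

```python
import math

def packed_index(i, j, n):
    """Index of element (i, j), i >= j, in a column-packed lower triangle."""
    return i + j * n - j * (j + 1) // 2

def packed_cholesky(ap, n):
    """In-place Cholesky factor L (lower) of a packed SPD matrix; returns ap."""
    for j in range(n):
        s = ap[packed_index(j, j, n)] - sum(ap[packed_index(j, k, n)] ** 2 for k in range(j))
        ap[packed_index(j, j, n)] = math.sqrt(s)
        for i in range(j + 1, n):
            s = ap[packed_index(i, j, n)] - sum(
                ap[packed_index(i, k, n)] * ap[packed_index(j, k, n)] for k in range(j))
            ap[packed_index(i, j, n)] = s / ap[packed_index(j, j, n)]
    return ap

# Example: 2x2 SPD matrix [[4, 2], [2, 3]] packed as [4, 2, 3]
print(packed_cholesky([4.0, 2.0, 3.0], 2))  # [2.0, 1.0, sqrt(2)]
```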

  7. The Automated Assessment of Postural Stability: Balance Detection Algorithm.

    Science.gov (United States)

    Napoli, Alessandro; Glass, Stephen M; Tucker, Carole; Obeid, Iyad

    2017-12-01

    Impaired balance is a common indicator of mild traumatic brain injury, concussion and musculoskeletal injury. Given the clinical relevance of such injuries, especially in military settings, it is paramount to develop more accurate and reliable on-field evaluation tools. This work presents the design and implementation of the automated assessment of postural stability (AAPS) system, for on-field evaluations following concussion. The AAPS is a computer system, based on inexpensive off-the-shelf components and custom software, that aims to automatically and reliably evaluate balance deficits, by replicating a known on-field clinical test, namely, the Balance Error Scoring System (BESS). The AAPS main innovation is its balance error detection algorithm that has been designed to acquire data from a Microsoft Kinect ® sensor and convert them into clinically-relevant BESS scores, using the same detection criteria defined by the original BESS test. In order to assess the AAPS balance evaluation capability, a total of 15 healthy subjects (7 male, 8 female) were required to perform the BESS test, while simultaneously being tracked by a Kinect 2.0 sensor and a professional-grade motion capture system (Qualisys AB, Gothenburg, Sweden). High definition videos with BESS trials were scored off-line by three experienced observers for reference scores. AAPS performance was assessed by comparing the AAPS automated scores to those derived by three experienced observers. Our results show that the AAPS error detection algorithm presented here can accurately and precisely detect balance deficits with performance levels that are comparable to those of experienced medical personnel. Specifically, agreement levels between the AAPS algorithm and the human average BESS scores ranging between 87.9% (single-leg on foam) and 99.8% (double-leg on firm ground) were detected. Moreover, statistically significant differences in balance scores were not detected by an ANOVA test with alpha equal to 0

  8. College Math Assessment: SAT Scores vs. College Math Placement Scores

    Science.gov (United States)

    Foley-Peres, Kathleen; Poirier, Dawn

    2008-01-01

    Many colleges and university's use SAT math scores or math placement tests to place students in the appropriate math course. This study compares the use of math placement scores and SAT scores for 188 freshman students. The student's grades and faculty observations were analyzed to determine if the SAT scores and/or college math assessment scores…

  9. Estimating NHL Scoring Rates

    OpenAIRE

    Buttrey, Samuel E.; Washburn, Alan R.; Price, Wilson L.; Operations Research

    2011-01-01

    The article of record as published may be located at http://dx.doi.org/10.2202/1559-0410.1334 We propose a model to estimate the rates at which NHL teams score and yield goals. In the model, goals occur as if from a Poisson process whose rate depends on the two teams playing, the home-ice advantage, and the manpower (power-play, short-handed) situation. Data on all the games from the 2008-2009 season was downloaded and processed into a form suitable for the analysis. The model...

  10. Avascular Necrosis Is Associated With Increased Transfusions and Readmission Following Primary Total Hip Arthroplasty.

    Science.gov (United States)

    Lovecchio, Francis C; Manalo, John Paul; Demzik, Alysen; Sahota, Shawn; Beal, Matthew; Manning, David

    2017-05-01

    Avascular necrosis (AVN) may confer an increased risk of complications and readmission following total hip arthroplasty (THA). However, current risk-adjustment models do not account for AVN. A total of 1706 patients who underwent THA for AVN from 2011 to 2013 were selected from the American College of Surgeons' National Surgical Quality Improvement Program database and matched 1:1 to controls using a predetermined propensity score algorithm. Rates of 30-day medical and surgical complications, readmissions, and reoperations were compared between cohorts. Propensity-score logistic regression was used to determine independent associations between AVN and outcomes of interest. Patients with AVN had a higher rate of medical complications than those without AVN (20.3% vs 15.3%, respectively; P …). Avascular necrosis of the femoral head is an independent risk factor for transfusion up to 72 hours postoperatively and readmission up to 30 days following total hip replacement. [Orthopedics. 2017; 40(3):171-176.]. Copyright 2017, SLACK Incorporated.
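
    An illustrative sketch of 1:1 nearest-neighbour propensity-score matching; the study's predetermined matching algorithm is not specified in the abstract, so the greedy rule, the covariates, and the absence of a caliper below are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # hypothetical covariates
treated = (X[:, 0] + rng.normal(size=200)) > 0.5   # hypothetical exposure flag (e.g. AVN)

# Propensity score: probability of exposure given covariates
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

pairs = []
available = set(np.where(~treated)[0])
for t in np.where(treated)[0]:
    if not available:
        break
    c = min(available, key=lambda j: abs(ps[j] - ps[t]))  # closest unmatched control
    pairs.append((t, c))
    available.remove(c)

print(f"matched {len(pairs)} treated/control pairs")
```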

  11. NEUTRON ALGORITHM VERIFICATION TESTING

    International Nuclear Information System (INIS)

    COWGILL, M.; MOSBY, W.; ARGONNE NATIONAL LABORATORY-WEST

    2000-01-01

    Active well coincidence counter assays have been performed on uranium metal highly enriched in 235 U. The data obtained in the present program, together with highly enriched uranium (HEU) metal data obtained in other programs, have been analyzed using two approaches, the standard approach and an alternative approach developed at BNL. Analysis of the data with the standard approach revealed that the form of the relationship between the measured reals and the 235 U mass varied, being sometimes linear and sometimes a second-order polynomial. In contrast, application of the BNL algorithm, which takes into consideration the totals, consistently yielded linear relationships between the totals-corrected reals and the 235 U mass. The constants in these linear relationships varied with geometric configuration and level of enrichment. This indicates that, when the BNL algorithm is used, calibration curves can be established with fewer data points and with more certainty than if a standard algorithm is used. However, this potential advantage has only been established for assays of HEU metal. In addition, the method is sensitive to the stability of natural background in the measurement facility

  12. The International Bleeding Risk Score

    DEFF Research Database (Denmark)

    Laursen, Stig Borbjerg; Laine, L.; Dalton, H.

    2017-01-01

    The International Bleeding Risk Score: A New Risk Score that can Accurately Predict Mortality in Patients with Upper GI-Bleeding.

  13. Opportunistic splitting for scheduling using a score-based approach

    KAUST Repository

    Rashid, Faraan

    2012-06-01

    We consider the problem of scheduling a user in a multi-user wireless environment in a distributed manner. The opportunistic splitting algorithm is applied to find the best group of users without reporting the channel state information to the centralized scheduler. The users find the best among themselves while requiring just a ternary feedback from the common receiver at the end of each mini-slot. The original splitting algorithm is modified to handle users with asymmetric channel conditions. We use a score-based approach with the splitting algorithm to introduce time and throughput fairness while exploiting the multi-user diversity of the network. Analytical and simulation results are given to show that the modified score-based splitting algorithm works well as a fair scheduling scheme with good spectral efficiency and reduced feedback. © 2012 IEEE.

  14. Algorithmic chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting in the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  15. A scoring system for ascertainment of incident stroke; the Risk Index Score (RISc).

    Science.gov (United States)

    Kass-Hout, T A; Moyé, L A; Smith, M A; Morgenstern, L B

    2006-01-01

    The main objective of this study was to develop and validate a computer-based statistical algorithm that could be translated into a simple scoring system in order to ascertain incident stroke cases using hospital admission medical records data. The Risk Index Score (RISc) algorithm was developed using data collected prospectively by the Brain Attack Surveillance in Corpus Christi (BASIC) project, 2000. The validity of RISc was evaluated by estimating the concordance of scoring system stroke ascertainment to stroke ascertainment by physician and/or abstractor review of hospital admission records. RISc was developed on 1718 randomly selected patients (training set) and then statistically validated on an independent sample of 858 patients (validation set). A multivariable logistic model was used to develop RISc and subsequently evaluated by goodness-of-fit and receiver operating characteristic (ROC) analyses. The higher the value of RISc, the higher the patient's risk of potential stroke. The study showed RISc was well calibrated and discriminated those who had potential stroke from those that did not on initial screening. In this study we developed and validated a rapid, easy, efficient, and accurate method to ascertain incident stroke cases from routine hospital admission records for epidemiologic investigations. Validation of this scoring system was achieved statistically; however, clinical validation in a community hospital setting is warranted.
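
    A hedged sketch of the general idea of translating a fitted logistic model into a simple point score that can be tallied from admission records; the predictors and coefficients below are placeholders, not the RISc items or weights.

```python
coefficients = {                     # hypothetical log-odds per predictor
    "weakness_on_one_side": 1.6,
    "speech_disturbance": 1.1,
    "history_of_hypertension": 0.5,
}

def to_points(coefs, points_per_unit=2):
    """Scale and round log-odds coefficients to hand-tallyable integer points."""
    return {k: round(v * points_per_unit) for k, v in coefs.items()}

def risk_index_like_score(patient_flags, point_table):
    """Sum the points of all predictors present for this patient."""
    return sum(point_table[k] for k, present in patient_flags.items() if present)

points = to_points(coefficients)
patient = {"weakness_on_one_side": True, "speech_disturbance": False,
           "history_of_hypertension": True}
print(points, risk_index_like_score(patient, points))
```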

  16. Greedy algorithms with weights for construction of partial association rules

    KAUST Repository

    Moshkov, Mikhail; Piliszczu, Marcin; Zielosko, Beata Marta

    2009-01-01

    This paper is devoted to the study of approximate algorithms for minimization of the total weight of attributes occurring in partial association rules. We consider mainly greedy algorithms with weights for construction of rules. The paper contains bounds on precision of these algorithms and bounds on the minimal weight of partial association rules based on an information obtained during the greedy algorithm run.

  17. Greedy algorithms with weights for construction of partial association rules

    KAUST Repository

    Moshkov, Mikhail

    2009-09-10

    This paper is devoted to the study of approximate algorithms for minimization of the total weight of attributes occurring in partial association rules. We consider mainly greedy algorithms with weights for construction of rules. The paper contains bounds on precision of these algorithms and bounds on the minimal weight of partial association rules based on an information obtained during the greedy algorithm run.

  18. Do Test Scores Buy Happiness?

    Science.gov (United States)

    McCluskey, Neal

    2017-01-01

    Since at least the enactment of No Child Left Behind in 2002, standardized test scores have served as the primary measures of public school effectiveness. Yet, such scores fail to measure the ultimate goal of education: maximizing happiness. This exploratory analysis assesses nation level associations between test scores and happiness, controlling…

  19. Development of a Simple Clinical Risk Score for Early Prediction of Severe Dengue in Adult Patients.

    Directory of Open Access Journals (Sweden)

    Ing-Kit Lee

    , irrespective of the day of illness onset, suggesting that our simple risk score can be easily implemented in resource-limited countries for early prediction of dengue patients at risk of SD, provided that they have rapid dengue confirmatory tests. Patients with other acute febrile illnesses or bacterial infections usually have an SD risk score of >1. Thus, these scoring algorithms cannot totally replace the good clinical judgement of the physician, and most importantly, early differentiation of dengue from other febrile illnesses is critical for appropriate monitoring and management.

  20. Predicting occupational personality test scores.

    Science.gov (United States)

    Furnham, A; Drakeley, R

    2000-01-01

    The relationship between students' actual test scores and their self-estimated scores on the Hogan Personality Inventory (HPI; R. Hogan & J. Hogan, 1992), an omnibus personality questionnaire, was examined. Despite being given descriptive statistics and explanations of each of the dimensions measured, the students tended to overestimate their scores; yet all correlations between actual and estimated scores were positive and significant. Correlations between self-estimates and actual test scores were highest for sociability, ambition, and adjustment (r = .62 to r = .67). The results are discussed in terms of employers' use and abuse of personality assessment for job recruitment.

  1. Algorithms for Academic Search and Recommendation Systems

    DEFF Research Database (Denmark)

    Amolochitis, Emmanouil

    2014-01-01

    In this work we present novel algorithms for academic search, recommendation and association rules mining. In the first part of the work we introduce a novel hierarchical heuristic scheme for re-ranking academic publications. The scheme is based on the hierarchical combination of a custom...... implementation of the term frequency heuristic, a time-depreciated citation score and a graph-theoretic computed score that relates the paper’s index terms with each other. In the second part we describe the design of a hybrid recommender ensemble (user, item and content based). The newly introduced algorithms...... are part of a developed Movie Recommendation system, the first such system to be commercially deployed in Greece by a major Triple Play services provider. In the third part of the work we present the design of a quantitative association rule mining algorithm. The introduced mining algorithm processes......
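
    A sketch of the kind of combined ranking score described for the first part; the exponential time depreciation, the weights, and the field names are illustrative assumptions rather than the paper's exact scheme.

```python
from datetime import date

def term_frequency_score(text, query_terms):
    """Fraction of the document's words that match the query terms."""
    words = text.lower().split()
    return sum(words.count(t.lower()) for t in query_terms) / max(len(words), 1)

def depreciated_citation_score(citations, year, half_life=5.0):
    """Citations lose half their weight every `half_life` years (assumed decay)."""
    age = max(date.today().year - year, 0)
    return citations * 0.5 ** (age / half_life)

def combined_score(paper, query_terms, w_tf=0.7, w_cite=0.3):
    return (w_tf * term_frequency_score(paper["abstract"], query_terms)
            + w_cite * depreciated_citation_score(paper["citations"], paper["year"]))

paper = {"abstract": "A heuristic scheme for re-ranking academic publications",
         "citations": 40, "year": 2014}
print(combined_score(paper, ["ranking", "publications"]))
```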

  2. Lower Bounds to the Reliabilities of Factor Score Estimators.

    Science.gov (United States)

    Hessen, David J

    2016-10-06

    Under the general common factor model, the reliabilities of factor score estimators might be of more interest than the reliability of the total score (the unweighted sum of item scores). In this paper, lower bounds to the reliabilities of Thurstone's factor score estimators, Bartlett's factor score estimators, and McDonald's factor score estimators are derived and conditions are given under which these lower bounds are equal. The relative performance of the derived lower bounds is studied using classic example data sets. The results show that estimates of the lower bounds to the reliabilities of Thurstone's factor score estimators are greater than or equal to the estimates of the lower bounds to the reliabilities of Bartlett's and McDonald's factor score estimators.

  3. A Chemical Risk Ranking and Scoring Method for the Selection of Harmful Substances to be Specially Controlled in Occupational Environments

    Science.gov (United States)

    Shin, Saemi; Moon, Hyung-Il; Lee, Kwon Seob; Hong, Mun Ki; Byeon, Sang-Hoon

    2014-01-01

    This study aimed to devise a method for prioritizing hazardous chemicals for further regulatory action. To accomplish this objective, we chose appropriate indicators and algorithms. Nine indicators from the Globally Harmonized System of Classification and Labeling of Chemicals were used to identify categories to which the authors assigned numerical scores. Exposure indicators included handling volume, distribution, and exposure level. To test the method devised by this study, sixty-two harmful substances controlled by the Occupational Safety and Health Act in Korea, including acrylamide, acrylonitrile, and styrene, were ranked using this proposed method. The correlation coefficients between total score and each indicator ranged from 0.160 to 0.641, and those between total score and hazard indicators ranged from 0.603 to 0.641. The latter were higher than the correlation coefficients between total score and exposure indicators, which ranged from 0.160 to 0.421. Correlations between individual indicators were low (−0.240 to 0.376), except for those between handling volume and distribution (0.613), suggesting that each indicator was not strongly correlated. The low correlations between the indicators mean that the indicators are independent and were well chosen for prioritizing harmful chemicals. The method proposed by this study can improve the cost efficiency of chemical management as utilized in occupational regulatory systems. PMID:25419874

  4. A Chemical Risk Ranking and Scoring Method for the Selection of Harmful Substances to be Specially Controlled in Occupational Environments

    Directory of Open Access Journals (Sweden)

    Saemi Shin

    2014-11-01

    Full Text Available This study aimed to devise a method for prioritizing hazardous chemicals for further regulatory action. To accomplish this objective, we chose appropriate indicators and algorithms. Nine indicators from the Globally Harmonized System of Classification and Labeling of Chemicals were used to identify categories to which the authors assigned numerical scores. Exposure indicators included handling volume, distribution, and exposure level. To test the method devised by this study, sixty-two harmful substances controlled by the Occupational Safety and Health Act in Korea, including acrylamide, acrylonitrile, and styrene, were ranked using this proposed method. The correlation coefficients between total score and each indicator ranged from 0.160 to 0.641, and those between total score and hazard indicators ranged from 0.603 to 0.641. The latter were higher than the correlation coefficients between total score and exposure indicators, which ranged from 0.160 to 0.421. Correlations between individual indicators were low (−0.240 to 0.376), except for those between handling volume and distribution (0.613), suggesting that each indicator was not strongly correlated. The low correlations between the indicators mean that the indicators are independent and were well chosen for prioritizing harmful chemicals. The method proposed by this study can improve the cost efficiency of chemical management as utilized in occupational regulatory systems.

  5. TUW at the First Total Recall Track

    Science.gov (United States)

    2015-11-20

    Abstract. For the first participation in the TREC Total Recall track, we set out to try some basic... significantly and consistently outperformed it. 1. Introduction. As the organizers point out, the focus of the Total Recall Track is to evaluate methods to... The only change we made was at a higher level. The Sofia ML library provides 5 more ML algorithms. The following

  6. Indications for MARS-MRI in Patients Treated With Articular Surface Replacement XL Total Hip Arthroplasty.

    Science.gov (United States)

    Connelly, James W; Galea, Vincent P; Laaksonen, Inari; Matuszak, Sean J; Madanat, Rami; Muratoglu, Orhun; Malchau, Henrik

    2018-04-19

    The purpose of this study was to identify which patient and clinical factors are predictive of adverse local tissue reaction (ALTR) and to use these factors to create a highly sensitive algorithm for indicating metal artifact reduction sequence magnetic resonance imaging (MARS-MRI) in Articular Surface Replacement (ASR) XL total hip arthroplasty patients. Our secondary aim was to compare our algorithm to existing national guidelines on when to take MARS-MRI in metal-on-metal total hip arthroplasty patients. The study consisted of 137 patients treated with unilateral ASR XL implants from a prospective, multicenter study. Patients underwent MARS-MRI regardless of clinical presentation at a mean of 6.2 (range, 3.3-10.4) years from surgery. Univariate and multivariate analyses were conducted to determine which variables were predictive of ALTR. Predictors were used to create an algorithm to indicate MARS-MRI. Finally, we compared our algorithm's ability to detect ALTR to existing guidelines. We found a visual analog scale pain score ≥2 (odds ratio [OR] = 2.53; P = .023), high blood cobalt (OR = 1.05; P = .023), and male gender (OR = 2.37; P = .034) to be significant predictors of ALTR presence in our cohort. The resultant algorithm achieved 86.4% sensitivity and 60.2% specificity in detecting ALTR within our cohort. Our algorithm had the highest area under the curve and was the only guideline that was significantly predictive of ALTR (P = .014). Our algorithm including patient-reported pain and sex-specific cutoffs for blood cobalt levels could predict ALTR and indicate MARS-MRI in our cohort of ASR XL metal-on-metal patients with high sensitivity. Level II, diagnostic study. Copyright © 2018 Elsevier Inc. All rights reserved.
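
    A hedged reconstruction of the style of rule the abstract describes, indicating MARS-MRI when patient-reported pain or a sex-specific blood cobalt threshold is exceeded; the numeric cut-offs below are placeholders, not the study's values.

```python
def indicate_mars_mri(vas_pain, blood_cobalt_ppb, male,
                      pain_cutoff=2.0, cobalt_cutoff_male=4.0, cobalt_cutoff_female=2.5):
    """Return True if MARS-MRI would be indicated under this illustrative rule."""
    cobalt_cutoff = cobalt_cutoff_male if male else cobalt_cutoff_female
    return vas_pain >= pain_cutoff or blood_cobalt_ppb >= cobalt_cutoff

print(indicate_mars_mri(vas_pain=3, blood_cobalt_ppb=1.2, male=False))  # True (pain)
print(indicate_mars_mri(vas_pain=0, blood_cobalt_ppb=5.0, male=True))   # True (cobalt)
print(indicate_mars_mri(vas_pain=1, blood_cobalt_ppb=1.0, male=True))   # False
```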

  7. Pseudo-deterministic Algorithms

    OpenAIRE

    Goldwasser , Shafi

    2012-01-01

    International audience; In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they can not be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...

  8. Prediction of antigenic epitopes on protein surfaces by consensus scoring

    Directory of Open Access Journals (Sweden)

    Zhang Chi

    2009-09-01

    Full Text Available Abstract Background Prediction of antigenic epitopes on protein surfaces is important for vaccine design. Most existing epitope prediction methods focus on protein sequences to predict continuous epitopes linear in sequence. Only a few structure-based epitope prediction algorithms are available and they have not yet shown satisfying performance. Results We present a new antigen Epitope Prediction method, which uses ConsEnsus Scoring (EPCES) from six different scoring functions - residue epitope propensity, conservation score, side-chain energy score, contact number, surface planarity score, and secondary structure composition. Applied to unbound antigen structures from an independent test set, EPCES was able to predict antigenic epitopes with 47.8% sensitivity, 69.5% specificity and an AUC value of 0.632. The performance of the method is statistically similar to other published methods. The AUC value of EPCES is slightly higher compared to the best results of existing algorithms by about 0.034. Conclusion Our work shows consensus scoring of multiple features has a better performance than any single term. The successful prediction is also due to the new score of residue epitope propensity based on atomic solvent accessibility.
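
    A generic consensus-scoring sketch in the spirit of the method above: several per-residue scores are normalized and averaged. The z-score normalization and equal weighting are our assumptions, and only three of the six terms are named in the example.

```python
import numpy as np

def consensus_score(score_table):
    """score_table: dict of {term_name: per-residue scores}. Each term is
    z-score-normalized, then the terms are averaged with equal weights."""
    stacked = []
    for name, values in score_table.items():
        v = np.asarray(values, dtype=float)
        stacked.append((v - v.mean()) / (v.std() or 1.0))
    return np.mean(stacked, axis=0)

scores = {
    "epitope_propensity": [0.2, 0.8, 0.5],
    "conservation":       [0.9, 0.1, 0.4],
    "contact_number":     [12, 4, 8],
}
print(consensus_score(scores))  # one consensus value per residue
```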

  9. An Efficient Algorithm for Unconstrained Optimization

    Directory of Open Access Journals (Sweden)

    Sergio Gerardo de-los-Cobos-Silva

    2015-01-01

    Full Text Available This paper presents an original and efficient PSO algorithm, which is divided into three phases: (1) stabilization, (2) breadth-first search, and (3) depth-first search. The proposed algorithm, called PSO-3P, was tested with 47 benchmark continuous unconstrained optimization problems, on a total of 82 instances. The numerical results show that the proposed algorithm is able to reach the global optimum. This work mainly focuses on unconstrained optimization problems from 2 to 1,000 variables.

  10. Answer Extraction Based on Merging Score Strategy of Hot Terms

    Institute of Scientific and Technical Information of China (English)

    LE Juan; ZHANG Chunxia; NIU Zhendong

    2016-01-01

    Answer extraction (AE) is one of the key technologies in developing the open domain Question&answer (Q&A) system. Its task is to yield the highest score to the expected answer based on an effective answer score strategy. We introduce an answer extraction method by Merging score strategy (MSS) based on hot terms. The hot terms are defined according to their lexical and syntactic features to highlight the role of the question terms. To cope with the syntactic diversities of the corpus, we propose four improved candidate answer score algorithms. Each of them is based on the lexical function of hot terms and their syntactic relationships with the candidate answers. Two independent corpus score algorithms are proposed to tap the role of the corpus in ranking the candidate answers. Six algorithms are adopted in MSS to tap the complementary action among the corpus, the candidate answers and the questions. Experiments demonstrate the effectiveness of the proposed strategy.

  11. Increased discordance between HeartScore and coronary artery calcification score after introduction of the new ESC prevention guidelines

    DEFF Research Database (Denmark)

    Diederichsen, Axel C P; Mahabadi, Amir-Abbas; Gerke, Oke

    2015-01-01

    -contrast Cardiac-CT scan was performed to detect coronary artery calcification (CAC). RESULTS: Agreement of HeartScore risk groups with CAC groups was poor, but higher when applying the algorithm for the low-risk compared to the high-risk country model (agreement rate: 77% versus 63%, and weighted Kappa: 0...

  12. On algorithm for building of optimal α-decision trees

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

    2010-01-01

    The paper describes an algorithm that constructs approximate decision trees (α-decision trees), which are optimal relatively to one of the following complexity measures: depth, total path length or number of nodes. The algorithm uses dynamic

  13. Advancements in the Development of an Operational Lightning Jump Algorithm for GOES-R GLM

    Science.gov (United States)

    Shultz, Chris; Petersen, Walter; Carey, Lawrence

    2011-01-01

    Rapid increases in total lightning have been shown to precede the manifestation of severe weather at the surface. These rapid increases have been termed lightning jumps, and are the current focus of algorithm development for the GOES-R Geostationary Lightning Mapper (GLM). Recent lightning jump algorithm work has focused on evaluation of algorithms in three additional regions of the country, as well as markedly increasing the number of thunderstorms in order to evaluate each algorithm's performance on a larger population of storms. Lightning characteristics of just over 600 thunderstorms have been studied over the past four years. The 2σ lightning jump algorithm continues to show the most promise for an operational lightning jump algorithm, with a probability of detection of 82%, a false alarm rate of 35%, a critical success index of 57%, and a Heidke Skill Score of 0.73 on the entire population of thunderstorms. Average lead time for the 2σ algorithm on all severe weather is 21.15 minutes, with a standard deviation of +/- 14.68 minutes. Looking at tornadoes alone, the average lead time is 18.71 minutes, with a standard deviation of +/- 14.88 minutes. Moreover, removing the 2σ lightning jumps that occur after a jump has been detected, and before severe weather is detected at the ground, the 2σ lightning jump algorithm's false alarm rate drops from 35% to 21%. Cold season, low-topped, and tropical environments cause problems for the 2σ lightning jump algorithm, due to their relative dearth in lightning as compared to a supercellular or summertime airmass thunderstorm environment.
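
    As a rough illustration of how a sigma-based jump test of this kind can be implemented, the sketch below flags a "jump" whenever the most recent increase in total flash rate exceeds a multiple of the standard deviation of recent rate changes. The window length, activity threshold, and example storm are placeholders, not the operational configuration used in this work.

```python
import numpy as np

def sigma_lightning_jumps(flash_rates, n_sigma=2.0, history=5, min_rate=10.0):
    """Flag lightning jumps in a time series of total flash rates.

    flash_rates: flashes per minute, one value per analysis period.
    A period is flagged when its rate increase exceeds n_sigma times the
    standard deviation of the preceding `history` rate changes and the
    storm is electrically active (rate above `min_rate`).
    """
    rates = np.asarray(flash_rates, dtype=float)
    dfrdt = np.diff(rates)                       # rate of change of flash rate
    jumps = []
    for t in range(history, len(dfrdt)):
        sigma = dfrdt[t - history:t].std()
        if sigma > 0 and rates[t + 1] >= min_rate and dfrdt[t] > n_sigma * sigma:
            jumps.append(t + 1)                  # index into flash_rates
    return jumps

# toy storm: quiet, then a rapid increase in total lightning
example = [2, 3, 3, 4, 5, 6, 7, 9, 11, 25, 40, 38]
print(sigma_lightning_jumps(example))
```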

  14. Application of the FOUR Score in Intracerebral Hemorrhage Risk Analysis.

    Science.gov (United States)

    Braksick, Sherri A; Hemphill, J Claude; Mandrekar, Jay; Wijdicks, Eelco F M; Fugate, Jennifer E

    2018-06-01

    The Full Outline of Unresponsiveness (FOUR) Score is a validated scale describing the essentials of a coma examination, including motor response, eye opening and eye movements, brainstem reflexes, and respiratory pattern. We incorporated the FOUR Score into the existing ICH Score and evaluated its accuracy of risk assessment in spontaneous intracerebral hemorrhage (ICH). Consecutive patients admitted to our institution from 2009 to 2012 with spontaneous ICH were reviewed. The ICH Score was calculated using patient age, hemorrhage location, hemorrhage volume, evidence of intraventricular extension, and Glasgow Coma Scale (GCS). The FOUR Score was then incorporated into the ICH Score as a substitute for the GCS (ICH Score-FS). The ability of the 2 scores to predict mortality at 1 month was then compared. In total, 274 patients met the inclusion criteria. The median age was 73 years (interquartile range 60-82) and 138 (50.4%) were male. Overall mortality at 1 month was 28.8% (n = 79). The area under the receiver operating characteristic curve was .91 for the ICH Score and .89 for the ICH Score-FS. For ICH Scores of 1, 2, 3, 4, and 5, 1-month mortality was 4.2%, 29.9%, 62.5%, 95.0%, and 100%. In the ICH Score-FS model, mortality was 10.7%, 26.5%, 64.5%, 88.9%, and 100% for scores of 1, 2, 3, 4, and 5, respectively. The ICH Score and the ICH Score-FS predict 1-month mortality with comparable accuracy. As the FOUR Score provides additional clinical information regarding patient status, it may be a reasonable substitute for the GCS in the ICH Score. Copyright © 2018 National Stroke Association. Published by Elsevier Inc. All rights reserved.
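
    For orientation, a sketch of how such a composite score can be computed is given below. The point assignments follow the commonly cited components of the original ICH Score (GCS, haematoma volume, intraventricular extension, infratentorial origin, age); they are stated here as an assumption and should be verified against the primary publication. The FOUR-based variant would substitute a FOUR Score cut-point mapping for the GCS term.

```python
def ich_score(gcs, volume_ml, ivh, infratentorial, age):
    """Compute an ICH Score (0-6) from its five components.

    Point values follow the widely quoted ICH Score; confirm against the
    primary source before any clinical use.
    """
    score = 0
    if gcs <= 4:
        score += 2
    elif gcs <= 12:
        score += 1                      # GCS 5-12
    if volume_ml >= 30:
        score += 1
    if ivh:
        score += 1
    if infratentorial:
        score += 1
    if age >= 80:
        score += 1
    return score

# example: GCS 7, 45 mL supratentorial haemorrhage with IVH, age 84
print(ich_score(gcs=7, volume_ml=45, ivh=True, infratentorial=False, age=84))  # 4
```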

  15. [Propensity score matching in SPSS].

    Science.gov (United States)

    Huang, Fuqiang; DU, Chunlin; Sun, Menghui; Ning, Bing; Luo, Ying; An, Shengli

    2015-11-01

    To realize propensity score matching in the PS Matching module of SPSS and interpret the analysis results. The R software, the plug-in that links it with the corresponding version of SPSS, and the propensity score matching package were installed. A PS matching module was added to the SPSS interface, and its use was demonstrated with test data. Score estimation and nearest neighbor matching were achieved with the PS matching module, and the results of qualitative and quantitative statistical description and evaluation of the matching were presented graphically. Propensity score matching can be accomplished conveniently using SPSS software.
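
    A language-agnostic way to see what such a module does is sketched below: estimate propensity scores with logistic regression, then perform greedy 1:1 nearest-neighbour matching on the score within a caliper. This is a generic illustration in Python, not the R code invoked by the SPSS plug-in, and the caliper value is an arbitrary choice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_match(X, treated, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on the estimated propensity score."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t_idx = np.where(treated == 1)[0]
    c_idx = list(np.where(treated == 0)[0])
    pairs = []
    for i in t_idx:
        if not c_idx:
            break
        dists = np.abs(ps[c_idx] - ps[i])
        j = int(np.argmin(dists))
        if dists[j] <= caliper:                 # only accept close matches
            pairs.append((i, c_idx.pop(j)))
    return ps, pairs

# toy data: two covariates, treatment assignment depends weakly on them
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
treated = (rng.random(200) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
ps, pairs = propensity_match(X, treated)
print(len(pairs), "matched pairs")
```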

  16. [Prognostic scores for pulmonary embolism].

    Science.gov (United States)

    Junod, Alain

    2016-03-23

    Nine prognostic scores for pulmonary embolism (PE), based on retrospective and prospective studies published between 2000 and 2014, have been analyzed and compared. Most of them aim at identifying low-risk PE cases in order to validate their ambulatory care. Important differences exist between these scores in the outcomes considered: global mortality, PE-specific mortality, other complications, and the sizes of the low-risk groups. The most popular score appears to be the PESI and its simplified version. Few good-quality studies have tested the applicability of these scores to outpatient care of PE, although this approach is already becoming widespread in medical practice.

  17. Parallel Implementation of the Terrain Masking Algorithm

    Science.gov (United States)

    1994-03-01

    contains behavior rules which can define a computation or an algorithm. It can communicate with other process nodes, it can contain local data, and it can...terrain masking calculation is being performed. It is this algorithm that consumes about seventy percent of the total terrain masking calculation time

  18. Calculation of Five Thermodynamic Molecular Descriptors by Means of a General Computer Algorithm Based on the Group-Additivity Method: Standard Enthalpies of Vaporization, Sublimation and Solvation, and Entropy of Fusion of Ordinary Organic Molecules and Total Phase-Change Entropy of Liquid Crystals.

    Science.gov (United States)

    Naef, Rudolf; Acree, William E

    2017-06-25

    The calculation of the standard enthalpies of vaporization, sublimation and solvation of organic molecules is presented using a common computer algorithm on the basis of a group-additivity method. The same algorithm is also shown to enable the calculation of their entropy of fusion as well as the total phase-change entropy of liquid crystals. The present method is based on the complete breakdown of the molecules into their constituting atoms and their immediate neighbourhood; the respective calculations of the contribution of the atomic groups by means of the Gauss-Seidel fitting method is based on experimental data collected from literature. The feasibility of the calculations for each of the mentioned descriptors was verified by means of a 10-fold cross-validation procedure proving the good to high quality of the predicted values for the three mentioned enthalpies and for the entropy of fusion, whereas the predictive quality for the total phase-change entropy of liquid crystals was poor. The goodness of fit (Q²) and the standard deviation (σ) of the cross-validation calculations for the five descriptors was as follows: 0.9641 and 4.56 kJ/mol (N = 3386 test molecules) for the enthalpy of vaporization, 0.8657 and 11.39 kJ/mol (N = 1791) for the enthalpy of sublimation, 0.9546 and 4.34 kJ/mol (N = 373) for the enthalpy of solvation, 0.8727 and 17.93 J/mol/K (N = 2637) for the entropy of fusion and 0.5804 and 32.79 J/mol/K (N = 2643) for the total phase-change entropy of liquid crystals. The large discrepancy between the results of the two closely related entropies is discussed in detail. Molecules for which both the standard enthalpies of vaporization and sublimation were calculable, enabled the estimation of their standard enthalpy of fusion by simple subtraction of the former from the latter enthalpy. For 990 of them the experimental enthalpy-of-fusion values are also known, allowing their comparison with predictions, yielding a correlation coefficient R

  19. Calculation of Five Thermodynamic Molecular Descriptors by Means of a General Computer Algorithm Based on the Group-Additivity Method: Standard Enthalpies of Vaporization, Sublimation and Solvation, and Entropy of Fusion of Ordinary Organic Molecules and Total Phase-Change Entropy of Liquid Crystals

    Directory of Open Access Journals (Sweden)

    Rudolf Naef

    2017-06-01

    Full Text Available The calculation of the standard enthalpies of vaporization, sublimation and solvation of organic molecules is presented using a common computer algorithm on the basis of a group-additivity method. The same algorithm is also shown to enable the calculation of their entropy of fusion as well as the total phase-change entropy of liquid crystals. The present method is based on the complete breakdown of the molecules into their constituting atoms and their immediate neighbourhood; the respective calculations of the contribution of the atomic groups by means of the Gauss-Seidel fitting method is based on experimental data collected from literature. The feasibility of the calculations for each of the mentioned descriptors was verified by means of a 10-fold cross-validation procedure proving the good to high quality of the predicted values for the three mentioned enthalpies and for the entropy of fusion, whereas the predictive quality for the total phase-change entropy of liquid crystals was poor. The goodness of fit (Q²) and the standard deviation (σ) of the cross-validation calculations for the five descriptors was as follows: 0.9641 and 4.56 kJ/mol (N = 3386 test molecules) for the enthalpy of vaporization, 0.8657 and 11.39 kJ/mol (N = 1791) for the enthalpy of sublimation, 0.9546 and 4.34 kJ/mol (N = 373) for the enthalpy of solvation, 0.8727 and 17.93 J/mol/K (N = 2637) for the entropy of fusion and 0.5804 and 32.79 J/mol/K (N = 2643) for the total phase-change entropy of liquid crystals. The large discrepancy between the results of the two closely related entropies is discussed in detail. Molecules for which both the standard enthalpies of vaporization and sublimation were calculable, enabled the estimation of their standard enthalpy of fusion by simple subtraction of the former from the latter enthalpy. For 990 of them the experimental enthalpy-of-fusion values are also known, allowing their comparison with predictions, yielding a correlation
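
    The group-additivity idea behind the two records above is that a molecular property is modelled as a sum of contributions from atomic groups, with the contributions fitted to experimental data. The sketch below uses an ordinary least-squares fit where the papers use a Gauss-Seidel scheme, and the group definitions and property values are invented placeholders, not data from the study.

```python
import numpy as np

# rows: molecules, columns: counts of each atomic group in the molecule
# (hypothetical groups, e.g. CH3, CH2, OH), plus a constant term
group_counts = np.array([
    [2, 0, 0, 1],
    [2, 1, 0, 1],
    [1, 1, 1, 1],
    [2, 2, 0, 1],
    [1, 2, 1, 1],
], dtype=float)

# hypothetical experimental enthalpies of vaporization (kJ/mol)
h_vap = np.array([14.7, 19.0, 38.6, 24.4, 43.5])

# fit one additive contribution per group (least squares stands in for the
# Gauss-Seidel fitting used in the papers)
contributions, *_ = np.linalg.lstsq(group_counts, h_vap, rcond=None)

def predict(counts):
    """Predicted property = sum over groups of (group count * contribution)."""
    return float(np.dot(counts, contributions))

print(predict([2, 3, 0, 1]))   # prediction for a new, hypothetical molecule
```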

  20. Empirical scoring functions for advanced protein-ligand docking with PLANTS.

    Science.gov (United States)

    Korb, Oliver; Stützle, Thomas; Exner, Thomas E

    2009-01-01

    In this paper we present two empirical scoring functions, PLANTS(CHEMPLP) and PLANTS(PLP), designed for our docking algorithm PLANTS (Protein-Ligand ANT System), which is based on ant colony optimization (ACO). They are related, regarding their functional form, to parts of already published scoring functions and force fields. The parametrization procedure described here was able to identify several parameter settings showing an excellent performance for the task of pose prediction on two test sets comprising 298 complexes in total. Up to 87% of the complexes of the Astex diverse set and 77% of the CCDC/Astex clean list nc (noncovalently bound complexes of the clean list) could be reproduced with root-mean-square deviations of less than 2 Å with respect to the experimentally determined structures. A comparison with the state-of-the-art docking tool GOLD clearly shows that this is, especially for the druglike Astex diverse set, an improvement in pose prediction performance. Additionally, optimized parameter settings for the search algorithm were identified, which can be used to balance pose prediction reliability and search speed.

  1. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  2. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.

    2014-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  3. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    ... of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself.

  4. The BR eigenvalue algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  5. Total variation-based neutron computed tomography

    Science.gov (United States)

    Barnard, Richard C.; Bilheux, Hassina; Toops, Todd; Nafziger, Eric; Finney, Charles; Splitter, Derek; Archibald, Rick

    2018-05-01

    We perform the neutron computed tomography reconstruction problem via an inverse problem formulation with a total variation penalty. In the case of highly under-resolved angular measurements, the total variation penalty suppresses high-frequency artifacts which appear in filtered back projections. In order to efficiently compute solutions for this problem, we implement a variation of the split Bregman algorithm; due to the error-forgetting nature of the algorithm, the computational cost of updating can be significantly reduced via very inexact approximate linear solvers. We present the effectiveness of the algorithm in the significantly low-angular sampling case using synthetic test problems as well as data obtained from a high flux neutron source. The algorithm removes artifacts and can even roughly capture small features when an extremely low number of angles are used.
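
    In equation form, the reconstruction described above amounts to minimizing a data-fidelity term plus a total-variation penalty; the weighting parameter and the exact norm choices below are generic, not necessarily the formulation used by the authors.

```latex
\min_{x \ge 0} \; \tfrac{1}{2}\,\| A x - b \|_2^2 \; + \; \lambda\, \mathrm{TV}(x),
\qquad
\mathrm{TV}(x) = \sum_i \| (\nabla x)_i \|_2 ,
```

    where A is the (sparse-angle) projection operator, b the measured sinogram, and λ balances fidelity against smoothness; split Bregman iterations solve this by alternating a linear solve (which, as noted above, may be approximated very inexactly) with a shrinkage step.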

  6. D-score: a search engine independent MD-score.

    Science.gov (United States)

    Vaudel, Marc; Breiter, Daniela; Beck, Florian; Rahnenführer, Jörg; Martens, Lennart; Zahedi, René P

    2013-03-01

    While peptides carrying PTMs are routinely identified in gel-free MS, the localization of the PTMs onto the peptide sequences remains challenging. Search engine scores of secondary peptide matches have been used in different approaches in order to infer the quality of site inference, by penalizing the localization whenever the search engine similarly scored two candidate peptides with different site assignments. In the present work, we show how the estimation of posterior error probabilities for peptide candidates allows the estimation of a PTM score called the D-score, for multiple search engine studies. We demonstrate the applicability of this score to three popular search engines: Mascot, OMSSA, and X!Tandem, and evaluate its performance using an already published high resolution data set of synthetic phosphopeptides. For those peptides with phosphorylation site inference uncertainty, the number of spectrum matches with correctly localized phosphorylation increased by up to 25.7% when compared to using Mascot alone, although the actual increase depended on the fragmentation method used. Since this method relies only on search engine scores, it can be readily applied to the scoring of the localization of virtually any modification at no additional experimental or in silico cost. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Trends in Classroom Observation Scores

    Science.gov (United States)

    Casabianca, Jodi M.; Lockwood, J. R.; McCaffrey, Daniel F.

    2015-01-01

    Observations and ratings of classroom teaching and interactions collected over time are susceptible to trends in both the quality of instruction and rater behavior. These trends have potential implications for inferences about teaching and for study design. We use scores on the Classroom Assessment Scoring System-Secondary (CLASS-S) protocol from…

  8. Quadratic prediction of factor scores

    NARCIS (Netherlands)

    Wansbeek, T

    1999-01-01

    Factor scores are naturally predicted by means of their conditional expectation given the indicators y. Under normality this expectation is linear in y but in general it is an unknown function of y. It is discussed that under nonnormality factor scores can be more precisely predicted by a quadratic

  9. The Machine Scoring of Writing

    Science.gov (United States)

    McCurry, Doug

    2010-01-01

    This article provides an introduction to the kind of computer software that is used to score student writing in some high stakes testing programs, and that is being promoted as a teaching and learning tool to schools. It sketches the state of play with machines for the scoring of writing, and describes how these machines work and what they do.…

  10. Modelling sequentially scored item responses

    NARCIS (Netherlands)

    Akkermans, W.

    2000-01-01

    The sequential model can be used to describe the variable resulting from a sequential scoring process. In this paper two more item response models are investigated with respect to their suitability for sequential scoring: the partial credit model and the graded response model. The investigation is

  11. Cognitive radio resource allocation based on coupled chaotic genetic algorithm

    International Nuclear Information System (INIS)

    Zu Yun-Xiao; Zhou Jie; Zeng Chang-Chang

    2010-01-01

    A coupled chaotic genetic algorithm for cognitive radio resource allocation which is based on genetic algorithm and coupled Logistic map is proposed. A fitness function for cognitive radio resource allocation is provided. Simulations are conducted for cognitive radio resource allocation by using the coupled chaotic genetic algorithm, simple genetic algorithm and dynamic allocation algorithm respectively. The simulation results show that, compared with simple genetic and dynamic allocation algorithm, coupled chaotic genetic algorithm reduces the total transmission power and bit error rate in cognitive radio system, and has faster convergence speed
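
    As background for the coupled-Logistic-map idea, a single logistic map already produces the kind of chaotic sequence typically used to seed or perturb a genetic-algorithm population. The coupling scheme and the fitness function of the paper are not reproduced here; r = 4 is simply the standard fully chaotic setting, and the mapping into a population is a generic illustration.

```python
def logistic_sequence(x0=0.31, r=4.0, n=20):
    """Chaotic sequence from the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    seq = [x0]
    for _ in range(n - 1):
        seq.append(r * seq[-1] * (1.0 - seq[-1]))
    return seq

def chaotic_population(pop_size, genes, lo=0.0, hi=1.0):
    """Initialize a GA population by mapping logistic-map values into [lo, hi]."""
    flat = logistic_sequence(n=pop_size * genes)
    return [[lo + (hi - lo) * flat[i * genes + j] for j in range(genes)]
            for i in range(pop_size)]

print(chaotic_population(pop_size=3, genes=4))
```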

  12. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  13. Dietary Screener Questionnaire in the NHIS CCS 2010: Data Processing and Scoring Procedures

    Science.gov (United States)

    Our NCI research team followed several steps to formulate the Dietary Screener Questionnaire (DSQ) scoring algorithms. These steps are described for researchers who may be interested in the methodologic process our team used.

  14. Dietary Screener Questionnaire in the NHIS CCS 2015: Data Processing and Scoring Procedures

    Science.gov (United States)

    Our NCI research team followed several steps to formulate the Dietary Screener Questionnaire (DSQ) scoring algorithms. These steps are described for researchers who may be interested in the methodologic process our team used.

  15. Pediatric chest HRCT using the iDose4 Hybrid Iterative Reconstruction Algorithm: Which iDose level to choose?

    International Nuclear Information System (INIS)

    Smarda, M; Alexopoulou, E; Mazioti, A; Kordolaimi, S; Ploussi, A; Efstathopoulos, E; Priftis, K

    2015-01-01

    Purpose of the study is to determine the appropriate iterative reconstruction (IR) algorithm level that combines image quality and diagnostic confidence, for pediatric patients undergoing high-resolution computed tomography (HRCT). During the last 2 years, a total number of 20 children up to 10 years old with a clinical presentation of chronic bronchitis underwent HRCT in our department's 64-detector row CT scanner using the iDose IR algorithm, with almost similar image settings (80kVp, 40-50 mAs). CT images were reconstructed with all iDose levels (level 1 to 7) as well as with filtered-back projection (FBP) algorithm. Subjective image quality was evaluated by 2 experienced radiologists in terms of image noise, sharpness, contrast and diagnostic acceptability using a 5-point scale (1=excellent image, 5=non-acceptable image). Artifacts existance was also pointed out. All mean scores from both radiologists corresponded to satisfactory image quality (score ≤3), even with the FBP algorithm use. Almost excellent (score <2) overall image quality was achieved with iDose levels 5 to 7, but oversmoothing artifacts appearing with iDose levels 6 and 7 affected the diagnostic confidence. In conclusion, the use of iDose level 5 enables almost excellent image quality without considerable artifacts affecting the diagnosis. Further evaluation is needed in order to draw more precise conclusions. (paper)

  16. Extension of the lod score: the mod score.

    Science.gov (United States)

    Clerget-Darpoux, F

    2001-01-01

    In 1955 Morton proposed the lod score method both for testing linkage between loci and for estimating the recombination fraction between them. If a disease is controlled by a gene at one of these loci, the lod score computation requires the prior specification of an underlying model that assigns the probabilities of genotypes from the observed phenotypes. To address the case of linkage studies for diseases with unknown mode of inheritance, we suggested (Clerget-Darpoux et al., 1986) extending the lod score function to a so-called mod score function. In this function, the variables are both the recombination fraction and the disease model parameters. Maximizing the mod score function over all these parameters amounts to maximizing the probability of marker data conditional on the disease status. Under the absence of linkage, the mod score conforms to a chi-square distribution, with extra degrees of freedom in comparison to the lod score function (MacLean et al., 1993). The mod score is asymptotically maximum for the true disease model (Clerget-Darpoux and Bonaïti-Pellié, 1992; Hodge and Elston, 1994). Consequently, the power to detect linkage through mod score will be highest when the space of models where the maximization is performed includes the true model. On the other hand, one must avoid overparametrization of the model space. For example, when the approach is applied to affected sibpairs, only two constrained disease model parameters should be used (Knapp et al., 1994) for the mod score maximization. It is also important to emphasize the existence of a strong correlation between the disease gene location and the disease model. Consequently, there is poor resolution of the location of the susceptibility locus when the disease model at this locus is unknown. Of course, this is true regardless of the statistics used. The mod score may also be applied in a candidate gene strategy to model the potential effect of this gene in the disease. Since, however, it
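
    Schematically, and as an assumption about notation rather than a quotation from the paper, the relationship between the two statistics can be written as follows, where θ is the recombination fraction and φ the vector of disease-model parameters (penetrances and allele frequency):

```latex
\mathrm{LOD}(\theta) = \log_{10}\frac{L(\theta)}{L(\theta = 1/2)},
\qquad
\mathrm{MOD} = \max_{\theta,\,\varphi}\;\log_{10}\frac{L(\theta, \varphi)}{L(\theta = 1/2, \varphi)} .
```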

  17. Risk Factors for the Failure of Spinal Burst Fractures Treated Conservatively According to the Thoracolumbar Injury Classification and Severity Score (TLICS: A Retrospective Cohort Trial.

    Directory of Open Access Journals (Sweden)

    Jieliang Shen

    Full Text Available The management of thoracolumbar (TL) burst fractures is still controversial. The thoracolumbar injury classification and severity score (TLICS) algorithm is now widely used to guide clinical decision making; however, in clinical practice, we have come to realize that TLICS also has its limitations for treating patients with total scores less than 4, for whom conservative treatment may not be optimal in all cases. The aim of this study is to identify several risk factors for the failure of conservative treatment of TL burst fractures according to the TLICS algorithm. From June 2008 to December 2013, a cohort of 129 patients with T10-L2 TL burst fractures with a TLICS score ≤3 treated non-operatively were identified and included in this retrospective study. Age, sex, pain intensity, interpedicular distance (IPD), canal compromise, loss of vertebral body height and kyphotic angle (KA) were selected as potential risk factors and compared between the non-operative success group and the non-operative failure group. One hundred and four patients successfully completed non-operative treatment; the other 25 patients were converted to surgical treatment because of persistent local back pain or progressive neurological deficits during follow-up. Our results showed that age, visual analogue scale (VAS) score, IPD and KA were significantly different between the two groups. Furthermore, regression analysis indicated that VAS score and IPD could be considered significant predictors of the failure of conservative treatment. The recommendation of non-operative treatment for a TLICS score ≤3 has limitations in some patients, and VAS score and IPD could be considered risk factors for the failure of conservative treatment. Thus, conservative treatment should be decided with caution in patients with greater VAS scores or IPD. If non-operative management is decided, close follow-up is necessary.

  18. Scoring best-worst data in unbalanced many-item designs, with applications to crowdsourcing semantic judgments.

    Science.gov (United States)

    Hollis, Geoff

    2018-04-01

    Best-worst scaling is a judgment format in which participants are presented with a set of items and have to choose the superior and inferior items in the set. Best-worst scaling generates a large quantity of information per judgment because each judgment allows for inferences about the rank value of all unjudged items. This property of best-worst scaling makes it a promising judgment format for research in psychology and natural language processing concerned with estimating the semantic properties of tens of thousands of words. A variety of different scoring algorithms have been devised in the previous literature on best-worst scaling. However, due to problems of computational efficiency, these scoring algorithms cannot be applied efficiently to cases in which thousands of items need to be scored. New algorithms are presented here for converting responses from best-worst scaling into item scores for thousands of items (many-item scoring problems). These scoring algorithms are validated through simulation and empirical experiments, and considerations related to noise, the underlying distribution of true values, and trial design are identified that can affect the relative quality of the derived item scores. The newly introduced scoring algorithms consistently outperformed scoring algorithms used in the previous literature on scoring many-item best-worst data.
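
    The simplest counting-based approach discussed in this literature can be sketched in a few lines: each item's value is estimated from how often it is chosen best versus worst, relative to how often it appears. This is only one of several possible scoring algorithms and is not necessarily the scheme the paper recommends; the item names in the demo are invented.

```python
from collections import defaultdict

def best_worst_scores(trials):
    """Score items from best-worst judgments.

    trials: iterable of (items_shown, best_item, worst_item) tuples.
    Returns {item: (times_best - times_worst) / times_shown}.
    """
    best = defaultdict(int)
    worst = defaultdict(int)
    shown = defaultdict(int)
    for items, b, w in trials:
        for it in items:
            shown[it] += 1
        best[b] += 1
        worst[w] += 1
    return {it: (best[it] - worst[it]) / shown[it] for it in shown}

trials = [
    (("calm", "happy", "angry", "bored"), "happy", "angry"),
    (("calm", "tense", "angry", "happy"), "happy", "tense"),
    (("bored", "tense", "calm", "angry"), "calm", "angry"),
]
print(best_worst_scores(trials))
```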

  19. Ripasa score: a new diagnostic score for diagnosis of acute appendicitis

    International Nuclear Information System (INIS)

    Butt, M.Q.

    2014-01-01

    Objective: To determine the usefulness of RIPASA score for the diagnosis of acute appendicitis using histopathology as a gold standard. Study Design: Cross-sectional study. Place and Duration of Study: Department of General Surgery, Combined Military Hospital, Kohat, from September 2011 to March 2012. Methodology: A total of 267 patients were included in this study. RIPASA score was assessed. The diagnosis of appendicitis was made clinically aided by routine sonography of abdomen. After appendicectomies, resected appendices were sent for histopathological examination. The 15 parameters and the scores generated were age (less than 40 years = 1 point; greater than 40 years = 0.5 point), gender (male = 1 point; female = 0.5 point), Right Iliac Fossa (RIF) pain (0.5 point), migration of pain to RIF (0.5 point), nausea and vomiting (1 point), anorexia (1 point), duration of symptoms (less than 48 hours = 1 point; more than 48 hours = 0.5 point), RIF tenderness (1 point), guarding (2 points), rebound tenderness (1 point), Rovsing's sign (2 points), fever (1 point), raised white cell count (1 point), negative urinalysis (1 point) and foreign national registration identity card (1 point). The optimal cut-off threshold score from the ROC was 7.5. Sensitivity analysis was done. Results: Out of 267 patients, 156 (58.4%) were male while remaining 111 patients (41.6%) were female with mean age of 23.5 +- 9.1 years. Sensitivity of RIPASA score was 96.7%, specificity 93.0%, diagnostic accuracy was 95.1%, positive predictive value was 94.8% and negative predictive value was 95.54%. Conclusion: RIPASA score at a cut-off total score of 7.5 was a useful tool to diagnose appendicitis, in equivocal cases of pain. (author)
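
    Because the abstract enumerates the parameters and their point values, a direct transcription into code is possible. The sketch below simply sums the listed points and applies the 7.5 cut-off; any parameter not listed in the abstract is not modelled, and the example set of findings is invented.

```python
RIPASA_POINTS = {
    "age_under_40": 1.0, "age_40_or_over": 0.5,
    "male": 1.0, "female": 0.5,
    "rif_pain": 0.5, "pain_migration_to_rif": 0.5,
    "nausea_vomiting": 1.0, "anorexia": 1.0,
    "symptoms_under_48h": 1.0, "symptoms_over_48h": 0.5,
    "rif_tenderness": 1.0, "guarding": 2.0, "rebound_tenderness": 1.0,
    "rovsing_sign": 2.0, "fever": 1.0, "raised_wcc": 1.0,
    "negative_urinalysis": 1.0, "foreign_nric": 1.0,
}

def ripasa_score(findings, cutoff=7.5):
    """Sum RIPASA points for the findings present and compare to the cut-off."""
    total = sum(RIPASA_POINTS[f] for f in findings)
    return total, total > cutoff

findings = ["age_under_40", "male", "rif_pain", "pain_migration_to_rif",
            "anorexia", "symptoms_under_48h", "rif_tenderness",
            "rebound_tenderness", "fever", "raised_wcc"]
total, suggests_appendicitis = ripasa_score(findings)
print(total, suggests_appendicitis)   # 9.0 True
```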

  20. A new algorithm for hip fracture surgery

    DEFF Research Database (Denmark)

    Palm, Henrik; Krasheninnikoff, Michael; Holck, Kim

    2012-01-01

    Background and purpose Treatment of hip fracture patients is controversial. We implemented a new operative and supervision algorithm (the Hvidovre algorithm) for surgical treatment of all hip fractures, primarily based on own previously published results. Methods 2,000 consecutive patients over 50 years of age who were admitted and operated on because of a hip fracture were prospectively included. 1,000 of these patients were included after implementation of the algorithm. Demographic parameters, hospital treatment, and reoperations within the first postoperative year were assessed from patient records. Hospitalization caused by reoperations was reduced from 24% of total hospitalization before the algorithm was introduced to 18% after it was introduced. Interpretation It is possible to implement an algorithm for treatment of all hip fracture patients in a large teaching hospital. In our case, the Hvidovre algorithm both raised

  1. Description and validation of a scoring system for tomosynthesis in pulmonary cystic fibrosis

    Energy Technology Data Exchange (ETDEWEB)

    Vult von Steyern, Kristina; Bjoerkman-Burtscher, Isabella M.; Bozovic, Gracijela; Wiklund, Marie; Geijer, Mats [Skaane University Hospital, Lund University, Centre for Medical Imaging and Physiology, Lund (Sweden); Hoeglund, Peter [Skaane University Hospital, Competence Centre for Clinical Research, Lund (Sweden)

    2012-12-15

    To design and validate a scoring system for tomosynthesis (digital tomography) in pulmonary cystic fibrosis. A scoring system dedicated to tomosynthesis in pulmonary cystic fibrosis was designed. Three radiologists independently scored 88 pairs of radiographs and tomosynthesis examinations of the chest in 60 patients with cystic fibrosis and 7 oncology patients. Radiographs were scored according to the Brasfield scoring system and tomosynthesis examinations were scored using the new scoring system. Observer agreements for the tomosynthesis score were almost perfect for the total score with square-weighted kappa >0.90, and generally substantial to almost perfect for subscores. Correlation between the tomosynthesis score and the Brasfield score was good for the three observers (Kendall's rank correlation tau 0.68, 0.77 and 0.78). Tomosynthesis was generally scored higher as a percentage of the maximum score. Observer agreements for the total score for Brasfield score were almost perfect (square-weighted kappa 0.80, 0.81 and 0.85). The tomosynthesis scoring system seems robust and correlates well with the Brasfield score. Compared with radiography, tomosynthesis is more sensitive to cystic fibrosis changes, especially bronchiectasis and mucus plugging, and the new tomosynthesis scoring system offers the possibility of more detailed and accurate scoring of disease severity. (orig.)

  2. Description and validation of a scoring system for tomosynthesis in pulmonary cystic fibrosis.

    Science.gov (United States)

    Vult von Steyern, Kristina; Björkman-Burtscher, Isabella M; Höglund, Peter; Bozovic, Gracijela; Wiklund, Marie; Geijer, Mats

    2012-12-01

    To design and validate a scoring system for tomosynthesis (digital tomography) in pulmonary cystic fibrosis. A scoring system dedicated to tomosynthesis in pulmonary cystic fibrosis was designed. Three radiologists independently scored 88 pairs of radiographs and tomosynthesis examinations of the chest in 60 patients with cystic fibrosis and 7 oncology patients. Radiographs were scored according to the Brasfield scoring system and tomosynthesis examinations were scored using the new scoring system. Observer agreements for the tomosynthesis score were almost perfect for the total score with square-weighted kappa >0.90, and generally substantial to almost perfect for subscores. Correlation between the tomosynthesis score and the Brasfield score was good for the three observers (Kendall's rank correlation tau 0.68, 0.77 and 0.78). Tomosynthesis was generally scored higher as a percentage of the maximum score. Observer agreements for the total score for Brasfield score were almost perfect (square-weighted kappa 0.80, 0.81 and 0.85). The tomosynthesis scoring system seems robust and correlates well with the Brasfield score. Compared with radiography, tomosynthesis is more sensitive to cystic fibrosis changes, especially bronchiectasis and mucus plugging, and the new tomosynthesis scoring system offers the possibility of more detailed and accurate scoring of disease severity. Tomosynthesis is more sensitive than conventional radiography for pulmonary cystic fibrosis changes. The radiation dose from chest tomosynthesis is low compared with computed tomography. Tomosynthesis may become useful in the regular follow-up of patients with cystic fibrosis.

  3. Description and validation of a scoring system for tomosynthesis in pulmonary cystic fibrosis

    International Nuclear Information System (INIS)

    Vult von Steyern, Kristina; Bjoerkman-Burtscher, Isabella M.; Bozovic, Gracijela; Wiklund, Marie; Geijer, Mats; Hoeglund, Peter

    2012-01-01

    To design and validate a scoring system for tomosynthesis (digital tomography) in pulmonary cystic fibrosis. A scoring system dedicated to tomosynthesis in pulmonary cystic fibrosis was designed. Three radiologists independently scored 88 pairs of radiographs and tomosynthesis examinations of the chest in 60 patients with cystic fibrosis and 7 oncology patients. Radiographs were scored according to the Brasfield scoring system and tomosynthesis examinations were scored using the new scoring system. Observer agreements for the tomosynthesis score were almost perfect for the total score with square-weighted kappa >0.90, and generally substantial to almost perfect for subscores. Correlation between the tomosynthesis score and the Brasfield score was good for the three observers (Kendall's rank correlation tau 0.68, 0.77 and 0.78). Tomosynthesis was generally scored higher as a percentage of the maximum score. Observer agreements for the total score for Brasfield score were almost perfect (square-weighted kappa 0.80, 0.81 and 0.85). The tomosynthesis scoring system seems robust and correlates well with the Brasfield score. Compared with radiography, tomosynthesis is more sensitive to cystic fibrosis changes, especially bronchiectasis and mucus plugging, and the new tomosynthesis scoring system offers the possibility of more detailed and accurate scoring of disease severity. (orig.)

  4. Quantum Computation and Algorithms

    International Nuclear Information System (INIS)

    Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.

    1999-01-01

    It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution
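
    For the simplest case of a single marked item in an unsorted database of N elements, the amplitude recursion alluded to above takes the standard closed form below, where a_k is the amplitude of the marked element and b_k the common amplitude of each unmarked element after k Grover iterations; this is textbook background, not a quotation from the talk.

```latex
a_{k+1} = \frac{N-2}{N}\,a_k + \frac{2(N-1)}{N}\,b_k,
\qquad
b_{k+1} = \frac{N-2}{N}\,b_k - \frac{2}{N}\,a_k,
\qquad
a_0 = b_0 = \frac{1}{\sqrt{N}} .
```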

  5. Application of independent component analysis for speech-music separation using an efficient score function estimation

    Science.gov (United States)

    Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza

    2012-12-01

    In this paper speech-music separation using Blind Source Separation is discussed. The separating algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. In order to do that, the score function must be estimated from samples of the observation signals (combinations of speech and music). The accuracy and the speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on a Gaussian-mixture-based kernel density estimation method. The experimental results of the presented algorithm on speech-music separation, compared to a separating algorithm based on the Minimum Mean Square Error estimator, indicate that it achieves better performance and less processing time
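
    The core of such a separation scheme is the natural-gradient update of the unmixing matrix, which depends on the score function φ of the source densities. The sketch below uses a fixed tanh nonlinearity as a stand-in for the Gaussian-mixture kernel estimate proposed in the paper, so it illustrates the update rule rather than the paper's estimator; the toy sources and learning rate are arbitrary.

```python
import numpy as np

def natural_gradient_ica(X, lr=0.01, iters=500, seed=0):
    """Separate mixed signals with natural-gradient ICA.

    X: (n_sources, n_samples) matrix of mixed, roughly zero-mean observations.
    Uses the update W <- W + lr * (I - phi(Y) Y^T / n) W with phi = tanh,
    a common surrogate score function for super-Gaussian sources.
    """
    n, m = X.shape
    rng = np.random.default_rng(seed)
    W = np.eye(n) + 0.01 * rng.normal(size=(n, n))
    for _ in range(iters):
        Y = W @ X
        phi = np.tanh(Y)
        W += lr * (np.eye(n) - (phi @ Y.T) / m) @ W
    return W, W @ X

# toy demo: two independent sources passed through a random mixing matrix
rng = np.random.default_rng(1)
s = np.vstack([np.sign(rng.normal(size=5000)), rng.laplace(size=5000)])
A = rng.normal(size=(2, 2))
W, recovered = natural_gradient_ica(A @ s)
```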

  6. From Rasch scores to regression

    DEFF Research Database (Denmark)

    Christensen, Karl Bang

    2006-01-01

    Rasch models provide a framework for measurement and modelling of latent variables. Having measured a latent variable in a population, a comparison of groups will often be of interest. For this purpose the use of observed raw scores will often be inadequate because these lack interval scale properties. This paper compares two approaches to group comparison: linear regression models using estimated person locations as outcome variables and latent regression models based on the distribution of the score.

  7. Shower reconstruction in TUNKA-HiSCORE

    Energy Technology Data Exchange (ETDEWEB)

    Porelli, Andrea; Wischnewski, Ralf [DESY-Zeuthen, Platanenallee 6, 15738 Zeuthen (Germany)

    2015-07-01

    The Tunka-HiSCORE detector is a non-imaging wide-angle EAS Cherenkov array designed as an alternative technology for gamma-ray physics above 10 TeV and to study the spectrum and composition of cosmic rays above 100 TeV. An engineering array with nine stations (HiS-9) was deployed in October 2013 on the site of the Tunka experiment in Russia. In November 2014, 20 more HiSCORE stations were installed, covering a total array area of 0.24 square km. We describe the detector setup, the role of precision time measurement, and give results from the innovative WhiteRabbit time synchronization technology. Results of air shower reconstruction are presented and compared with MC simulations, for both the HiS-9 and the HiS-29 detector arrays.

  8. Nursing Activities Score and Acute Kidney Injury

    Directory of Open Access Journals (Sweden)

    Filipe Utuari de Andrade Coelho

    Full Text Available ABSTRACT Objective: to evaluate the nursing workload in intensive care patients with acute kidney injury (AKI). Method: A quantitative study, conducted in an intensive care unit, from April to August of 2015. The Nursing Activities Score (NAS) and Kidney Disease Improving Global Outcomes (KDIGO) were used to measure nursing workload and to classify the stage of AKI, respectively. Results: A total of 190 patients were included. Patients who developed AKI (44.2%) had higher NAS when compared to those without AKI (43.7% vs 40.7%, p < 0.001). Patients with stage 1, 2 and 3 AKI showed higher NAS than those without AKI. A relationship was identified between stages 2 and 3 and those without AKI (p = 0.002 and p < 0.001). Conclusion: The NAS was associated with the presence of AKI; the score increased with the progression of the stages, and it was associated with AKI stages 2 and 3.

  9. Evaluation of modified Alvarado scoring system and RIPASA scoring system as diagnostic tools of acute appendicitis.

    Science.gov (United States)

    Shuaib, Abdullah; Shuaib, Ali; Fakhra, Zainab; Marafi, Bader; Alsharaf, Khalid; Behbehani, Abdullah

    2017-01-01

    Acute appendicitis is the most common surgical condition presented in emergency departments worldwide. Clinical scoring systems, such as the Alvarado and modified Alvarado scoring systems, were developed with the goal of reducing the negative appendectomy rate to 5%-10%. The Raja Isteri Pengiran Anak Saleha Appendicitis (RIPASA) scoring system was established in 2008 specifically for Asian populations. The aim of this study was to compare the modified Alvarado with the RIPASA scoring system in the Kuwait population. This study included 180 patients who underwent appendectomies and were documented as having "acute appendicitis" or "abdominal pain" in the operating theatre logbook (unit B) from November 2014 to March 2016. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), diagnostic accuracy, predicted negative appendectomy and receiver operating characteristic (ROC) curve of the modified Alvarado and RIPASA scoring systems were derived using SPSS statistical software. A total of 136 patients were included in this study according to our criteria. The cut-off threshold point of the modified Alvarado score was set at 7.0, which yielded a sensitivity of 82.8% and a specificity of 56%. The PPV was 89.3% and the NPV was 42.4%. The cut-off threshold point of the RIPASA score was set at 7.5, which yielded a 94.5% sensitivity and an 88% specificity. The PPV was 97.2% and the NPV was 78.5%. The predicted negative appendectomy rates were 10.7% and 2.2% for the modified Alvarado and RIPASA scoring systems, respectively. The negative appendectomy rate decreased significantly, from 18.4% to 10.7% for the modified Alvarado, and to 2.2% for the RIPASA scoring system, which was a significant difference. The RIPASA scoring system, developed for Asian populations, consists of 14 clinical parameters that can be obtained from a good patient history, clinical examination and laboratory investigations. The RIPASA scoring system is more accurate and specific than the modified Alvarado

  10. Re-Scoring the Game’s Score

    DEFF Research Database (Denmark)

    Gasselseder, Hans-Peter

    2014-01-01

    This study explores immersive presence as well as emotional valence and arousal in the context of dynamic and non-dynamic music scores in the 3rd person action-adventure video game genre while also considering relevant personality traits of the player. 60 subjects answered self-report questionnaires ... -temporal alignment in the resulting emotional congruency of nondiegetic music. Whereas imaginary aspects of immersive presence are systemically affected by the presentation of dynamic music, sensory spatial aspects show higher sensitivity towards the arousal potential of the music score. It is argued

  11. Fermion cluster algorithms

    International Nuclear Information System (INIS)

    Chandrasekharan, Shailesh

    2000-01-01

    Cluster algorithms have been recently used to eliminate sign problems that plague Monte-Carlo methods in a variety of systems. In particular such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions we discuss the ideas underlying the algorithm

  12. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.

  13. A verified LLL algorithm

    NARCIS (Netherlands)

    Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

    2018-01-01

    The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem,

  14. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  15. The clinical performance of an office-based risk scoring system for fatal cardiovascular diseases in North-East of Iran.

    Directory of Open Access Journals (Sweden)

    Sadaf G Sepanlou

    Full Text Available Cardiovascular diseases (CVD) are becoming major causes of death in developing countries. Risk scoring systems for CVD are needed to prioritize allocation of limited resources. Most of these risk score algorithms have been based on a long array of risk factors including blood markers of lipids. However, risk scoring systems that solely use office-based data, not including laboratory markers, may be advantageous. In the current analysis, we validated the office-based Framingham risk scoring system in Iran. The study used data from the Golestan Cohort in the North-East of Iran. The following risk factors were used in the development of the risk scoring method: sex, age, body mass index, systolic blood pressure, hypertension treatment, current smoking, and diabetes. Cardiovascular risk functions for prediction of 10-year risk of fatal CVDs were developed. A total of 46,674 participants free of CVD at baseline were included. The predictive value of estimated risks was examined. The resulting Area Under the ROC Curve (AUC) was 0.774 (95% CI: 0.762-0.787) in all participants, 0.772 (95% CI: 0.753-0.791) in women, and 0.763 (95% CI: 0.747-0.779) in men. The AUC was higher in urban areas (0.790, 95% CI: 0.766-0.815). The predicted and observed risks of fatal CVD were similar in women. However, in men, predicted probabilities were higher than observed. The AUC in the current study is comparable to results of previous studies, while the lipid profile was replaced by body mass index to develop an office-based scoring system. This scoring algorithm is capable of discriminating individuals at high risk versus low risk of fatal CVD.

  16. VISUALIZATION OF PAGERANK ALGORITHM

    OpenAIRE

    Perhaj, Ervin

    2013-01-01

    The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate the PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them. The user enters the list through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process until the difference of PageRank va...
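
    Independently of the visualization described in the thesis, the underlying computation can be sketched in a few lines: iterate the damped PageRank update until the values stop changing by more than a tolerance. The damping factor 0.85 and the tolerance are conventional choices, not values taken from the thesis.

```python
def pagerank(links, d=0.85, tol=1e-8, max_iter=100):
    """Iteratively compute PageRank values for a link graph.

    links: dict mapping page -> list of pages it links to.
    """
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}
    for _ in range(max_iter):
        new = {}
        for p in pages:
            # sum the rank flowing in from every page that links to p
            incoming = sum(pr[q] / len(links[q]) for q in pages if p in links[q])
            new[p] = (1.0 - d) / n + d * incoming
        if max(abs(new[p] - pr[p]) for p in pages) < tol:
            return new
        pr = new
    return pr

print(pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]}))
```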

  17. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  18. Unsupervised online classifier in sleep scoring for sleep deprivation studies.

    Science.gov (United States)

    Libourel, Paul-Antoine; Corneyllie, Alexandra; Luppi, Pierre-Hervé; Chouvet, Guy; Gervasoni, Damien

    2015-05-01

    This study was designed to evaluate an unsupervised adaptive algorithm for real-time detection of sleep and wake states in rodents. We designed a Bayesian classifier that automatically extracts electroencephalogram (EEG) and electromyogram (EMG) features and categorizes non-overlapping 5-s epochs into one of the three major sleep and wake states without any human supervision. This sleep-scoring algorithm is coupled online with a new device to perform selective paradoxical sleep deprivation (PSD). Controlled laboratory settings for chronic polygraphic sleep recordings and selective PSD. Ten adult Sprague-Dawley rats instrumented for chronic polysomnographic recordings. The performance of the algorithm is evaluated by comparison with the score obtained by a human expert reader. Online detection of PS is then validated with a PSD protocol with duration of 72 hours. Our algorithm gave a high concordance with human scoring with an average κ coefficient > 70%. Notably, the specificity to detect PS reached 92%. Selective PSD using real-time detection of PS strongly reduced PS amounts, leaving only brief PS bouts necessary for the detection of PS in EEG and EMG signals (4.7 ± 0.7% over 72 h, versus 8.9 ± 0.5% in baseline), and was followed by a significant PS rebound (23.3 ± 3.3% over 150 minutes). Our fully unsupervised data-driven algorithm overcomes some limitations of the other automated methods such as the selection of representative descriptors or threshold settings. When used online and coupled with our sleep deprivation device, it represents a better option for selective PSD than other methods like the tedious gentle handling or the platform method. © 2015 Associated Professional Sleep Societies, LLC.

  19. Comparing the Scoring of Human Decomposition from Digital Images to Scoring Using On-site Observations.

    Science.gov (United States)

    Dabbs, Gretchen R; Bytheway, Joan A; Connor, Melissa

    2017-09-01

    When in-person assessment of human decomposition is not possible in forensic casework or empirical research, the sensible substitution is color photographic images. To date, no research has confirmed the utility of color photographic images as a proxy for in situ observation of the level of decomposition. Sixteen observers scored photographs of 13 human cadavers in varying decomposition stages (PMI 2-186 days) using the Total Body Score system (total n = 929 observations). The on-site TBS was compared with recorded observations from digital color images using a paired samples t-test. The average difference between on-site and photographic observations was -0.20 (t = -1.679, df = 928, p = 0.094). Individually, only two observers, both students, [...]. Scoring human decomposition based on digital images can be substituted for assessments based on observation of the corpse in situ, when necessary. © 2017 American Academy of Forensic Sciences.

  20. Modified Clipped LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2005-01-01

    Full Text Available Abstract A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves the threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
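
    To make the weight-update idea concrete, the sketch below implements an LMS-style adaptive filter in which the input vector entering the update term is passed through a three-level (+1/0/-1) threshold quantizer. The threshold, step size, and toy identification problem are illustrative choices, not the values analysed in the paper.

```python
import numpy as np

def three_level_quantize(x, threshold):
    """Map input samples to {-1, 0, +1} by threshold clipping."""
    return np.where(x > threshold, 1.0, np.where(x < -threshold, -1.0, 0.0))

def modified_clipped_lms(x, d, n_taps=8, mu=0.01, threshold=0.5):
    """Adaptive filter whose weight update uses the quantized input vector."""
    w = np.zeros(n_taps)
    y = np.zeros(len(d))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]      # most recent sample first
        y[n] = w @ u
        e = d[n] - y[n]                        # estimation error
        w += mu * e * three_level_quantize(u, threshold)   # quantized update
    return w, y

# toy system identification: adapt towards a known FIR filter
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
true_w = np.array([0.6, -0.3, 0.1, 0.05, 0.0, 0.0, 0.0, 0.0])
d = np.convolve(x, true_w)[:len(x)] + 0.01 * rng.normal(size=len(x))
w_hat, _ = modified_clipped_lms(x, d)
```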

  1. Skin scoring in systemic sclerosis

    DEFF Research Database (Denmark)

    Zachariae, Hugh; Bjerring, Peter; Halkier-Sørensen, Lars

    1994-01-01

    Forty-one patients with systemic sclerosis were investigated with a new and simple skin score method measuring the degree of thickening and pliability in seven regions together with area involvement in each region. The highest values were, as expected, found in diffuse cutaneous systemic sclerosis (type III SS) and the lowest in limited cutaneous systemic sclerosis (type I SS) with no lesions extending above wrists and ankles. A positive correlation was found to the aminoterminal propeptide of type III procollagen, a serological marker for synthesis of type III collagen. The skin score

  2. Mobile health technology transforms injury severity scoring in South Africa.

    Science.gov (United States)

    Spence, Richard Trafford; Zargaran, Eiman; Hameed, S Morad; Navsaria, Pradeep; Nicol, Andrew

    2016-08-01

    The burden of data collection associated with injury severity scoring has limited its application in areas of the world with the highest incidence of trauma. Since January 2014, electronic records (electronic Trauma Health Records [eTHRs]) replaced all handwritten records at the Groote Schuur Hospital Trauma Unit in South Africa. Data fields required for Glasgow Coma Scale, Revised Trauma Score, Kampala Trauma Score, Injury Severity Score (ISS), and Trauma Score-Injury Severity Score calculations are now prospectively collected. Fifteen months after implementation of eTHR, the injury severity scores were compared as predictors of mortality on three accounts: (1) ability to discriminate (area under receiver operating curve, ROC); (2) ability to calibrate (observed versus expected ratio, O/E); and (3) feasibility of data collection (rate of missing data). A total of 7460 admissions were recorded by eTHR from April 1, 2014 to July 7, 2015, including 770 severely injured patients (ISS > 15) and 950 operations. The mean age was 33.3 y (range 13-94), 77.6% were male, and the mechanism of injury was penetrating in 39.3% of cases. The cohort experienced a mortality rate of 2.5%. Patient reserve predictors required by the scores were 98.7% complete, physiological injury predictors were 95.1% complete, and anatomic injury predictors were 86.9% complete. The discrimination and calibration of Trauma Score-Injury Severity Score was superior for all admissions (ROC 0.9591 and O/E 1.01) and operatively managed patients (ROC 0.8427 and O/E 0.79). In the severely injured cohort, the discriminatory ability of Revised Trauma Score was superior (ROC 0.8315), but no score provided adequate calibration. Emerging mobile health technology enables reliable and sustainable injury severity scoring in a high-volume trauma center in South Africa. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Preliminary Development and Evaluation of Lightning Jump Algorithms for the Real-Time Detection of Severe Weather

    Science.gov (United States)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2009-01-01

    Previous studies have demonstrated that rapid increases in total lightning activity (intracloud + cloud-to-ground) are often observed tens of minutes in advance of the occurrence of severe weather at the ground. These rapid increases in lightning activity have been termed "lightning jumps." Herein, we document a positive correlation between lightning jumps and the manifestation of severe weather in thunderstorms occurring across the Tennessee Valley and Washington, D.C. A total of 107 thunderstorms were examined in this study, with 69 of the 107 thunderstorms falling into the category of non-severe and 38 into the category of severe. From the dataset of 69 isolated non-severe thunderstorms, an average peak 1-minute flash rate of 10 flashes/min was determined. A variety of severe thunderstorm types were examined for this study, including an MCS, an MCV, tornadic outer rainbands of tropical remnants, supercells, and pulse severe thunderstorms. Of the 107 thunderstorms, 85 (47 non-severe, 38 severe) from the Tennessee Valley and Washington, D.C. were used to test six lightning jump algorithm configurations (Gatlin, Gatlin 45, 2σ, 3σ, Threshold 10, and Threshold 8). Performance metrics for each algorithm were then calculated, yielding encouraging results from the limited sample of 85 thunderstorms. The 2σ lightning jump algorithm had a high probability of detection (POD; 87%), a modest false alarm rate (FAR; 33%), and a solid Heidke Skill Score (HSS; 0.75). A second and more simplistic lightning jump algorithm, named the Threshold 8 lightning jump algorithm, also shows promise, with a POD of 81% and a FAR of 41%. Average lead times to severe weather occurrence for these two algorithms were 23 minutes and 20 minutes, respectively. The overall goal of this study is to advance the development of an operationally applicable jump algorithm that can be used with either total lightning observations made from the ground, or in the near future from space using the
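
    As an illustration of the 2σ configuration described above, here is a hedged sketch of a lightning jump test: the time rate of change of the total flash rate is compared against twice the standard deviation of its recent history, with a minimum flash-rate activation. The window length, the 10 flashes/min activation value, and the function names are assumptions for the sketch, not the operational configuration.

```python
import numpy as np

def two_sigma_jumps(flash_rate, min_flash_rate=10.0, window=5):
    """Flag times at which the rate of change of the total flash rate exceeds
    twice the standard deviation of its recent history (a "lightning jump"),
    provided the storm is active enough (flash rate above min_flash_rate).

    flash_rate: 1-D array of total flash rates (flashes/min), one per time step.
    Returns the indices of flash_rate at which a jump is declared.
    """
    dfrdt = np.diff(flash_rate)                  # time rate of change
    jumps = []
    for t in range(window, len(dfrdt)):
        history = dfrdt[t - window:t]
        threshold = 2.0 * history.std()          # the "2 sigma" criterion
        if flash_rate[t + 1] >= min_flash_rate and dfrdt[t] > threshold:
            jumps.append(t + 1)
    return jumps
```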

  4. Outcomes of Total Knee Arthroplasty in Patients With Poliomyelitis.

    Science.gov (United States)

    Gan, Zhi-Wei Jonathan; Pang, Hee Nee

    2016-11-01

    We report our experience with outcomes of poliomyelitis in the Asian population. Sixteen total knee replacements in 14 patients with polio-affected knees were followed up for at least 18 months. Follow-up assessment included scoring with the American Knee Society Score (AKSS), Oxford knee score, and Short Form 36 Health Survey scores. The mean AKSS improved from 25.59 preoperatively to 82.94 at 24 months, with greater improvement in the knee score. The mean Oxford knee score improved from 40.82 preoperatively to 20.53 at 24 months. The mean AKSS pain score rose from 2.35 to 47.66 at 24 months. The Short Form 36 Health Survey physical functioning and bodily pain scores improved for all patients. Primary total knee arthroplasty of poliomyelitis-affected limbs shows good outcomes, improving quality of life, and decreasing pain. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. From scores to face templates: a model-based approach.

    Science.gov (United States)

    Mohanty, Pranab; Sarkar, Sudeep; Kasturi, Rangachar

    2007-12-01

    Regeneration of templates from match scores has security and privacy implications related to any biometric authentication system. We propose a novel paradigm to reconstruct face templates from match scores using a linear approach. It proceeds by first modeling the behavior of the given face recognition algorithm by an affine transformation. The goal of the modeling is to approximate the distances computed by a face recognition algorithm between two faces by distances between points, representing these faces, in an affine space. Given this space, templates from an independent image set (break-in) are matched only once with the enrolled template of the targeted subject and match scores are recorded. These scores are then used to embed the targeted subject in the approximating affine (non-orthogonal) space. Given the coordinates of the targeted subject in the affine space, the original template of the targeted subject is reconstructed using the inverse of the affine transformation. We demonstrate our ideas using three fundamentally different face recognition algorithms: Principal Component Analysis (PCA) with Mahalanobis cosine distance measure, Bayesian intra-extrapersonal classifier (BIC), and a feature-based commercial algorithm. To demonstrate the independence of the break-in set from the gallery set, we select face templates from two different databases: the Face Recognition Grand Challenge (FRGC) and the Facial Recognition Technology (FERET) database. With an operational point set at 1 percent False Acceptance Rate (FAR) and 99 percent True Acceptance Rate (TAR) for 1,196 enrollments (FERET gallery), we show that at most 600 attempts (score computations) are required to achieve a 73 percent chance of breaking in as a randomly chosen target subject for the commercial face recognition system. With similar operational set up, we achieve a 72 percent and 100 percent chance of breaking in for the Bayesian and PCA based face recognition systems, respectively. With

  6. Using general-purpose compression algorithms for music analysis

    DEFF Research Database (Denmark)

    Louboutin, Corentin; Meredith, David

    2016-01-01

    General-purpose compression algorithms encode files as dictionaries of substrings with the positions of these strings’ occurrences. We hypothesized that such algorithms could be used for pattern discovery in music. We compared LZ77, LZ78, Burrows–Wheeler and COSIATEC on classifying folk song...... in the input data, COSIATEC outperformed LZ77 with a mean F1 score of 0.123, compared with 0.053 for LZ77. However, when the music was processed a voice at a time, the F1 score for LZ77 more than doubled to 0.124. We also discovered a significant correlation between compression factor and F1 score for all...

  7. Algorithmic approach to patients presenting with heartburn and epigastric pain refractory to empiric proton pump inhibitor therapy.

    Science.gov (United States)

    Roorda, Andrew K; Marcus, Samuel N; Triadafilopoulos, George

    2011-10-01

    Reflux-like dyspepsia (RLD), where predominant epigastric pain is associated with heartburn and/or regurgitation, is a common clinical syndrome in both primary and specialty care. Because symptom frequency and severity vary, overlap among gastroesophageal reflux disease (GERD), non-erosive reflux disease (NERD), and RLD, is quite common. The chronic and recurrent nature of RLD and its variable response to proton pump inhibitor (PPI) therapy remain problematic. To examine the prevalence of GERD, NERD, and RLD in a community setting using an algorithmic approach and to assess the potential, reproducibility, and validity of a multi-factorial scoring system in discriminating patients with RLD from those with GERD or NERD. Using a novel algorithmic approach, we evaluated an outpatient, community-based cohort referred to a gastroenterologist because of epigastric pain and heartburn that were only partially relieved by PPI. After an initial symptom evaluation (for epigastric pain, heartburn, regurgitation, dysphagia), an endoscopy and distal esophageal biopsies were performed, followed by esophageal motility and 24-h ambulatory pH monitoring to assess esophageal function and pathological acid exposure. A scoring system based on presence of symptoms and severity of findings was devised. Data was collected in two stages: subjects in the first stage were designated as the derivation cohort; subjects in the second stage were labeled the validation cohort. The total cohort comprised 159 patients (59 males, 100 females; mean age 52). On endoscopy, 30 patients (19%) had complicated esophagitis (CE) and 11 (7%) had Barrett's esophagus (BE) and were classified collectively as patients with GERD. One-hundred and eighteen (74%) patients had normal esophagus. Of these, 94 (59%) had one or more of the following: hiatal hernia, positive biopsy, abnormal pH, and/or abnormal motility studies and were classified as patients with NERD. The remaining 24 patients (15%) had normal functional

  8. The persistence of depression score

    NARCIS (Netherlands)

    Spijker, J.; de Graaf, R.; Ormel, J.; Nolen, W. A.; Grobbee, D. E.; Burger, H.

    2006-01-01

    Objective: To construct a score that allows prediction of major depressive episode (MDE) persistence in individuals with MDE using determinants of persistence identified in previous research. Method: Data were derived from 250 subjects from the general population with new MDE according to DSM-III-R.

  9. Score distributions in information retrieval

    NARCIS (Netherlands)

    Arampatzis, A.; Robertson, S.; Kamps, J.

    2009-01-01

    We review the history of modeling score distributions, focusing on the mixture of normal-exponential by investigating the theoretical as well as the empirical evidence supporting its use. We discuss previously suggested conditions which valid binary mixture models should satisfy, such as the

  10. Bit Loading Algorithms for Cooperative OFDM Systems

    Directory of Open Access Journals (Sweden)

    Gui Bo

    2008-01-01

    Full Text Available Abstract We investigate the resource allocation problem for an OFDM cooperative network with a single source-destination pair and multiple relays. Assuming knowledge of the instantaneous channel gains for all links in the entire network, we propose several bit and power allocation schemes aiming at minimizing the total transmission power under a target rate constraint. First, an optimal and efficient bit loading algorithm is proposed when the relay node uses the same subchannel to relay the information transmitted by the source node. To further improve the performance gain, subchannel permutation, in which the subchannels are reallocated at relay nodes, is considered. An optimal subchannel permutation algorithm is first proposed and then an efficient suboptimal algorithm is considered to achieve a better complexity-performance tradeoff. A distributed bit loading algorithm is also proposed for ad hoc networks. Simulation results show that significant performance gains can be achieved by the proposed bit loading algorithms, especially when subchannel permutation is employed.
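
    To make the bit-loading idea concrete, here is a hedged sketch of a classic greedy (Hughes-Hartogs-style) allocator that adds one bit at a time to the subchannel with the smallest incremental power cost until the target rate is met. It uses the SNR-gap approximation and is only a baseline illustration of the problem set-up described above, not the optimal or distributed algorithms proposed in the paper; the names and default parameters are assumptions.

```python
import numpy as np

def greedy_bit_loading(channel_gains, target_bits, snr_gap=1.0, max_bits_per_tone=10):
    """Greedy bit loading: repeatedly grant one extra bit to the subchannel whose
    incremental power cost is smallest until target_bits bits are allocated.

    channel_gains: effective power gain of each subchannel (larger = better).
    Returns the per-subchannel bit allocation and the total transmit power.
    """
    g = np.asarray(channel_gains, dtype=float)
    bits = np.zeros(len(g), dtype=int)
    total_power = 0.0
    for _ in range(target_bits):
        # Extra power needed to carry one more bit on each subchannel
        # (gap approximation: P(b) = snr_gap * (2**b - 1) / g).
        inc = snr_gap * (2.0 ** (bits + 1) - 2.0 ** bits) / g
        inc[bits >= max_bits_per_tone] = np.inf  # saturated subchannels
        k = int(np.argmin(inc))
        bits[k] += 1
        total_power += inc[k]
    return bits, total_power
```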

  11. Bit Loading Algorithms for Cooperative OFDM Systems

    Directory of Open Access Journals (Sweden)

    Bo Gui

    2007-12-01

    Full Text Available We investigate the resource allocation problem for an OFDM cooperative network with a single source-destination pair and multiple relays. Assuming knowledge of the instantaneous channel gains for all links in the entire network, we propose several bit and power allocation schemes aiming at minimizing the total transmission power under a target rate constraint. First, an optimal and efficient bit loading algorithm is proposed when the relay node uses the same subchannel to relay the information transmitted by the source node. To further improve the performance gain, subchannel permutation, in which the subchannels are reallocated at relay nodes, is considered. An optimal subchannel permutation algorithm is first proposed and then an efficient suboptimal algorithm is considered to achieve a better complexity-performance tradeoff. A distributed bit loading algorithm is also proposed for ad hoc networks. Simulation results show that significant performance gains can be achieved by the proposed bit loading algorithms, especially when subchannel permutation is employed.

  12. Quantum algorithms for testing Boolean functions

    Directory of Open Access Journals (Sweden)

    Erika Andersson

    2010-06-01

    Full Text Available We discuss quantum algorithms, based on the Bernstein-Vazirani algorithm, for finding which variables a Boolean function depends on. There are 2^n possible linear Boolean functions of n variables; given a linear Boolean function, the Bernstein-Vazirani quantum algorithm can deterministically identify which one of these Boolean functions we are given, using just a single function query. The same quantum algorithm can also be used to learn which input variables other types of Boolean functions depend on, with a success probability that depends on the form of the Boolean function that is tested, but does not depend on the total number of input variables. We also outline a procedure to further amplify the success probability, based on another quantum algorithm, the Grover search.

  13. Semioptimal practicable algorithmic cooling

    International Nuclear Information System (INIS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-01-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  14. Does present use of cardiovascular medication reflect elevated cardiovascular risk scores estimated ten years ago? A population based longitudinal observational study

    Directory of Open Access Journals (Sweden)

    Straand Jørund

    2011-03-01

    Full Text Available Abstract Background It is desirable that those at highest risk of cardiovascular disease should have priority for preventive measures, e.g. treatment with prescription drugs to modify their risk. We wanted to investigate to what extent present use of cardiovascular medication (CVM) correlates with cardiovascular risk estimated by three different risk scores (Framingham, SCORE and NORRISK) ten years ago. Methods Prospective longitudinal observational study of 20 252 participants in The Hordaland Health Study born 1950-57, not using CVM in 1997-99. Prescription data were obtained from The Norwegian Prescription Database in 2008. Results 26% of men and 22% of women aged 51-58 years had started to use some CVM during the previous decade. As a group, persons using CVM scored significantly higher on the risk algorithms Framingham, SCORE and NORRISK compared to those not treated. 16-20% of men and 20-22% of women with risk scores below the high-risk thresholds for the three risk scores were treated with CVM, while 60-65% of men and 25-45% of women with scores above the high-risk thresholds received no treatment. Among women using CVM, only 2.2% (NORRISK), 4.4% (SCORE) and 14.5% (Framingham) had risk scores above the high-risk values. Low education, poor self-reported general health, muscular pains, mental distress (in females only) and a family history of premature cardiovascular disease correlated with use of CVM. Elevated blood pressure was the single factor most strongly predictive of CVM treatment. Conclusion Prescription of CVM to middle-aged individuals by and large seems to occur independently of estimated total cardiovascular risk, and this applies especially to females.

  15. Performance of an open-source heart sound segmentation algorithm on eight independent databases.

    Science.gov (United States)

    Liu, Chengyu; Springer, David; Clifford, Gari D

    2017-08-01

    Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, by using a wider variety of independently acquired data of varying quality. Firstly, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. Then, the HSMM-based segmentation method was evaluated using the assembled eight databases. The common evaluation metrics of sensitivity, specificity, and accuracy, as well as the F1 measure, were used. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprised of 102 306 heart sounds. An average F1 score of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals was observed. The F1 score was shown to increase with an increase in the tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness. The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for
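
    A hedged sketch of the kind of tolerance-window scoring described above: a detected event (e.g., an S1 or S2 onset) counts as a true positive if it falls within a tolerance of a still-unmatched reference event, and precision, recall and F1 follow from the matches. The function name and the default 0.1 s tolerance are assumptions, not the paper's exact evaluation code.

```python
def segmentation_f1(reference, detected, tolerance=0.1):
    """F1 score for detected event onsets (e.g., S1/S2 boundaries): a detection is
    a true positive if it lies within `tolerance` seconds of a still-unmatched
    reference event; each reference event can be matched at most once.
    """
    reference, detected = sorted(reference), sorted(detected)
    matched = [False] * len(reference)
    tp = 0
    for t in detected:
        for i, r in enumerate(reference):
            if not matched[i] and abs(t - r) <= tolerance:
                matched[i] = True
                tp += 1
                break
    fp = len(detected) - tp
    fn = len(reference) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Example: widening the tolerance window can only increase the score.
print(segmentation_f1([0.0, 1.0, 2.0], [0.05, 1.2, 2.02], tolerance=0.1))
```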

  16. Gleason Score Correlation Between Prostate Biopsy and Radical Prostatectomy Specimens

    Directory of Open Access Journals (Sweden)

    Erdem Öztürk

    2018-04-01

    Full Text Available Objective: Prostate cancer is the most common malignancy in men and the second leading cause of cancer-related mortality. Prostate biopsy and the Gleason score guide treatment decisions in prostate cancer. Several studies have investigated the correlation between biopsy scores and radical prostatectomy specimen scores. We also evaluated the correlation of Gleason scores of these specimens in our patient series. Materials and Methods: We retrospectively reviewed the data of 468 men who were diagnosed with prostate cancer and underwent radical prostatectomy between 2008 and 2017. Patients’ age, prostate-specific antigen levels at diagnosis, and prostate biopsy and radical prostatectomy specimen Gleason scores were recorded. Upgrading and downgrading were defined as an increase or decrease of the Gleason score of the radical prostatectomy specimen compared to the Gleason score of the prostate biopsy. Results: A total of 442 men diagnosed with prostate cancer were included in the study. The mean age of the patients was 62.62±6.26 years (44-84 years) and the mean prostate-specific antigen level was 9.01±6.84 ng/mL (1.09-49 ng/mL). Prostate biopsy Gleason score was 7 in 27 (6.1%) men. Radical prostatectomy specimen Gleason score was 7 in 62 (14%) men. Gleason correlation was highest in the 240 patients (71.6%) with score <7 and was lowest in the 31 (38.75%) patients with score =7. Conclusion: This study demonstrated that the discordance rate between Gleason scores of prostate biopsy and radical prostatectomy specimens was 35.7%.

  17. Pocket total dose meter

    International Nuclear Information System (INIS)

    Brackenbush, L.W.; Endres, G.W.R.

    1984-10-01

    Laboratory measurements have demonstrated that it is possible to simultaneously measure absorbed dose and dose equivalent using a single tissue equivalent proportional counter. Small, pocket sized instruments are being developed to determine dose equivalent as the worker is exposed to mixed field radiation. This paper describes the electronic circuitry and computer algorithms used to determine dose equivalent in these devices

  18. Parallel Evolutionary Optimization Algorithms for Peptide-Protein Docking

    Science.gov (United States)

    Poluyan, Sergey; Ershov, Nikolay

    2018-02-01

    In this study we examine the possibility of using evolutionary optimization algorithms in protein-peptide docking. We present the main assumptions that reduce the docking problem to a continuous global optimization problem and provide a way of using evolutionary optimization algorithms. The Rosetta all-atom force field was used for structural representation and energy scoring. We describe the parallelization scheme and MPI/OpenMP realization of the considered algorithms. We demonstrate the efficiency and the performance for some algorithms which were applied to a set of benchmark tests.

  19. "Score the Core" Web-based pathologist training tool improves the accuracy of breast cancer IHC4 scoring.

    Science.gov (United States)

    Engelberg, Jesse A; Retallack, Hanna; Balassanian, Ronald; Dowsett, Mitchell; Zabaglo, Lila; Ram, Arishneel A; Apple, Sophia K; Bishop, John W; Borowsky, Alexander D; Carpenter, Philip M; Chen, Yunn-Yi; Datnow, Brian; Elson, Sarah; Hasteh, Farnaz; Lin, Fritz; Moatamed, Neda A; Zhang, Yanhong; Cardiff, Robert D

    2015-11-01

    Hormone receptor status is an integral component of decision-making in breast cancer management. IHC4 score is an algorithm that combines hormone receptor, HER2, and Ki-67 status to provide a semiquantitative prognostic score for breast cancer. High accuracy and low interobserver variance are important to ensure the score is accurately calculated; however, few previous efforts have been made to measure or decrease interobserver variance. We developed a Web-based training tool, called "Score the Core" (STC) using tissue microarrays to train pathologists to visually score estrogen receptor (using the 300-point H score), progesterone receptor (percent positive), and Ki-67 (percent positive). STC used a reference score calculated from a reproducible manual counting method. Pathologists in the Athena Breast Health Network and pathology residents at associated institutions completed the exercise. By using STC, pathologists improved their estrogen receptor H score and progesterone receptor and Ki-67 proportion assessment and demonstrated a good correlation between pathologist and reference scores. In addition, we collected information about pathologist performance that allowed us to compare individual pathologists and measures of agreement. Pathologists' assessment of the proportion of positive cells was closer to the reference than their assessment of the relative intensity of positive cells. Careful training and assessment should be used to ensure the accuracy of breast biomarkers. This is particularly important as breast cancer diagnostics become increasingly quantitative and reproducible. Our training tool is a novel approach for pathologist training that can serve as an important component of ongoing quality assessment and can improve the accuracy of breast cancer prognostic biomarkers. Copyright © 2015 Elsevier Inc. All rights reserved.
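
    For reference, the 300-point H score mentioned above is conventionally computed as a weighted sum of the percentages of cells at each staining intensity. The sketch below shows that arithmetic; the function name and the example values are illustrative assumptions.

```python
def h_score(pct_weak, pct_moderate, pct_strong):
    """300-point H score: weighted sum of the percentages of cells showing weak
    (1+), moderate (2+) and strong (3+) staining intensity."""
    if not 0 <= pct_weak + pct_moderate + pct_strong <= 100:
        raise ValueError("staining percentages must sum to at most 100")
    return 1 * pct_weak + 2 * pct_moderate + 3 * pct_strong

# Example: 20% weak, 30% moderate and 10% strong staining gives an H score of 110.
print(h_score(20, 30, 10))
```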

  20. Introduction to Evolutionary Algorithms

    CERN Document Server

    Yu, Xinjie

    2010-01-01

    Evolutionary algorithms (EAs) are becoming increasingly attractive for researchers from various disciplines, such as operations research, computer science, industrial engineering, electrical engineering, social science, economics, etc. This book presents an insightful, comprehensive, and up-to-date treatment of EAs, such as genetic algorithms, differential evolution, evolution strategy, constraint optimization, multimodal optimization, multiobjective optimization, combinatorial optimization, evolvable hardware, estimation of distribution algorithms, ant colony optimization, particle swarm opti

  1. Recursive forgetting algorithms

    DEFF Research Database (Denmark)

    Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan

    1992-01-01

    In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied...... to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm...
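
    As a concrete instance of recursive estimation with forgetting, here is a hedged sketch of recursive least squares with uniform exponential forgetting, where the factor lam down-weights older data. This illustrates the general idea of a forgetting scheme rather than the specific selective (non-uniform) forgetting algorithm analysed in the paper; the names and default values are assumptions.

```python
import numpy as np

def rls_with_forgetting(X, y, lam=0.98, delta=100.0):
    """Recursive least squares with uniform exponential forgetting: data that are
    k steps old are weighted by lam**k, so recent observations dominate.

    X: (n_samples, n_features) regressors, y: targets. Returns the weight vector.
    """
    n_features = X.shape[1]
    w = np.zeros(n_features)
    P = delta * np.eye(n_features)               # inverse-covariance estimate
    for x, d in zip(X, y):
        k = P @ x / (lam + x @ P @ x)            # gain vector
        e = d - w @ x                            # a-priori prediction error
        w = w + k * e
        P = (P - np.outer(k, x @ P)) / lam       # forgetting-weighted update
    return w
```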

  2. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  3. Algorithms in Algebraic Geometry

    CERN Document Server

    Dickenstein, Alicia; Sommese, Andrew J

    2008-01-01

    In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its

  4. Shadow algorithms data miner

    CERN Document Server

    Woo, Andrew

    2012-01-01

    Digital shadow generation continues to be an important aspect of visualization and visual effects in film, games, simulations, and scientific applications. This resource offers a thorough picture of the motivations, complexities, and categorized algorithms available to generate digital shadows. From general fundamentals to specific applications, it addresses shadow algorithms and how to manage huge data sets from a shadow perspective. The book also examines the use of shadow algorithms in industrial applications, in terms of what algorithms are used and what software is applicable.

  5. Spectral Decomposition Algorithm (SDA)

    Data.gov (United States)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  6. Quick fuzzy backpropagation algorithm.

    Science.gov (United States)

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm, called the QuickFBP algorithm, is proposed, where the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and the FBP algorithm, respectively, are defined and proved for: (1) single-output neural networks in the case of training patterns with different targets; and (2) multiple-output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP algorithm compared with the FBP algorithm. The adaptation of an interactive web system to its users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability in adaptive and adaptable interactive systems, data mining, and related applications.

  7. Portfolios of quantum algorithms.

    Science.gov (United States)

    Maurer, S M; Hogg, T; Huberman, B A

    2001-12-17

    Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature, they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms analogous to those of finance can outperform single algorithms when applied to NP-complete problems such as 3-satisfiability.

  8. Perturbation resilience and superiorization of iterative algorithms

    International Nuclear Information System (INIS)

    Censor, Y; Davidi, R; Herman, G T

    2010-01-01

    Iterative algorithms aimed at solving some problems are discussed. For certain problems, such as finding a common point in the intersection of a finite number of convex sets, there often exist iterative algorithms that impose very little demand on computer resources. For other problems, such as finding that point in the intersection at which the value of a given function is optimal, algorithms tend to need more computer memory and longer execution time. A methodology is presented whose aim is to produce automatically for an iterative algorithm of the first kind a 'superiorized version' of it that retains its computational efficiency but nevertheless goes a long way toward solving an optimization problem. This is possible to do if the original algorithm is 'perturbation resilient', which is shown to be the case for various projection algorithms for solving the consistent convex feasibility problem. The superiorized versions of such algorithms use perturbations that steer the process in the direction of a superior feasible point, which is not necessarily optimal, with respect to the given function. After presenting these intuitive ideas in a precise mathematical form, they are illustrated in image reconstruction from projections for two different projection algorithms superiorized for the function whose value is the total variation of the image

  9. Automatic ECG quality scoring methodology: mimicking human annotators

    International Nuclear Information System (INIS)

    Johannesen, Lars; Galeotti, Loriano

    2012-01-01

    An algorithm to determine the quality of electrocardiograms (ECGs) can enable inexperienced nurses and paramedics to record ECGs of sufficient diagnostic quality. Previously, we proposed an algorithm for determining if ECG recordings are of acceptable quality, which was entered in the PhysioNet Challenge 2011. In the present work, we propose an improved two-step algorithm, which first rejects ECGs with macroscopic errors (signal absent, large voltage shifts or saturation) and subsequently quantifies the noise (baseline, powerline or muscular noise) on a continuous scale. The performance of the improved algorithm was evaluated using the PhysioNet Challenge database (1500 ECGs rated by humans for signal quality). We achieved a classification accuracy of 92.3% on the training set and 90.0% on the test set. The improved algorithm is capable of detecting ECGs with macroscopic errors and giving the user a score of the overall quality. This allows the user to assess the degree of noise and decide if it is acceptable depending on the purpose of the recording. (paper)
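
    To illustrate the two-step structure described above (reject macroscopic errors first, then quantify noise on a continuous scale), here is a hedged sketch. The flat-line and saturation checks, the smoothing window, and the final score formula are simplifying assumptions for illustration only and do not reproduce the authors' algorithm.

```python
import numpy as np

def ecg_quality_score(signal, fs, flat_tol=1e-3, sat_fraction=0.02):
    """Two-step quality check: (1) reject macroscopic errors (flat or saturated
    traces) with a score of 0; (2) otherwise return a continuous score in (0, 1)
    that falls as the high-frequency residual grows relative to the signal."""
    x = np.asarray(signal, dtype=float)
    # Step 1: macroscopic errors.
    if np.ptp(x) < flat_tol:                     # signal absent / flat line
        return 0.0
    if np.mean((x == x.max()) | (x == x.min())) > sat_fraction:
        return 0.0                               # amplifier saturation
    # Step 2: continuous noise quantification.
    win = max(int(0.04 * fs), 1)                 # ~40 ms moving average
    smooth = np.convolve(x, np.ones(win) / win, mode="same")
    residual = x - smooth                        # crude high-frequency noise proxy
    snr = np.var(smooth) / (np.var(residual) + 1e-12)
    return float(snr / (1.0 + snr))
```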

  10. Total parenteral nutrition - infants

    Science.gov (United States)

    Total parenteral nutrition (TPN) is a method of feeding that bypasses ...

  11. Total parenteral nutrition

    Science.gov (United States)

    Total parenteral nutrition (TPN) is a method of feeding that bypasses ...

  12. Technique of total thyroidectomy

    International Nuclear Information System (INIS)

    Rao, R.S.

    1999-01-01

    It is essential to define the various surgical procedures that are carried out for carcinoma of the thyroid gland. They are thyroid gland, subtotal lobectomy, total thyroidectomy and near total thyroidectomy

  13. Total iron binding capacity

    Science.gov (United States)

    Total iron binding capacity (TIBC) is a blood test to ...

  14. Combining Teacher Assessment Scores with External Examination ...

    African Journals Online (AJOL)

    Combining Teacher Assessment Scores with External Examination Scores for Certification: Comparative Study of Four Statistical Models. ... University entrance examination scores in mathematics were obtained for a subsample of 115 ...

  15. Scoring System Improvements to Three Leadership Predictors

    National Research Council Canada - National Science Library

    Dela

    1997-01-01

    .... The modified scoring systems were evaluated by rescoring responses randomly selected from the sample which had been scored according to the scoring systems originally developed for the leadership research...

  16. Information filtering via weighted heat conduction algorithm

    Science.gov (United States)

    Liu, Jian-Guo; Guo, Qiang; Zhang, Yi-Cheng

    2011-06-01

    In this paper, by taking into account the effects of user and object correlations on a heat conduction (HC) algorithm, a weighted heat conduction (WHC) algorithm is presented. We argue that the edge weight of the user-object bipartite network should be embedded into the HC algorithm to measure the object similarity. The numerical results indicate that both the accuracy and diversity could be improved greatly compared with the standard HC algorithm, with the optimal values reached simultaneously. On the Movielens and Netflix datasets, the algorithmic accuracy, measured by the average ranking score, can be improved by 39.7% and 56.1% in the optimal case, respectively, and the diversity could reach 0.9587 and 0.9317 when the recommendation list length equals 5. Further statistical analysis indicates that, in the optimal case, the distributions of the edge weight are changed to the Poisson form, which may be the reason why the HC algorithm performance could be improved. This work highlights the effect of edge weight on personalized recommendation, which may be an important factor affecting personalized recommendation performance.
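
    A hedged sketch of heat conduction on a weighted user-object bipartite network, in the spirit of the WHC idea above: resource placed on the target user's objects is averaged (conducted) to users and back to objects, with weighted degrees as the normalizers. The matrix layout, function name, and normalization details are assumptions for illustration and not the exact WHC formula from the paper.

```python
import numpy as np

def weighted_heat_conduction(W, target_user):
    """Heat-conduction scoring on a weighted user-object bipartite network.

    W: (num_users, num_objects) non-negative interaction weights (0 = no link).
    Returns a score for every object for `target_user`; objects the user has
    already collected would normally be masked out before ranking.
    """
    user_strength = np.maximum(W.sum(axis=1), 1e-12)   # weighted user degrees
    obj_strength = np.maximum(W.sum(axis=0), 1e-12)    # weighted object degrees
    f = (W[target_user] > 0).astype(float)             # unit resource on collected objects
    # Objects -> users: each user takes the weighted average of its objects' resource.
    user_temp = (W * f).sum(axis=1) / user_strength
    # Users -> objects: each object takes the weighted average of its users' temperature.
    return (W * user_temp[:, None]).sum(axis=0) / obj_strength
```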

  17. Total well dominated trees

    DEFF Research Database (Denmark)

    Finbow, Arthur; Frendrup, Allan; Vestergaard, Preben D.

    cardinality then G is a total well dominated graph. In this paper we study composition and decomposition of total well dominated trees. By a reversible process we prove that any total well dominated tree can both be reduced to and constructed from a family of three small trees....

  18. Interpreting force concept inventory scores: Normalized gain and SAT scores

    Directory of Open Access Journals (Sweden)

    Jeffrey J. Steinert

    2007-05-01

    Full Text Available Preinstruction SAT scores and normalized gains (G) on the force concept inventory (FCI) were examined for individual students in interactive engagement (IE) courses in introductory mechanics at one high school (N=335) and one university (N=292), and strong, positive correlations were found for both populations (r=0.57 and r=0.46, respectively). These correlations are likely due to the importance of cognitive skills and abstract reasoning in learning physics. The larger correlation coefficient for the high school population may be a result of the much shorter time interval between taking the SAT and studying mechanics, because the SAT may provide a more current measure of abilities when high school students begin the study of mechanics than it does for college students, who begin mechanics years after the test is taken. In prior research a strong correlation between FCI G and scores on Lawson’s Classroom Test of Scientific Reasoning for students from the same two schools was observed. Our results suggest that, when interpreting class average normalized FCI gains and comparing different classes, it is important to take into account the variation of students’ cognitive skills, as measured either by the SAT or by Lawson’s test. While Lawson’s test is not commonly given to students in most introductory mechanics courses, SAT scores provide a readily available alternative means of taking account of students’ reasoning abilities. Knowing the students’ cognitive level before instruction also allows one to alter instruction or to use an intervention designed to improve students’ cognitive level.

  19. Interpreting force concept inventory scores: Normalized gain and SAT scores

    Directory of Open Access Journals (Sweden)

    Vincent P. Coletta

    2007-05-01

    Full Text Available Preinstruction SAT scores and normalized gains (G) on the force concept inventory (FCI) were examined for individual students in interactive engagement (IE) courses in introductory mechanics at one high school (N=335) and one university (N=292), and strong, positive correlations were found for both populations (r=0.57 and r=0.46, respectively). These correlations are likely due to the importance of cognitive skills and abstract reasoning in learning physics. The larger correlation coefficient for the high school population may be a result of the much shorter time interval between taking the SAT and studying mechanics, because the SAT may provide a more current measure of abilities when high school students begin the study of mechanics than it does for college students, who begin mechanics years after the test is taken. In prior research a strong correlation between FCI G and scores on Lawson’s Classroom Test of Scientific Reasoning for students from the same two schools was observed. Our results suggest that, when interpreting class average normalized FCI gains and comparing different classes, it is important to take into account the variation of students’ cognitive skills, as measured either by the SAT or by Lawson’s test. While Lawson’s test is not commonly given to students in most introductory mechanics courses, SAT scores provide a readily available alternative means of taking account of students’ reasoning abilities. Knowing the students’ cognitive level before instruction also allows one to alter instruction or to use an intervention designed to improve students’ cognitive level.
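
    Since the two records above rely on the normalized gain G, a one-line worked version of the standard definition is sketched below; the function name and the example numbers are illustrative.

```python
def normalized_gain(pre_pct, post_pct):
    """Hake's normalized gain G = (post - pre) / (100 - pre): the fraction of the
    possible improvement that was actually realized, with scores in percent."""
    if pre_pct >= 100:
        raise ValueError("pre-test score must be below the maximum")
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Example: moving from 40% to 70% on the FCI corresponds to G = 0.5.
print(normalized_gain(40, 70))
```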

  20. Algorithm 426 : Merge sort algorithm [M1

    NARCIS (Netherlands)

    Bron, C.

    1972-01-01

    Sorting by means of a two-way merge has a reputation of requiring a clerically complicated and cumbersome program. This ALGOL 60 procedure demonstrates that, using recursion, an elegant and efficient algorithm can be designed, the correctness of which is easily proved [2]. Sorting n objects gives
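
    The original procedure is in ALGOL 60; as a language-neutral illustration of the same idea, here is a short recursive two-way merge sort in Python. It is a sketch of the technique rather than a transcription of Algorithm 426.

```python
def merge_sort(a):
    """Two-way merge sort, expressed recursively; returns a new sorted list."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):      # merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))      # [1, 2, 2, 3, 4, 5, 6, 7]
```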

  1. Total-variation regularization with bound constraints

    International Nuclear Information System (INIS)

    Chartrand, Rick; Wohlberg, Brendt

    2009-01-01

    We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.

  2. Comparison of scoring approaches for the NEI VFQ-25 in low vision.

    Science.gov (United States)

    Dougherty, Bradley E; Bullimore, Mark A

    2010-08-01

    The aim of this study was to evaluate different approaches to scoring the National Eye Institute Visual Functioning Questionnaire-25 (NEI VFQ-25) in patients with low vision, including scoring by the standard method, by Rasch analysis, and by use of an algorithm created by Massof to approximate the Rasch person measure. Subscale validity and use of a 7-item short form instrument proposed by Ryan et al. were also investigated. NEI VFQ-25 data from 50 patients with low vision were analyzed using the standard method of summing Likert-type scores and calculating an overall average, Rasch analysis using Winsteps software, and the Massof algorithm in Excel. Correlations between scores were calculated. Rasch person separation reliability and other indicators were calculated to determine the validity of the subscales and of the 7-item instrument. Scores calculated using all three methods were highly correlated, but evidence of floor and ceiling effects was found with the standard scoring method. None of the subscales investigated proved valid. The 7-item instrument showed acceptable person separation reliability and good targeting and item performance. Although standard scores and Rasch scores are highly correlated, Rasch analysis has the advantages of eliminating floor and ceiling effects and producing interval-scaled data. The Massof algorithm for approximation of the Rasch person measure performed well in this group of low-vision patients. The validity of the VFQ-25 subscales should be reconsidered.

  3. Ganga hospital open injury score in management of open injuries.

    Science.gov (United States)

    Rajasekaran, S; Sabapathy, S R; Dheenadhayalan, J; Sundararajan, S R; Venkatramani, H; Devendra, A; Ramesh, P; Srikanth, K P

    2015-02-01

    Open injuries of the limbs offer challenges in management as there are still many grey zones in decision making regarding salvage, timing and type of reconstruction. As a result, there is still an unacceptable rate of secondary amputations, which leads to a tremendous waste of resources and psychological devastation of the patient and his family. The Gustilo-Anderson classification was a major milestone in grading the severity of injury, but it suffers from the disadvantages of imprecise definition, poor interobserver correlation, inability to address the issue of salvage, and inclusion of a wide spectrum of injuries in the Type IIIb category. Numerous scores such as the Mangled Extremity Severity Score, the Predictive Salvage Index, the Limb Salvage Index, the Hannover Fracture Scale-97, etc. have been proposed, but all have the disadvantages of retrospective evaluation, inadequate sample sizes and poor sensitivity and specificity to amputation, especially in IIIb injuries. The Ganga Hospital Open Injury Score (GHOIS) was proposed in 2004 and is designed to specifically address the outcome in IIIb injuries of the tibia without vascular deficit. It evaluates the severity of injury to the three components of the limb--the skin, the bone and the musculotendinous structures--separately on a grade from 0 to 5. Seven comorbid factors which influence the treatment and the outcome are included in the score with two marks each. The application of the total score and the individual tissue scores in the management of IIIb injuries is discussed. The total score was shown to predict salvage when the value was 14 or less and amputation when the score was 17 or more. A grey zone of 15 and 16 is provided where the decision has to be made on a case-to-case basis. The additional value of GHOIS was its ability to guide the timing and type of reconstruction. A skin score of more than 3 always required a flap, and hence it indicated the need for an orthoplastic approach from the index procedure. Bone
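
    The thresholds quoted in the abstract translate into a very small decision rule, sketched below for illustration. The function name and argument layout are assumptions; only the cut-offs (total 14 or less for salvage, 17 or more for amputation, 15-16 grey zone, skin score above 3 needing flap cover) come from the text above.

```python
def ghois_recommendation(skin, bone, musculotendinous, comorbid_points):
    """Decision rule built only from the cut-offs quoted above: total score <= 14
    favours salvage, >= 17 favours amputation, 15-16 is a grey zone decided case
    by case; a skin score above 3 signals the need for flap cover."""
    total = skin + bone + musculotendinous + comorbid_points
    if total <= 14:
        decision = "salvage"
    elif total >= 17:
        decision = "consider amputation"
    else:
        decision = "grey zone - individualized decision"
    needs_flap = skin > 3
    return total, decision, needs_flap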

  4. Blind Grid Scoring Record No. 290

    National Research Council Canada - National Science Library

    Overbay, Larry; Robitaille, George

    2005-01-01

    ...) utilizing the APG Standardized UXO Technology Demonstration Site Blind Grid. Scoring Records have been coordinated by Larry Overbay and the Standardized UXO Technology Demonstration Site Scoring Committee...

  5. Blind Grid Scoring Record No. 293

    National Research Council Canada - National Science Library

    Overbay, Larry; Robitaille, George; Archiable, Robert; Fling, Rick; McClung, Christina

    2005-01-01

    ...) utilizing the YPG Standardized UXO Technology Demonstration Site Blind Grid. Scoring Records have been coordinated by Larry Overbay and the Standardized UXO Technology Demonstration Site Scoring Committee...

  6. Open Field Scoring Record No. 298

    National Research Council Canada - National Science Library

    Overbay, Jr., Larry; Robitaille, George; Fling, Rick; McClung, Christina

    2005-01-01

    ...) utilizing the APG Standardized UXO Technology Demonstration Site Open Field. Scoring Records have been coordinated by Larry Overbay and the Standardized UXO Technology Demonstration Site Scoring Committee...

  7. Open Field Scoring Record No. 299

    National Research Council Canada - National Science Library

    Overbay, Larry; Robitaille, George

    2005-01-01

    ...) utilizing the YPG Standardized UXO Technology Demonstration Site Open Field. Scoring Records have been coordinated by Larry Overbay and the standardized UXO Technology Demonstration Site Scoring Committee...

  8. Evolving attractive faces using morphing technology and a genetic algorithm: a new approach to determining ideal facial aesthetics.

    Science.gov (United States)

    Wong, Brian J F; Karimi, Koohyar; Devcic, Zlatko; McLaren, Christine E; Chen, Wen-Pin

    2008-06-01

    The objectives of this study were to: 1) determine if a genetic algorithm in combination with morphing software can be used to evolve more attractive faces; and 2) evaluate whether this approach can be used as a tool to define or identify the attributes of the ideal attractive face. Basic research study incorporating focus group evaluations. Digital images were acquired of 250 female volunteers (18-25 y). Randomly selected images were used to produce a parent generation (P) of 30 synthetic faces using morphing software. Then, a focus group of 17 trained volunteers (18-25 y) scored each face on an attractiveness scale ranging from 1 (unattractive) to 10 (attractive). A genetic algorithm was used to select 30 new pairs from the parent generation, and these were morphed using software to produce a new first generation (F1) of faces. The F1 faces were scored by the focus group, and the process was repeated for a total of four iterations of the algorithm. The algorithm mimics natural selection by using the attractiveness score as the selection pressure; the more attractive faces are more likely to morph. All five generations (P-F4) were then scored by three focus groups: a) surgeons (n = 12), b) cosmetology students (n = 44), and c) undergraduate students (n = 44). Morphometric measurements were made of 33 specific features on each of the 150 synthetic faces, and correlated with attractiveness scores using univariate and multivariate analysis. The average facial attractiveness scores increased with each generation and were 3.66 (+/-0.60), 4.59 (+/-0.73), 5.50 (+/-0.62), 6.23 (+/-0.31), and 6.39 (+/-0.24) for P and F1-F4 generations, respectively. Histograms of attractiveness score distributions show a significant shift in the skew of each curve toward more attractive faces with each generation. Univariate analysis identified nasal width, eyebrow arch height, and lip thickness as being significantly correlated with attractiveness scores. Multivariate analysis identified a
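
    A hedged sketch of one generation of the selection-and-morphing loop described above: parents are drawn with probability proportional to their attractiveness scores and each selected pair is combined into an offspring. Feature vectors stand in for the morphable face images, and the feature-wise average is only a crude stand-in for the morphing software; all names and defaults are assumptions.

```python
import random

def evolve_generation(faces, scores, n_offspring=30):
    """One generation: parents are drawn with probability proportional to their
    attractiveness scores, and each selected pair is 'morphed' (here, simply
    averaged feature-wise) into an offspring.

    faces: list of numeric feature vectors standing in for the morphable faces.
    scores: focus-group attractiveness scores (1-10), one per face.
    """
    def pick():
        return random.choices(faces, weights=scores, k=1)[0]

    offspring = []
    for _ in range(n_offspring):
        a, b = pick(), pick()
        offspring.append([(xa + xb) / 2.0 for xa, xb in zip(a, b)])  # crude morph
    return offspring
```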

  9. Composite Differential Search Algorithm

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2014-01-01

    Full Text Available Differential search algorithm (DS) is a relatively new evolutionary algorithm inspired by the Brownian-like random-walk movement which an organism uses to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search algorithms, namely “DS/rand/1,” “DS/rand/2,” “DS/current to rand/1,” and “DS/current to rand/2,” to search the new space and enhance the convergence rate for the global optimization problem. In order to verify the performance of the different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed algorithms perform better than, or at least comparably to, the original algorithm when considering the quality of the solution obtained. However, these schemes still cannot achieve the best solution for all functions. In order to further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS) is proposed in this paper. This new algorithm combines three of the proposed search schemes, “DS/rand/1,” “DS/rand/2,” and “DS/current to rand/1,” with three control parameters, using a random method to generate the offspring. Experiment results show that CDS has a faster convergence rate and better search ability based on the 23 benchmark functions.

  10. Algorithms and Their Explanations

    NARCIS (Netherlands)

    Benini, M.; Gobbo, F.; Beckmann, A.; Csuhaj-Varjú, E.; Meer, K.

    2014-01-01

    By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the ‘gradient of

  11. Finite lattice extrapolation algorithms

    International Nuclear Information System (INIS)

    Henkel, M.; Schuetz, G.

    1987-08-01

    Two algorithms for sequence extrapolation, due to Vanden Broeck and Schwartz and to Bulirsch and Stoer, are reviewed and critically compared. Applications to three-state and six-state quantum chains and to the (2+1)D Ising model show that the algorithm of Bulirsch and Stoer is superior, in particular if only very few finite lattice data are available. (orig.)

  12. Recursive automatic classification algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Bauman, E V; Dorofeyuk, A A

    1982-03-01

    A variational statement of the automatic classification problem is given. The dependence of the form of the optimal partition surface on the form of the classification objective functional is investigated. A recursive algorithm is proposed for maximising a functional of reasonably general form. The convergence problem is analysed in connection with the proposed algorithm. 8 references.

  13. Graph Colouring Algorithms

    DEFF Research Database (Denmark)

    Husfeldt, Thore

    2015-01-01

    This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...
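
    As a small, self-contained example of the kind of sequential vertex-colouring algorithm such a chapter typically covers, here is a greedy colouring sketch (not taken from the chapter itself); the adjacency-dict representation is an assumption.

```python
def greedy_colouring(adjacency):
    """Greedy vertex colouring: visit vertices in order and give each one the
    smallest colour not already used by a coloured neighbour. Uses at most
    Delta + 1 colours, where Delta is the maximum degree."""
    colour = {}
    for v in adjacency:
        taken = {colour[u] for u in adjacency[v] if u in colour}
        c = 0
        while c in taken:
            c += 1
        colour[v] = c
    return colour

# Example on a 4-cycle: two colours suffice.
print(greedy_colouring({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}))
```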

  14. 8. Algorithm Design Techniques

    Indian Academy of Sciences (India)

    Algorithms - Algorithm Design Techniques. R K Shyamasundar. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 8. Author affiliation: Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India.

  15. Evaluation of an automated single-channel sleep staging algorithm

    Directory of Open Access Journals (Sweden)

    Wang Y

    2015-09-01

    , Total Deep Sleep, and Total REM. Results: Sensitivities of Z-PLUS compared to the PSG Consensus were 0.84 for Light Sleep, 0.74 for Deep Sleep, and 0.72 for REM. Similarly, positive predictive values were 0.85 for Light Sleep, 0.78 for Deep Sleep, and 0.73 for REM. Overall, kappa agreement of 0.72 is indicative of substantial agreement. Conclusion: This study demonstrates that Z-PLUS can automatically assess sleep stage using a single A1–A2 EEG channel when compared to the sleep stage scoring by a consensus of polysomnographic technologists. Our findings suggest that Z-PLUS may be used in conjunction with Z-ALG for single-channel EEG-based sleep staging. Keywords: EEG, sleep staging, algorithm, Zmachine, automatic sleep scoring, sleep detection, single channel

  16. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  17. Group leaders optimization algorithm

    Science.gov (United States)

    Daskin, Anmer; Kais, Sabre

    2011-03-01

    We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single- and multi-dimensional optimization functions along with the energies and the geometric structures of Lennard-Jones clusters are given, as well as the application of the algorithm to quantum circuit design problems. We show that as an improvement over previous methods, the algorithm scales as N^2.5 for Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm, which is a quantum algorithm providing quadratic speedup over its classical counterpart.

  18. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotype polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in analysis of algorithms with those of classical geometry

  19. Governance by algorithms

    Directory of Open Access Journals (Sweden)

    Francesca Musiani

    2013-08-01

    Full Text Available Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawing from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms, and algorithms’ regulation of our society.

  20. Where genetic algorithms excel.

    Science.gov (United States)

    Baum, E B; Boneh, D; Garrett, C

    2001-01-01

    We analyze the performance of a genetic algorithm (GA) we call Culling, and a variety of other algorithms, on a problem we refer to as the Additive Search Problem (ASP). We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Noisy ASP is the first problem we are aware of where a genetic-type algorithm bests all known competitors. We generalize ASP to k-ASP to study whether GAs will achieve "implicit parallelism" in a problem with many more schemata. GAs fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function or the population distribution. We also analyze a mean field theoretic algorithm performing similarly to Culling on many problems. These results provide insight into when and how GAs can beat competing methods.
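    As a loose illustration of the 'Culling' idea discussed above (repeatedly keep only the top fraction of the population and breed the survivors), the toy sketch below runs such a selection scheme on an additive fitness function. The population size, culling fraction, crossover and mutation choices are arbitrary assumptions for the sketch, not the parameters analyzed in the paper.

```python
import random

def additive_fitness(bits, weights):
    """Toy additive problem: fitness is a weighted sum of the set bits."""
    return sum(w for b, w in zip(bits, weights) if b)

def culling_ga(n_bits=30, pop_size=200, keep_frac=0.1, generations=40, seed=0):
    rng = random.Random(seed)
    weights = [rng.random() for _ in range(n_bits)]
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    for _ in range(generations):
        pop.sort(key=lambda ind: additive_fitness(ind, weights), reverse=True)
        survivors = pop[: max(2, int(keep_frac * pop_size))]   # cull all but the best fraction
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_bits)                      # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.01:                             # occasional mutation
                child[rng.randrange(n_bits)] ^= 1
            children.append(child)
        pop = children

    best = max(pop, key=lambda ind: additive_fitness(ind, weights))
    return additive_fitness(best, weights), sum(weights)

print(culling_ga())  # best fitness found vs. the attainable maximum
```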

  1. Network-Oblivious Algorithms

    DEFF Research Database (Denmark)

    Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino

    2016-01-01

    A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network-oblivious algorithm be specified on a parallel model of computation where the only parameter is the problem’s input size, and then evaluated on a model with two parameters, capturing parallelism granularity and communication latency. It is shown that for a wide class of network-oblivious algorithms, optimality … of cache hierarchies, to the realm of parallel computation. Its effectiveness is illustrated by providing optimal network-oblivious algorithms for a number of key problems. Some limitations of the oblivious approach are also discussed.

  2. Total Lightning as an Indicator of Mesocyclone Behavior

    Science.gov (United States)

    Stough, Sarah M.; Carey, Lawrence D.; Schultz, Christopher J.

    2014-01-01

    The apparent relationship between total lightning (in-cloud and cloud-to-ground) and severe weather suggests its operational utility. The goal is the fusion of total lightning with proven tools (i.e., radar) and lightning algorithms. Preliminary work here investigates circulation from the Weather Surveillance Radar-1988 Doppler (WSR-88D) coupled with total lightning data from Lightning Mapping Arrays.

  3. Automated essay scoring and the future of educational assessment in medical education.

    Science.gov (United States)

    Gierl, Mark J; Latifi, Syed; Lai, Hollis; Boulais, André-Philippe; De Champlain, André

    2014-10-01

    Constructed-response tasks, which range from short-answer tests to essay questions, are included in assessments of medical knowledge because they allow educators to measure students' ability to think, reason, solve complex problems, communicate and collaborate through their use of writing. However, constructed-response tasks are also costly to administer and challenging to score because they rely on human raters. One alternative to the manual scoring process is to integrate computer technology with writing assessment. The process of scoring written responses using computer programs is known as 'automated essay scoring' (AES). An AES system uses a computer program that builds a scoring model by extracting linguistic features from a constructed-response prompt that has been pre-scored by human raters and then, using machine learning algorithms, maps the linguistic features to the human scores so that the computer can be used to classify (i.e. score or grade) the responses of a new group of students. The accuracy of the score classification can be evaluated using different measures of agreement. Automated essay scoring provides a method for scoring constructed-response tests that complements the current use of selected-response testing in medical education. The method can serve medical educators by providing the summative scores required for high-stakes testing. It can also serve medical students by providing them with detailed feedback as part of a formative assessment process. Automated essay scoring systems yield scores that consistently agree with those of human raters at a level as high, if not higher, as the level of agreement among human raters themselves. The system offers medical educators many benefits for scoring constructed-response tasks, such as improving the consistency of scoring, reducing the time required for scoring and reporting, minimising the costs of scoring, and providing students with immediate feedback on constructed-response tasks. © 2014
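    To make the AES pipeline described above concrete, the sketch below extracts a few crude linguistic features from constructed responses and maps them to human scores with a least-squares fit, then scores a new response. It is a deliberately minimal stand-in: the features, the tiny invented training set, and the linear model are assumptions for illustration, not the feature set or learning algorithm of any production AES system.

```python
import numpy as np

def features(text):
    """Very crude 'linguistic' features: word count, mean word length, sentence count, type/token ratio."""
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    return [
        len(words),
        np.mean([len(w) for w in words]) if words else 0.0,
        len(sentences),
        len(set(w.lower() for w in words)) / len(words) if words else 0.0,
    ]

# Hypothetical pre-scored training essays (text, human score)
train = [
    ("The heart pumps blood. It has four chambers and valves that prevent backflow.", 4),
    ("Blood goes around the body.", 2),
    ("Cardiac output depends on stroke volume and heart rate, regulated by autonomic tone.", 5),
    ("The heart is an organ.", 1),
]
X = np.array([features(t) for t, _ in train], dtype=float)
y = np.array([s for _, s in train], dtype=float)

# Fit a linear scoring model (least squares with an intercept term)
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(text):
    x = np.append(np.array(features(text), dtype=float), 1.0)
    return float(x @ coef)

print(round(predict("The heart has chambers and pumps blood through valves."), 1))
```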

  4. Translation and validation of the new version of the Knee Society Score - The 2011 KS Score - into Brazilian Portuguese.

    Science.gov (United States)

    Silva, Adriana Lucia Pastore E; Croci, Alberto Tesconi; Gobbi, Riccardo Gomes; Hinckel, Betina Bremer; Pecora, José Ricardo; Demange, Marco Kawamura

    2017-01-01

    Translation, cultural adaptation, and validation of the new version of the Knee Society Score - The 2011 KS Score - into Brazilian Portuguese and verification of its measurement properties, reproducibility, and validity. In 2012, the new version of the Knee Society Score was developed and validated. This scale comprises four separate subscales: (a) objective knee score (seven items: 100 points); (b) patient satisfaction score (five items: 40 points); (c) patient expectations score (three items: 15 points); and (d) functional activity score (19 items: 100 points). A total of 90 patients aged 55-85 years were evaluated in a clinical cross-sectional study. The pre-operative translated version was applied to patients with TKA referral, and the post-operative translated version was applied to patients who underwent TKA. Each patient answered the same questionnaire twice and was evaluated by two experts in orthopedic knee surgery. Evaluations were performed pre-operatively and three, six, or 12 months post-operatively. The reliability of the questionnaire was evaluated using the intraclass correlation coefficient (ICC) between the two applications. Internal consistency was evaluated using Cronbach's alpha. The ICC found no difference between the means of the pre-operative, three-month, and six-month post-operative evaluations between sub-scale items. The Brazilian Portuguese version of The 2011 KS Score is a valid and reliable instrument for objective and subjective evaluation of the functionality of Brazilian patients who undergo TKA and revision TKA.

  5. The BRICS (Bronchiectasis Radiologically Indexed CT Score): A Multicenter Study Score for Use in Idiopathic and Postinfective Bronchiectasis.

    Science.gov (United States)

    Bedi, Pallavi; Chalmers, James D; Goeminne, Pieter C; Mai, Cindy; Saravanamuthu, Pira; Velu, Prasad Palani; Cartlidge, Manjit K; Loebinger, Michael R; Jacob, Joe; Kamal, Faisal; Schembri, Nicola; Aliberti, Stefano; Hill, Uta; Harrison, Mike; Johnson, Christopher; Screaton, Nicholas; Haworth, Charles; Polverino, Eva; Rosales, Edmundo; Torres, Antoni; Benegas, Michael N; Rossi, Adriano G; Patel, Dilip; Hill, Adam T

    2018-05-01

    The goal of this study was to develop a simplified radiological score that could assess clinical disease severity in bronchiectasis. The Bronchiectasis Radiologically Indexed CT Score (BRICS) was devised based on a multivariable analysis of the Bhalla score and its ability in predicting clinical parameters of severity. The score was then externally validated in six centers in 302 patients. A total of 184 high-resolution CT scans were scored for the validation cohort. In a multiple logistic regression model, disease severity markers significantly associated with the Bhalla score were percent predicted FEV1, sputum purulence, and exacerbations requiring hospital admission. Components of the Bhalla score that were significantly associated with the disease severity markers were bronchial dilatation and number of bronchopulmonary segments with emphysema. The BRICS was developed with these two parameters. The receiver operating characteristic curve values for BRICS in the derivation cohort were 0.79 for percent predicted FEV1, 0.71 for sputum purulence, and 0.75 for hospital admissions per year; these values were 0.81, 0.70, and 0.70, respectively, in the validation cohort. Sputum free neutrophil elastase activity was significantly elevated in the group with emphysema on CT imaging. A simplified CT scoring system can be used as an adjunct to clinical parameters to predict disease severity in patients with idiopathic and postinfective bronchiectasis. Copyright © 2017 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.

  6. Interval Coded Scoring: a toolbox for interpretable scoring systems

    Directory of Open Access Journals (Sweden)

    Lieven Billiet

    2018-04-01

    Full Text Available Over the last decades, clinical decision support systems have been gaining importance. They help clinicians to make effective use of the overload of available information to obtain correct diagnoses and appropriate treatments. However, their power often comes at the cost of a black box model which cannot be interpreted easily. This interpretability is of paramount importance in a medical setting with regard to trust and (legal) responsibility. In contrast, existing medical scoring systems are easy to understand and use, but they are often a simplified rule-of-thumb summary of previous medical experience rather than a well-founded system based on available data. Interval Coded Scoring (ICS) connects these two approaches, exploiting the power of sparse optimization to derive scoring systems from training data. The presented toolbox interface makes this theory easily applicable to both small and large datasets. It contains two possible problem formulations based on linear programming or elastic net. Both allow the construction of a model for a binary classification problem and the establishment of risk profiles that can be used for future diagnosis. All of this requires only a few lines of code. ICS differs from standard machine learning through its model consisting of interpretable main effects and interactions. Furthermore, insertion of expert knowledge is possible because the training can be semi-automatic. This allows end users to make a trade-off between complexity and performance based on cross-validation results and expert knowledge. Additionally, the toolbox offers an accessible way to assess classification performance via accuracy and the ROC curve, whereas the calibration of the risk profile can be evaluated via a calibration curve. Finally, the colour-coded model visualization has particular appeal if one wants to apply ICS manually on new observations, as well as for validation by experts in the specific application domains. The validity and applicability…
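    The general recipe ICS follows, deriving a sparse model from training data and presenting it as an additive point system, can be imitated outside the toolbox with an L1-penalized logistic regression whose surviving coefficients are rescaled and rounded into integer points. The sketch below does exactly that on synthetic data; the dataset, the scaling rule and the scikit-learn model are assumptions for illustration and are not the ICS implementation itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic binary-classification data: 200 patients, 6 candidate risk factors
X = rng.normal(size=(200, 6))
true_w = np.array([1.5, -1.0, 0.0, 0.8, 0.0, 0.0])
y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Sparse (L1-penalized) logistic regression keeps only the informative factors
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.3).fit(X, y)
coefs = model.coef_.ravel()

# Turn surviving coefficients into integer 'points' (scaling factor chosen for readability)
nz = coefs[coefs != 0]
scale = 2.0 / np.abs(nz).min() if nz.size else 1.0
points = np.rint(coefs * scale).astype(int)
print("points per factor:", points)

def risk_score(x):
    """Simple additive score: sum of points for one patient's (standardized) factors."""
    return int(np.rint(points @ x))

print("example patient score:", risk_score(X[0]))
```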

  7. A Motion Estimation Algorithm Using DTCWT and ARPS

    Directory of Open Access Journals (Sweden)

    Unan Y. Oktiawati

    2013-09-01

    Full Text Available In this paper, a hybrid motion estimation algorithm utilizing the Dual Tree Complex Wavelet Transform (DTCWT) and the Adaptive Rood Pattern Search (ARPS) block is presented. The proposed algorithm first transforms each video sequence with DTCWT. Frame n of the video sequence is used as a reference input and frame n+2 is used to find the motion vector. Next, the ARPS block search algorithm is carried out and followed by an inverse DTCWT. The motion compensation is then carried out on each inversed frame n and motion vector. The results show that PSNR can be improved for mobile devices without degrading quality. The proposed algorithm also uses less memory compared to the DCT-based algorithm. The main contribution of this work is a hybrid wavelet-based motion estimation algorithm for mobile devices. Another contribution is the visual quality scoring system as used in Section 6.

  8. Total least squares for anomalous change detection

    Science.gov (United States)

    Theiler, James; Matsekh, Anna M.

    2010-04-01

    A family of subtraction-based anomalous change detection algorithms is derived from a total least squares (TLSQ) framework. This provides an alternative to the well-known chronochrome algorithm, which is derived from ordinary least squares. In both cases, the most anomalous changes are identified with the pixels that exhibit the largest residuals with respect to the regression of the two images against each other. The family of TLSQ-based anomalous change detectors is shown to be equivalent to the subspace RX formulation for straight anomaly detection, but applied to the stacked space. However, this family is not invariant to linear coordinate transforms. On the other hand, whitened TLSQ is coordinate invariant, and special cases of it are equivalent to canonical correlation analysis and optimized covariance equalization. What whitened TLSQ offers is a generalization of these algorithms with the potential for better performance.
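    The core idea above, flagging as anomalous the pixels with the largest regression residuals when the regression is done in a total-least-squares sense, can be illustrated in a single-band toy setting with an SVD. The real detectors operate on multispectral pixel vectors in the stacked space and include whitening, which this sketch omits; the synthetic image pair below is invented.

```python
import numpy as np

def tls_residuals(x, y):
    """Perpendicular (total-least-squares) residuals of pixel pairs (x_i, y_i)
    with respect to the best-fit line through the centered data."""
    data = np.column_stack([x - x.mean(), y - y.mean()])
    # SVD: the first right-singular vector spans the TLS fit direction,
    # the second spans the orthogonal (residual) direction.
    _, _, vt = np.linalg.svd(data, full_matrices=False)
    return np.abs(data @ vt[1])

rng = np.random.default_rng(1)
img_t1 = rng.normal(size=1000)                             # pixel values at time 1
img_t2 = 0.9 * img_t1 + rng.normal(scale=0.1, size=1000)   # mostly pervasive (uninteresting) change
img_t2[::100] += 3.0                                       # a few genuinely anomalous pixels

residual = tls_residuals(img_t1, img_t2)
anomalies = np.argsort(residual)[-10:]                     # pixels with the largest residuals
print(sorted(anomalies.tolist()))
```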

  9. The Effectiveness of Neurofeedback Training in Algorithmic Thinking Skills Enhancement.

    Science.gov (United States)

    Plerou, Antonia; Vlamos, Panayiotis; Triantafillidis, Chris

    2017-01-01

    Although research on learning difficulties is overall at an advanced stage, studies of algorithmic thinking difficulties are limited, since interest in this field has been raised only recently. In this paper, an interactive evaluation screener enhanced with neurofeedback elements, intended for the evaluation of algorithmic task solving, is proposed. The effect of HCI, color, narration and neurofeedback elements was evaluated in the context of algorithmic task assessment. Results suggest enhanced performance for the neurofeedback-trained group in terms of total correct and optimal algorithmic task solutions. Furthermore, findings suggest that the skills concerning the way an algorithm is conceived, designed, applied and evaluated are essentially improved.

  10. Diagnostic performance of line-immunoassay based algorithms for incident HIV-1 infection

    Directory of Open Access Journals (Sweden)

    Schüpbach Jörg

    2012-04-01

    Full Text Available Abstract Background: Serologic testing algorithms for recent HIV seroconversion (STARHS) provide important information for HIV surveillance. We have previously demonstrated that a patient's antibody reaction pattern in a confirmatory line immunoassay (INNO-LIA™ HIV I/II Score) provides information on the duration of infection, which is unaffected by clinical, immunological and viral variables. In this report we have set out to determine the diagnostic performance of Inno-Lia algorithms for identifying incident infections in patients with known duration of infection and evaluated the algorithms in annual cohorts of HIV notifications. Methods: Diagnostic sensitivity was determined in 527 treatment-naive patients infected for up to 12 months. Specificity was determined in 740 patients infected for longer than 12 months. Plasma was tested by Inno-Lia and classified as either incident (…). Results: The 10 best algorithms had a mean raw sensitivity of 59.4% and a mean specificity of 95.1%. Adjustment for overrepresentation of patients in the first quarter year of infection further reduced the sensitivity. In the preferred model, the mean adjusted sensitivity was 37.4%. Application of the 10 best algorithms to four annual cohorts of HIV-1 notifications totalling 2'595 patients yielded a mean IIR of 0.35 in 2005/6 (baseline) and of 0.45, 0.42 and 0.35 in 2008, 2009 and 2010, respectively. The increase between baseline and 2008 and the ensuing decreases were highly significant. Other adjustment models yielded different absolute IIR, although the relative changes between the cohorts were identical for all models. Conclusions: The method can be used for comparing IIR in annual cohorts of HIV notifications. The use of several different algorithms in combination, each with its own sensitivity and specificity to detect incident infection, is advisable as this reduces the impact of individual imperfections stemming primarily from relatively low sensitivities and…

  11. High-Throughput Scoring of Seed Germination.

    Science.gov (United States)

    Ligterink, Wilco; Hilhorst, Henk W M

    2017-01-01

    High-throughput analysis of seed germination for phenotyping large genetic populations or mutant collections is very labor intensive and would highly benefit from an automated setup. Although very often used, the total germination percentage after a nominated period of time is not very informative as it lacks information about start, rate, and uniformity of germination, which are highly indicative of such traits as dormancy, stress tolerance, and seed longevity. The calculation of cumulative germination curves requires information about germination percentage at various time points. We developed the GERMINATOR package: a simple, highly cost-efficient, and flexible procedure for high-throughput automatic scoring and evaluation of germination that can be implemented without the use of complex robotics. The GERMINATOR package contains three modules: (I) design of experimental setup with various options to replicate and randomize samples; (II) automatic scoring of germination based on the color contrast between the protruding radicle and seed coat on a single image; and (III) curve fitting of cumulative germination data and the extraction, recap, and visualization of the various germination parameters. GERMINATOR is a freely available package that allows the monitoring and analysis of several thousands of germination tests, several times a day by a single person.
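    The curve-fitting step mentioned in module (III) is essentially the fitting of a sigmoidal cumulative germination curve to time-stamped germination counts. The sketch below fits a four-parameter Hill-type curve to invented data with SciPy; the functional form, parameter names and data are assumptions for illustration and do not reproduce the GERMINATOR code, which is distributed as its own package.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(t, y0, a, b, c):
    """Four-parameter Hill-type cumulative germination curve:
    y0 = baseline, a = maximum germination, b = steepness, c = time of 50% germination."""
    return y0 + (a - y0) * t**b / (c**b + t**b)

# Invented cumulative germination percentages scored at several time points (hours)
t = np.array([6, 12, 24, 36, 48, 60, 72, 96], dtype=float)
germ = np.array([0, 2, 10, 45, 72, 85, 90, 92], dtype=float)

popt, _ = curve_fit(hill, t, germ, p0=[0, 90, 4, 40], maxfev=10000)
y0, a, b, c = popt
print(f"max germination ~ {a:.1f}%, t50 ~ {c:.1f} h")

# Derived parameters such as uniformity can then be read off the fitted curve,
# e.g. the time interval between 16% and 84% of the fitted maximum germination.
```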

  12. The Rectal Cancer Female Sexuality Score

    DEFF Research Database (Denmark)

    Thyø, Anne; Emmertsen, Katrine J; Laurberg, Søren

    2018-01-01

    BACKGROUND: Sexual dysfunction and impaired quality of life are potential side effects of rectal cancer treatment. OBJECTIVE: The objective of this study was to develop and validate a simple scoring system intended to evaluate sexual function in women treated for rectal cancer. DESIGN: This is a population-based cross-sectional study. SETTINGS: Female patients diagnosed with rectal cancer between 2001 and 2014 were identified by using the Danish Colorectal Cancer Group's database. Participants filled in the validated Sexual Function Vaginal Changes questionnaire. Women declared to be sexually active … in the validation group. PATIENTS: Female patients with rectal cancer above the age of 18 who underwent abdominoperineal resection, Hartmann procedure, or total/partial mesorectal excision were selected. MAIN OUTCOME MEASURES: The primary outcome measured was the quality of life that was negatively affected because…

  13. Algorithms in Singular

    Directory of Open Access Journals (Sweden)

    Hans Schonemann

    1996-12-01

    Full Text Available Some algorithms for singularity theory and algebraic geometry. The use of Gröbner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Gröbner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Gröbner bases can be found in the textbook [CLO], an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (Buchberger algorithm [B1], [B2]) and tangent cone orderings (Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring in 0. Recent versions include algorithms to factorize polynomials and a factorizing Gröbner basis algorithm. For a complete description of SINGULAR see [Si].

  14. Exploring a Source of Uneven Score Equity across the Test Score Range

    Science.gov (United States)

    Huggins-Manley, Anne Corinne; Qiu, Yuxi; Penfield, Randall D.

    2018-01-01

    Score equity assessment (SEA) refers to an examination of population invariance of equating across two or more subpopulations of test examinees. Previous SEA studies have shown that score equity may be present for examinees scoring at particular test score ranges but absent for examinees scoring at other score ranges. No studies to date have…

  15. Association between sleep stages and hunger scores in 36 children.

    Science.gov (United States)

    Arun, R; Pina, P; Rubin, D; Erichsen, D

    2016-10-01

    Childhood obesity is a growing health challenge. Recent studies show that children with late bedtime and late awakening are more obese independent of total sleep time. In adolescents and adults, a delayed sleep phase has been associated with higher caloric intake. Furthermore, an adult study showed a positive correlation between REM sleep and energy balance. This relationship has not been demonstrated in children. However, it may be important as a delayed sleep phase would increase the proportion of REM sleep. This study investigated the relationship between hunger score and sleep physiology in a paediatric population. Thirty-six patients referred for a polysomnogram for suspected obstructive sleep apnoea were enrolled in the study. Sleep stages were recorded as part of the polysomnogram. Hunger scores were obtained using a visual analogue scale. Mean age was 9.6 ± 3.5 years. Mean hunger scores were 2.07 ± 2.78. Hunger scores were positively correlated with percentage of total rapid eye movement (REM) sleep (r = 0.438, P hunger score (r = -0.360, P hunger scores. These findings suggest that delayed bedtime, which increases the proportion of REM sleep and decreases the proportion of SWS, results in higher hunger levels in children. © 2015 World Obesity.

  16. A New Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Medha Gupta

    2016-07-01

    Full Text Available Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such new swarm-based metaheuristic algorithm, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and since then has been successfully used for solving various optimization problems. In this work, we propose a new modified version of the Firefly algorithm (MoFA), and its performance is later compared with the standard firefly algorithm along with various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.

  17. Magnet sorting algorithms

    International Nuclear Information System (INIS)

    Dinev, D.

    1996-01-01

    Several new algorithms for sorting of dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms make use of a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after the magnet sorting. (orig.)
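    The abstract gives no implementation details, but the pattern it describes, a random combinatorial search over magnet orderings driven by a goal function standing in for the phase-space smear, looks roughly like the accept-if-better pair-swap loop sketched below. The goal function here is a crude placeholder (the running imbalance of field errors around the ring), not the accelerator-physics smear computation used in the paper.

```python
import random

def goal(order, errors):
    """Placeholder 'smear' proxy: worst running imbalance of field errors around the ring."""
    running, worst = 0.0, 0.0
    for idx in order:
        running += errors[idx]
        worst = max(worst, abs(running))
    return worst

def sort_magnets(errors, iterations=20000, seed=0):
    rng = random.Random(seed)
    order = list(range(len(errors)))
    best = goal(order, errors)
    for _ in range(iterations):
        i, j = rng.sample(range(len(order)), 2)       # propose a random pair swap
        order[i], order[j] = order[j], order[i]
        score = goal(order, errors)
        if score <= best:
            best = score                              # keep improving (or equal) swaps
        else:
            order[i], order[j] = order[j], order[i]   # otherwise undo the swap
    return order, best

rng_data = random.Random(42)
errors = [rng_data.gauss(0, 1) for _ in range(24)]    # toy measured magnet field errors
order, best = sort_magnets(errors)
print("final goal value:", round(best, 3))
```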

  18. Duality based optical flow algorithms with applications

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau

    We consider the popular TV-L1 optical flow formulation, and the so-called duality based algorithm for minimizing the TV-L1 energy. The original formulation is extended to allow for vector valued images, and minimization results are given. In addition we consider different definitions of total variation regularization, and related formulations of the optical flow problem that may be used with a duality based algorithm. We present a highly optimized algorithmic setup to estimate optical flows, and give five novel applications. The first application is registration of medical images, where X-ray images of different hands, taken using different imaging devices, are registered using a TV-L1 optical flow algorithm. We propose to regularize the input images, using sparsity enhancing regularization of the image gradient to improve registration results. The second application is registration of 2D…

  19. Total Quality Leadership

    Science.gov (United States)

    1991-01-01

    More than 750 NASA, government, contractor, and academic representatives attended the Seventh Annual NASA/Contractors Conference on Quality and Productivity. The panel presentations and Keynote speeches revolving around the theme of total quality leadership provided a solid base of understanding of the importance, benefits, and principles of total quality management (TQM). The presentations from the conference are summarized.

  20. Genoptræning efter total knæalloplastik [Rehabilitation after total knee arthroplasty]

    DEFF Research Database (Denmark)

    Holm, Bente; Kehlet, Henrik

    2009-01-01

    The short- and long-term benefits of post-discharge physiotherapy regimens after total knee arthroplasty are debatable. A national survey including hospitals in Denmark that perform total knee arthroplasty showed a large variability in indication and regimen for post-knee arthroplasty...

  1. Total dose meter development

    International Nuclear Information System (INIS)

    Brackenbush, L.W.

    1986-09-01

    This report describes an alarming "pocket" monitor/dosimeter, based on a tissue-equivalent proportional counter, that measures both neutron and gamma dose and determines dose equivalent for the mixed radiation field. The report details the operation of the device and provides information on: the necessity for a device to measure dose equivalent in mixed radiation fields; the mathematical theory required to determine dose equivalent from tissue-equivalent proportional counters; the detailed electronic circuits required; the algorithms required in the microprocessor used to calculate dose equivalent; the features of the instrument; and program accomplishments and future plans.

  2. Evaluation of Scoring Skills and Non Scoring Skills in the Brazilian SuperLeague Women’s Volleyball

    Directory of Open Access Journals (Sweden)

    Aluizio Otávio Gouvêa Ferreira Oliveira

    2016-09-01

    Full Text Available This study analyzed all the games (n=253) from the 2011/2012 and 2012/2013 Seasons of Brazilian SuperLeague Women’s Volleyball, to identify the game-related factors that discriminate in favor of winning and losing teams. In the 2011/2012 Season, the Total Shares Setting (TAL) and Total Points Attack (TPA) were factors that discriminated in favor of a defeat. The factors that determined victory were the Total Shares Serve (TAS), Total Shares Defense (TAD), Total Shares Reception (TAR) and Total Defense Excellent (TDE). In the 2012/2013 Season, the factor (TAD) most often discriminated in favor of victory, and the factor that led to defeat was the Total Points Made (TPF). The scoring skills (TPA) and (TPF) discriminated the final outcome of the game, but surprisingly are associated with defeat, while (TAS) is supposedly associated with victory. The non-scoring skills (TAD), (TAR) and (TDE) discriminate the end result of the game and may be associated with victory. The non-scoring skill (TAL) determines the outcome of the game and is supposedly associated with defeat.

  3. The ERICE-score: the new native cardiovascular score for the low-risk and aged Mediterranean population of Spain.

    Science.gov (United States)

    Gabriel, Rafael; Brotons, Carlos; Tormo, M José; Segura, Antonio; Rigo, Fernando; Elosua, Roberto; Carbayo, Julio A; Gavrila, Diana; Moral, Irene; Tuomilehto, Jaakko; Muñiz, Javier

    2015-03-01

    In Spain, data based on large population-based cohorts adequate to provide an accurate prediction of cardiovascular risk have been scarce. Thus, calibration of the EuroSCORE and Framingham scores has been proposed and done for our population. The aim was to develop a native risk prediction score to accurately estimate the individual cardiovascular risk in the Spanish population. Seven Spanish population-based cohorts including middle-aged and elderly participants were assembled. There were 11800 people (6387 women) representing 107915 person-years of follow-up. A total of 1214 cardiovascular events were identified, of which 633 were fatal. Cox regression analyses were conducted to examine the contributions of the different variables to the 10-year total cardiovascular risk. Age was the strongest cardiovascular risk factor. High systolic blood pressure, diabetes mellitus and smoking were strong predictive factors. The contribution of serum total cholesterol was small. Antihypertensive treatment also had a significant impact on cardiovascular risk, greater in men than in women. The model showed a good discriminative power (C-statistic=0.789 in men and C=0.816 in women). Ten-year risk estimations are displayed graphically in risk charts separately for men and women. The ERICE is a new native cardiovascular risk score for the Spanish population derived from the background and contemporaneous risk of several Spanish cohorts. The ERICE score offers the direct and reliable estimation of total cardiovascular risk, taking in consideration the effect of diabetes mellitus and cardiovascular risk factor management. The ERICE score is a practical and useful tool for clinicians to estimate the total individual cardiovascular risk in Spain. Copyright © 2014 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  4. Benchmarking homogenization algorithms for monthly data

    Science.gov (United States)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratiannil, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.; Willett, K.

    2013-09-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies. The algorithms were validated against a realistic benchmark dataset. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including i) the centered root mean square error relative to the true homogeneous values at various averaging scales, ii) the error in linear trend estimates and iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that currently automatic algorithms can perform as well as manual ones.

  5. An efficient grid layout algorithm for biological networks utilizing various biological attributes

    Directory of Open Access Journals (Sweden)

    Kato Mitsuru

    2007-03-01

    Full Text Available Abstract Background: Clearly visualized biopathways provide a great help in understanding biological systems. However, manual drawing of large-scale biopathways is time consuming. We proposed a grid layout algorithm that can handle gene-regulatory networks and signal transduction pathways by considering edge-edge crossing, node-edge crossing, distance measure between nodes, and subcellular localization information from Gene Ontology. Consequently, the layout algorithm succeeded in drastically reducing these crossings in the apoptosis model. However, for larger-scale networks, we encountered three problems: (i) the initial layout is often very far from any local optimum because nodes are initially placed at random; (ii) from a biological viewpoint, human layouts still exceed automatic layouts in understandability because, except for subcellular localization, the algorithm does not fully utilize biological information of pathways; and (iii) it employs a local search strategy in which the neighborhood is obtained by moving one node at each step, and automatic layouts suggest that simultaneous movements of multiple nodes are necessary for better layouts, while such an extension may worsen the time complexity. Results: We propose a new grid layout algorithm. To address problem (i), we devised a new force-directed algorithm whose output is suitable as the initial layout. For (ii), we considered that an appropriate alignment of nodes having the same biological attribute is one of the most important factors for comprehension, and we defined a new score function that gives an advantage to such configurations. For solving problem (iii), we developed a search strategy that considers swapping nodes as well as moving a node, while keeping the order of the time complexity. Though a naïve implementation increases the time complexity by one order, we solved this difficulty by devising a method that caches differences between scores of a layout and its possible updates…

  6. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed but it also raises some fundamental questions, including: (i) which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode. (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria. (iii) How can we design new algorithms specifically for parallel systems. (iv) For multi-processor systems how can we handle the software aspects of the interprocessor communications. Aspects of these questions illustrated by examples are considered in these lectures. (orig.)

  7. Fluid structure coupling algorithm

    International Nuclear Information System (INIS)

    McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.

    1980-01-01

    A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid structure and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed have been extended to three dimensions and implemented in the computer code PELE-3D

  8. Algorithmic phase diagrams

    Science.gov (United States)

    Hockney, Roger

    1987-01-01

    Algorithmic phase diagrams are a neat and compact representation of the results of comparing the execution time of several algorithms for the solution of the same problem. As an example, recent results of Gannon and Van Rosendale on the solution of multiple tridiagonal systems of equations are shown in the form of such diagrams. The act of preparing these diagrams has revealed an unexpectedly complex relationship between the best algorithm and the number and size of the tridiagonal systems, which was not evident from the algebraic formulae in the original paper. Even so, for a particular computer, one diagram suffices to predict the best algorithm for all problems that are likely to be encountered, the prediction being read directly from the diagram without complex calculation.

  9. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  10. Inclusive Flavour Tagging Algorithm

    International Nuclear Information System (INIS)

    Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex

    2016-01-01

    Identifying the production flavour of neutral B mesons is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capacities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tag the flavour of B mesons in any proton-proton experiment. (paper)

  11. Unsupervised learning algorithms

    CERN Document Server

    Aydin, Kemal

    2016-01-01

    This book summarizes the state-of-the-art in unsupervised learning. The contributors discuss how with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation has resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...

  12. Laparoscopic total pancreatectomy

    Science.gov (United States)

    Wang, Xin; Li, Yongbin; Cai, Yunqiang; Liu, Xubao; Peng, Bing

    2017-01-01

    Abstract Rationale: Laparoscopic total pancreatectomy is a complicated surgical procedure and has rarely been reported. This study was conducted to investigate the safety and feasibility of laparoscopic total pancreatectomy. Patients and Methods: Three patients underwent laparoscopic total pancreatectomy between May 2014 and August 2015. We reviewed their general demographic data, perioperative details, and short-term outcomes. General morbidity was assessed using the Clavien–Dindo classification and delayed gastric emptying (DGE) was evaluated by the International Study Group of Pancreatic Surgery (ISGPS) definition. Diagnosis and Outcomes: The indications for laparoscopic total pancreatectomy were intraductal papillary mucinous neoplasm (IPMN) (n = 2) and pancreatic neuroendocrine tumor (PNET) (n = 1). All patients underwent laparoscopic pylorus- and spleen-preserving total pancreatectomy; the mean operative time was 490 minutes (range 450–540 minutes) and the mean estimated blood loss was 266 mL (range 100–400 mL); 2 patients suffered postoperative complications. All the patients recovered uneventfully with conservative treatment and were discharged with a mean hospital stay of 18 days (range 8–24 days). The short-term (108 to 600 days) follow-up demonstrated that the 3 patients had normal and consistent glycated hemoglobin (HbA1c) levels with acceptable quality of life. Lessons: Laparoscopic total pancreatectomy is feasible and safe in selected patients, and the pylorus- and spleen-preserving technique should be considered. Further prospective randomized studies are needed to obtain a comprehensive understanding of the role of the laparoscopic technique in total pancreatectomy. PMID:28099344

  13. Quasi-supervised scoring of human sleep in polysomnograms using augmented input variables.

    Science.gov (United States)

    Yaghouby, Farid; Sunderam, Sridhar

    2015-04-01

    The limitations of manual sleep scoring make computerized methods highly desirable. Scoring errors can arise from human rater uncertainty or inter-rater variability. Sleep scoring algorithms either come as supervised classifiers that need scored samples of each state to be trained, or as unsupervised classifiers that use heuristics or structural clues in unscored data to define states. We propose a quasi-supervised classifier that models observations in an unsupervised manner but mimics a human rater wherever training scores are available. EEG, EMG, and EOG features were extracted in 30s epochs from human-scored polysomnograms recorded from 42 healthy human subjects (18-79 years) and archived in an anonymized, publicly accessible database. Hypnograms were modified so that: 1. Some states are scored but not others; 2. Samples of all states are scored but not for transitional epochs; and 3. Two raters with 67% agreement are simulated. A framework for quasi-supervised classification was devised in which unsupervised statistical models-specifically Gaussian mixtures and hidden Markov models--are estimated from unlabeled training data, but the training samples are augmented with variables whose values depend on available scores. Classifiers were fitted to signal features incorporating partial scores, and used to predict scores for complete recordings. Performance was assessed using Cohen's Κ statistic. The quasi-supervised classifier performed significantly better than an unsupervised model and sometimes as well as a completely supervised model despite receiving only partial scores. The quasi-supervised algorithm addresses the need for classifiers that mimic scoring patterns of human raters while compensating for their limitations. Copyright © 2015 Elsevier Ltd. All rights reserved.
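    A very loose sketch of the augmented-input idea can be written with an off-the-shelf Gaussian mixture: the feature vectors are extended with an extra column that carries the human score where one exists and a neutral value where it does not, the mixture is fitted without labels, and prediction uses the neutral value everywhere. The toy features, the neutral-value choice, and the plain mixture (the paper also pairs mixtures with hidden Markov dynamics) are assumptions for illustration only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Toy epoch features for three 'states' (e.g., EEG/EMG summary measures per 30-s epoch)
n_per_state, n_states = 200, 3
X = np.vstack([rng.normal(loc=3 * k, scale=1.0, size=(n_per_state, 2)) for k in range(n_states)])
true_state = np.repeat(np.arange(n_states), n_per_state)

# Partial human scores: only 20% of epochs carry a rater label
scored = rng.random(len(X)) < 0.2
neutral = float(np.mean(np.arange(n_states)))                 # neutral value for unscored epochs
augment = np.where(scored, true_state.astype(float), neutral)

# Fit an unsupervised mixture on features augmented with the score-derived variable
gmm = GaussianMixture(n_components=n_states, random_state=0).fit(np.column_stack([X, augment]))

# Classify all epochs with the augmented variable set to its neutral value
X_neutral = np.column_stack([X, np.full(len(X), neutral)])
pred = gmm.predict(X_neutral)
print("cluster sizes:", np.bincount(pred))
```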

  14. Dutch validation of the low anterior resection syndrome score.

    Science.gov (United States)

    Hupkens, B J P; Breukink, S O; Olde Reuver Of Briel, C; Tanis, P J; de Noo, M E; van Duijvendijk, P; van Westreenen, H L; Dekker, J W T; Chen, T Y T; Juul, T

    2018-04-21

    The aim of this study was to validate the Dutch translation of the low anterior resection syndrome (LARS) score in a population of Dutch rectal cancer patients. Patients who underwent surgery for rectal cancer received the LARS score questionnaire, a single quality of life (QoL) category question and the European Organization for Research and Treatment of Cancer (EORTC) QLQ-C30 questionnaire. A subgroup of patients received the LARS score twice to assess the test-retest reliability. A total of 165 patients were included in the analysis, identified in six Dutch centres. The response rate was 62.0%. The percentage of patients who reported 'major LARS' was 59.4%. There was a high proportion of patients with a perfect or moderate fit between the QoL category question and the LARS score, showing a good convergent validity. The LARS score was able to discriminate between patients with or without neoadjuvant radiotherapy (P = 0.003), between total and partial mesorectal excision (P = 0.008) and between age groups (P = 0.039). There was a statistically significant association between a higher LARS score and an impaired function on the global QoL subscale and the physical, role, emotional and social functioning subscales of the EORTC QLQ-C30 questionnaire. The test-retest reliability of the LARS score was good, with an interclass correlation coefficient of 0.79. The good psychometric properties of the Dutch version of the LARS score are comparable overall to the earlier validations in other countries. Therefore, the Dutch translation can be considered to be a valid tool for assessing LARS in Dutch rectal cancer patients. Colorectal Disease © 2018 The Association of Coloproctology of Great Britain and Ireland.

  15. Development and Validation of a Diabetic Retinopathy Referral Algorithm Based on Single-Field Fundus Photography.

    Directory of Open Access Journals (Sweden)

    Sangeetha Srinivasan

    Full Text Available (i) To develop a simplified algorithm to identify and refer diabetic retinopathy (DR) from single-field retinal images, specifically sight-threatening diabetic retinopathy, for appropriate care; (ii) to determine the agreement and diagnostic accuracy of the algorithm, as a pilot study among optometrists, versus "gold standard" (retinal specialist) grading. The severity of DR was scored based on colour photographs using a colour-coded algorithm, which included the lesions of DR and the number of quadrants involved. A total of 99 participants underwent training followed by evaluation, and data from the 99 participants were analyzed. Fifty posterior pole 45-degree retinal images with all stages of DR were presented. Kappa scores (κ), areas under the receiver operating characteristic curves (AUCs), sensitivity and specificity were determined, with further comparison between working optometrists and optometry students. Mean age of the participants was 22 years (range: 19-43 years), 87% being women. Participants correctly identified 91.5% of images that required immediate referral (κ = 0.696), 62.5% of images requiring review after 6 months (κ = 0.462), and 51.2% of those requiring review after 1 year (κ = 0.532). The sensitivity and specificity of the optometrists were 91% and 78% for immediate referral, 62% and 84% for review after 6 months, and 51% and 95% for review after 1 year, respectively. The AUC was highest (0.855) for immediate referral, second highest (0.824) for review after 1 year, and 0.727 for the review-after-6-months criterion. Optometry students performed better than the working optometrists for all grades of referral. The diabetic retinopathy algorithm assessed in this work is a simple and fairly accurate method for appropriate referral based on single-field 45-degree posterior pole retinal images.

  16. Estonian total ozone climatology

    Directory of Open Access Journals (Sweden)

    K. Eerme

    Full Text Available The climatological characteristics of total ozone over Estonia based on the Total Ozone Mapping Spectrometer (TOMS) data are discussed. The mean annual cycle during 1979–2000 for the site at 58.3° N and 26.5° E is compiled. The available ground-level data, interpolated before TOMS, have been used for trend detection. During the last two decades, the quasi-biennial oscillation (QBO)-corrected systematic decrease of total ozone from February–April was 3 ± 2.6% per decade. Before 1980, a spring decrease was not detectable. No decreasing trend was found in either the late autumn ozone minimum or in the summer total ozone. The QBO-related signal in the spring total ozone has an amplitude of ± 20 DU and a phase lag of 20 months. Between 1987–1992, the lagged covariance between the Singapore wind and the studied total ozone was weak. The spring (April–May) and summer (June–August) total ozone have the best correlation (coefficient 0.7) in the yearly cycle. The correlation between the May and August total ozone is higher than the one between the other summer months. Seasonal power spectra of the total ozone variance show preferred periods with an over 95% significance level. Since 1986, during the winter/spring, the contribution period of 32 days prevails instead of the earlier dominating 26 days. The spectral densities of the periods from 4 days to 2 weeks exhibit high interannual variability.

    Key words. Atmospheric composition and structure (middle atmosphere – composition and chemistry; volcanic effects – Meteorology and atmospheric dynamics (climatology

  17. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algorithms…

  18. Optimization algorithms and applications

    CERN Document Server

    Arora, Rajesh Kumar

    2015-01-01

    Choose the Correct Solution Method for Your Optimization Problem. Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, the Broyden-Fletcher-Goldfarb-Shanno algorithm, the Powell method, penalty functions, the augmented Lagrange multiplier method, sequential quadratic programming, and the method of feasible directions…

  19. Total photon absorption

    International Nuclear Information System (INIS)

    Carlos, P.

    1985-06-01

    The present discussion is limited to a presentation of the most recent total photonuclear absorption experiments performed with real photons at intermediate energy, and more precisely in the region of nucleon resonances. The main sources of real photons are briefly reviewed, as are the experimental procedures used for total photonuclear absorption cross-section measurements. The main results obtained below 140 MeV photon energy as well as above 2 GeV are recalled. The experimental study of total photonuclear absorption in the nuclear resonance region (140 MeV < E < 2 GeV) is still at its beginning and some results are presented.

  20. [Total artificial heart].

    Science.gov (United States)

    Antretter, H; Dumfarth, J; Höfer, D

    2015-09-01

    To date the CardioWest™ total artificial heart is the only clinically available implantable biventricular mechanical replacement for irreversible cardiac failure. This article presents the indications, contraindications, implantation procedure and postoperative treatment. In addition to an overview of the applications of the total artificial heart, this article gives a brief presentation of the two patients treated in our department with the CardioWest™. The clinical course, postoperative rehabilitation, device-related complications and control mechanisms are presented. The total artificial heart is a reliable implant for treating critically ill patients with irreversible cardiogenic shock. A bridge to transplantation is feasible with excellent results.

  1. From Genetics to Genetic Algorithms

    Indian Academy of Sciences (India)

    Genetic algorithms (GAs) are computational optimisation schemes with an ... The algorithms solve optimisation problems ..... Genetic Algorithms in Search, Optimisation and Machine Learning, Addison-Wesley Publishing Company, Inc. 1989.

  2. Algorithmic Principles of Mathematical Programming

    NARCIS (Netherlands)

    Faigle, Ulrich; Kern, Walter; Still, Georg

    2002-01-01

    Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear

  3. RFID Location Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Zi Min

    2016-01-01

    Full Text Available With the development of social services and the further improvement of people’s living standards, there is an urgent need for positioning technology that can adapt to complex situations. In recent years, RFID technology has found a wide range of applications in all aspects of life and production, such as logistics tracking, car alarms, and security. Using RFID technology for localization is a new direction in the eyes of various research institutions and scholars. RFID positioning offers system stability, small error, and low cost, and its location algorithm is the focus of this study. This article analyzes RFID positioning methods and algorithms layer by layer. First, several common basic RFID methods are introduced; secondly, a higher-accuracy network location method is presented; finally, the LANDMARC algorithm is described. Through this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, deficiencies in the algorithms are pointed out, and requirements for follow-up study and a vision of better future RFID positioning technology are put forward.
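    Since the LANDMARC algorithm is the end point of the survey above, a compact sketch of its k-nearest-reference-tag idea may help: reference tags at known positions are compared with the tracked tag in signal-strength space, and the position estimate is a weighted average of the k closest reference tags, with weights inversely proportional to the squared signal distance. The reader layout, path-loss model and parameter values below are invented for illustration.

```python
import numpy as np

def landmarc_estimate(target_rss, ref_rss, ref_pos, k=4):
    """Estimate a tag's position from the k reference tags whose RSSI vectors
    (one reading per reader) are closest to the target tag's RSSI vector."""
    e = np.linalg.norm(ref_rss - target_rss, axis=1)          # signal-space distances
    nearest = np.argsort(e)[:k]
    w = 1.0 / (e[nearest] ** 2 + 1e-9)                        # closer reference tags get larger weights
    w /= w.sum()
    return w @ ref_pos[nearest]

rng = np.random.default_rng(3)
ref_pos = np.array([[x, y] for x in range(4) for y in range(4)], dtype=float)  # 4x4 grid of reference tags
readers = np.array([[-1.0, -1.0], [5.0, -1.0], [-1.0, 5.0], [5.0, 5.0]])       # four reader antennas

def rssi(pos):
    """Toy path-loss model: RSSI falls off with log-distance to each reader."""
    d = np.linalg.norm(readers - pos, axis=1)
    return -30.0 - 20.0 * np.log10(d)

ref_rss = np.array([rssi(p) for p in ref_pos])
true_pos = np.array([1.3, 2.2])
print(landmarc_estimate(rssi(true_pos), ref_rss, ref_pos))    # should land near (1.3, 2.2)
```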

  4. Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Surafel Luleseged Tilahun

    2012-01-01

    Full Text Available Firefly algorithm is one of the new metaheuristic algorithms for optimization problems. The algorithm is inspired by the flashing behavior of fireflies. In the algorithm, randomly generated solutions will be considered as fireflies, and brightness is assigned depending on their performance on the objective function. One of the rules used to construct the algorithm is, a firefly will be attracted to a brighter firefly, and if there is no brighter firefly, it will move randomly. In this paper we modify this random movement of the brighter firefly by generating random directions in order to determine the best direction in which the brightness increases. If such a direction is not generated, it will remain in its current position. Furthermore the assignment of attractiveness is modified in such a way that the effect of the objective function is magnified. From the simulation result it is shown that the modified firefly algorithm performs better than the standard one in finding the best solution with smaller CPU time.
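    For readers unfamiliar with the baseline the paper modifies, the sketch below implements the standard firefly update (attraction to brighter fireflies with strength beta0*exp(-gamma*r^2), plus a damped random walk) on a toy objective. It is the unmodified algorithm with arbitrary parameter choices, not the MoFA variant proposed in the paper.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))   # toy objective to minimize

def firefly(n_fireflies=20, dim=2, iters=100, alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(n_fireflies, dim))
    intensity = np.array([sphere(xi) for xi in x])   # lower objective = brighter firefly

    for _ in range(iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if intensity[j] < intensity[i]:      # firefly i is attracted to any brighter firefly j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] = x[i] + beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    intensity[i] = sphere(x[i])
        alpha *= 0.98                                # slowly damp the random walk

    best = np.argmin(intensity)
    return x[best], intensity[best]

pos, val = firefly()
print(pos, round(val, 6))
```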

  5. Spinal appearance questionnaire: factor analysis, scoring, reliability, and validity testing.

    Science.gov (United States)

    Carreon, Leah Y; Sanders, James O; Polly, David W; Sucato, Daniel J; Parent, Stefan; Roy-Beaudry, Marjolaine; Hopkins, Jeffrey; McClung, Anna; Bratcher, Kelly R; Diamond, Beverly E

    2011-08-15

    Cross-sectional. This study presents the factor analysis of the Spinal Appearance Questionnaire (SAQ) and its psychometric properties. Although the SAQ has been administered to a large sample of patients with adolescent idiopathic scoliosis (AIS) treated surgically, its psychometric properties have not been fully evaluated. This study presents the factor analysis and scoring of the SAQ and evaluates its psychometric properties. The SAQ and the Scoliosis Research Society-22 (SRS-22) were administered to AIS patients who were being observed, braced, or scheduled for surgery. Standard demographic data and radiographic measures including Lenke type and curve magnitude were also collected. Of the 1802 patients, 83% were female, with a mean age of 14.8 years and mean initial Cobb angle of 55.8° (range, 0°-123°). Of the 32 items of the SAQ, 15 loaded on two factors with consistent and significant correlations across all Lenke types: an Appearance factor (items 1-10) and an Expectations factor (items 12-15). Responses are summed, giving a range of 5 to 50 for the Appearance domain and 5 to 20 for the Expectations domain. Cronbach's α was 0.88 for both domains and the Total score, with test-retest reliability of 0.81 for Appearance and 0.91 for Expectations. Correlations with major curve magnitude were higher for the SAQ Appearance and SAQ Total scores than for the SRS Appearance and SRS Total scores. The SAQ and SRS-22 scores were statistically significantly different in patients who were scheduled for surgery compared with those who were observed or braced. The SAQ is a valid measure of self-image in patients with AIS, with greater correlation to curve magnitude than the SRS Appearance and Total scores. It also discriminates patients who require surgery from those who do not.

  6. Sample size calculation to externally validate scoring systems based on logistic regression models.

    Directory of Open Access Journals (Sweden)

    Antonio Palazón-Bru

    Full Text Available A sample size containing at least 100 events and 100 non-events has been suggested to validate a predictive model, regardless of the model being validated, even though certain factors (discrimination, parameterization and incidence) can influence calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure of the lack of calibration (estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, for determining mortality in intensive care units. In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.
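    The bootstrap machinery described above can be illustrated with a minimal sketch. This is not the authors' algorithm: the estimated calibration index is replaced by a crude mean calibration error (assuming the scores are predicted probabilities), and scikit-learn's roc_auc_score is assumed to be available.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_validation_metrics(scores, outcomes, n_events, n_boot=500, rng=None):
    """For a candidate number of events, resample validation sets of that size
    and examine the spread of the AUC and a crude calibration error.
    scores: predicted probabilities; outcomes: 0/1 observed events."""
    rng = np.random.default_rng() if rng is None else rng
    scores, outcomes = np.asarray(scores, float), np.asarray(outcomes, int)
    event_idx = np.where(outcomes == 1)[0]
    nonevent_idx = np.where(outcomes == 0)[0]
    aucs, miscal = [], []
    for _ in range(n_boot):
        idx = np.concatenate([rng.choice(event_idx, n_events, replace=True),
                              rng.choice(nonevent_idx, n_events, replace=True)])
        aucs.append(roc_auc_score(outcomes[idx], scores[idx]))
        # crude stand-in for the estimated calibration index
        miscal.append(abs(scores[idx].mean() - outcomes[idx].mean()))
    return np.percentile(aucs, [2.5, 97.5]), float(np.mean(miscal))
```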

  7. Total 2004 results

    International Nuclear Information System (INIS)

    2005-02-01

    This document presents the 2004 results of Total Group: consolidated account, special items, number of shares, market environment, adjustment for amortization of Sanofi-Aventis merger-related intangibles, 4. quarter 2004 results (operating and net incomes, cash flow), upstream (results, production, reserves, recent highlights), downstream (results, refinery throughput, recent highlights), chemicals (results, recent highlights), Total's full year 2004 results (operating and net income, cash flow), 2005 sensitivities, Total SA parent company accounts and proposed dividend, adoption of IFRS accounting, summary and outlook, main operating information by segment for the 4. quarter and full year 2004: upstream (combined liquids and gas production by region, liquids production by region, gas production by region), downstream (refined product sales by region, chemicals), Total financial statements: consolidated statement of income, consolidated balance sheet (assets, liabilities and shareholder's equity), consolidated statements of cash flows, business segments information. (J.S.)

  8. Total synthesis of ciguatoxin.

    Science.gov (United States)

    Hamajima, Akinari; Isobe, Minoru

    2009-01-01

    Something fishy: Ciguatoxin (see structure) is one of the principal toxins involved in ciguatera poisoning and the target of a total synthesis involving the coupling of three segments. The key transformations in this synthesis feature acetylene-dicobalthexacarbonyl complexation.

  9. Total 2004 results

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-02-01

    This document presents the 2004 results of Total Group: consolidated account, special items, number of shares, market environment, adjustment for amortization of Sanofi-Aventis merger-related intangibles, 4. quarter 2004 results (operating and net incomes, cash flow), upstream (results, production, reserves, recent highlights), downstream (results, refinery throughput, recent highlights), chemicals (results, recent highlights), Total's full year 2004 results (operating and net income, cash flow), 2005 sensitivities, Total SA parent company accounts and proposed dividend, adoption of IFRS accounting, summary and outlook, main operating information by segment for the 4. quarter and full year 2004: upstream (combined liquids and gas production by region, liquids production by region, gas production by region), downstream (refined product sales by region, chemicals), Total financial statements: consolidated statement of income, consolidated balance sheet (assets, liabilities and shareholder's equity), consolidated statements of cash flows, business segments information. (J.S.)

  10. Genoptraening efter total knaealloplastik

    DEFF Research Database (Denmark)

    Holm, Bente; Kehlet, Henrik

    2009-01-01

    The short- and long-term benefits of post-discharge physiotherapy regimens after total knee arthroplasty are debatable. A national survey including hospitals in Denmark that perform total knee arthroplasty showed a large variability in indication and regimen for post-knee arthroplasty rehabilitation. Since hospital stay duration has decreased considerably, the need for post-discharge physiotherapy may also have changed. Thus, the indication for and types of rehabilitation programmes need to be studied within the context of fast-track knee arthroplasty.

  11. Genoptraening efter total knaealloplastik

    DEFF Research Database (Denmark)

    Holm, Bente; Kehlet, Henrik

    2009-01-01

    The short- and long-term benefits of post-discharge physiotherapy regimens after total knee arthroplasty are debatable. A national survey including hospitals in Denmark that perform total knee arthroplasty showed a large variability in indication and regimen for post-knee arthroplasty rehabilitation. Since hospital stay duration has decreased considerably, the need for post-discharge physiotherapy may also have changed. Thus, the indication for and types of rehabilitation programmes need to be studied within the context of fast-track knee arthroplasty. Publication date: 2009-Feb-23.

  12. Isothermal Gravitational Segregation: Algorithms and Specifications

    DEFF Research Database (Denmark)

    Halldórsson, Snorri; Stenby, Erling Halfdan

    2000-01-01

    New algorithms for calculating the isothermal equilibrium state of reservoir fluids under the influence of gravity are presented. Two types of specifications are considered: the specification of pressure and composition at a reference depth; and the specification of the total overall content of t...

  13. Supravaginal eller total hysterektomi?

    DEFF Research Database (Denmark)

    Edvardsen, L; Madsen, E M

    1994-01-01

    There has been a decline in the rate of hysterectomies in Denmark in general over the last thirteen years, together with a rise in the number of supravaginal operations over the last two years. The literature concerning the relative merits of the supravaginal and the total abdominal operation is ...... indicate a reduced frequency of orgasm after the total hysterectomy compared with the supravaginal operation. When there are technical problems peroperatively with an increased urologic risk the supravaginal operation is recommended....

  14. The role of safe practices in hospitals’ total factor productivity

    Directory of Open Access Journals (Sweden)

    Timothy R Huerta

    2011-01-01

    Full Text Available Abstract: The dual aims of improving safety and productivity are a major part of the health care reform movement hospital leaders must manage. Studies exploring the two phenomena conjointly and over time are critical to understanding how change in one dimension influences the other over time. A Malmquist approach is used to assess hospitals' relative productivity levels over time. Analysis of variance (ANOVA) algorithms were executed to assess whether or not the Malmquist Indices (MIs) correlate with the safe practices measure. The American Hospital Association's annual survey and the Centers for Medicare and Medicaid Services' Case Mix Index for fiscal years 2002-2006, along with the Leapfrog Group's annual survey for 2006, were used for this study. Leapfrog Group respondents have significantly higher technological change (TC) and total factor productivity (TFP) than nonrespondents without sacrificing technical efficiency changes. Of the three MIs, TC (P < 0.10) and TFP (P < 0.05) had significant relationships with the National Quality Forum's Safe Practices score. The ANOVA also indicates that the mean differences of TFP measures progressed in a monotonic fashion up the Safe Practices scale. Adherence to the National Quality Forum's Safe Practices recommendations had a major impact on hospitals' operating processes and productivity. Specifically, there is evidence that hospitals reporting higher Safe Practices scores had above average levels of TC and TFP gains over the period assessed. Leaders should strive for increased transparency to promote both quality improvement and increased productivity. Keywords: safety, productivity, quality, safe

  15. Imaging Total Stations - Modular and Integrated Concepts

    Science.gov (United States)

    Hauth, Stefan; Schlüter, Martin

    2010-05-01

    Keywords: 3D-Metrology, Engineering Geodesy, Digital Image Processing. Initialized in 2009, the Institute for Spatial Information and Surveying Technology i3mainz, Mainz University of Applied Sciences, has been pursuing research towards modular concepts for imaging total stations. On the one hand, this research is driven by the successful setup of high-precision imaging motor theodolites in the recent past; on the other hand, it is pushed by the introduction of integrated imaging total stations to the positioning market by the manufacturers Topcon and Trimble. Modular concepts for imaging total stations are manufacturer-independent to a large extent and consist of a particular combination of accessory hardware, software and algorithmic procedures. The hardware part consists mainly of an interchangeable eyepiece adapter offering opportunities for digital imaging and motorized focus control. Easy assembly and disassembly in the field allow the user to switch between the classical and the imaging use of a robotic total station. The software part primarily has to ensure hardware control, but several levels of algorithmic support might be added and have to be distinguished. Algorithmic procedures allow several levels of calibration to be reached concerning the geometry of the external digital camera and the total station. We give insight into our recent developments and quality characteristics. Both the modular and the integrated approach seem to have individual strengths and weaknesses, so we expect that the two approaches might point at different target applications. Our aim is a better understanding of appropriate applications for robotic imaging total stations. First results are presented.

  16. Assessment of calcium scoring performance in cardiac computed tomography

    International Nuclear Information System (INIS)

    Ulzheimer, Stefan; Kalender, Willi A.

    2003-01-01

    Electron beam tomography (EBT) has been used for cardiac diagnosis and the quantitative assessment of coronary calcium since the late 1980s. The introduction of mechanical multi-slice spiral CT (MSCT) scanners with shorter rotation times opened new possibilities of cardiac imaging with conventional CT scanners. The purpose of this work was to qualitatively and quantitatively evaluate the performance for EBT and MSCT for the task of coronary artery calcium imaging as a function of acquisition protocol, heart rate, spiral reconstruction algorithm (where applicable) and calcium scoring method. A cardiac CT semi-anthropomorphic phantom was designed and manufactured for the investigation of all relevant image quality parameters in cardiac CT. This phantom includes various test objects, some of which can be moved within the anthropomorphic phantom in a manner that mimics realistic heart motion. These tools were used to qualitatively and quantitatively demonstrate the accuracy of coronary calcium imaging using typical protocols for an electron beam (Evolution C-150XP, Imatron, South San Francisco, Calif.) and a 0.5-s four-slice spiral CT scanner (Sensation 4, Siemens, Erlangen, Germany). A special focus was put on the method of quantifying coronary calcium, and three scoring systems were evaluated (Agatston, volume, and mass scoring). Good reproducibility in coronary calcium scoring is always the result of a combination of high temporal and spatial resolution; consequently, thin-slice protocols in combination with retrospective gating on MSCT scanners yielded the best results. The Agatston score was found to be the least reproducible scoring method. The hydroxyapatite mass, being better reproducible and comparable on different scanners and being a physical quantitative measure, appears to be the method of choice for future clinical studies. The hydroxyapatite mass is highly correlated to the Agatston score. The introduced phantoms can be used to quantitatively assess the
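    The three scoring methods compared above follow standard definitions, and a per-lesion, per-slice sketch may clarify how they differ. The calibration factor and the single-slice simplification below are assumptions; only the Agatston density weights (1-4) and the 130 HU threshold follow the usual convention.

```python
import numpy as np

def calcium_scores(hu, mask, pixel_area_mm2, slice_thickness_mm, calibration=0.001):
    """Illustrative per-lesion calcium scores from one CT slice.
    hu: 2-D array of Hounsfield units; mask: boolean lesion mask (>=130 HU region).
    'calibration' converts mean HU to hydroxyapatite density and is scanner-specific
    (placeholder value); the density weights follow the standard Agatston factors."""
    lesion_hu = hu[mask]
    area = mask.sum() * pixel_area_mm2
    peak = lesion_hu.max()
    weight = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
    agatston = area * weight                         # per-slice Agatston contribution
    volume = area * slice_thickness_mm               # volume score in mm^3
    mass = volume * lesion_hu.mean() * calibration   # hydroxyapatite mass (mg)
    return agatston, volume, mass
```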

  17. Trilateral market coupling. Algorithm appendix

    International Nuclear Information System (INIS)

    2006-03-01

    Market Coupling is both a mechanism for matching orders on the exchange and an implicit cross-border capacity allocation mechanism. Market Coupling improves the economic surplus of the coupled markets: the highest purchase orders and the lowest sale orders of the coupled power exchanges are matched, regardless of the area where they have been submitted; matching results depend however on the Available Transfer Capacity (ATC) between the coupled hubs. Market prices and schedules of the day-ahead power exchanges of the several connected markets are simultaneously determined with the use of the Available Transfer Capacity defined by the relevant Transmission System Operators. The transmission capacity is thereby implicitly auctioned and the implicit cost of the transmission capacity from one market to the other is the price difference between the two markets. In particular, if the transmission capacity between two markets is not fully used, there is no price difference between the markets and the implicit cost of the transmission capacity is null. Market coupling relies on the principle that the market with the lowest price exports electricity to the market with the highest price. Two situations may appear: either the Available Transfer Capacity (ATC) is large enough and the prices of both markets are equalized (price convergence), or the ATC is too small and the prices cannot be equalized. The Market Coupling algorithm takes as an input: 1 - The Available Transfer Capacity (ATC) between each area for each flow direction and each Settlement Period of the following day (i.e. for each hour of following day); 2 - The (Block Free) Net Export Curves (NEC) of each market for each hour of the following day, i.e., the difference between the total quantity of Divisible Hourly Bids and the total quantity of Divisible Hourly Offers for each price level. The NEC reflects a market's import or export volume sensitivity to price. 3 - The Block Orders submitted by the participants in
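    As a rough illustration of how the Net Export Curves and the ATC interact, the toy sketch below couples just two areas on a common price grid. It is not the actual trilateral algorithm, ignores block orders entirely, and assumes area A is the lower-priced (exporting) area at zero flow.

```python
import numpy as np

def couple_two_markets(prices, nec_a, nec_b, atc):
    """Toy two-market coupling sketch (not the actual trilateral algorithm).
    prices: common price grid; nec_a, nec_b: net export volume of each area at
    each price (monotonically increasing); atc: available transfer capacity A->B.
    Returns the flow from A to B and the resulting area prices."""
    best = (0.0, None, None)
    for flow in np.linspace(0.0, atc, 201):
        # price at which each area's net export equals +flow / -flow
        p_a = np.interp(flow, nec_a, prices)      # A exports 'flow'
        p_b = np.interp(-flow, nec_b, prices)     # B imports 'flow'
        if p_a >= p_b:                            # prices have converged (or crossed)
            return flow, p_a, p_b
        best = (flow, p_a, p_b)
    return best                                   # ATC saturated, prices still differ
```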

  18. Trilateral market coupling. Algorithm appendix

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-03-15

    Market Coupling is both a mechanism for matching orders on the exchange and an implicit cross-border capacity allocation mechanism. Market Coupling improves the economic surplus of the coupled markets: the highest purchase orders and the lowest sale orders of the coupled power exchanges are matched, regardless of the area where they have been submitted; matching results depend however on the Available Transfer Capacity (ATC) between the coupled hubs. Market prices and schedules of the day-ahead power exchanges of the several connected markets are simultaneously determined with the use of the Available Transfer Capacity defined by the relevant Transmission System Operators. The transmission capacity is thereby implicitly auctioned and the implicit cost of the transmission capacity from one market to the other is the price difference between the two markets. In particular, if the transmission capacity between two markets is not fully used, there is no price difference between the markets and the implicit cost of the transmission capacity is null. Market coupling relies on the principle that the market with the lowest price exports electricity to the market with the highest price. Two situations may appear: either the Available Transfer Capacity (ATC) is large enough and the prices of both markets are equalized (price convergence), or the ATC is too small and the prices cannot be equalized. The Market Coupling algorithm takes as an input: 1 - The Available Transfer Capacity (ATC) between each area for each flow direction and each Settlement Period of the following day (i.e. for each hour of following day); 2 - The (Block Free) Net Export Curves (NEC) of each market for each hour of the following day, i.e., the difference between the total quantity of Divisible Hourly Bids and the total quantity of Divisible Hourly Offers for each price level. The NEC reflects a market's import or export volume sensitivity to price. 3 - The Block Orders submitted by the

  19. Total lymphoid irradiation

    International Nuclear Information System (INIS)

    Sutherland, D.E.; Ferguson, R.M.; Simmons, R.L.; Kim, T.H.; Slavin, S.; Najarian, J.S.

    1983-01-01

    Total lymphoid irradiation by itself can produce sufficient immunosuppression to prolong the survival of a variety of organ allografts in experimental animals. The degree of prolongation is dose-dependent and is limited by the toxicity that occurs with higher doses. Total lymphoid irradiation is more effective before transplantation than after, but when used after transplantation can be combined with pharmacologic immunosuppression to achieve a positive effect. In some animal models, total lymphoid irradiation induces an environment in which fully allogeneic bone marrow will engraft and induce permanent chimerism in the recipients who are then tolerant to organ allografts from the donor strain. If total lymphoid irradiation is ever to have clinical applicability on a large scale, it would seem that it would have to be under circumstances in which tolerance can be induced. However, in some animal models graft-versus-host disease occurs following bone marrow transplantation, and methods to obviate its occurrence probably will be needed if this approach is to be applied clinically. In recent years, patient and graft survival rates in renal allograft recipients treated with conventional immunosuppression have improved considerably, and thus the impetus to utilize total lymphoid irradiation for its immunosuppressive effect alone is less compelling. The future of total lymphoid irradiation probably lies in devising protocols in which maintenance immunosuppression can be eliminated, or nearly eliminated, altogether. Such protocols are effective in rodents. Whether they can be applied to clinical transplantation remains to be seen

  20. Totally optimal decision rules

    KAUST Repository

    Amin, Talha

    2017-11-22

    Optimality of decision rules (patterns) can be measured in many ways. One of these is referred to as length. Length signifies the number of terms in a decision rule and is optimally minimized. Another, coverage represents the width of a rule’s applicability and generality. As such, it is desirable to maximize coverage. A totally optimal decision rule is a decision rule that has the minimum possible length and the maximum possible coverage. This paper presents a method for determining the presence of totally optimal decision rules for “complete” decision tables (representations of total functions in which different variables can have domains of differing values). Depending on the cardinalities of the domains, we can either guarantee for each tuple of values of the function that totally optimal rules exist for each row of the table (as in the case of total Boolean functions where the cardinalities are equal to 2) or, for each row, we can find a tuple of values of the function for which totally optimal rules do not exist for this row.
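    A brute-force check makes the notion of a totally optimal rule concrete for small tables. The sketch below reflects our reading of the definitions above (rules are built from the row's own attribute values, and a rule is valid when every matching row shares the row's decision); it is illustrative only and does not scale.

```python
from itertools import combinations

def totally_optimal_exists(table, decisions, row_idx):
    """Check by brute force whether the given row of a decision table has a
    totally optimal rule: one that simultaneously has the minimum length and
    the maximum coverage among all valid rules for that row."""
    row, d = table[row_idx], decisions[row_idx]
    n_attr = len(row)
    valid = []  # (length, coverage) of every valid rule built from this row
    for k in range(n_attr + 1):
        for attrs in combinations(range(n_attr), k):
            matching = [i for i, r in enumerate(table)
                        if all(r[a] == row[a] for a in attrs)]
            if all(decisions[i] == d for i in matching):   # rule is valid
                valid.append((k, len(matching)))
    min_len = min(l for l, _ in valid)
    max_cov = max(c for _, c in valid)
    return any(l == min_len and c == max_cov for l, c in valid)

# Tiny example with two binary attributes
table = [(0, 0), (0, 1), (1, 0), (1, 1)]
decisions = [0, 0, 1, 1]
print(totally_optimal_exists(table, decisions, 0))  # True
```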

  1. Totally optimal decision rules

    KAUST Repository

    Amin, Talha M.; Moshkov, Mikhail

    2017-01-01

    Optimality of decision rules (patterns) can be measured in many ways. One of these is referred to as length. Length signifies the number of terms in a decision rule and is optimally minimized. Another, coverage represents the width of a rule’s applicability and generality. As such, it is desirable to maximize coverage. A totally optimal decision rule is a decision rule that has the minimum possible length and the maximum possible coverage. This paper presents a method for determining the presence of totally optimal decision rules for “complete” decision tables (representations of total functions in which different variables can have domains of differing values). Depending on the cardinalities of the domains, we can either guarantee for each tuple of values of the function that totally optimal rules exist for each row of the table (as in the case of total Boolean functions where the cardinalities are equal to 2) or, for each row, we can find a tuple of values of the function for which totally optimal rules do not exist for this row.

  2. Gambling score in earthquake prediction analysis

    Science.gov (United States)

    Molchan, G.; Romashkova, L.

    2011-03-01

    The number of successes and the space-time alarm rate are commonly used to characterize the strength of an earthquake prediction method and the significance of prediction results. It has been recently suggested to use a new characteristic to evaluate the forecaster's skill, the gambling score (GS), which incorporates the difficulty of guessing each target event by using different weights for different alarms. We expand parametrization of the GS and use the M8 prediction algorithm to illustrate difficulties of the new approach in the analysis of the prediction significance. We show that the level of significance strongly depends (1) on the choice of alarm weights, (2) on the partitioning of the entire alarm volume into component parts and (3) on the accuracy of the spatial rate measure of target events. These tools are at the disposal of the researcher and can affect the significance estimate. Formally, all reasonable GSs discussed here corroborate that the M8 method is non-trivial in the prediction of 8.0 ≤M < 8.5 events because the point estimates of the significance are in the range 0.5-5 per cent. However, the conservative estimate 3.7 per cent based on the number of successes seems preferable owing to two circumstances: (1) it is based on relative values of the spatial rate and hence is more stable and (2) the statistic of successes enables us to construct analytically an upper estimate of the significance taking into account the uncertainty of the spatial rate measure.

  3. A new hybrid metaheuristic algorithm for wind farm micrositing

    International Nuclear Information System (INIS)

    Massan, S.U.R.; Wagan, A.I.; Shaikh, M.M.

    2017-01-01

    This work focuses on proposing a new algorithm, referred to as HMA (Hybrid Metaheuristic Algorithm), for the solution of the WTO (Wind Turbine Optimization) problem. It is well documented that turbines located behind one another suffer a power loss due to the obstruction of the wind (wake loss). This wake loss can be reduced by the effective placement of turbines using the new HMA. The HMA is derived from two basic algorithms, the DEA (Differential Evolution Algorithm) and the FA (Firefly Algorithm). The optimization is carried out on the N.O. Jensen model. The blending of DEA and FA into HMA is discussed, and the new algorithm is applied to maximize power and minimize cost in a WTO problem. The results obtained with HMA have been compared with the GA (Genetic Algorithm) used in some previous studies. The calculated total power produced and cost per unit turbine for a wind farm using HMA, and its comparison with past approaches using single algorithms, show a significant advantage of the HMA over single algorithms. The first implementation of a new algorithm obtained by blending two single algorithms is a significant step towards understanding the behavior of algorithms and the added advantage of using them together. (author)
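    Since the optimization above is carried out on the N.O. Jensen wake model, a minimal sketch of that model may be useful. The rotor radius, thrust coefficient and wake decay constant below are typical illustrative values rather than the paper's settings, and the single-row geometry is a simplification.

```python
import numpy as np

def jensen_deficit(x_down, r0=40.0, ct=0.88, k=0.075):
    """Fractional wind-speed deficit a distance x_down (m) behind a turbine,
    using the N.O. Jensen top-hat wake model. r0: rotor radius, ct: thrust
    coefficient, k: wake decay constant; all values are illustrative assumptions."""
    if x_down <= 0:
        return 0.0
    return (1.0 - np.sqrt(1.0 - ct)) / (1.0 + k * x_down / r0) ** 2

def farm_power(xs, u0=12.0, rho=1.225, r0=40.0, cp=0.4):
    """Toy total power of turbines placed along the wind direction at downwind
    coordinates xs, combining upstream wake deficits by root-sum-square."""
    xs = np.sort(np.asarray(xs, float))
    power = 0.0
    for i, x in enumerate(xs):
        deficits = [jensen_deficit(x - xu, r0=r0) for xu in xs[:i]]
        u = u0 * (1.0 - np.sqrt(np.sum(np.square(deficits))))
        power += 0.5 * rho * cp * np.pi * r0**2 * u**3
    return power
```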

  4. A New Hybrid Metaheuristic Algorithm for Wind Farm Micrositing

    Directory of Open Access Journals (Sweden)

    SHAFIQ-UR-REHMAN MASSAN

    2017-07-01

    Full Text Available This work focuses on proposing a new algorithm, referred to as HMA (Hybrid Metaheuristic Algorithm), for the solution of the WTO (Wind Turbine Optimization) problem. It is well documented that turbines located behind one another suffer a power loss due to the obstruction of the wind (wake loss). This wake loss can be reduced by the effective placement of turbines using the new HMA. The HMA is derived from two basic algorithms, the DEA (Differential Evolution Algorithm) and the FA (Firefly Algorithm). The optimization is carried out on the N.O. Jensen model. The blending of DEA and FA into HMA is discussed, and the new algorithm is applied to maximize power and minimize cost in a WTO problem. The results obtained with HMA have been compared with the GA (Genetic Algorithm) used in some previous studies. The calculated total power produced and cost per unit turbine for a wind farm using HMA, and its comparison with past approaches using single algorithms, show a significant advantage of the HMA over single algorithms. The first implementation of a new algorithm obtained by blending two single algorithms is a significant step towards understanding the behavior of algorithms and the added advantage of using them together.

  5. Benchmarking homogenization algorithms for monthly data

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2012-01-01

    Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
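    Of the performance metrics listed above, the centered root mean square error is simple enough to state directly; a minimal sketch:

```python
import numpy as np

def centered_rmse(homogenized, truth):
    """Centred root mean square error between a homogenized series and the true
    homogeneous series, i.e. the RMSE after removing each series' own mean."""
    h = np.asarray(homogenized, float)
    t = np.asarray(truth, float)
    return float(np.sqrt(np.mean(((h - h.mean()) - (t - t.mean())) ** 2)))
```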

  6. Dynamic programming algorithms for biological sequence comparison.

    Science.gov (United States)

    Pearson, W R; Miller, W

    1992-01-01

    Efficient dynamic programming algorithms are available for a broad class of protein and DNA sequence comparison problems. These algorithms require computer time proportional to the product of the lengths of the two sequences being compared [O(N²)] but require memory space proportional only to the sum of these lengths [O(N)]. Although the requirement for O(N²) time limits use of the algorithms to the largest computers when searching protein and DNA sequence databases, many other applications of these algorithms, such as calculation of distances for evolutionary trees and comparison of a new sequence to a library of sequence profiles, are well within the capabilities of desktop computers. In particular, the results of library searches with rapid searching programs, such as FASTA or BLAST, should be confirmed by performing a rigorous optimal alignment. Whereas rapid methods do not overlook significant sequence similarities, FASTA limits the number of gaps that can be inserted into an alignment, so that a rigorous alignment may extend the alignment substantially in some cases. BLAST does not allow gaps in the local regions that it reports; a calculation that allows gaps is very likely to extend the alignment substantially. Although a Monte Carlo evaluation of the statistical significance of a similarity score with a rigorous algorithm is much slower than the heuristic approach used by the RDF2 program, the dynamic programming approach should take less than 1 hr on a 386-based PC or desktop Unix workstation. For descriptive purposes, we have limited our discussion to methods for calculating similarity scores and distances that use gap penalties of the form g = rk. Nevertheless, programs for the more general case (g = q + rk) are readily available. Versions of these programs that run either on Unix workstations, IBM-PC class computers, or the Macintosh can be obtained from either of the authors.
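    The space-saving trick behind the O(N²)-time, O(N)-space algorithms mentioned above is to keep only one previous row of the dynamic programming matrix when only the score is needed. A minimal score-only sketch with a linear gap penalty g = rk (the scoring values are arbitrary placeholders):

```python
def global_alignment_score(a, b, match=1, mismatch=-1, r=-2):
    """Score-only Needleman-Wunsch with a linear gap penalty g = r*k.
    Uses O(len(b)) memory by keeping only the previous DP row."""
    prev = [j * r for j in range(len(b) + 1)]        # first DP row
    for i in range(1, len(a) + 1):
        curr = [i * r] + [0] * len(b)
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            curr[j] = max(prev[j - 1] + s,           # substitution
                          prev[j] + r,               # gap in b
                          curr[j - 1] + r)           # gap in a
        prev = curr
    return prev[-1]

print(global_alignment_score("GATTACA", "GCATGCU"))
```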

  7. TotalReCaller: improved accuracy and performance via integrated alignment and base-calling.

    Science.gov (United States)

    Menges, Fabian; Narzisi, Giuseppe; Mishra, Bud

    2011-09-01

    Currently, re-sequencing approaches use multiple modules serially to interpret raw sequencing data from next-generation sequencing platforms, while remaining oblivious to the genomic information until the final alignment step. Such approaches fail to exploit the full information from both raw sequencing data and the reference genome that can yield better quality sequence reads, SNP calls and variant detection, as well as an alignment at the best possible location in the reference genome. Thus, there is a need for novel reference-guided bioinformatics algorithms for interpreting analog signals representing sequences of the bases ({A, C, G, T}), while simultaneously aligning possible sequence reads to a source reference genome whenever available. Here, we propose a new base-calling algorithm, TotalReCaller, to achieve improved performance. A linear error model for the raw intensity data and Burrows-Wheeler transform (BWT) based alignment are combined utilizing a Bayesian score function, which is then globally optimized over all possible genomic locations using an efficient branch-and-bound approach. The algorithm has been implemented in software and hardware [field-programmable gate array (FPGA)] to achieve real-time performance. Empirical results on real high-throughput Illumina data were used to evaluate TotalReCaller's performance relative to its peers (Bustard, BayesCall, Ibis and Rolexa) based on several criteria, particularly those important in clinical and scientific applications. Namely, it was evaluated for (i) its base-calling speed and throughput, (ii) its read accuracy and (iii) its specificity and sensitivity in variant calling. A software implementation of TotalReCaller, as well as additional information, is available at http://bioinformatics.nyu.edu/wordpress/projects/totalrecaller/ (contact: fabian.menges@nyu.edu).

  8. Improved multivariate polynomial factoring algorithm

    International Nuclear Information System (INIS)

    Wang, P.S.

    1978-01-01

    A new algorithm for factoring multivariate polynomials over the integers based on an algorithm by Wang and Rothschild is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading coefficient problem, the bad-zero problem and the occurrence of extraneous factors. It has an algorithm for correctly predetermining leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Basically it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timing are included

  9. Reliable scar scoring system to assess photographs of burn patients.

    Science.gov (United States)

    Mecott, Gabriel A; Finnerty, Celeste C; Herndon, David N; Al-Mousawi, Ahmed M; Branski, Ludwik K; Hegde, Sachin; Kraft, Robert; Williams, Felicia N; Maldonado, Susana A; Rivero, Haidy G; Rodriguez-Escobar, Noe; Jeschke, Marc G

    2015-12-01

    Several scar-scoring scales exist to clinically monitor burn scar development and maturation. Although scoring scars through direct clinical examination is ideal, scars must sometimes be scored from photographs. No scar scale currently exists for the latter purpose. We modified a previously described scar scale (Yeong et al., J Burn Care Rehabil 1997) and tested the reliability of this new scale in assessing burn scars from photographs. The new scale consisted of three parameters as follows: scar height, surface appearance, and color mismatch. Each parameter was assigned a score of 1 (best) to 4 (worst), generating a total score of 3-12. Five physicians with burns training scored 120 representative photographs using the original and modified scales. Reliability was analyzed using coefficient of agreement, Cronbach alpha, intraclass correlation coefficient, variance, and coefficient of variance. Analysis of variance was performed using the Kruskal-Wallis test. Color mismatch and scar height scores were validated by analyzing actual height and color differences. The intraclass correlation coefficient, the coefficient of agreement, and Cronbach alpha were higher for the modified scale than those of the original scale. The original scale produced more variance than that in the modified scale. Subanalysis demonstrated that, for all categories, the modified scale had greater correlation and reliability than the original scale. The correlation between color mismatch scores and actual color differences was 0.84 and between scar height scores and actual height was 0.81. The modified scar scale is a simple, reliable, and useful scale for evaluating photographs of burn patients. Copyright © 2015 Elsevier Inc. All rights reserved.
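    The internal-consistency statistics reported above (for example Cronbach's alpha over the three scale items) can be computed in a few lines; a minimal sketch, with the data layout as an assumption:

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Internal consistency of a scale. item_scores: 2-D array with scored
    photographs in rows and scale items (e.g. height, surface appearance,
    colour mismatch) in columns."""
    x = np.asarray(item_scores, float)
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()    # sum of item variances
    total_var = x.sum(axis=1).var(ddof=1)     # variance of the total score
    return (k / (k - 1)) * (1.0 - item_var / total_var)
```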

  10. Correlated physical and mental health summary scores for the SF-36 and SF-12 Health Survey, V.1

    Directory of Open Access Journals (Sweden)

    Cunningham William E

    2007-09-01

    Full Text Available Abstract Background The SF-36 and SF-12 summary scores were derived using an uncorrelated (orthogonal) factor solution. We estimate SF-36 and SF-12 summary scores using a correlated (oblique) physical and mental health factor model. Methods We administered the SF-36 to 7,093 patients who received medical care from an independent association of 48 physician groups in the western United States. Correlated physical health (PCSc) and mental health (MCSc) scores were constructed by multiplying each SF-36 scale z-score by its respective scoring coefficient from the obliquely rotated two-factor solution. PCSc-12 and MCSc-12 scores were estimated using an approach similar to the one used to derive the original SF-12 summary scores. Results The estimated correlation between SF-36 PCSc and MCSc scores was 0.62. There were far fewer negative factor scoring coefficients for the oblique factor solution compared to the factor scoring coefficients produced by the standard orthogonal factor solution. Similar results were found for PCSc-12 and MCSc-12 summary scores. Conclusion Correlated physical and mental health summary scores for the SF-36 and SF-12 derived from an obliquely rotated factor solution should be used along with the uncorrelated summary scores. The new scoring algorithm can reduce inconsistent results between the SF-36 scale scores and the physical and mental health summary scores reported in some prior studies. (Subscripts: C = correlated and UC = uncorrelated.)
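    The scoring step described in the Methods reduces to a weighted sum of scale z-scores; a minimal sketch, with the published scoring coefficients passed in rather than reproduced here:

```python
import numpy as np

def summary_scores(scale_scores, means, sds, pcs_coef, mcs_coef):
    """Correlated summary scores as described above: z-score each of the eight
    SF-36 scales, then take the weighted sums given by the obliquely rotated
    factor scoring coefficients. The coefficient vectors come from the
    published solution and are supplied by the caller, not invented here."""
    z = (np.asarray(scale_scores, float) - np.asarray(means)) / np.asarray(sds)
    return float(z @ np.asarray(pcs_coef)), float(z @ np.asarray(mcs_coef))  # (PCSc, MCSc)
```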

  11. Naive scoring of human sleep based on a hidden Markov model of the electroencephalogram.

    Science.gov (United States)

    Yaghouby, Farid; Modur, Pradeep; Sunderam, Sridhar

    2014-01-01

    Clinical sleep scoring involves tedious visual review of overnight polysomnograms by a human expert. Many attempts have been made to automate the process by training computer algorithms such as support vector machines and hidden Markov models (HMMs) to replicate human scoring. Such supervised classifiers are typically trained on scored data and then validated on scored out-of-sample data. Here we describe a methodology based on HMMs for scoring an overnight sleep recording without the benefit of a trained initial model. The number of states in the data is not known a priori and is optimized using a Bayes information criterion. When tested on a 22-subject database, this unsupervised classifier agreed well with human scores (mean of Cohen's kappa > 0.7). The HMM also outperformed other unsupervised classifiers (Gaussian mixture models, k-means, and linkage trees), that are capable of naive classification but do not model dynamics, by a significant margin (p < 0.05).
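    The unsupervised scheme above, fitting Gaussian HMMs with an unknown number of states and choosing among them with a Bayes information criterion, can be sketched as follows. The hmmlearn package and the rough parameter count used in the BIC penalty are assumptions of this sketch, not details from the paper.

```python
import numpy as np
from hmmlearn import hmm   # third-party package; assumed available

def fit_unsupervised_hmm(features, max_states=8, n_iter=100, seed=0):
    """Fit Gaussian HMMs with an increasing number of hidden states to EEG
    features (epochs x channels) and keep the model with the lowest BIC.
    The parameter count below is a rough approximation for the BIC penalty."""
    n, d = features.shape
    best_model, best_bic = None, np.inf
    for k in range(2, max_states + 1):
        model = hmm.GaussianHMM(n_components=k, covariance_type="diag",
                                n_iter=n_iter, random_state=seed)
        model.fit(features)
        logl = model.score(features)
        n_params = k * (k - 1) + (k - 1) + 2 * k * d   # transitions, start, means, vars
        bic = -2.0 * logl + n_params * np.log(n)
        if bic < best_bic:
            best_model, best_bic = model, bic
    return best_model, best_model.predict(features)    # model and state sequence
```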

  12. Identification of altered pathways in breast cancer based on individualized pathway aberrance score.

    Science.gov (United States)

    Shi, Sheng-Hong; Zhang, Wei; Jiang, Jing; Sun, Long

    2017-08-01

    The objective of the present study was to identify altered pathways in breast cancer based on the individualized pathway aberrance score (iPAS) method combined with the normal reference (nRef). There were four steps to identify altered pathways using the iPAS method: data preprocessing conducted with the robust multi-array average (RMA) algorithm; gene-level statistics based on average Z; pathway-level statistics according to iPAS; and a significance test based on the one-sample Wilcoxon test. The altered pathways were validated by calculating the changed percentage of each pathway in tumor samples and comparing them with pathways from differentially expressed genes (DEGs). A total of 688 altered pathways with Ppathways were involved in the total 688 altered pathways, which may validate the present results. In addition, there were 324 DEGs and 155 common genes between DEGs and pathway genes. DEGs and common genes were enriched in the same 9 significant terms, which also were members of altered pathways. The iPAS method was suitable for identifying altered pathways in breast cancer. Altered pathways (such as KIF- and PLK-mediated events) were important for understanding breast cancer mechanisms and for the future application of customized therapeutic decisions.
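    A minimal sketch of the gene-level and pathway-level statistics as described above (z-scores of a tumour sample against the normal reference, averaged over pathway members); the data structures and identifiers are illustrative assumptions:

```python
import numpy as np
import pandas as pd

def ipas_scores(tumor_expr, normal_ref, pathways):
    """Gene-level statistic: z-score of one tumour sample against the normal
    reference (nRef). Pathway-level statistic: average z of the member genes.
    tumor_expr: pd.Series indexed by gene; normal_ref: pd.DataFrame
    (normal samples x genes); pathways: dict name -> list of gene ids."""
    mu = normal_ref.mean(axis=0)            # per-gene mean over normal samples
    sd = normal_ref.std(axis=0, ddof=1)     # per-gene SD over normal samples
    z = (tumor_expr - mu) / sd              # gene-level statistics for one tumour
    return {name: float(z.reindex(genes).dropna().mean())
            for name, genes in pathways.items()}
```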

  13. Evidence-based algorithm for heparin dosing before cardiopulmonary bypass. Part 1: Development of the algorithm.

    Science.gov (United States)

    McKinney, Mark C; Riley, Jeffrey B

    2007-12-01

    The incidence of heparin resistance during adult cardiac surgery with cardiopulmonary bypass has been reported at 15%-20%. The consistent use of a clinical decision-making algorithm may increase the consistency of patient care and likely reduce the total required heparin dose and other problems associated with heparin dosing. After a directed survey of practicing perfusionists regarding treatment of heparin resistance and a literature search for high-level evidence regarding the diagnosis and treatment of heparin resistance, an evidence-based decision-making algorithm was constructed. The face validity of the algorithm decisive steps and logic was confirmed by a second survey of practicing perfusionists. The algorithm begins with review of the patient history to identify predictors for heparin resistance. The definition for heparin resistance contained in the algorithm is an activated clotting time 450 IU/kg heparin loading dose. Based on the literature, the treatment for heparin resistance used in the algorithm is anti-thrombin III supplement. The algorithm seems to be valid and is supported by high-level evidence and clinician opinion. The next step is a human randomized clinical trial to test the clinical procedure guideline algorithm vs. current standard clinical practice.

  14. Categorizing segmentation quality using a quantitative quality assurance algorithm

    International Nuclear Information System (INIS)

    Rodrigues, George; Louie, Alexander; Best, Lara

    2012-01-01

    Obtaining high levels of contouring consistency is a major limiting step in optimizing the radiotherapeutic ratio. We describe a novel quantitative methodology for the quality assurance (QA) of contour compliance referenced against a community set of contouring experts. Two clinical tumour site scenarios (10 lung cases and one prostate case) were used with the QA algorithm. For each case, multiple physicians (lung: n = 6, prostate: n = 25) segmented various target/organ at risk (OAR) structures to define a set of community reference contours. For each set of community contours, a consensus contour (Simultaneous Truth and Performance Level Estimation, STAPLE) was created. Differences between each individual community contour and the group consensus contour were quantified by consensus-based contouring penalty metric (PM) scores. New observers segmented these same cases to calculate individual PM scores (for each unique target/OAR) for each new observer-STAPLE pair for comparison against the community and consensus contours. Four physicians contoured the 10 lung cases, for a total of 72 contours for quality assurance evaluation against the previously derived community consensus contours. A total of 16 outlier contours were identified by the QA system, of which 11 were due to over-contouring discrepancies, three were due to over-/under-contouring discrepancies, and two were due to missing/incorrect nodal contours. In the prostate scenario involving six physicians, the QA system detected a missing penile bulb contour, systematic inner-bladder contouring, and under-contouring of the upper/anterior rectum. A practical methodology for QA has been demonstrated, with future applications in clinical trial credentialing, medical education and auto-contouring assessment.

  15. Flexible and efficient genome tiling design with penalized uniqueness score

    Directory of Open Access Journals (Sweden)

    Du Yang

    2012-12-01

    Full Text Available Abstract Background As a powerful tool in whole genome analysis, the tiling array has been widely used to answer many genomic questions. It can also serve as a capture device for library preparation in popular high-throughput sequencing experiments. Thus, a flexible and efficient tiling array design approach is still needed and could assist in various types and scales of transcriptomic experiments. Results In this paper, we address issues and challenges in designing probes suitable for tiling array applications and targeted sequencing. In particular, we define the penalized uniqueness score, which serves as a controlling criterion to eliminate potential cross-hybridization, and a flexible tiling array design pipeline. Unlike BLAST or simple suffix-array based methods, computing and using our uniqueness measurement can be more efficient for large-scale design and requires less memory. The parameters provided can assist in various types of genomic tiling tasks. In addition, using both commercial array data and experimental data we show, unlike previously claimed, that palindromic sequences exhibit relatively lower uniqueness. Conclusions Our proposed penalized uniqueness score could serve as a better indicator of cross-hybridization with higher sensitivity and specificity, giving more control of expected array quality. The flexible tiling design algorithm incorporating the penalized uniqueness score was shown to give higher coverage and resolution. The package to calculate the penalized uniqueness score and the described probe selection algorithm are implemented as a Perl program, which is freely available at http://www1.fbn-dummerstorf.de/en/forschung/fbs/fb3/paper/2012-yang-1/OTAD.v1.1.tar.gz.

  16. Linkage between company scores and stock returns

    Directory of Open Access Journals (Sweden)

    Saban Celik

    2017-12-01

    Full Text Available Previous studies on company scores conducted at firm level generally concluded that there exists a positive relation between company scores and stock returns. Motivated by these studies, this study examines the relationship between company scores (Corporate Governance Score, Economic Score, Environmental Score, and Social Score) and stock returns, both in portfolio-level analysis and in firm-level cross-sectional regressions. In the portfolio-level analysis, stocks are sorted on each company score and quintile portfolios are formed with different levels of company scores. Then, the existence and significance of the difference in raw returns and risk-adjusted returns between the portfolios with the extreme company scores (portfolio 10 and portfolio 1) is tested. In addition, firm-level cross-sectional regressions are performed to examine the significance of company score effects with control variables. While the portfolio-level analysis indicates that there is no significant relation between company scores and stock returns, the firm-level analysis indicates that economic, environmental, and social scores have an effect on stock returns; however, the significance and direction of these effects change depending on the control variables included in the cross-sectional regression.
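    The portfolio-level analysis described above amounts to sorting on a score, forming quintiles, and comparing the extreme portfolios; a minimal pandas sketch with assumed column names:

```python
import pandas as pd

def score_quintile_spread(df):
    """Sort stocks into quintiles on a company score and compare mean returns
    of the extreme portfolios. Column names 'score' and 'ret' are illustrative
    assumptions, not names from the study."""
    df = df.copy()
    df["quintile"] = pd.qcut(df["score"], 5, labels=range(1, 6))
    mean_ret = df.groupby("quintile", observed=True)["ret"].mean()
    return mean_ret[5] - mean_ret[1]     # high-score minus low-score portfolio
```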

  17. Validation of the 12-gene colon cancer recurrence score as a predictor of recurrence risk in stage II and III rectal cancer patients.

    Science.gov (United States)

    Reimers, Marlies S; Kuppen, Peter J K; Lee, Mark; Lopatin, Margarita; Tezcan, Haluk; Putter, Hein; Clark-Langone, Kim; Liefers, Gerrit Jan; Shak, Steve; van de Velde, Cornelis J H

    2014-11-01

    The 12-gene Recurrence Score assay is a validated predictor of recurrence risk in stage II and III colon cancer patients. We conducted a prospectively designed study to validate this assay for prediction of recurrence risk in stage II and III rectal cancer patients from the Dutch Total Mesorectal Excision (TME) trial. RNA was extracted from fixed paraffin-embedded primary rectal tumor tissue from stage II and III patients randomized to TME surgery alone, without (neo)adjuvant treatment. Recurrence Score was assessed by quantitative real-time polymerase chain reaction using previously validated colon cancer genes and algorithm. Data were analysed by Cox proportional hazards regression, adjusting for stage and resection margin status. All statistical tests were two-sided. Recurrence Score predicted risk of recurrence (hazard ratio [HR] = 1.57, 95% confidence interval [CI] = 1.11 to 2.21, P = .01), risk of distant recurrence (HR = 1.50, 95% CI = 1.04 to 2.17, P = .03), and rectal cancer-specific survival (HR = 1.64, 95% CI = 1.15 to 2.34, P = .007). The effect of Recurrence Score was most prominent in stage II patients and attenuated with more advanced stage (P(interaction) ≤ .007 for each endpoint). In stage II, the five-year cumulative incidence of recurrence ranged from 11.1% in the predefined low Recurrence Score group (48.5% of patients) to 43.3% in the high Recurrence Score group (23.1% of patients). The 12-gene Recurrence Score is a predictor of recurrence risk and cancer-specific survival in rectal cancer patients treated with surgery alone, suggesting a similar underlying biology in colon and rectal cancers. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. Total volume versus bouts

    DEFF Research Database (Denmark)

    Chinapaw, Mai; Klakk, Heidi; Møller, Niels Christian

    2018-01-01

    BACKGROUND/OBJECTIVES: Examine the prospective relationship of total volume versus bouts of sedentary behaviour (SB) and moderate-to-vigorous physical activity (MVPA) with cardiometabolic risk in children. In addition, the moderating effects of weight status and MVPA were explored. SUBJECTS/METHODS: Longitudinal study including 454 primary school children (mean age 10.3 years). Total volume and bouts (i.e. ≥10 consecutive minutes) of MVPA and SB were assessed by accelerometry in Nov 2009/Jan 2010 (T1) and Aug/Oct 2010 (T2). Triglycerides, total cholesterol/HDL cholesterol ratio (TC:HDLC ratio) ..., with or without mutual adjustments between MVPA and SB. The moderating effects of weight status and MVPA (for SB only) were examined by adding interaction terms. RESULTS: Children engaged daily in about 60 min of total MVPA and 0-15 min/week in MVPA bouts. Mean total sedentary time was around 7 h/day with over 3...

  19. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  20. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  1. Agency and Algorithms

    Directory of Open Access Journals (Sweden)

    Hanns Holger Rutz

    2016-11-01

    Full Text Available Although the concept of algorithms has been established a long time ago, their current topicality indicates a shift in the discourse. Classical definitions based on logic seem to be inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity or the space of algorithmic agency. This is the space or the medium – following Luhmann’s form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled “extimate” writing process, human initiative and algorithmic speculation cannot be clearly divided out any longer. An observation is attempted of defining aspects of such a medium by drawing a trajectory across a number of sound pieces. The operation of exchange between form and medium I call reconfiguration and it is indicated by this trajectory. 

  2. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.

  3. Algebraic dynamics algorithm:Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
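
    The record above gives only the idea of the method (truncating the Taylor series of the exact solution at order N), not its formulas. As a hedged illustration of that idea, the Python sketch below advances a linear test system dx/dt = Ax with the order-N Taylor step x(t+h) ≈ Σ_{k=0..N} (hA)^k/k! · x(t); the function name, the test model, and the chosen order are illustrative assumptions, not the authors' implementation.

      import numpy as np

      def taylor_step(A, x, h, order):
          """One step of an order-N Taylor-series integrator for the linear
          test system dx/dt = A x:
              x(t+h) ~= sum_{k=0..N} (h*A)^k / k!  x(t)
          """
          term = x.copy()
          result = x.copy()
          for k in range(1, order + 1):
              term = (h / k) * (A @ term)   # builds (hA)^k / k! x incrementally
              result = result + term
          return result

      if __name__ == "__main__":
          # Harmonic oscillator written as a first-order linear system.
          A = np.array([[0.0, 1.0], [-1.0, 0.0]])
          x = np.array([1.0, 0.0])
          h, steps, order = 0.1, 100, 8
          for _ in range(steps):
              x = taylor_step(A, x, h, order)
          t = h * steps
          exact = np.array([np.cos(t), -np.sin(t)])
          print("numerical:", x, "exact:", exact)

    For this harmonic-oscillator test the order-8 step reproduces cos t and -sin t to high accuracy, which is the kind of geometrical and dynamical fidelity comparison the abstract describes.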

  4. Cardiovascular risk scores for coronary atherosclerosis.

    Science.gov (United States)

    Yalcin, Murat; Kardesoglu, Ejder; Aparci, Mustafa; Isilak, Zafer; Uz, Omer; Yiginer, Omer; Ozmen, Namik; Cingozbay, Bekir Yilmaz; Uzun, Mehmet; Cebeci, Bekir Sitki

    2012-10-01

    The objective of this study was to compare frequently used cardiovascular risk scores in predicting the presence of coronary artery disease (CAD) and 3-vessel disease. In 350 consecutive patients (218 men and 132 women) who underwent coronary angiography, the cardiovascular risk level was determined using the Framingham Risk Score (FRS), the Modified Framingham Risk Score (MFRS), the Prospective Cardiovascular Münster (PROCAM) score, and the Systematic Coronary Risk Evaluation (SCORE). The area under the receiver operating characteristic curve showed that the FRS had more predictive value for CAD than the other scores (area under the curve, 0.76). The risk scores (FRS, MFRS, PROCAM, and SCORE) may predict the presence and severity of coronary atherosclerosis; the FRS had better predictive value than the other scores.

  5. Quantum Adiabatic Algorithms and Large Spin Tunnelling

    Science.gov (United States)

    Boulatov, A.; Smelyanskiy, V. N.

    2003-01-01

    We provide a theoretical study of the quantum adiabatic evolution algorithm with different evolution paths proposed in this paper. The algorithm is applied to a random binary optimization problem (a version of the 3-Satisfiability problem) where the n-bit cost function is symmetric with respect to the permutation of individual bits. The evolution paths are produced, using the generic control Hamiltonians H (r) that preserve the bit symmetry of the underlying optimization problem. In the case where the ground state of H(0) coincides with the totally-symmetric state of an n-qubit system the algorithm dynamics is completely described in terms of the motion of a spin-n/2. We show that different control Hamiltonians can be parameterized by a set of independent parameters that are expansion coefficients of H (r) in a certain universal set of operators. Only one of these operators can be responsible for avoiding the tunnelling in the spin-n/2 system during the quantum adiabatic algorithm. We show that it is possible to select a coefficient for this operator that guarantees a polynomial complexity of the algorithm for all problem instances. We show that a successful evolution path of the algorithm always corresponds to the trajectory of a classical spin-n/2 and provide a complete characterization of such paths.

  6. SCORE DIGITAL TECHNOLOGY: THE CONVERGENCE

    Directory of Open Access Journals (Sweden)

    Chernyshov Alexander V.

    2013-12-01

    Full Text Available The article explores the role of digital score-writing software in today's culture, education, music industry, and media environment. The main principle behind the development of such software is not only innovation in music publishing (relating to sheet music) but also its integration into composition, arrangement, education, and the creative process for works based on digital technology (film, television and radio broadcasting, the Internet, audio and video art). The convergence of music-computer technology is therefore a total phenomenon: the notation program is combined with a MIDI sequencer and audio and video editors. The article also contains a unique interview with the creator of music notation processors.

  7. An efficient non-dominated sorting method for evolutionary algorithms.

    Science.gov (United States)

    Fang, Hongbing; Wang, Qian; Tu, Yi-Cheng; Horstemeyer, Mark F

    2008-01-01

    We present a new non-dominated sorting algorithm to generate the non-dominated fronts in multi-objective optimization with evolutionary algorithms, particularly the NSGA-II. The non-dominated sorting algorithm used by NSGA-II has a time complexity of O(MN(2)) in generating non-dominated fronts in one generation (iteration) for a population size N and M objective functions. Since generating non-dominated fronts takes the majority of total computational time (excluding the cost of fitness evaluations) of NSGA-II, making this algorithm faster will significantly improve the overall efficiency of NSGA-II and other genetic algorithms using non-dominated sorting. The new non-dominated sorting algorithm proposed in this study reduces the number of redundant comparisons existing in the algorithm of NSGA-II by recording the dominance information among solutions from their first comparisons. By utilizing a new data structure called the dominance tree and the divide-and-conquer mechanism, the new algorithm is faster than NSGA-II for different numbers of objective functions. Although the number of solution comparisons by the proposed algorithm is close to that of NSGA-II when the number of objectives becomes large, the total computational time shows that the proposed algorithm still has better efficiency because of the adoption of the dominance tree structure and the divide-and-conquer mechanism.
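
    The dominance-tree and divide-and-conquer details are not reproduced in this record. To make the task concrete, here is a minimal Python sketch of the plain fast non-dominated sorting baseline that the paper improves upon (all objectives minimized); the function names and the toy points are illustrative assumptions, not the authors' data structure.

      def dominates(a, b):
          """True if solution a dominates b (all objectives minimized)."""
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      def non_dominated_sort(points):
          """Return the list of fronts (lists of indices), NSGA-II-style baseline."""
          n = len(points)
          dominated_by = [[] for _ in range(n)]   # solutions that i dominates
          dom_count = [0] * n                     # how many solutions dominate i
          for i in range(n):
              for j in range(i + 1, n):
                  if dominates(points[i], points[j]):
                      dominated_by[i].append(j)
                      dom_count[j] += 1
                  elif dominates(points[j], points[i]):
                      dominated_by[j].append(i)
                      dom_count[i] += 1
          fronts = []
          current = [i for i in range(n) if dom_count[i] == 0]
          while current:
              fronts.append(current)
              nxt = []
              for i in current:
                  for j in dominated_by[i]:
                      dom_count[j] -= 1
                      if dom_count[j] == 0:
                          nxt.append(j)
              current = nxt
          return fronts

      print(non_dominated_sort([(1, 5), (2, 2), (3, 1), (4, 4), (5, 3)]))  # [[0, 1, 2], [3, 4]]

    The pairwise dominance comparisons in this baseline are exactly the redundant work the dominance-tree structure is designed to reduce.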

  8. Observer Evaluation of a Metal Artifact Reduction Algorithm Applied to Head and Neck Cone Beam Computed Tomographic Images

    Energy Technology Data Exchange (ETDEWEB)

    Korpics, Mark; Surucu, Murat; Mescioglu, Ibrahim; Alite, Fiori; Block, Alec M.; Choi, Mehee; Emami, Bahman; Harkenrider, Matthew M.; Solanki, Abhishek A.; Roeske, John C., E-mail: jroeske@lumc.edu

    2016-11-15

    Purpose and Objectives: To quantify, through an observer study, the reduction in metal artifacts on cone beam computed tomographic (CBCT) images using a projection-interpolation algorithm, on images containing metal artifacts from dental fillings and implants in patients treated for head and neck (H&N) cancer. Methods and Materials: An interpolation-substitution algorithm was applied to H&N CBCT images containing metal artifacts from dental fillings and implants. Image quality with respect to metal artifacts was evaluated subjectively and objectively. First, 6 independent radiation oncologists were asked to rank randomly sorted blinded images (before and after metal artifact reduction) using a 5-point rating scale (1 = severe artifacts; 5 = no artifacts). Second, the standard deviation of different regions of interest (ROI) within each image was calculated and compared with the mean rating scores. Results: The interpolation-substitution technique successfully reduced metal artifacts in 70% of the cases. From a total of 60 images from 15 H&N cancer patients undergoing image guided radiation therapy, the mean rating score on the uncorrected images was 2.3 ± 1.1, versus 3.3 ± 1.0 for the corrected images. The mean difference in ranking score between uncorrected and corrected images was 1.0 (95% confidence interval: 0.9-1.2, P<.05). The standard deviation of each ROI significantly decreased after artifact reduction (P<.01). Moreover, a negative correlation between the mean rating score for each image and the standard deviation of the oral cavity and bilateral cheeks was observed. Conclusion: The interpolation-substitution algorithm is efficient and effective for reducing metal artifacts caused by dental fillings and implants on CBCT images, as demonstrated by the statistically significant increase in observer image quality ranking and by the decrease in ROI standard deviation between uncorrected and corrected images.
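
    As a rough sketch of the interpolation-substitution idea described above (replacing metal-affected detector samples in a projection by values interpolated from unaffected neighbours), the following Python fragment operates on a single 1-D projection row. It is only an illustration under that assumption; the clinical algorithm works on full CBCT projection data and its exact implementation is not reproduced here.

      import numpy as np

      def interpolate_metal_trace(projection_row, metal_mask):
          """Replace metal-affected detector samples in one projection row by
          linear interpolation from the nearest unaffected neighbours."""
          row = projection_row.astype(float).copy()
          x = np.arange(row.size)
          good = ~metal_mask
          row[metal_mask] = np.interp(x[metal_mask], x[good], row[good])
          return row

      row = np.array([10., 11., 12., 80., 85., 14., 15.])   # spikes from metal
      mask = np.array([False, False, False, True, True, False, False])
      print(interpolate_metal_trace(row, mask))   # spikes replaced by ~12.7 and ~13.3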

  9. Total versus subtotal hysterectomy

    DEFF Research Database (Denmark)

    Gimbel, Helga; Zobbe, Vibeke; Andersen, Anna Birthe

    2005-01-01

    The aim of this study was to compare total and subtotal abdominal hysterectomy for benign indications, with regard to urinary incontinence, postoperative complications, quality of life (SF-36), constipation, prolapse, satisfaction with sexual life, and pelvic pain at 1 year postoperatively. Eighty...... women chose total and 105 women chose subtotal abdominal hysterectomy. No significant differences were found between the 2 operation methods in any of the outcome measures at 12 months. Fourteen women (15%) from the subtotal abdominal hysterectomy group experienced vaginal bleeding and three women had

  10. Qualità totale e mobilità totale Total Quality and Total Mobility

    Directory of Open Access Journals (Sweden)

    Giuseppe Trieste

    2010-05-01

    Full Text Available FIABA ONLUS (Italian Fund for the Elimination of Architectural Barriers) was founded in 2000 with the aim of promoting a culture of equal opportunities and, above all, has as its main goal to involve public and private institutions in creating an environment that is truly accessible and usable for everyone. Total accessibility, total usability and total mobility are key indicators for defining quality of life within cities. A supportive environment that is free of architectural, cultural and psychological barriers allows everyone to live with ease and universality. In fact, people who have access to goods and services in the urban context can use time and space to their advantage, carry out their activities and maintain the relationships they consider significant for their social life. The main aim of urban accessibility is to raise the comfort of space for citizens, eliminating all barriers that discriminate against people and prevent equality of opportunity. “FIABA FUND - City of ... for the removal of architectural barriers” is an idea of FIABA that has already involved many regions of Italy, such as Lazio, Lombardy, Campania, Abruzzi and Calabria. It is a national project which provides for opening a bank account in the cities concerned, in which, for the first time, individuals and private and public institutions together can make a donation to fund initiatives for the removal of architectural barriers within their own territory, for a real and effective total accessibility. Last February the fund was launched in Rome with the aim of achieving a capital city without barriers and a town that is a European model of accessibility and usability. Urban mobility is a prerequisite for access to goods and services and for organizing the activities of daily life. FIABA promotes the concept of sustainable mobility for all, supported by the European Commission’s White Paper. We need a cultural change in the management and organization of public transport, which might focus on

  11. Nursing Activities Score and Acute Kidney Injury.

    Science.gov (United States)

    Coelho, Filipe Utuari de Andrade; Watanabe, Mirian; Fonseca, Cassiane Dezoti da; Padilha, Katia Grillo; Vattimo, Maria de Fátima Fernandes

    2017-01-01

    to evaluate the nursing workload in intensive care patients with acute kidney injury (AKI). A quantitative study, conducted in an intensive care unit from April to August of 2015. The Nursing Activities Score (NAS) and the Kidney Disease Improving Global Outcomes (KDIGO) classification were used to measure nursing workload and to classify the stage of AKI, respectively. A total of 190 patients were included. Patients who developed AKI (44.2%) had higher NAS when compared to those without AKI (43.7% vs 40.7%), p < 0.001. Patients with stage 1, 2 and 3 AKI showed higher NAS than those without AKI; the difference was significant for stages 2 and 3 (p = 0.002 and p < 0.001, respectively). The NAS was associated with the presence of AKI; the score increased with the progression of the stages and was associated with AKI stages 2 and 3.

  12. Improved collaborative filtering recommendation algorithm of similarity measure

    Science.gov (United States)

    Zhang, Baofu; Yuan, Baoping

    2017-05-01

    The collaborative filtering recommendation algorithm is one of the most widely used algorithms in personalized recommender systems. The key is to find the nearest-neighbor set of the active user by using a similarity measure. However, traditional similarity measures mainly focus on the similarity over the items that users have rated in common, but ignore the relationship between these common items and all of the items a user has rated. Moreover, because the rating matrix is very sparse, the traditional collaborative filtering recommendation algorithm is not very efficient. In order to obtain better accuracy, this paper presents an improved similarity measure that takes into account the common preference between users, differences in rating scale and the scores of common items, and, based on this measure, proposes a collaborative filtering recommendation algorithm with improved similarity. Experimental results show that the algorithm can effectively improve the quality of recommendation and thus alleviate the impact of data sparseness.
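
    The record does not reproduce the exact formula of the improved measure. The Python sketch below illustrates the general idea only: a Pearson similarity computed on co-rated items, down-weighted when those items are a small fraction of everything the two users have rated. The Jaccard-style weighting factor is an assumption made for illustration, not necessarily the paper's definition.

      import math

      def improved_similarity(ratings_u, ratings_v):
          """Pearson similarity on co-rated items, down-weighted when the
          co-rated items are only a small share of all items either user rated.
          ratings_* are dicts mapping item -> rating."""
          common = set(ratings_u) & set(ratings_v)
          if len(common) < 2:
              return 0.0
          mu_u = sum(ratings_u[i] for i in common) / len(common)
          mu_v = sum(ratings_v[i] for i in common) / len(common)
          num = sum((ratings_u[i] - mu_u) * (ratings_v[i] - mu_v) for i in common)
          den = math.sqrt(sum((ratings_u[i] - mu_u) ** 2 for i in common)) * \
                math.sqrt(sum((ratings_v[i] - mu_v) ** 2 for i in common))
          pearson = num / den if den else 0.0
          weight = len(common) / len(set(ratings_u) | set(ratings_v))  # Jaccard factor
          return weight * pearson

      u = {"a": 5, "b": 3, "c": 4, "d": 1}
      v = {"a": 4, "b": 2, "c": 5}
      print(improved_similarity(u, v))   # Pearson on {a, b, c}, scaled by 3/4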

  13. High Baseline Postconcussion Symptom Scores and Concussion Outcomes in Athletes.

    Science.gov (United States)

    Custer, Aimee; Sufrinko, Alicia; Elbin, R J; Covassin, Tracey; Collins, Micky; Kontos, Anthony

    2016-02-01

    Some healthy athletes report high levels of baseline concussion symptoms, which may be attributable to several factors (eg, illness, personality, somaticizing). However, the role of baseline symptoms in outcomes after sport-related concussion (SRC) has not been empirically examined. To determine if athletes with high symptom scores at baseline performed worse than athletes without baseline symptoms on neurocognitive testing after SRC. Cohort study. High school and collegiate athletic programs. A total of 670 high school and collegiate athletes participated in the study. Participants were divided into groups with either no baseline symptoms (Postconcussion Symptom Scale [PCSS] score = 0, n = 247) or a high level of baseline symptoms (PCSS score > 18 [top 10% of sample], n = 68). Participants were evaluated at baseline and 2 to 7 days after SRC with the Immediate Post-concussion Assessment and Cognitive Test and PCSS. Outcome measures were Immediate Post-concussion Assessment and Cognitive Test composite scores (verbal memory, visual memory, visual motor processing speed, and reaction time) and total symptom score on the PCSS. The groups were compared using repeated-measures analyses of variance with Bonferroni correction to assess interactions between group and time for symptoms and neurocognitive impairment. The no-symptoms group represented 38% of the original sample, whereas the high-symptoms group represented 11% of the sample. The high-symptoms group experienced a larger decline from preinjury to postinjury than the no-symptoms group in verbal (P = .03) and visual memory (P = .05). However, total concussion-symptom scores increased from preinjury to postinjury for the no-symptoms group (P = .001) but remained stable for the high-symptoms group. Reported baseline symptoms may help identify athletes at risk for worse outcomes after SRC. Clinicians should examine baseline symptom levels to better identify patients for earlier referral and treatment for their

  14. Improving iris recognition performance using segmentation, quality enhancement, match score fusion, and indexing.

    Science.gov (United States)

    Vatsa, Mayank; Singh, Richa; Noore, Afzel

    2008-08-01

    This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford-Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.
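
    The paper's fusion rule is learned ("intelligent fusion"), and its details are not given in this record. Purely to illustrate what score-level fusion of a textural and a topological match score can look like, here is a generic min-max normalization plus weighted-sum sketch in Python; the score ranges and weights are made-up placeholders, not the published method.

      def min_max_normalize(score, lo, hi):
          """Map a raw matcher score into [0, 1] given its observed range."""
          return (score - lo) / (hi - lo) if hi > lo else 0.0

      def fuse_scores(texture_score, topology_score,
                      texture_range=(0.0, 1.0), topology_range=(0, 64),
                      w_texture=0.7, w_topology=0.3):
          """Weighted-sum fusion of two normalized match scores (illustrative
          weights; the cited work learns its fusion rule rather than fixing it)."""
          t = min_max_normalize(texture_score, *texture_range)
          p = min_max_normalize(topology_score, *topology_range)
          return w_texture * t + w_topology * p

      print(fuse_scores(0.82, 51))   # fused similarity in [0, 1]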

  15. Risk score for contrast induced nephropathy following percutaneous coronary intervention

    International Nuclear Information System (INIS)

    Ghani, Amal Abdel; Tohamy, Khalid Y.

    2009-01-01

    Contrast-induced nephropathy (CIN) is an important cause of acute renal failure. Identification of risk factors for CIN and the creation of a simple risk score for CIN after percutaneous coronary intervention (PCI) are important. A prospective single-center study was conducted in the Kuwait chest disease hospital. All patients admitted to the chest disease hospital for PCI from March to May 2005 were included in the study. A total of 247 patients were randomly assigned to the development dataset and 100 to the validation set using the simple random method. The overall occurrence of CIN in the development set was 5.52%. Using multivariate analysis, basal serum creatinine, shock, female gender, multivessel PCI, and diabetes mellitus were identified as risk factors. Scores assigned to the different variables yielded basal creatinine > 115 µmol/L with the highest score (7), followed by shock (3); female gender, multivessel PCI and diabetes mellitus had the same score (2). Patients were further risk stratified into low risk score ( 1 2). The developed CIN model demonstrated good discriminative power in the validation population. In conclusion, use of a simple risk score for CIN can predict the probability of CIN after PCI; this, however, needs further validation in larger multicenter trials. (author)

  16. Risk score to predict gastrointestinal bleeding after acute ischemic stroke.

    Science.gov (United States)

    Ji, Ruijun; Shen, Haipeng; Pan, Yuesong; Wang, Penglian; Liu, Gaifen; Wang, Yilong; Li, Hao; Singhal, Aneesh B; Wang, Yongjun

    2014-07-25

    Gastrointestinal bleeding (GIB) is a common and often serious complication after stroke. Although several risk factors for post-stroke GIB have been identified, no reliable or validated scoring system is currently available to predict GIB after acute stroke in routine clinical practice or clinical trials. In the present study, we aimed to develop and validate a risk model (acute ischemic stroke associated gastrointestinal bleeding score, the AIS-GIB score) to predict in-hospital GIB after acute ischemic stroke. The AIS-GIB score was developed from data in the China National Stroke Registry (CNSR). Eligible patients in the CNSR were randomly divided into derivation (60%) and internal validation (40%) cohorts. External validation was performed using data from the prospective Chinese Intracranial Atherosclerosis Study (CICAS). Independent predictors of in-hospital GIB were obtained using multivariable logistic regression in the derivation cohort, and β-coefficients were used to generate point scoring system for the AIS-GIB. The area under the receiver operating characteristic curve (AUROC) and the Hosmer-Lemeshow goodness-of-fit test were used to assess model discrimination and calibration, respectively. A total of 8,820, 5,882, and 2,938 patients were enrolled in the derivation, internal validation and external validation cohorts. The overall in-hospital GIB after AIS was 2.6%, 2.3%, and 1.5% in the derivation, internal, and external validation cohort, respectively. An 18-point AIS-GIB score was developed from the set of independent predictors of GIB including age, gender, history of hypertension, hepatic cirrhosis, peptic ulcer or previous GIB, pre-stroke dependence, admission National Institutes of Health stroke scale score, Glasgow Coma Scale score and stroke subtype (Oxfordshire). The AIS-GIB score showed good discrimination in the derivation (0.79; 95% CI, 0.764-0.825), internal (0.78; 95% CI, 0.74-0.82) and external (0.76; 95% CI, 0.71-0.82) validation cohorts
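
    The published 18-point AIS-GIB weights are not listed in this record. The sketch below only illustrates the generic construction the abstract mentions: logistic-regression β-coefficients are scaled and rounded into integer points, which are then summed for a patient. All coefficient values and factor names here are placeholders, not the AIS-GIB score itself.

      def beta_to_points(betas, base_beta=None):
          """Convert logistic-regression coefficients into integer points by
          scaling against the smallest positive coefficient (a common recipe
          for clinical risk scores; the values used below are placeholders)."""
          base = base_beta or min(b for b in betas.values() if b > 0)
          return {name: round(b / base) for name, b in betas.items()}

      def total_score(points, patient):
          """Sum the points of the risk factors present for one patient."""
          return sum(points[f] for f, present in patient.items() if present)

      betas = {"age>=75": 0.45, "hepatic_cirrhosis": 0.90, "prior_GIB": 1.30,
               "pre_stroke_dependence": 0.50, "high_NIHSS": 0.65}
      points = beta_to_points(betas)
      patient = {"age>=75": True, "hepatic_cirrhosis": False, "prior_GIB": True,
                 "pre_stroke_dependence": False, "high_NIHSS": True}
      print(points, "->", total_score(points, patient))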

  17. Detection of algorithmic trading

    Science.gov (United States)

    Bogoev, Dimitar; Karam, Arzé

    2017-10-01

    We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
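
    The precise definitions of the two ratios are given in the paper and not reproduced here. As a hedged illustration of the quote volatility idea (rapid oscillation of the best quotes over a very short window), the Python sketch below counts direction reversals of the best ask among consecutive quote updates; the normalization is an assumption for illustration only, not the authors' ratio.

      def quote_volatility_ratio(best_ask_quotes):
          """Fraction of quote updates that reverse the direction of the best ask,
          a crude proxy for the rapid oscillations associated with algorithmic
          quoting (illustrative definition)."""
          changes = [b - a for a, b in zip(best_ask_quotes, best_ask_quotes[1:]) if b != a]
          if len(changes) < 2:
              return 0.0
          reversals = sum(1 for c1, c2 in zip(changes, changes[1:]) if c1 * c2 < 0)
          return reversals / (len(changes) - 1)

      # Oscillating quotes (suggestive of algorithmic activity) vs. a drifting quote.
      print(quote_volatility_ratio([10.01, 10.02, 10.01, 10.02, 10.01, 10.02]))  # 1.0
      print(quote_volatility_ratio([10.01, 10.02, 10.03, 10.04, 10.05, 10.06]))  # 0.0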

  18. Handbook of Memetic Algorithms

    CERN Document Server

    Cotta, Carlos; Moscato, Pablo

    2012-01-01

    Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems.  The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes.   “Handbook of Memetic Algorithms” organizes, in a structured way, all the the most important results in the field of MAs since their earliest definition until now.  A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, uncertainties, are analysed separately and, for each problem,  memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...

  19. Algorithms in invariant theory

    CERN Document Server

    Sturmfels, Bernd

    2008-01-01

    J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.

  20. The Retina Algorithm

    CERN Multimedia

    CERN. Geneva; PUNZI, Giovanni

    2015-01-01

    Charged-particle reconstruction is one of the most demanding computational tasks found in HEP, and it becomes increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving a long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of the processing of visual images by the brain as it happens in nature ('RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies, when this algorithm is implemented in specialized processors, based on current state-of-the-art, high-speed/high-bandwidth digital devices.

  1. Interobserver variability of the neurological optimality score

    NARCIS (Netherlands)

    Monincx, W. M.; Smolders-de Haas, H.; Bonsel, G. J.; Zondervan, H. A.

    1999-01-01

    To assess the interobserver reliability of the neurological optimality score. The neurological optimality score of 21 full term healthy, neurologically normal newborn infants was determined by two well trained observers. The interclass correlation coefficient was 0.31. Kappa for optimality (score of

  2. Breaking of scored tablets : a review

    NARCIS (Netherlands)

    van Santen, E; Barends, D M; Frijlink, H W

    The literature was reviewed regarding advantages, problems and performance indicators of score lines. Scored tablets provide dose flexibility, ease of swallowing and may reduce the costs of medication. However, many patients are confronted with scored tablets that are broken unequally and with

  3. Validation of Automated Scoring of Science Assessments

    Science.gov (United States)

    Liu, Ou Lydia; Rios, Joseph A.; Heilman, Michael; Gerard, Libby; Linn, Marcia C.

    2016-01-01

    Constructed response items can both measure the coherence of student ideas and serve as reflective experiences to strengthen instruction. We report on new automated scoring technologies that can reduce the cost and complexity of scoring constructed-response items. This study explored the accuracy of c-rater-ML, an automated scoring engine…

  4. Assessment of the innovative quality of agomelatine through the Innovation Assessment Algorithm

    Directory of Open Access Journals (Sweden)

    Liliana Civalleri

    2012-09-01

    Full Text Available Aim: the aim of this study was to assess the innovative quality of a medicine based on agomelatine, authorized by the European Commission through a centralized procedure on 19th February 2009 and distributed in Italy under the brands Valdoxan® and Thymanax®.Methodology: the degree of innovation of agomelatine was determined through the Innovation Assessment Algorithm (IAA, which considers the innovative quality of a medicine as a combination of multiple properties. The algorithm may be represented as a decision tree, with each branch corresponding to a property connected with innovation and having a fixed numerical value. The sum of these values establishes the degree of innovation of the medicine. The IAA is articulated in two phases: the first assesses the efficacy of the drug based on the clinical trials presented in support of the registration application (IAA-efficacy; the second reconsiders the degree of innovation on the basis of the efficacy and safety data resulting from clinical practice once the drug has been placed on the market (IAA-effectiveness.Results and conclusions: the score obtained for agomelatine was 592.73 in the efficacy phase and 291.3 in the effectiveness phase. The total score for the two phases was 884, which is equivalent to a good degree of innovation for the molecule

  5. Named Entity Linking Algorithm

    Directory of Open Access Journals (Sweden)

    M. F. Panteleev

    2017-01-01

    Full Text Available In natural-language text processing, Named Entity Linking (NEL) is the task of identifying an entity mentioned in the text and linking it to an entity in a knowledge base (for example, DBpedia). There is currently a diversity of approaches to this problem, but two main classes can be identified: graph-based approaches and machine-learning-based ones. An algorithm combining graph and machine-learning approaches is proposed, following the stated assumptions about the interrelations of named entities in a sentence and in the text as a whole. In the case of graph-based approaches, it is necessary to identify an optimal set of related entities according to some metric that characterizes the distance between these entities in a graph built over a knowledge base. Because of limited processing power, solving this task directly is impossible, so a modification is proposed. An independent solution cannot be built on machine-learning algorithms alone, owing to the small volume of training datasets relevant to the NEL task; however, their use can contribute to improving the quality of the algorithm. An adaptation of the Latent Dirichlet Allocation model is proposed in order to obtain a measure of the compatibility of attributes of various entities encountered in one context. The efficiency of the proposed algorithm was tested experimentally. A test dataset was generated independently, and on its basis a mock-up based on the proposed algorithm was compared with the open-source product DBpedia Spotlight, which solves the NEL problem. The mock-up was slower than DBpedia Spotlight but showed higher accuracy, which makes further work in this direction promising. The main directions of development are proposed in order to increase the accuracy and throughput of the system.

  6. CSF total protein

    Science.gov (United States)

    CSF total protein is a test to determine the amount of protein in your spinal fluid, also called cerebrospinal fluid (CSF). ... The normal protein range varies from lab to lab, but is typically about 15 to 60 milligrams per deciliter (mg/dL) ...

  7. Total body irradiation

    International Nuclear Information System (INIS)

    Novack, D.H.; Kiley, J.P.

    1987-01-01

    The multitude of papers and conferences in recent years on the use of very large megavoltage radiation fields indicates an increased interest in total body, hemibody, and total nodal radiotherapy for various clinical situations. These include high dose total body irradiation (TBI) to destroy the bone marrow and leukemic cells and provide immunosuppression prior to a bone marrow transplant, high dose total lymphoid irradiation (TLI) prior to bone marrow transplantation in severe aplastic anemia, low dose TBI in the treatment of lymphocytic leukemias or lymphomas, and hemibody irradiation (HBI) in the treatment of advanced multiple myeloma. Although accurate provision of a specific dose and the desired degree of dose homogeneity are two of the physicist's major considerations for all radiotherapy techniques, these tasks are even more demanding for large field radiotherapy. Because most large field radiotherapy is done at an extended distance for complex patient geometries, basic dosimetry data measured at the standard distance (isocenter) must be verified or supplemented. This paper discusses some of the special dosimetric problems of large field radiotherapy, with specific examples given of the dosimetry of the TBI program for bone marrow transplant at the authors' hospital

  8. Total design of participation

    DEFF Research Database (Denmark)

    Munch, Anders V.

    2016-01-01

    The idea of design as an art made not only for the people, but also by the people is an old dream going back at least to William Morris. It is, however, reappearing vigoriously in many kinds of design activism and grows out of the visions of a Total Design of society. The ideas of participation b...

  9. Total Quality Management Simplified.

    Science.gov (United States)

    Arias, Pam

    1995-01-01

    Maintains that Total Quality Management (TQM) is one method that helps to monitor and improve the quality of child care. Lists four steps for a child-care center to design and implement its own TQM program. Suggests that quality assurance in child-care settings is an ongoing process, and that TQM programs help in providing consistent, high-quality…

  10. Total Quality Management Seminar.

    Science.gov (United States)

    Massachusetts Career Development Inst., Springfield.

    This booklet is one of six texts from a workplace literacy curriculum designed to assist learners in facing the increased demands of the workplace. The booklet contains seven sections that cover the following topics: (1) meaning of total quality management (TQM); (2) the customer; (3) the organization's culture; (4) comparison of management…

  11. Total photon absorption

    International Nuclear Information System (INIS)

    Carlos, P.

    1985-01-01

    Experimental methods using real photon beams for measurements of the total photonuclear absorption cross section σ(Tot : E_γ) are recalled. The most recent σ(Tot : E_γ) results for complex nuclei and in the nucleon resonance region are presented.

  12. Total 2004 annual report

    International Nuclear Information System (INIS)

    2004-01-01

    This annual report of the Group Total brings information and economic data on the following topics, for the year 2004: the corporate governance, the corporate social responsibility, the shareholder notebook, the management report, the activities, the upstream (exploration and production) and downstream (refining and marketing) operations, chemicals and other matters. (A.L.B.)

  13. Total Water Management - Report

    Science.gov (United States)

    There is a growing need for urban water managers to take a more holistic view of their water resource systems as population growth, urbanization, and current operations put different stresses on the environment and urban infrastructure. Total Water Management (TWM) is an approac...

  14. A comparative study on assessment procedures and metric properties of two scoring systems of the Coma Recovery Scale-Revised items: standard and modified scores.

    Science.gov (United States)

    Sattin, Davide; Lovaglio, Piergiorgio; Brenna, Greta; Covelli, Venusia; Rossi Sebastiano, Davide; Duran, Dunja; Minati, Ludovico; Giovannetti, Ambra Mara; Rosazza, Cristina; Bersano, Anna; Nigri, Anna; Ferraro, Stefania; Leonardi, Matilde

    2017-09-01

    The study compared the metric characteristics (discriminant capacity and factorial structure) of two different methods for scoring the items of the Coma Recovery Scale-Revised and it analysed scale scores collected using the standard assessment procedure and a new proposed method. Cross sectional design/methodological study. Inpatient, neurological unit. A total of 153 patients with disorders of consciousness were consecutively enrolled between 2011 and 2013. All patients were assessed with the Coma Recovery Scale-Revised using standard (rater 1) and inverted (rater 2) procedures. Coma Recovery Scale-Revised score, number of cognitive and reflex behaviours and diagnosis. Regarding patient assessment, rater 1 using standard and rater 2 using inverted procedures obtained the same best scores for each subscale of the Coma Recovery Scale-Revised for all patients, so no clinical (and statistical) difference was found between the two procedures. In 11 patients (7.7%), rater 2 noted that some Coma Recovery Scale-Revised codified behavioural responses were not found during assessment, although higher response categories were present. A total of 51 (36%) patients presented the same Coma Recovery Scale-Revised scores of 7 or 8 using a standard score, whereas no overlap was found using the modified score. Unidimensionality was confirmed for both score systems. The Coma Recovery Scale Modified Score showed a higher discriminant capacity than the standard score and a monofactorial structure was also supported. The inverted assessment procedure could be a useful evaluation method for the assessment of patients with disorder of consciousness diagnosis.

  15. Law and Order in Algorithmics

    NARCIS (Netherlands)

    Fokkinga, M.M.

    1992-01-01

    An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as

  16. A cluster algorithm for graphs

    NARCIS (Netherlands)

    S. van Dongen

    2000-01-01

    A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight)
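
    A minimal numpy sketch of the MCL process described above, assuming the standard formulation: alternate expansion (matrix powering) and inflation (entrywise powering followed by column renormalization) of a column-stochastic matrix, then read clusters from the supports of the attractor rows. The parameter values and the fixed iteration count are illustrative simplifications, not the reference implementation.

      import numpy as np

      def mcl(adjacency, expansion=2, inflation=2.0, iters=50, self_loops=True):
          """Toy Markov Cluster (MCL) process: alternate matrix expansion
          (power) and inflation (entrywise power plus column renormalization)."""
          A = np.array(adjacency, dtype=float)
          if self_loops:
              A += np.eye(A.shape[0])
          M = A / A.sum(axis=0)                          # column-stochastic
          for _ in range(iters):
              M = np.linalg.matrix_power(M, expansion)   # expansion
              M = M ** inflation                         # inflation
              M = M / M.sum(axis=0)
          # Attractors appear as rows with remaining mass; their support gives clusters.
          clusters = {tuple(np.nonzero(row > 1e-6)[0]) for row in M if row.max() > 1e-6}
          return clusters

      # Two triangles joined by a single edge: MCL should separate them.
      graph = [[0, 1, 1, 0, 0, 0],
               [1, 0, 1, 0, 0, 0],
               [1, 1, 0, 1, 0, 0],
               [0, 0, 1, 0, 1, 1],
               [0, 0, 0, 1, 0, 1],
               [0, 0, 0, 1, 1, 0]]
      print(mcl(graph))   # e.g. {(0, 1, 2), (3, 4, 5)}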

  17. Algorithms for Reinforcement Learning

    CERN Document Server

    Szepesvari, Csaba

    2010-01-01

    Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'

  18. Animation of planning algorithms

    OpenAIRE

    Sun, Fan

    2014-01-01

    Planning is the process of creating a sequence of steps/actions that will satisfy the goal of a problem. The partial-order planning (POP) algorithm is one Artificial Intelligence approach to problem planning. While taking the G52PAS module, I found that it is difficult for students to understand this planning algorithm just by reading its pseudo code and doing written exercises. Students cannot see clearly how each step actually works and might miss some steps because of their confusion. ...

  19. Secondary Vertex Finder Algorithm

    CERN Document Server

    Heer, Sebastian; The ATLAS collaboration

    2017-01-01

    If a jet originates from a b-quark, a b-hadron is formed during the fragmentation process. In its dominant decay modes, the b-hadron decays into a c-hadron via the electroweak interaction. Both b- and c-hadrons have lifetimes long enough, to travel a few millimetres before decaying. Thus displaced vertices from b- and subsequent c-hadron decays provide a strong signature for a b-jet. Reconstructing these secondary vertices (SV) and their properties is the aim of this algorithm. The performance of this algorithm is studied with tt̄ events, requiring at least one lepton, simulated at 13 TeV.

  20. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.

  1. Randomized Filtering Algorithms

    DEFF Research Database (Denmark)

    Katriel, Irit; Van Hentenryck, Pascal

    2008-01-01

    of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed...... in the expected sense. The second scheme is a Las Vegas algorithm using filtering triggers: Its effectiveness is the same as enforcing arc consistency after every domain event, while in the expected case it is faster by a factor of m/n, where n and m are, respectively, the number of nodes and edges...

  2. Testing statistical significance scores of sequence comparison methods with structure similarity

    Directory of Open Access Journals (Sweden)

    Leunissen Jack AM

    2006-10-01

    Full Text Available Abstract Background In recent years the Smith-Waterman sequence comparison algorithm has gained popularity due to improved implementations and rapidly increasing computing power. However, the quality and sensitivity of a database search is not only determined by the algorithm but also by the statistical significance testing for an alignment. The e-value is the most commonly used statistical validation method for sequence database searching. The CluSTr database and the Protein World database have been created using an alternative statistical significance test: a Z-score based on Monte-Carlo statistics. Several papers have described the superiority of the Z-score as compared to the e-value, using simulated data. We were interested in whether this could be validated when applied to existing, evolutionarily related protein sequences. Results All experiments are performed on the ASTRAL SCOP database. The Smith-Waterman sequence comparison algorithm with both e-value and Z-score statistics is evaluated, using ROC, CVE and AP measures. The BLAST and FASTA algorithms are used as reference. We find that two out of three Smith-Waterman implementations with e-value are better at predicting structural similarities between proteins than the Smith-Waterman implementation with Z-score. SSEARCH especially has very high scores. Conclusion The compute-intensive Z-score does not have a clear advantage over the e-value. The Smith-Waterman implementations give generally better results than their heuristic counterparts. We recommend using the SSEARCH algorithm combined with e-values for pairwise sequence comparisons.
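
    A minimal sketch of the Monte-Carlo Z-score idea discussed above: align the query against a subject, re-align it against shuffled versions of the subject, and report how many standard deviations the true score lies above the shuffled mean. The toy match/mismatch/gap scoring scheme below is an assumption for illustration and is not the SSEARCH parameterization.

      import random
      import statistics

      def sw_score(a, b, match=2, mismatch=-1, gap=-2):
          """Smith-Waterman local alignment score with a linear gap penalty."""
          prev = [0] * (len(b) + 1)
          best = 0
          for ca in a:
              curr = [0]
              for j, cb in enumerate(b, start=1):
                  diag = prev[j - 1] + (match if ca == cb else mismatch)
                  curr.append(max(0, diag, prev[j] + gap, curr[j - 1] + gap))
                  best = max(best, curr[j])
              prev = curr
          return best

      def z_score(query, subject, shuffles=200, seed=0):
          """Monte-Carlo Z-score: standard deviations of the true alignment score
          above the mean score against shuffled subjects."""
          rng = random.Random(seed)
          true = sw_score(query, subject)
          chars = list(subject)
          null = []
          for _ in range(shuffles):
              rng.shuffle(chars)
              null.append(sw_score(query, "".join(chars)))
          return (true - statistics.mean(null)) / (statistics.pstdev(null) or 1.0)

      print(z_score("HEAGAWGHEE", "PAWHEAEAWGHEE"))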

  3. Algorithm For Hypersonic Flow In Chemical Equilibrium

    Science.gov (United States)

    Palmer, Grant

    1989-01-01

    Implicit, finite-difference, shock-capturing algorithm calculates inviscid, hypersonic flows in chemical equilibrium. Implicit formulation chosen because overcomes limitation on mathematical stability encountered in explicit formulations. For dynamical portion of problem, Euler equations written in conservation-law form in Cartesian coordinate system for two-dimensional or axisymmetric flow. For chemical portion of problem, equilibrium state of gas at each point in computational grid determined by minimizing local Gibbs free energy, subject to local conservation of molecules, atoms, ions, and total enthalpy. Major advantage: resulting algorithm naturally stable and captures strong shocks without help of artificial-dissipation terms to damp out spurious numerical oscillations.

  4. An Ordering Linear Unification Algorithm

    Institute of Scientific and Technical Information of China (English)

    胡运发

    1989-01-01

    In this paper, we present an ordering linear unification algorithm (OLU). A new idea on substitution of the binding terms is introduced to the algorithm, which is able to overcome some drawbacks of other algorithms, e.g., the MM algorithm[1] and the RG1 and RG2 algorithms[2]. In particular, if we use directed cyclic graphs, the algorithm need not check the binding order, so the OLU algorithm can also be applied to infinite tree data structures, and a higher efficiency can be expected. The paper focuses upon the discussion of the OLU algorithm and a partial order structure with respect to the unification algorithm. This algorithm has been implemented in the GKD-PROLOG/VAX 780 interpreting system. Experimental results have shown that the algorithm is very simple and efficient.

  5. New Optimization Algorithms in Physics

    CERN Document Server

    Hartmann, Alexander K

    2004-01-01

    Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.

  6. Reliability of Modern Scores to Predict Long-Term Mortality After Isolated Aortic Valve Operations.

    Science.gov (United States)

    Barili, Fabio; Pacini, Davide; D'Ovidio, Mariangela; Ventura, Martina; Alamanni, Francesco; Di Bartolomeo, Roberto; Grossi, Claudio; Davoli, Marina; Fusco, Danilo; Perucci, Carlo; Parolari, Alessandro

    2016-02-01

    Contemporary scores for estimating perioperative death have been proposed to also predict long-term death. The aim of the study was to evaluate the performance of the updated European System for Cardiac Operative Risk Evaluation II, The Society of Thoracic Surgeons Predicted Risk of Mortality score, and the Age, Creatinine, Left Ventricular Ejection Fraction score for predicting long-term mortality in a contemporary cohort of isolated aortic valve replacement (AVR). We also sought to develop for each score a simple algorithm based on predicted perioperative risk to predict long-term survival. Complete data on 1,444 patients who underwent isolated AVR in a 7-year period were retrieved from three prospective institutional databases and linked with the Italian Tax Register Information System. Data were evaluated with performance analyses and time-to-event semiparametric regression. Survival was 83.0% ± 1.1% at 5 years and 67.8% ± 1.9% at 8 years. Discrimination and calibration of all three scores both worsened for prediction of death at 1 year and 5 years. Nonetheless, a significant relationship was found between long-term survival and quartiles of scores (p System for Cardiac Operative Risk Evaluation II, 1.34 (95% CI, 1.28 to 1.40) for the Society of Thoracic Surgeons score, and 1.08 (95% CI, 1.06 to 1.10) for the Age, Creatinine, Left Ventricular Ejection Fraction score. The predicted risk generated by European System for Cardiac Operative Risk Evaluation II, The Society of Thoracic Surgeons score, and Age, Creatinine, Left Ventricular Ejection Fraction scores cannot also be considered a direct estimate of the long-term risk for death. Nonetheless, the three scores can be used to derive an estimate of long-term risk of death in patients who undergo isolated AVR with the use of a simple algorithm. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  7. Finger Tapping Clinimetric Score Prediction in Parkinson's Disease Using Low-Cost Accelerometers

    Directory of Open Access Journals (Sweden)

    Julien Stamatakis

    2013-01-01

    algorithm were used to identify the most relevant features in the prediction of MDS-UPDRS FT scores, given by 3 specialists in movement disorders (SMDs). The Goodman-Kruskal Gamma index obtained (0.961), depicting the predictive performance of the model, is similar to those obtained between the individual scores given by the SMDs (0.870 to 0.970). The automatic prediction of MDS-UPDRS scores using the proposed system may be valuable in clinical trials designed to evaluate and modify motor disability in PD patients.

  8. Total 2003 Results

    International Nuclear Information System (INIS)

    2003-01-01

    This document presents the 2003 results of Total Group: consolidated account, special items, number of shares, market environment, 4. quarter 2003 results, full year 2003 results, upstream (key figures, proved reserves), downstream key figures, chemicals key figures, parent company accounts and proposed dividends, 2004 sensitivities, summary and outlook, operating information by segment for the 4. quarter and full year 2003: upstream (combined liquids and gas production by region, liquids production by region, gas production by region), downstream (refinery throughput by region, refined product sales by region, chemicals), impact of allocating contribution of Cepsa to net operating income by business segment: equity in income (loss) and affiliates and other items, Total financial statements: consolidated statement of income, consolidated balance sheet (assets, liabilities and shareholder's equity), consolidated statements of cash flows, business segments information. (J.S.)

  9. TOTAL PERFORMANCE SCORECARD

    Directory of Open Access Journals (Sweden)

    Anca ȘERBAN

    2013-06-01

    Full Text Available The purpose of this paper is to present the evolution of the Balanced Scorecard from a measurement instrument to a strategic performance management tool and to highlight the advantages of implementing the Total Performance Scorecard, especially for Human Resource Management. The study was carried out as a bibliographic review drawing on various secondary sources. Implementations of the classical Balanced Scorecard have repeatedly failed over the years, and the critical level appears to be the learning and growth perspective. This perspective has been developed from a human perspective focused on staff satisfaction and from an innovation perspective focused on future developments. Integrating the Total Performance Scorecard into an overall framework supports the company’s success by keeping track of individual goals, the company’s objectives and its strategic directions. In this way, individual identity can be linked to the corporate brand, individual aspirations to business goals, and individual learning objectives to the organizational capabilities that are needed.

  10. Total space in resolution

    Czech Academy of Sciences Publication Activity Database

    Bonacina, I.; Galesi, N.; Thapen, Neil

    2016-01-01

    Roč. 45, č. 5 (2016), s. 1894-1909 ISSN 0097-5397 R&D Projects: GA ČR GBP202/12/G061 EU Projects: European Commission(XE) 339691 - FEALORA Institutional support: RVO:67985840 Keywords : total space * resolution random CNFs * proof complexity Subject RIV: BA - General Mathematics Impact factor: 1.433, year: 2016 http://epubs.siam.org/doi/10.1137/15M1023269

  11. MFTF TOTAL benchmark

    International Nuclear Information System (INIS)

    Choy, J.H.

    1979-06-01

    A benchmark of the TOTAL data base management system as applied to the Mirror Fusion Test Facility (MFTF) data base was implemented and run in February and March of 1979. The benchmark was run on an Interdata 8/32 and involved the following tasks: (1) data base design, (2) data base generation, (3) data base load, and (4) develop and implement programs to simulate MFTF usage of the data base

  12. Total - annual report 2005

    International Nuclear Information System (INIS)

    2006-01-01

    This annual report presents the activities and results of TOTAL S.A., a French oil and gas company. It covers statistics, the managers, key information on financial data and risk factors, information on the Company, unresolved staff comments, employees, major shareholders, consolidated statements, markets, security, financial risks, defaults, dividend arrearages and delinquencies, controls and procedures, the code of ethics and financial statements. (A.L.B.)

  13. Total Absorption Spectroscopy

    International Nuclear Information System (INIS)

    Rubio, B.; Gelletly, W.

    2007-01-01

    The problem of determining the distribution of beta decay strength (B(GT)) as a function of excitation energy in the daughter nucleus is discussed. Total Absorption Spectroscopy is shown to provide a way of determining the B(GT) precisely. A brief history of such measurements and a discussion of the advantages and disadvantages of this technique, is followed by examples of two recent studies using the technique. (authors)

  14. A propositional CONEstrip algorithm

    NARCIS (Netherlands)

    E. Quaeghebeur (Erik); A. Laurent; O. Strauss; B. Bouchon-Meunier; R.R. Yager (Ronald)

    2014-01-01

    We present a variant of the CONEstrip algorithm for checking whether the origin lies in a finitely generated convex cone that can be open, closed, or neither. This variant is designed to deal efficiently with problems where the rays defining the cone are specified as linear combinations

  15. Modular Regularization Algorithms

    DEFF Research Database (Denmark)

    Jacobsen, Michael

    2004-01-01

    The class of linear ill-posed problems is introduced along with a range of standard numerical tools and basic concepts from linear algebra, statistics and optimization. Known algorithms for solving linear inverse ill-posed problems are analyzed to determine how they can be decomposed into indepen...

  16. Efficient graph algorithms

    Indian Academy of Sciences (India)

    Shortest path problems: we have a road network of cities and want to navigate between them. The rest of the talk: computing connectivities between all pairs of vertices, with an algorithm that is good with respect to both space and time for computing the exact solution.

  17. The Copenhagen Triage Algorithm

    DEFF Research Database (Denmark)

    Hasselbalch, Rasmus Bo; Plesner, Louis Lind; Pries-Heje, Mia

    2016-01-01

    is non-inferior to an existing triage model in a prospective randomized trial. METHODS: The Copenhagen Triage Algorithm (CTA) study is a prospective two-center, cluster-randomized, cross-over, non-inferiority trial comparing CTA to the Danish Emergency Process Triage (DEPT). We include patients ≥16 years...

  18. de Casteljau's Algorithm Revisited

    DEFF Research Database (Denmark)

    Gravesen, Jens

    1998-01-01

    It is demonstrated how all the basic properties of Bezier curves can be derived swiftly and efficiently without any reference to the Bernstein polynomials and essentially with only geometric arguments. This is achieved by viewing one step in de Casteljau's algorithm as an operator (the de Casteljau...
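
    A compact Python sketch of the operator view mentioned above: one de Casteljau step replaces the control polygon by pairwise linear interpolations, and repeating it until a single point remains evaluates the Bézier curve without any reference to the Bernstein polynomials.

      def de_casteljau(control_points, t):
          """Evaluate a Bezier curve at parameter t by repeated linear
          interpolation of the control points (de Casteljau's algorithm)."""
          points = [tuple(p) for p in control_points]
          while len(points) > 1:
              points = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
                        for p, q in zip(points, points[1:])]
          return points[0]

      # Quadratic Bezier with control points (0,0), (1,2), (2,0): its midpoint is (1, 1).
      print(de_casteljau([(0, 0), (1, 2), (2, 0)], 0.5))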

  19. Algorithms in ambient intelligence

    NARCIS (Netherlands)

    Aarts, E.H.L.; Korst, J.H.M.; Verhaegh, W.F.J.; Weber, W.; Rabaey, J.M.; Aarts, E.

    2005-01-01

    We briefly review the concept of ambient intelligence and discuss its relation with the domain of intelligent algorithms. By means of four examples of ambient intelligent systems, we argue that new computing methods and quantification measures are needed to bridge the gap between the class of

  20. General Algorithm (High level)

    Indian Academy of Sciences (India)

    General Algorithm (High level). Iteratively. Use Tightness Property to remove points of P1,..,Pi. Use random sampling to get a Random Sample (of enough points) from the next largest cluster, Pi+1. Use the Random Sampling Procedure to approximate ci+1 using the ...