WorldWideScience

Sample records for two-point lod scores

  1. Effect of misspecification of gene frequency on the two-point LOD score.

    Science.gov (United States)

    Pal, D K; Durner, M; Greenberg, D A

    2001-11-01

    In this study, we used computer simulation of simple and complex models to ask: (1) What is the penalty in evidence for linkage when the assumed gene frequency is far from the true gene frequency? (2) If the assumed gene frequency and inheritance model are misspecified in the analysis, can this lead to a higher maximum LOD score than that obtained under the true parameters? Linkage data simulated under simple dominant, recessive, dominant and recessive with reduced penetrance, and additive models were analysed assuming a single locus with both the correct and the incorrect dominance model and assuming a range of different gene frequencies. We found that misspecifying the analysis gene frequency led to little penalty in the maximum LOD score in all models examined, especially if the assumed gene frequency was lower than the generating one. Analysing linkage data assuming a gene frequency of the order of 0.01 for a dominant gene, and 0.1 for a recessive gene, appears to be a reasonable tactic in the majority of realistic situations, because underestimating the gene frequency, even when the true gene frequency is high, leads to little penalty in the LOD score.

  2. Another procedure for the preliminary ordering of loci based on two-point lod scores.

    Science.gov (United States)

    Curtis, D

    1994-01-01

    Because of the difficulty of performing full likelihood analysis over multiple loci and the large number of possible orders, a number of methods have been proposed for quickly evaluating orders and, to a lesser extent, for generating good orders. A new method is proposed which uses a function that is only moderately laborious to compute: the sum of lod scores between all pairs of loci. This function can be smoothly minimized by initially allowing the loci to be placed anywhere in space and only subsequently constraining them to lie along a one-dimensional map. Application of this approach to sample data suggests that it has promise and might usefully be combined with other methods when loci need to be ordered.
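    As a concrete illustration of the embedding idea, the following minimal Python sketch uses hypothetical pairwise lod scores. It is one plausible reading of the approach, not the paper's exact criterion: pairwise lods are converted into target distances (strong linkage pulls loci together), the loci are embedded on a line by least-squares stress minimization, and the order is read off the fitted coordinates. The lod matrix and the lod-to-distance mapping are invented for illustration.

    ```python
    # Sketch: order loci from pairwise linkage evidence (hypothetical data).
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical symmetric matrix of pairwise lod scores for 5 loci.
    lod = np.array([
        [0.0, 6.1, 3.0, 1.2, 0.4],
        [6.1, 0.0, 5.5, 2.1, 0.9],
        [3.0, 5.5, 0.0, 4.8, 1.5],
        [1.2, 2.1, 4.8, 0.0, 5.9],
        [0.4, 0.9, 1.5, 5.9, 0.0],
    ])
    n = lod.shape[0]

    # Map lods to target distances: stronger linkage => smaller distance.
    target = 1.0 / (lod + 0.5)
    np.fill_diagonal(target, 0.0)

    def stress(x):
        # Sum of squared differences between fitted and target distances.
        d = np.abs(x[:, None] - x[None, :])
        return ((d - target)[np.triu_indices(n, k=1)] ** 2).sum()

    fit = minimize(stress, np.random.default_rng(0).normal(size=n),
                   method="Nelder-Mead")
    print("suggested locus order:", np.argsort(fit.x))
    ```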

  3. The lod score method.

    Science.gov (United States)

    Rice, J P; Saccone, N L; Corbett, J

    2001-01-01

    The lod score method originated in a seminal article by Newton Morton in 1955. The method is broadly concerned with issues of power and the posterior probability of linkage, ensuring that a reported linkage has a high probability of being a true linkage. In addition, the method is sequential, so that pedigrees or lod curves may be combined from published reports to pool data for analysis. This approach has been remarkably successful for 50 years in identifying disease genes for Mendelian disorders. After discussing these issues, we consider the situation for complex disorders, where the maximum lod score (MLS) statistic shares some of the advantages of the traditional lod score approach but is limited by unknown power and the lack of sharing of the primary data needed to optimally combine analytic results. We may still learn from the lod score method as we explore new methods in molecular biology and genetic analysis to utilize the complete human DNA sequence and the cataloging of all human genes.

  4. Distribution of model-based multipoint heterogeneity lod scores.

    Science.gov (United States)

    Xing, Chao; Morris, Nathan; Xing, Guan

    2010-12-01

    The distribution of two-point heterogeneity lod scores (HLOD) has been intensively investigated because the conventional χ² approximation to the likelihood ratio test is not directly applicable. However, no study has investigated the distribution of the multipoint HLOD despite its wide application. Here we point out that, compared with the two-point HLOD, the multipoint HLOD essentially tests for homogeneity given linkage and follows a relatively simple limiting distribution ½χ²₀ + ½χ²₁, which can be obtained by established statistical theory. We further examine the theoretical result by simulation studies. © 2010 Wiley-Liss, Inc.
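    A short sketch of how the limiting distribution quoted above would be used in practice. It assumes the usual convention that 2·ln(10) times the lod-scale statistic is the likelihood-ratio statistic; the example input is arbitrary.

    ```python
    # Sketch: p-value for a multipoint HLOD under the 1/2*chi2_0 + 1/2*chi2_1
    # mixture cited above (an application of the abstract's result, not code
    # from the paper).
    from math import log
    from scipy.stats import chi2

    def hlod_pvalue(hlod: float) -> float:
        if hlod <= 0:
            return 1.0  # the point mass at zero absorbs non-positive scores
        lrt = 2.0 * log(10.0) * hlod  # lod scale -> likelihood-ratio scale
        return 0.5 * chi2.sf(lrt, df=1)

    print(hlod_pvalue(3.0))
    ```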

  5. The power and robustness of maximum LOD score statistics.

    Science.gov (United States)

    Yoo, Y J; Mendell, N R

    2008-07-01

    The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.

  6. Extension of the lod score: the mod score.

    Science.gov (United States)

    Clerget-Darpoux, F

    2001-01-01

    In 1955 Morton proposed the lod score method both for testing linkage between loci and for estimating the recombination fraction between them. If a disease is controlled by a gene at one of these loci, the lod score computation requires the prior specification of an underlying model that assigns the probabilities of genotypes from the observed phenotypes. To address the case of linkage studies for diseases with unknown mode of inheritance, we suggested (Clerget-Darpoux et al., 1986) extending the lod score function to a so-called mod score function. In this function, the variables are both the recombination fraction and the disease model parameters. Maximizing the mod score function over all these parameters amounts to maximizing the probability of marker data conditional on the disease status. In the absence of linkage, the mod score conforms to a chi-square distribution, with extra degrees of freedom in comparison to the lod score function (MacLean et al., 1993). The mod score is asymptotically maximal for the true disease model (Clerget-Darpoux and Bonaïti-Pellié, 1992; Hodge and Elston, 1994). Consequently, the power to detect linkage through the mod score will be highest when the space of models over which the maximization is performed includes the true model. On the other hand, one must avoid overparametrization of the model space. For example, when the approach is applied to affected sib pairs, only two constrained disease model parameters should be used (Knapp et al., 1994) for the mod score maximization. It is also important to emphasize the existence of a strong correlation between the disease gene location and the disease model. Consequently, there is poor resolution of the location of the susceptibility locus when the disease model at this locus is unknown. Of course, this is true regardless of the statistic used. The mod score may also be applied in a candidate gene strategy to model the potential effect of this gene in the disease. Since, however, it

  7. Major strengths and weaknesses of the lod score method.

    Science.gov (United States)

    Ott, J

    2001-01-01

    Strengths and weaknesses of the lod score method for human genetic linkage analysis are discussed. The main weakness is its requirement for the specification of a detailed inheritance model for the trait. Various strengths are identified. For example, the lod score (likelihood) method has optimality properties when the trait to be studied is known to follow a Mendelian mode of inheritance. The ELOD is a useful measure of the information content of the data. The lod score method can emulate various "nonparametric" methods, and this emulation is statistically equivalent to those methods. Finally, the possibility of building errors into the analysis will prove to be essential for the large amount of linkage and disequilibrium data expected in the near future.

  8. Distribution of lod scores in oligogenic linkage analysis.

    Science.gov (United States)

    Williams, J T; North, K E; Martin, L J; Comuzzie, A G; Göring, H H; Blangero, J

    2001-01-01

    In variance component oligogenic linkage analysis, the estimate of the residual additive genetic variance can hit its lower bound of zero when estimating the effect of the ith quantitative trait locus. Using quantitative trait Q1 from the Genetic Analysis Workshop 12 simulated general population data, we compare the observed lod scores from oligogenic linkage analysis with the empirical lod score distribution under a null model of no linkage. We find that zero residual additive genetic variance in the null model alters the usual distribution of the likelihood-ratio statistic.

  9. Posterior probability of linkage and maximal lod score.

    Science.gov (United States)

    Génin, E; Martinez, M; Clerget-Darpoux, F

    1995-01-01

    To detect linkage between a trait and a marker, Morton (1955) proposed calculating the lod score z(theta 1) at a given value theta 1 of the recombination fraction, and concluding linkage if z(theta 1) reaches +3. In practice, however, lod scores are calculated for different values of the recombination fraction between 0 and 0.5, and the test is based on the maximum value of the lod score, Zmax. Here we document how this deviation affects the probability that linkage does not in fact exist when linkage is concluded. This posterior probability of no linkage can be derived by using Bayes' theorem. It is less than 5% when the lod score at a predetermined theta 1 is used for the test. But for a Zmax of +3, we show that it can reach 16.4%. Thus, considering a composite alternative hypothesis instead of a single one decreases the reliability of the test. The reliability decreases rapidly when Zmax is less than +3: given a Zmax of +2.5, there is a 33% chance that linkage does not exist. Moreover, the posterior probability depends not only on the value of Zmax but also jointly on the family structures and on the genetic model. For a given Zmax, the chance that linkage exists may therefore vary.
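    The Bayes computation behind this argument can be sketched in a few lines. The snippet below assumes a prior probability of linkage (0.05 here, an illustrative value) and treats 10^Z as the likelihood ratio in favour of linkage; that premise holds for a lod computed at a prespecified theta but, as the abstract stresses, overstates the evidence carried by a maximized Zmax.

    ```python
    # Sketch: posterior probability of no linkage, assuming a prior and
    # treating 10**z as the likelihood ratio (valid for a fixed-theta lod).
    def posterior_no_linkage(z: float, prior_linked: float = 0.05) -> float:
        bf = 10.0 ** z  # assumed likelihood ratio, linked : unlinked
        post_linked = prior_linked * bf / (prior_linked * bf + (1 - prior_linked))
        return 1.0 - post_linked

    print(posterior_no_linkage(3.0))  # ~0.019 for a fixed-theta lod of +3
    ```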

  10. Lod score curves for phase-unknown matings.

    Science.gov (United States)

    Hulbert-Shearon, T; Boehnke, M; Lange, K

    1996-01-01

    For a phase-unknown nuclear family, we show that the likelihood and lod score are unimodal, and we describe conditions under which the maximum occurs at recombination fraction theta = 0, theta = 1/2, and 0 < theta < 1/2. These simply stated necessary and sufficient conditions seem to have escaped the notice of previous statistical geneticists.
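    For intuition, here is a minimal sketch of such a lod curve in the textbook fully informative setting (an assumed simplification, not the paper's general treatment): with offspring counts r and s in the two haplotype classes, the phase-unknown likelihood averages over the two possible parental phases.

    ```python
    # Sketch: phase-unknown lod curve for one nuclear family, assuming a
    # fully informative double-backcross mating (textbook setting).
    import numpy as np

    def lod_phase_unknown(theta, r, s):
        # Average the likelihood over the two equally likely parental phases.
        like = 0.5 * (theta**r * (1 - theta)**s + theta**s * (1 - theta)**r)
        return np.log10(like / 0.5 ** (r + s))

    theta = np.linspace(1e-4, 0.5, 500)
    z = lod_phase_unknown(theta, r=1, s=7)
    print("max lod %.3f at theta = %.3f" % (z.max(), theta[np.argmax(z)]))
    ```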

  11. Sensitivity of lod scores to changes in diagnostic status.

    Science.gov (United States)

    Hodge, S E; Greenberg, D A

    1992-05-01

    This paper investigates effects on lod scores when one individual in a data set changes diagnostic or recombinant status. First we examine the situation in which a single offspring in a nuclear family changes status. The nuclear-family situation, in addition to being of interest in its own right, also has general theoretical importance, since nuclear families are "transparent"; that is, one can track genetic events more precisely in nuclear families than in complex pedigrees. We demonstrate that in nuclear families log10 [(1-theta)/theta] gives an upper limit on the impact that a single offspring's change in status can have on the lod score at that recombination fraction (theta). These limits hold for a fully penetrant dominant condition and a fully informative marker, in either phase-known or phase-unknown matings. Moreover, log10 [(1-theta)/theta], where theta here denotes the value at which Zmax occurs, gives an upper limit on the impact of a single offspring's status change on the maximum lod score (Zmax). In extended pedigrees, in contrast to nuclear families, no comparable limit can be set on the impact of a single individual on the lod score; complex pedigrees are subject to both stabilizing and destabilizing influences, which we describe. Finally, we describe a "sensitivity analysis" in which, after all linkage analysis is completed, every informative individual in the data set is changed, one at a time, to see the effect each separate change has on the lod scores. The procedure includes identifying "critical individuals," i.e., those who would have the greatest impact on the lod scores should their diagnostic status in fact change. To illustrate use of the sensitivity analysis, we apply it to the large bipolar pedigree reported by Egeland et al. and Kelsoe et al. We show that the changes in lod scores observed there, on the order of 1.1-1.2 per person, are not unusual. We recommend that investigators include a sensitivity analysis as a
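    The bound itself is a one-liner; the sketch below simply tabulates log10[(1-theta)/theta] at a few recombination fractions to show how quickly the possible impact of a single status change grows as theta approaches zero.

    ```python
    # Sketch: the upper bound quoted above on how much one offspring's change
    # of status can move the lod score at recombination fraction theta.
    from math import log10

    def max_impact(theta: float) -> float:
        return log10((1 - theta) / theta)

    for t in (0.01, 0.05, 0.1, 0.2):
        print(f"theta = {t:4.2f}  bound = {max_impact(t):.2f} lod units")
    ```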

  12. Robust LOD scores for variance component-based linkage analysis.

    Science.gov (United States)

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  13. Comparison of empirical strategies to maximize GENEHUNTER lod scores.

    Science.gov (United States)

    Chen, C H; Finch, S J; Mendell, N R; Gordon, D

    1999-01-01

    We compare four strategies for finding the settings of genetic parameters that maximize the lod scores reported in GENEHUNTER 1.2. The four strategies are iterated complete factorial designs, iterated orthogonal Latin hypercubes, evolutionary operation, and numerical optimization. The genetic parameters that are set are the phenocopy rate, penetrance, and disease allele frequency; both recessive and dominant models are considered. We selected the optimization of a recessive model on the Collaborative Study on the Genetics of Alcoholism (COGA) data of chromosome 1 for complete analysis. Convergence to a setting producing a local maximum required the evaluation of over 100 settings (for a time budget of 800 minutes on a Pentium II 300 MHz PC). Two notable local maxima were detected, suggesting the need for a more extensive search before claiming that a global maximum had been found. The orthogonal Latin hypercube design was the best strategy for finding areas that produced high lod scores with small numbers of evaluations. Numerical optimization starting from a region producing high lod scores was the strategy that found the highest maximum observed.

  14. Quantification of type I error probabilities for heterogeneity LOD scores.

    Science.gov (United States)

    Abreu, Paula C; Hodge, Susan E; Greenberg, David A

    2002-02-01

    Locus heterogeneity is a major confounding factor in linkage analysis. When no prior knowledge of linkage exists, and one aims to detect linkage and heterogeneity simultaneously, classical distribution theory of log-likelihood ratios does not hold. Despite some theoretical work on this problem, no generally accepted practical guidelines exist. Nor has anyone rigorously examined the combined effect of testing for linkage and heterogeneity and simultaneously maximizing over two genetic models (dominant, recessive). The effect of linkage phase represents another uninvestigated issue. Using computer simulation, we investigated the type I error (P value) of the "admixture" heterogeneity LOD (HLOD) score, i.e., the LOD score maximized over both the recombination fraction theta and the admixture parameter alpha, and we compared this with the P values when one maximizes only with respect to theta (i.e., the standard LOD score). We generated datasets of phase-known and -unknown nuclear families with k = 2, 4, and 6 children, under fully penetrant autosomal dominant inheritance. We analyzed these datasets (1) assuming a single genetic model and maximizing the HLOD over theta and alpha; and (2) maximizing the HLOD additionally over two dominance models (dominant vs. recessive), then subtracting a 0.3 correction. For both (1) and (2), P values increased with family size k; rose less for phase-unknown families than for phase-known ones, with the former approaching the latter as k increased; and did not exceed the one-sided mixture distribution ξ = ½χ²₁ + ½χ²₂. Thus, maximizing the HLOD over theta and alpha appears to add considerably less than an additional degree of freedom to the associated χ²₁ distribution. We conclude with practical guidelines for linkage investigators. Copyright 2002 Wiley-Liss, Inc.

  15. Allele-sharing models: LOD scores and accurate linkage tests.

    Science.gov (United States)

    Kong, A; Cox, N J

    1997-11-01

    Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested.

  16. Direct power comparisons between simple LOD scores and NPL scores for linkage analysis in complex diseases.

    Science.gov (United States)

    Abreu, P C; Greenberg, D A; Hodge, S E

    1999-09-01

    Several methods have been proposed for linkage analysis of complex traits with unknown mode of inheritance. These methods include the LOD score maximized over disease models (MMLS) and the "nonparametric" linkage (NPL) statistic. In previous work, we evaluated the increase in type I error when maximizing over two or more genetic models, and we compared the power of MMLS to detect linkage, under a number of complex modes of inheritance, with analysis assuming the true model. In the present study, we compare MMLS and NPL directly. We simulated 100 data sets with 20 families each, using 26 generating models: (1) 4 intermediate models (penetrance of the heterozygote between those of the two homozygotes); (2) 6 two-locus additive models; and (3) 16 two-locus heterogeneity models (admixture alpha = 1.0, 0.7, 0.5, and 0.3; alpha = 1.0 replicates simple Mendelian models). For the LOD scores, we assumed dominant and recessive inheritance with 50% penetrance. We took the higher of the two maximum LOD scores and subtracted 0.3 to correct for multiple tests (MMLS-C). We compared expected maximum LOD scores and power, using MMLS-C and NPL as well as the true model. Since NPL uses only the affected family members, we also performed an affecteds-only analysis using MMLS-C. MMLS-C was uniformly more powerful than NPL in most cases we examined, except when linkage information was low, and was close to the results for the true model under locus heterogeneity. We still found better power for MMLS-C compared with NPL in affecteds-only analysis. The results show that use of two simple modes of inheritance at a fixed penetrance can have more power than NPL when the trait mode of inheritance is complex and when there is heterogeneity in the data set.
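    The MMLS-C statistic described above reduces to a very small computation once the two fixed-model lod scores are in hand; the sketch below (with made-up lod values) just takes the larger of the dominant and recessive maximum lods and subtracts the 0.3 multiple-test correction.

    ```python
    # Sketch: MMLS-C as described in the abstract (illustrative inputs).
    def mmls_c(lod_dominant: float, lod_recessive: float) -> float:
        # Higher of the two fixed-model maximum lods, minus the 0.3 correction.
        return max(lod_dominant, lod_recessive) - 0.3

    print(mmls_c(2.9, 1.4))  # -> 2.6
    ```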

  17. The power to detect linkage in complex disease by means of simple LOD-score analyses.

    Science.gov (United States)

    Greenberg, D A; Abreu, P; Hodge, S E

    1998-09-01

    Maximum-likelihood analysis (via LOD score) provides the most powerful method for finding linkage when the mode of inheritance (MOI) is known. However, because one must assume an MOI, the application of LOD-score analysis to complex disease has been questioned. Although it is known that one can legitimately maximize the maximum LOD score with respect to genetic parameters, this approach raises three concerns: (1) multiple testing, (2) effect on power to detect linkage, and (3) adequacy of the approximate MOI for the true MOI. We evaluated the power of LOD scores to detect linkage when the true MOI was complex but a LOD score analysis assumed simple models. We simulated data from 14 different genetic models, including dominant and recessive at high (80%) and low (20%) penetrances, intermediate models, and several additive two-locus models. We calculated LOD scores by assuming two simple models, dominant and recessive, each with 50% penetrance, then took the higher of the two LOD scores as the raw test statistic and corrected for multiple tests. We call this test statistic "MMLS-C." We found that the ELODs for MMLS-C are >=80% of the ELOD under the true model when the ELOD for the true model is >=3. Similarly, the power to reach a given LOD score was usually >=80% that of the true model, when the power under the true model was >=60%. These results underscore that a critical factor in LOD-score analysis is the MOI at the linked locus, not that of the disease or trait per se. Thus, a limited set of simple genetic models in LOD-score analysis can work well in testing for linkage.

  18. The score statistic of the LD-lod analysis: detecting linkage adaptive to linkage disequilibrium.

    Science.gov (United States)

    Huang, J; Jiang, Y

    2001-01-01

    We study the properties of a modified lod score method for testing linkage that incorporates linkage disequilibrium (LD-lod). By examination of its score statistic, we show that the LD-lod score method adaptively combines two sources of information: (a) the IBD sharing score which is informative for linkage regardless of the existence of LD and (b) the contrast between allele-specific IBD sharing scores which is informative for linkage only in the presence of LD. We also consider the connection between the LD-lod score method and the transmission-disequilibrium test (TDT) for triad data and the mean test for affected sib pair (ASP) data. We show that, for triad data, the recessive LD-lod test is asymptotically equivalent to the TDT; and for ASP data, it is an adaptive combination of the TDT and the ASP mean test. We demonstrate that the LD-lod score method has relatively good statistical efficiency in comparison with the ASP mean test and the TDT for a broad range of LD and the genetic models considered in this report. Therefore, the LD-lod score method is an interesting approach for detecting linkage when the extent of LD is unknown, such as in a genome-wide screen with a dense set of genetic markers. Copyright 2001 S. Karger AG, Basel

  19. Lod scores for gene mapping in the presence of marker map uncertainty.

    Science.gov (United States)

    Stringham, H M; Boehnke, M

    2001-07-01

    Multipoint lod scores are typically calculated for a grid of locus positions, moving the putative disease locus across a fixed map of genetic markers. Changing the order of a set of markers and/or the distances between the markers can make a substantial difference in the resulting lod score curve and the location and height of its maximum. The typical approach of using the best maximum likelihood marker map is not easily justified if other marker orders are nearly as likely and give substantially different lod score curves. To deal with this problem, we propose three weighted multipoint lod score statistics that make use of information from all plausible marker orders. In each of these statistics, the information conditional on a particular marker order is included in a weighted sum, with weight equal to the posterior probability of that order. We evaluate the type 1 error rate and power of these three statistics on the basis of results from simulated data, and compare these results to those obtained using the best maximum likelihood map and the map with the true marker order. We find that the lod score based on a weighted sum of maximum likelihoods improves on using only the best maximum likelihood map, having a type 1 error rate and power closest to that of using the true marker order in the simulation scenarios we considered. Copyright 2001 Wiley-Liss, Inc.
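    A minimal sketch of the weighting idea, under the assumption that order-specific maximum likelihoods are combined on the likelihood scale with posterior-probability weights and then returned to the lod scale; the paper's three statistics may differ in detail, and the inputs here are invented.

    ```python
    # Sketch: combine order-specific lod scores with posterior weights
    # (assumed construction; illustrative numbers).
    import numpy as np

    def weighted_lod(lods, posteriors):
        lods = np.asarray(lods, dtype=float)
        w = np.asarray(posteriors, dtype=float)
        w = w / w.sum()  # normalize the marker-order weights
        return np.log10((w * 10.0 ** lods).sum())

    # Three plausible marker orders: their lods and posterior probabilities.
    print(weighted_lod([3.2, 2.1, 0.4], [0.6, 0.3, 0.1]))
    ```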

  20. Percentiles of the null distribution of 2 maximum lod score tests.

    Science.gov (United States)

    Ulgen, Ayse; Yoo, Yun Joo; Gordon, Derek; Finch, Stephen J; Mendell, Nancy R

    2004-01-01

    We here consider the null distribution of the maximum lod score (LOD-M) obtained upon maximizing over transmission model parameters (penetrance values, dominance, and allele frequency) as well as the recombination fraction, and also the lod score maximized over a fixed choice of genetic model parameters and recombination-fraction values set prior to the analysis (MMLS), as proposed by Hodge et al. The objective is to fit parametric distributions to MMLS and LOD-M. Our results are based on 3,600 simulations of samples of n = 100 nuclear families ascertained for having one affected member and at least one other sibling available for linkage analysis. Each null distribution is approximately a mixture pχ²(0) + (1 - p)χ²(v). The values of MMLS appear to fit the mixture 0.20χ²(0) + 0.80χ²(1.6), and the mixture distribution 0.13χ²(0) + 0.87χ²(2.8) appears to describe the null distribution of LOD-M. From these results we derive a simple method for obtaining critical values of LOD-M and MMLS. Copyright 2004 S. Karger AG, Basel
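    Given the fitted mixtures, critical values follow by inverting the upper tail. The sketch below assumes, as is conventional, that 2·ln(10) times the lod statistic is the quantity following the fitted mixture; only the continuous component contributes to the upper tail.

    ```python
    # Sketch: critical lod values from a fitted p*chi2_0 + (1-p)*chi2_v null
    # mixture (assumes 2*ln(10)*lod follows the mixture).
    from math import log
    from scipy.stats import chi2

    def critical_lod(alpha: float, p0: float, df: float) -> float:
        # Solve (1 - p0) * P(chi2_df > x) = alpha, then convert to lod scale.
        x = chi2.isf(alpha / (1.0 - p0), df)
        return x / (2.0 * log(10.0))

    print("MMLS  5% critical value:", round(critical_lod(0.05, 0.20, 1.6), 2))
    print("LOD-M 5% critical value:", round(critical_lod(0.05, 0.13, 2.8), 2))
    ```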

  1. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    Science.gov (United States)

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-06-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented.

  2. Validation of the LOD score compared with APACHE II score in prediction of the hospital outcome in critically ill patients.

    Science.gov (United States)

    Khwannimit, Bodin

    2008-01-01

    The Logistic Organ Dysfunction (LOD) score is an organ dysfunction score that can predict hospital mortality. The aim of this study was to validate the performance of the LOD score compared with the Acute Physiology and Chronic Health Evaluation II (APACHE II) score in a mixed intensive care unit (ICU) at a tertiary referral university hospital in Thailand. The data were collected prospectively on consecutive ICU admissions over a 24-month period from July 1, 2004 until June 30, 2006. Discrimination was evaluated by the area under the receiver operating characteristic curve (AUROC), calibration was assessed by the Hosmer-Lemeshow goodness-of-fit H statistic, and the overall fit of the model was evaluated by the Brier score. Overall, 1,429 patients were enrolled during the study period. Mortality was 20.9% in the ICU and 27.9% in the hospital. The median ICU and hospital lengths of stay were 3 and 18 days, respectively, for all patients. Both models showed excellent discrimination: the AUROC was 0.860 [95% confidence interval (CI) = 0.838-0.882] for the LOD and 0.898 (95% CI = 0.879-0.917) for the APACHE II. The LOD score had good calibration (Hosmer-Lemeshow H χ² = 10, p = 0.44), whereas the APACHE II had poor calibration (Hosmer-Lemeshow H χ² = 75.69, p < 0.001). The Brier score was 0.123 (95% CI = 0.107-0.141) for the LOD and 0.114 (95% CI = 0.098-0.132) for the APACHE II. Thus, the LOD score was found to be accurate for predicting hospital mortality in general critically ill patients in Thailand.

  3. Easy calculations of lod scores and genetic risks on small computers.

    Science.gov (United States)

    Lathrop, G M; Lalouel, J M

    1984-01-01

    A computer program that calculates lod scores and genetic risks for a wide variety of both qualitative and quantitative genetic traits is discussed. An illustration is given of the joint use of a genetic marker, affection status, and quantitative information in counseling situations regarding Duchenne muscular dystrophy. PMID:6585139
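    The elementary computation underlying such programs fits comfortably on any small machine. The sketch below shows the textbook two-point lod for a phase-known, fully informative mating with k recombinants in n meioses; it illustrates the kind of calculation involved, not the program itself.

    ```python
    # Sketch: two-point lod curve for k recombinants among n phase-known,
    # fully informative meioses (textbook formula, illustrative counts).
    import numpy as np

    def lod(theta, k, n):
        # log10 of [theta^k * (1-theta)^(n-k)] / (1/2)^n
        return (k * np.log10(theta) + (n - k) * np.log10(1 - theta)
                + n * np.log10(2))

    grid = np.linspace(1e-4, 0.5, 500)
    z = lod(grid, k=2, n=20)
    print("Zmax = %.2f at theta = %.3f" % (z.max(), grid[np.argmax(z)]))
    ```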

  4. Comparison of multipoint linkage analyses for quantitative traits in the CEPH data: parametric LOD scores, variance components LOD scores, and Bayes factors.

    Science.gov (United States)

    Sung, Yun Ju; Di, Yanming; Fu, Audrey Q; Rothstein, Joseph H; Sieh, Weiva; Tong, Liping; Thompson, Elizabeth A; Wijsman, Ellen M

    2007-01-01

    We performed multipoint linkage analyses with multiple programs and models for several gene expression traits in the Centre d'Etude du Polymorphisme Humain families. All analyses provided consistent results for both peak location and shape. Variance-components (VC) analysis gave wider peaks and Bayes factors gave fewer peaks. Among programs from the MORGAN package, lm_multiple performed better than lm_markers, resulting in less Markov-chain Monte Carlo (MCMC) variability between runs, and the program lm_twoqtl provided higher LOD scores by also including either a polygenic component or an additional quantitative trait locus.

  5. Smoothing of the bivariate LOD score for non-normal quantitative traits.

    Science.gov (United States)

    Buil, Alfonso; Dyer, Thomas D; Almasy, Laura; Blangero, John

    2005-12-30

    Variance component analysis provides an efficient method for performing linkage analysis for quantitative traits. However, the type I error of variance components-based likelihood ratio testing may be affected when phenotypic data are non-normally distributed (especially with high values of kurtosis). This results in inflated LOD scores when the normality assumption does not hold. Even though different solutions have been proposed to deal with this problem for univariate phenotypes, little work has been done in the multivariate case. We present an empirical approach to adjust the inflated LOD scores obtained from a bivariate phenotype that violates the assumption of normality. Using the Collaborative Study on the Genetics of Alcoholism data available for the Genetic Analysis Workshop 14, we show how bivariate linkage analysis with leptokurtic traits gives an inflated type I error. We perform a novel correction that achieves acceptable levels of type I error.

  6. Conclusion of LOD-score analysis for family data generated under two-locus models.

    Science.gov (United States)

    Dizier, M H; Babron, M C; Clerget-Darpoux, F

    1996-06-01

    The power to detect linkage by the LOD-score method is investigated here for diseases that depend on the effects of two genes. The classical strategy is, first, to detect a major-gene (MG) effect by segregation analysis and, second, to search for linkage with genetic markers by the LOD-score method using the MG parameters. We have already shown that segregation analysis can lead to evidence for an MG effect for many two-locus models, with the estimates of the MG parameters being very different from those of the two genes involved in the disease. We show here that use of these MG parameter estimates in the LOD-score analysis may lead to a failure to detect linkage for some two-locus models. For these models, use of the sib-pair method gives a non-negligible increase in power to detect linkage. The linkage-homogeneity test among subsamples differing in familial disease distribution provides evidence of parameter misspecification when the MG parameters are used. Moreover, for most of the models, use of the MG parameters in LOD-score analysis leads to a large bias in estimation of the recombination fraction and sometimes also to a rejection of linkage for the true recombination fraction. A final important point is that strong evidence of an MG effect, obtained by segregation analysis, does not necessarily imply that linkage will be detected for at least one of the two genes, even with the true parameters and with a close informative marker.

  7. Conclusions of LOD-score analysis for family data generated under two-locus models

    Energy Technology Data Exchange (ETDEWEB)

    Dizier, M.H.; Babron, M.C.; Clerget-Darpoux, F. [Unite de Recherches d'Epidemiologie Genetique, Paris (France)]

    1996-06-01

    The power to detect linkage by the LOD-score method is investigated here for diseases that depend on the effects of two genes. The classical strategy is, first, to detect a major-gene (MG) effect by segregation analysis and, second, to search for linkage with genetic markers by the LOD-score method using the MG parameters. We have already shown that segregation analysis can lead to evidence for an MG effect for many two-locus models, with the estimates of the MG parameters being very different from those of the two genes involved in the disease. We show here that use of these MG parameter estimates in the LOD-score analysis may lead to a failure to detect linkage for some two-locus models. For these models, use of the sib-pair method gives a non-negligible increase in power to detect linkage. The linkage-homogeneity test among subsamples differing in familial disease distribution provides evidence of parameter misspecification when the MG parameters are used. Moreover, for most of the models, use of the MG parameters in LOD-score analysis leads to a large bias in estimation of the recombination fraction and sometimes also to a rejection of linkage for the true recombination fraction. A final important point is that strong evidence of an MG effect, obtained by segregation analysis, does not necessarily imply that linkage will be detected for at least one of the two genes, even with the true parameters and with a close informative marker. 17 refs., 3 tabs.

  8. Multilocus lod scores in large pedigrees: combination of exact and approximate calculations.

    Science.gov (United States)

    Tong, Liping; Thompson, Elizabeth

    2008-01-01

    To detect the positions of disease loci, lod scores are calculated at multiple chromosomal positions given trait and marker data on members of pedigrees. Exact lod score calculations are often impossible when the size of the pedigree and the number of markers are both large. In this case, a Markov chain Monte Carlo (MCMC) approach provides an approximation. However, to provide accurate results, mixing performance is always a key issue in these MCMC methods. In this paper, we propose two methods to improve MCMC sampling and hence obtain more accurate lod score estimates in shorter computation time. The first improvement generalizes the block-Gibbs meiosis (M) sampler to a multiple-meiosis (MM) sampler in which multiple meioses are updated jointly, across all loci. The second divides the computations on a large pedigree into several parts by conditioning on the haplotypes of some 'key' individuals. We perform exact calculations for the descendant parts, where more data are often available, and combine this information with sampling of the hidden variables in the ancestral parts. Our approaches are expected to be most useful for data on a large pedigree with a lot of missing data. (c) 2007 S. Karger AG, Basel

  9. LOD score exclusion analyses for candidate QTLs using random population samples.

    Science.gov (United States)

    Deng, Hong-Wen

    2003-11-01

    While extensive analyses have been conducted to test for the importance of candidate genes as putative QTLs using random population samples, no formal analyses have been conducted to test against it. Previously, we developed an LOD score exclusion mapping approach for candidate genes for complex diseases. Here, we extend this LOD score approach to exclusion analyses of candidate genes for quantitative traits. Under this approach, specific genetic effects (as reflected by heritability) and inheritance models at candidate QTLs can be analyzed, and if an LOD score is ≤ -2.0, the locus can be excluded from having a heritability larger than that specified. Simulations show that this approach has high power to exclude a candidate gene from having moderate genetic effects if it is not a QTL and is robust to population admixture. Our exclusion analysis complements association analysis for candidate genes as putative QTLs in random population samples. The approach is applied to test the importance of the vitamin D receptor (VDR) gene as a potential QTL underlying the variation of bone mass, an important determinant of osteoporosis.
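    The decision rule is simple to state in code. In the sketch below the exclusion lods at each specified heritability are hypothetical; any specified heritability whose lod falls at or below -2 is excluded, along with anything larger.

    ```python
    # Sketch: the exclusion rule from the abstract, on hypothetical lods.
    specified_h2 = [0.05, 0.10, 0.20, 0.30]
    lods = [-0.4, -1.1, -2.3, -3.5]  # exclusion lods at each specified h2

    excluded = [h2 for h2, z in zip(specified_h2, lods) if z <= -2.0]
    # The locus is excluded from having heritability larger than the smallest
    # excluded value.
    print("excluded from h2 >=", min(excluded))  # -> 0.2
    ```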

  10. Effect of heterogeneity and assumed mode of inheritance on lod scores.

    Science.gov (United States)

    Durner, M; Greenberg, D A

    1992-02-01

    Heterogeneity is a major factor in many common, complex diseases and can confound linkage analysis. Using computer-simulated heterogeneous data we tested what effect unlinked families have on a linkage analysis when heterogeneity is not taken into account. We created 60 data sets of 40 nuclear families each with different proportions of linked and unlinked families and with different modes of inheritance. The ascertainment probability was 0.05, the disease had a penetrance of 0.6, and the recombination fraction for the linked families was zero. For the analysis we used a variety of assumed modes of inheritance and penetrances. Under these conditions we looked at the effect of the unlinked families on the lod score, the evaluation of the mode of inheritance, and the estimate of penetrance and of the recombination fraction in the linked families. 1. When the analysis was done under the correct mode of inheritance for the linked families, we found that the mode of inheritance of the unlinked families had minimal influence on the highest maximum lod score (MMLS) (i.e., we maximized the maximum lod score with respect to penetrance). Adding sporadic families decreased the MMLS less than adding recessive or dominant unlinked families. 2. The mixtures of dominant linked families with unlinked families always led to a higher MMLS when analyzed under the correct (dominant) mode of inheritance than when analyzed under the incorrect mode of inheritance. In the mixtures with recessive linked families, assuming the correct mode of inheritance generally led to a higher MMLS, but we observed broad variation.(ABSTRACT TRUNCATED AT 250 WORDS)

  11. Hereditary spastic paraplegia: LOD-score considerations for confirmation of linkage in a heterogeneous trait

    Energy Technology Data Exchange (ETDEWEB)

    Dube, M.P.; Kibar, Z.; Rouleau, G.A. [McGill Univ., Quebec (Canada)] [and others]

    1997-03-01

    Hereditary spastic paraplegia (HSP) is a degenerative disorder of the motor system, defined by progressive weakness and spasticity of the lower limbs. HSP may be inherited as an autosomal dominant (AD), autosomal recessive, or X-linked trait. AD HSP is genetically heterogeneous, and three loci have been identified so far: SPG3 maps to chromosome 14q, SPG4 to 2p, and SPG4a to 15q. We have undertaken linkage analysis of 21 uncomplicated AD families at the three AD HSP loci. We report significant linkage to the SPG4 locus for three of our families and exclude several families by multipoint linkage. We used linkage information from several different research teams to evaluate the statistical probability of linkage to the SPG4 locus for uncomplicated AD HSP families, and established from Bayesian statistics the critical LOD-score value necessary for confirmation of linkage to the SPG4 locus. In addition, we calculated empirical P-values for the LOD scores obtained for all families, using computer simulation methods. Power to detect significant linkage, as well as type I error probabilities, were evaluated. This combined analytical approach permitted conclusive linkage analyses on small to medium-size families, under the restrictions of genetic heterogeneity. 19 refs., 1 fig., 1 tab.

  12. Hereditary spastic paraplegia: LOD-score considerations for confirmation of linkage in a heterogeneous trait.

    Science.gov (United States)

    Dubé, M P; Mlodzienski, M A; Kibar, Z; Farlow, M R; Ebers, G; Harper, P; Kolodny, E H; Rouleau, G A; Figlewicz, D A

    1997-03-01

    Hereditary spastic paraplegia (HSP) is a degenerative disorder of the motor system, defined by progressive weakness and spasticity of the lower limbs. HSP may be inherited as an autosomal dominant (AD), autosomal recessive, or X-linked trait. AD HSP is genetically heterogeneous, and three loci have been identified so far: SPG3 maps to chromosome 14q, SPG4 to 2p, and SPG4a to 15q. We have undertaken linkage analysis of 21 uncomplicated AD families at the three AD HSP loci. We report significant linkage to the SPG4 locus for three of our families and exclude several families by multipoint linkage. We used linkage information from several different research teams to evaluate the statistical probability of linkage to the SPG4 locus for uncomplicated AD HSP families, and established from Bayesian statistics the critical LOD-score value necessary for confirmation of linkage to the SPG4 locus. In addition, we calculated empirical P-values for the LOD scores obtained for all families, using computer simulation methods. Power to detect significant linkage, as well as type I error probabilities, were evaluated. This combined analytical approach permitted conclusive linkage analyses on small to medium-size families, under the restrictions of genetic heterogeneity.

  13. Replication of linkage to quantitative trait loci: variation in location and magnitude of the lod score.

    Science.gov (United States)

    Hsueh, W C; Göring, H H; Blangero, J; Mitchell, B D

    2001-01-01

    Replication of linkage signals from independent samples is considered an important step toward verifying the significance of linkage signals in studies of complex traits. The purpose of this empirical investigation was to examine the variability in the precision of localizing a quantitative trait locus (QTL) by analyzing multiple replicates of a simulated data set with variance components-based methods. Specifically, we evaluated across replicates the variation in both the magnitude and the location of the peak lod scores. We analyzed QTLs whose effects accounted for 10-37% of the phenotypic variance in the quantitative traits. Our analyses revealed that the precision of QTL localization was directly related to the magnitude of the QTL effect. For a QTL accounting for > 20% of total phenotypic variation, > 90% of the linkage peaks fell within 10 cM of the true gene location. We found no evidence that, for a given magnitude of the lod score, the presence of interaction influenced the precision of QTL localization.

  14. LOD score exclusion analyses for candidate genes using random population samples.

    Science.gov (United States)

    Deng, H W; Li, J; Recker, R R

    2001-05-01

    While extensive analyses have been conducted to test for the importance of candidate genes with random population samples, no formal analyses have been conducted to test against it. We develop a LOD score approach for exclusion analyses of candidate genes with random population samples. Under this approach, specific genetic effects and inheritance models at candidate genes can be analysed, and if a LOD score is ≤ -2.0, the locus can be excluded from having an effect larger than that specified. Computer simulations show that, with sample sizes often employed in association studies, this approach has high power to exclude a gene from having moderate genetic effects. In contrast to regular association analyses, population admixture will not affect the robustness of our analyses; in fact, it renders our analyses more conservative, and thus any significant exclusion result is robust. Our exclusion analysis complements association analysis for candidate genes in random population samples and parallels the exclusion mapping analyses that may be conducted in linkage analyses with pedigrees or relative pairs. The usefulness of the approach is demonstrated by an application testing the importance of the vitamin D receptor and estrogen receptor genes in the differential risk of osteoporotic fractures.

  15. D-dimer as marker for microcirculatory failure: correlation with LOD and APACHE II scores.

    Science.gov (United States)

    Angstwurm, Matthias W A; Reininger, Armin J; Spannagl, Michael

    2004-01-01

    The relevance of plasma d-dimer levels as a marker for morbidity and organ dysfunction in severely ill patients is largely unknown. In a prospective study, we determined d-dimer plasma levels of 800 unselected patients at admission to our intensive care unit. In 91% of the patients' samples, d-dimer levels were elevated, in some patients up to several hundredfold above normal values. The highest mean d-dimer values were present in the patient group with thromboembolic diseases, particularly in non-survivors of pulmonary embolism. In patients with circulatory impairment (r = 0.794) and in patients with infections (r = 0.487), a statistically significant correlation was present between d-dimer levels and the APACHE II score (P < 0.001). The logistic organ dysfunction (LOD) score correlated with d-dimer levels only in patients with circulatory impairment (r = 0.474, P < 0.001). In contrast, patients without circulatory impairment showed no correlation of d-dimer levels with the APACHE II or LOD score. Taking all patients together, no correlation of d-dimer levels with single organ failure or with indicators of infection could be detected. In conclusion, d-dimer plasma levels strongly correlated with the severity of disease and organ dysfunction in patients with circulatory impairment or infections, suggesting that elevated d-dimer levels may reflect the extent of microcirculatory failure. Thus, a therapeutic strategy to improve the microcirculation in such patients may be monitored using d-dimer plasma levels.

  16. Serial evaluation of the MODS, SOFA and LOD scores to predict ICU mortality in mixed critically ill patients.

    Science.gov (United States)

    Khwannimit, Bodin

    2008-09-01

    To perform a serial assessment and compare the ability of the multiple organ dysfunction score (MODS), sequential organ failure assessment (SOFA), and logistic organ dysfunction (LOD) score to predict intensive care unit (ICU) mortality. The data were collected prospectively on consecutive ICU admissions over a 24-month period at a tertiary referral university hospital. The MODS, SOFA, and LOD scores were calculated at admission and repeated every 24 h. In total, 2,054 patients were enrolled in the study. The maximum and delta scores of all the organ dysfunction scores correlated with ICU mortality. The maximum score of each model had better ability to predict ICU mortality than the initial or delta score. The area under the receiver operating characteristic curve (AUC) for the maximum scores was 0.892 for the MODS, 0.907 for the SOFA, and 0.920 for the LOD. No statistical difference existed between any of the maximum scores and the Acute Physiology and Chronic Health Evaluation II (APACHE II) score. Serial assessment of organ dysfunction during the ICU stay reliably tracks ICU mortality, and the maximum scores have discrimination comparable with the APACHE II score in predicting ICU mortality.

  17. Linkage analysis in nuclear families. 2: Relationship between affected sib-pair tests and lod score analysis.

    Science.gov (United States)

    Knapp, M; Seuchter, S A; Baur, M P

    1994-01-01

    It is believed that the main advantage of affected sib-pair tests is that their application requires no information about the underlying genetic mechanism of the disease. However, here it is proved that the mean test, which can be considered the most prominent of the affected sib-pair tests, is equivalent to lod score analysis under an assumed recessive mode of inheritance, irrespective of the true mode of the disease. Further relationships between certain sib-pair tests and lod score analysis under specific assumed genetic modes are investigated.

  18. Using lod scores to detect sex differences in male-female recombination fractions.

    Science.gov (United States)

    Feenstra, B; Greenberg, D A; Hodge, S E

    2004-01-01

    Human recombination fraction (RF) can differ between males and females, but investigators do not always know which disease genes are located in genomic areas of large RF sex differences. Knowledge of RF sex differences contributes to our understanding of basic biology and can increase the power of a linkage study, improve gene localization, and provide clues to possible imprinting. One way to detect these differences is to use lod scores. In this study we focused on detecting RF sex differences and answered the following questions, for both phase-known and phase-unknown matings: (1) How large a sample size is needed to detect an RF sex difference? (2) What are "optimal" proportions of paternally vs. maternally informative matings? (3) Does ascertaining nonoptimal proportions of paternally or maternally informative matings lead to ascertainment bias? Our results were as follows: (1) We calculated expected lod scores (ELODs) under two different conditions: "unconstrained," allowing sex-specific RF parameters (theta(female), theta(male)); and "constrained," requiring theta(female) = theta(male). We then examined ΔELOD (defined as the difference between the maximized constrained and unconstrained ELODs) and calculated the minimum sample sizes required to achieve statistically significant values of ΔELOD. For large RF sex differences, samples as small as 10 to 20 fully informative matings can achieve statistical significance. We give general sample size guidelines for detecting RF differences in informative phase-known and phase-unknown matings. (2) We defined p as the proportion of paternally informative matings in the dataset, and the optimal proportion p° as the value of p that maximizes ΔELOD. We determined that, surprisingly, p° does not necessarily equal 1/2, although it does fall between approximately 0.4 and 0.6 in most situations. (3) We showed that if p in a sample deviates from its optimal value, no bias is introduced (asymptotically) to the maximum

  19. Pseudoautosomal region in schizophrenia: linkage analysis of seven loci by sib-pair and lod-score methods.

    Science.gov (United States)

    d'Amato, T; Waksman, G; Martinez, M; Laurent, C; Gorwood, P; Campion, D; Jay, M; Petit, C; Savoye, C; Bastard, C

    1994-05-01

    In a previous study, we reported nonrandom segregation between schizophrenia and the pseudoautosomal locus DXYS14 in a sample of 33 sibships. That study has been extended by the addition of 16 new sibships from 16 different families. Data from six other loci of the pseudoautosomal region and of the immediately adjacent part of the X-specific region have also been analyzed. Two methods of linkage analysis were used: the affected sibling pair (ASP) method and the lod-score method. Lod-score analyses were performed on the basis of three different models (A, B, and C), all shown to be consistent with the epidemiological data on schizophrenia. No clear evidence for linkage was obtained with any of these models. However, whatever the genetic model and the disease classification, maximum lod scores were positive with most of the markers, with the highest scores generally obtained for the DXYS14 locus. When the ASP method was used, the earlier finding of nonrandom segregation between schizophrenia and the DXYS14 locus was still supported in this larger data set, at an increased level of statistical significance. Findings of ASP analyses were not significant for the other loci. Thus, the findings obtained with the ASP method, but not the lod-score method, were consistent with the pseudoautosomal hypothesis for schizophrenia.

  20. Accuracy of a composite score using daily SAPS II and LOD scores for predicting hospital mortality in ICU patients hospitalized for more than 72 h.

    Science.gov (United States)

    Timsit, J F; Fosse, J P; Troché, G; De Lassence, A; Alberti, C; Garrouste-Orgeas, M; Azoulay, E; Chevret, S; Moine, P; Cohen, Y

    2001-06-01

    In most databases used to build general severity scores, the median duration of intensive care unit (ICU) stay is less than 3 days. Consequently, these scores are not the most appropriate tools for measuring prognosis in studies dealing with ICU patients hospitalized for more than 72 h. Our objective was to develop a new prognostic model based on a general severity score (SAPS II), an organ dysfunction score (LOD), and the evolution of both scores during the first 3 days of ICU stay, in a prospective multicenter study of 28 intensive care units (ICUs) in France. A training data set was created from four ICUs over an 18-month period (893 patients). Seventy percent of the patients (628) were medical, with a median age of 66 years. The median SAPS II was 38. ICU and hospital mortality rates were 22.7% and 30%, respectively. Forty-seven percent (420 patients) were transferred from hospital wards. In this population, the calibration (Hosmer-Lemeshow chi-square: 37.4, P = 0.001) and the discrimination [area under the ROC curve: 0.744 (95% CI: 0.714-0.773)] of the original SAPS II were relatively poor. A validation data set was created from a random panel of 24 French ICUs during March 1999 (312 patients). The LOD and SAPS II scores were calculated during the first (SAPS1, LOD1), second (SAPS2, LOD2), and third (SAPS3, LOD3) calendar days. Score alterations were assigned the value "1" when scores increased with time and "0" otherwise. A multivariable logistic regression model was used to select variables measured during the first three calendar days and independently associated with death. Selected variables were: SAPS II at admission [OR: 1.04 (95% CI: 1.027-1.053) per point], LOD [OR: 1.16 (95% CI: 1.085-1.253) per point], transfer from ward [OR: 1.74 (95% CI: 1.25-2.42)], SAPS3-SAPS2 alteration [OR: 1.516 (95% CI: 1.04-2.22)], and LOD3-LOD2 alteration [OR: 2.00 (95% CI: 1.29-3.11)]. The final model has good calibration and discrimination properties in the

  1. The quantitative LOD score: test statistic and sample size for exclusion and linkage of quantitative traits in human sibships.

    Science.gov (United States)

    Page, G P; Amos, C I; Boerwinkle, E

    1998-04-01

    We present a test statistic, the quantitative LOD (QLOD) score, for testing both linkage and exclusion of quantitative-trait loci in randomly selected human sibships. As with the traditional LOD score, the boundary values of 3, for linkage, and -2, for exclusion, can be used for the QLOD score. We investigated the sample sizes required for inferring exclusion and linkage, for various combinations of linked genetic variance, total heritability, recombination distance, and sibship size, using fixed-size sampling. The sample sizes required for linkage and for exclusion were not qualitatively different and depended on the percentage of variance being linked or excluded and on the total genetic variance. Information regarding linkage and exclusion in sibships larger than size 2 increased approximately as the number of possible pairs, n(n-1)/2, up to sibships of size 6. Increasing the recombination distance theta between the marker and the trait loci empirically reduced the power for both linkage and exclusion, approximately as a function of (1-2theta)^4.
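    The two design quantities the abstract reports are easy to tabulate; the sketch below reproduces the pair-count growth n(n-1)/2 with sibship size and the approximate information decay (1-2theta)^4 with marker distance.

    ```python
    # Sketch: pair-count growth and information decay quoted in the abstract.
    def pairs(n: int) -> int:
        return n * (n - 1) // 2          # all possible sib pairs

    def info_decay(theta: float) -> float:
        return (1.0 - 2.0 * theta) ** 4  # approximate attenuation with theta

    print([pairs(n) for n in range(2, 7)])     # [1, 3, 6, 10, 15]
    print(info_decay(0.05), info_decay(0.20))  # attenuation at 5% and 20% RF
    ```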

  2. Genome scan for linkage to asthma using a linkage disequilibrium-lod score test.

    Science.gov (United States)

    Jiang, Y; Slager, S L; Huang, J

    2001-01-01

    We report a genome-wide linkage study of asthma on the German and Collaborative Study on the Genetics of Asthma (CSGA) data. Using a combined linkage and linkage disequilibrium test and the nonparametric linkage score, we identified 13 markers from the German data, 1 marker from the African American (CSGA) data, and 7 markers from the Caucasian (CSGA) data for which the p-values ranged between 0.0001 and 0.0100. From our analysis, and taking into account previously published linkage studies of asthma, we suggest that three regions on chromosome 5 (around D5S418, D5S644, and D5S422), one region on chromosome 6 (around three neighboring markers D6S1281, D6S291, and D6S1019), one region on chromosome 11 (around D11S2362), and two regions on chromosome 12 (around D12S351 and D12S324) especially merit further investigation.

  3. Hepatic fat quantification using the two-point Dixon method and fat color maps based on non-alcoholic fatty liver disease activity score.

    Science.gov (United States)

    Hayashi, Tatsuya; Saitoh, Satoshi; Takahashi, Junji; Tsuji, Yoshinori; Ikeda, Kenji; Kobayashi, Masahiro; Kawamura, Yusuke; Fujii, Takeshi; Inoue, Masafumi; Miyati, Tosiaki; Kumada, Hiromitsu

    2017-04-01

    The two-point Dixon method for magnetic resonance imaging (MRI) is commonly used to non-invasively measure fat deposition in the liver. The aim of the present study was to assess the usefulness of the MRI-fat fraction (MRI-FF) using the two-point Dixon method based on the non-alcoholic fatty liver disease activity score. This retrospective study included 106 patients who underwent liver MRI and MR spectroscopy, and 201 patients who underwent liver MRI and histological assessment. The relationship between MRI-FF and MR spectroscopy-fat fraction was used to estimate the corrected MRI-FF for hepatic multi-peaks of fat. Then, a color FF map was generated with the corrected MRI-FF based on the non-alcoholic fatty liver disease activity score. We defined FF variability as the standard deviation of FF in regions of interest. Uniformity of hepatic fat was visually graded on a three-point scale using both gray-scale and color FF maps. Confounding effects of histology (iron, inflammation and fibrosis) on corrected MRI-FF were assessed by multiple linear regression. The linear correlations between MRI-FF and MR spectroscopy-fat fraction, and between corrected MRI-FF and histological steatosis, were strong (R² = 0.90 and R² = 0.88, respectively). Liver fat variability significantly increased with visual fat uniformity grade using both of the maps (ρ = 0.67-0.69, both significant). Hepatic iron, inflammation and fibrosis had no significant confounding effects on the corrected MRI-FF (all P > 0.05). The two-point Dixon method and the gray-scale or color FF maps based on the non-alcoholic fatty liver disease activity score were useful for fat quantification in the liver of patients without severe iron deposition. © 2016 The Japan Society of Hepatology.
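
    For orientation, a minimal sketch of the two-point Dixon computation, assuming the usual signal model in which the in-phase image is water plus fat and the opposed-phase image is water minus fat. It ignores the multi-peak fat-spectrum correction that the study calibrates against MR spectroscopy, and it assumes water-dominant voxels.

    ```python
    import numpy as np

    # Assumed two-point Dixon signal model (not the study's calibrated
    # version): in-phase IP = W + F, opposed-phase OP = W - F, so the fat
    # fraction is FF = F / (W + F) = (IP - OP) / (2 * IP).
    def dixon_fat_fraction(ip: np.ndarray, op: np.ndarray) -> np.ndarray:
        fat = (ip - op) / 2.0
        ff = np.divide(fat, ip, out=np.zeros_like(fat), where=ip > 0)
        # Magnitude images cannot distinguish FF from 1 - FF; assume
        # water-dominant voxels, as is typical in liver imaging.
        return np.clip(ff, 0.0, 1.0)

    def steatosis_grade(ff: float) -> int:
        # Steatosis bands of the NAFLD activity score (<5%, 5-33%, 33-66%,
        # >66%), which the study's color map is keyed to.
        for grade, upper in enumerate((0.05, 0.33, 0.66)):
            if ff < upper:
                return grade
        return 3
    ```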

  4. Clustering patterns of LOD scores for asthma-related phenotypes revealed by a genome-wide screen in 295 French EGEA families.

    Science.gov (United States)

    Bouzigon, Emmanuelle; Dizier, Marie-Hélène; Krähenbühl, Christine; Lemainque, Arnaud; Annesi-Maesano, Isabella; Betard, Christine; Bousquet, Jean; Charpin, Denis; Gormand, Frédéric; Guilloud-Bataille, Michel; Just, Jocelyne; Le Moual, Nicole; Maccario, Jean; Matran, Régis; Neukirch, Françoise; Oryszczyn, Marie-Pierre; Paty, Evelyne; Pin, Isabelle; Rosenberg-Bourgin, Myriam; Vervloet, Daniel; Kauffmann, Francine; Lathrop, Mark; Demenais, Florence

    2004-12-15

    A genome-wide scan for asthma phenotypes was conducted in the whole sample of 295 EGEA families selected through at least one asthmatic subject. In addition to asthma, seven phenotypes involved in the main asthma physiopathological pathways were considered: SPT (positive skin prick test response to at least one of 11 allergens), SPTQ score being the number of positive skin test responses to 11 allergens, Phadiatop (positive specific IgE response to a mixture of allergens), total IgE levels, eosinophils, bronchial responsiveness (BR) to methacholine challenge and %predicted FEV(1). Four regions showed evidence for linkage. To investigate the potential sharing of genetic determinants among phenotypes, a principal component analysis was applied to the genome-wide LOD scores. This analysis revealed clustering of LODs for asthma, SPT and Phadiatop on one axis and clustering of LODs for %FEV(1), BR and SPTQ on the other, while LODs for IgE and eosinophils appeared to be independent from all other LODs. These results provide new insights into the potential sharing of genetic determinants by asthma-related phenotypes.

  5. Linkage of familial Alzheimer disease to chromosome 14 in two large early-onset pedigrees: effects of marker allele frequencies on lod scores.

    Science.gov (United States)

    Nechiporuk, A; Fain, P; Kort, E; Nee, L E; Frommelt, E; Polinsky, R J; Korenberg, J R; Pulst, S M

    1993-05-01

    Alzheimer disease (AD) is a devastating neurodegenerative disease leading to global dementia. In addition to sporadic forms of AD, familial forms (FAD) have been recognized. Mutations in the amyloid precursor protein (APP) gene on chromosome (CHR) 21 have been shown to cause early-onset AD in a small number of pedigrees. Recently, linkage to markers on CHR 14 has been established in several early-onset FAD pedigrees. We now report lod scores for CHR 14 markers in two large early-onset FAD pedigrees. Pairwise linkage analysis suggested that in these pedigrees the mutation is tightly linked to the loci D14S43 and D14S53. However, assumptions regarding marker allele frequencies had a major and often unpredictable effect on calculated lod scores. Therefore, caution needs to be exercised when single pedigrees are analyzed with marker allele frequencies determined from the literature or from a pool of spouses.

  6. Using lod-score differences to determine mode of inheritance: a simple, robust method even in the presence of heterogeneity and reduced penetrance.

    Science.gov (United States)

    Greenberg, D A; Berger, B

    1994-10-01

    Determining the mode of inheritance is often difficult under the best of circumstances, but when segregation analysis is used, the problems of ambiguous ascertainment procedures, reduced penetrance, heterogeneity, and misdiagnosis make mode-of-inheritance determinations even more unreliable. The mode of inheritance can also be determined using a linkage-based method (maximized maximum lod score or mod score) and association-based methods, which can overcome many of these problems. In this work, we determined how much information is necessary to reliably determine the mode of inheritance from linkage data when heterogeneity and reduced penetrance are present in the data set. We generated data sets under both dominant and recessive inheritance with reduced penetrance and with varying fractions of linked and unlinked families. We then analyzed those data sets, assuming reduced penetrance, both dominant and recessive inheritance, and no heterogeneity. We investigated the reliability of two methods for determining the mode of inheritance from the linkage data. The first method examined the difference (delta) between the maximum lod scores calculated under the two mode-of-inheritance assumptions. We found that if delta was > 1.5, then the higher of the two maximum lod scores reflected the correct mode of inheritance with high reliability and that a delta of 2.5 appeared to practically guarantee a correct mode-of-inheritance inference. Furthermore, this reliability appeared to be virtually independent of alpha, the fraction of linked families in the data set, although the reliability decreased slightly as alpha fell below .50.(ABSTRACT TRUNCATED AT 250 WORDS)

  7. Using lod-score differences to determine mode of inheritance: A simple, robust method even in the presence of heterogeneity and reduced penetrance

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D.A.; Berger, B. [Mount Sinai Medical Center, New York, NY (United States)

    1994-10-01

    Determining the mode of inheritance is often difficult under the best of circumstances, but when segregation analysis is used, the problems of ambiguous ascertainment procedures, reduced penetrance, heterogeneity, and misdiagnosis make mode-of-inheritance determinations even more unreliable. The mode of inheritance can also be determined using a linkage-based method and association-based methods, which can overcome many of these problems. In this work, we determined how much information is necessary to reliably determine the mode of inheritance from linkage data when heterogeneity and reduced penetrance are present in the data set. We generated data sets under both dominant and recessive inheritance with reduced penetrance and with varying fractions of linked and unlinked families. We then analyzed those data sets, assuming reduced penetrance, both dominant and recessive inheritance, and no heterogeneity. We investigated the reliability of two methods for determining the mode of inheritance from the linkage data. The first method examined the difference (Δ) between the maximum lod scores calculated under the two mode-of-inheritance assumptions. We found that if Δ was >1.5, then the higher of the two maximum lod scores reflected the correct mode of inheritance with high reliability and that a Δ of 2.5 appeared to practically guarantee a correct mode-of-inheritance inference. Furthermore, this reliability appeared to be virtually independent of α, the fraction of linked families in the data set. The second method we tested was based on choosing the higher of the two maximum lod scores calculated under the different mode-of-inheritance assumptions. This method became unreliable as α decreased. These results suggest that the mode of inheritance can be inferred from linkage data with high reliability, even in the presence of heterogeneity and reduced penetrance. 12 refs., 3 figs., 2 tabs.

  8. +2.71 LOD score at zero recombination is not sufficient for establishing linkage between X-linked mental retardation and X-chromosome markers

    Energy Technology Data Exchange (ETDEWEB)

    Robledo, R.; Melis, P.; Siniscalco, M. [and others]

    1996-07-12

    Nonspecific X-linked mental retardation (MRX) is the denomination attributed to the familial type of mental retardation compatible with X-linked inheritance but lacking specific phenotypic manifestations. It is thus to be expected that families falling under such a broad definition are genetically heterogeneous, in the sense that they may be due to different types of mutations occurring, most probably, at distinct X-chromosome loci. To facilitate a genetic classification of these conditions, the Nomenclature Committee of the Eleventh Human Gene Mapping Workshop proposed to assign a unique MRX serial number to each family where evidence of linkage with one or more X-chromosome markers had been established with a LOD score of at least +2 at zero recombination. This letter is meant to emphasize the inadequacy of this criterion for a large pedigree where the segregation of the disease has been evaluated against the haplotype constitution of the entire X-chromosome carrying the mutation in question. 12 refs., 2 figs., 1 tab.

  9. Two-locus maximum lod score analysis of a multifactorial trait: joint consideration of IDDM2 and IDDM4 with IDDM1 in type 1 diabetes.

    Science.gov (United States)

    Cordell, H J; Todd, J A; Bennett, S T; Kawaguchi, Y; Farrall, M

    1995-10-01

    To investigate the genetic component of multifactorial diseases such as type 1 (insulin-dependent) diabetes mellitus (IDDM), models involving the joint action of several disease loci are important. These models can give increased power to detect an effect and a greater understanding of etiological mechanisms. Here, we present an extension of the maximum lod score method of N. Risch, which allows the simultaneous detection and modeling of two unlinked disease loci. Genetic constraints on the identical-by-descent sharing probabilities, analogous to the "triangle" restrictions in the single-locus method, are derived, and the size and power of the test statistics are investigated. The method is applied to affected-sib-pair data, and the joint effects of IDDM1 (HLA) and IDDM2 (the INS VNTR) and of IDDM1 and IDDM4 (FGF3-linked) are assessed with relation to the development of IDDM. In the presence of genetic heterogeneity, there is seen to be a significant advantage in analyzing more than one locus simultaneously. Analysis of these families indicates that the effects at IDDM1 and IDDM2 are well described by a multiplicative genetic model, while those at IDDM1 and IDDM4 follow a heterogeneity model.
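
    To make the "triangle" restriction concrete, here is a minimal single-locus sketch: a grid search for the affected-sib-pair maximum lod score over IBD-sharing probabilities constrained to Holmans' possible triangle. The paper's two-locus method generalizes this to a 3x3 joint sharing table; the counts, grid step, and names below are illustrative.

    ```python
    import numpy as np

    # Single-locus MLS sketch for affected sib pairs under Holmans'
    # "possible triangle" (z1 <= 1/2 and z1 >= 2*z0); the paper's two-locus
    # extension constrains the 3x3 joint IBD-sharing table analogously.
    NULL = np.array([0.25, 0.5, 0.25])  # IBD 0/1/2 sharing under no linkage

    def mls(counts, step=0.002):
        n = np.asarray(counts, dtype=float)
        best = 0.0
        for z0 in np.arange(0.0, 0.25 + step, step):
            for z1 in np.arange(2.0 * z0, 0.5 + step, step):
                z2 = 1.0 - z0 - z1
                if z2 < 0.0:
                    continue
                z = np.array([z0, z1, z2])
                lod = float(np.sum(n * np.log10(np.maximum(z, 1e-12) / NULL)))
                best = max(best, lod)
        return best

    # Observed IBD 0/1/2 counts for 120 affected sib pairs (illustrative):
    print(mls([20, 60, 40]))
    ```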

  10. Two-locus maximum lod score analysis of a multifactorial trait: Joint consideration of IDDM2 and IDDM4 with IDDM1 in type 1 diabetes

    Energy Technology Data Exchange (ETDEWEB)

    Cordell, H.J.; Todd, J.A.; Bennett, S.T. [Univ. of Oxford (United Kingdom)] [and others]

    1995-10-01

    To investigate the genetic component of multifactorial diseases such as type 1 (insulin-dependent) diabetes mellitus (IDDM), models involving the joint action of several disease loci are important. These models can give increased power to detect an effect and a greater understanding of etiological mechanisms. Here, we present an extension of the maximum lod score method of N. Risch, which allows the simultaneous detection and modeling of two unlinked disease loci. Genetic constraints on the identical-by-descent sharing probabilities, analogous to the "triangle" restrictions in the single-locus method, are derived, and the size and power of the test statistics are investigated. The method is applied to affected-sib-pair data, and the joint effects of IDDM1 (HLA) and IDDM2 (the INS VNTR) and of IDDM1 and IDDM4 (FGF3-linked) are assessed with relation to the development of IDDM. In the presence of genetic heterogeneity, there is seen to be a significant advantage in analyzing more than one locus simultaneously. Analysis of these families indicates that the effects at IDDM1 and IDDM2 are well described by a multiplicative genetic model, while those at IDDM1 and IDDM4 follow a heterogeneity model. 17 refs., 9 tabs.

  11. A new method of linkage analysis using LOD scores for quantitative traits supports linkage of monoamine oxidase activity to D17S250 in the Collaborative Study on the Genetics of Alcoholism pedigrees.

    Science.gov (United States)

    Curtis, David; Knight, Jo; Sham, Pak C

    2005-09-01

    Although LOD score methods have been applied to diseases with complex modes of inheritance, linkage analysis of quantitative traits has tended to rely on non-parametric methods based on regression or variance components analysis. Here, we describe a new method for LOD score analysis of quantitative traits which does not require specification of a mode of inheritance. The technique is derived from the MFLINK method for dichotomous traits. A range of plausible transmission models is constructed, constrained to yield the correct population mean and variance for the trait but differing with respect to the contribution to the variance due to the locus under consideration. Maximized LOD scores under homogeneity and admixture are calculated, as is a model-free LOD score which compares the maximized likelihoods under admixture assuming linkage and no linkage. These LOD scores have known asymptotic distributions and hence can be used to provide a statistical test for linkage. The method has been implemented in a program called QMFLINK. It was applied to data sets simulated using a variety of transmission models and to a measure of monoamine oxidase activity in 105 pedigrees from the Collaborative Study on the Genetics of Alcoholism. With the simulated data, the results showed that the new method could detect linkage well if the true allele frequency for the trait was close to that specified. However, it performed poorly on models in which the true allele frequency was much rarer. For the Collaborative Study on the Genetics of Alcoholism data set, only a modest overlap was observed between the results obtained from the new method and those obtained when the same data were analysed previously using regression and variance components analysis. Of interest is that D17S250 produced a maximized LOD score under homogeneity and admixture of 2.6 but did not indicate linkage using the previous methods. However, this region did produce evidence for linkage in a separate data set.

  12. Automatic LOD selection

    OpenAIRE

    Forsman, Isabelle

    2017-01-01

    In this paper, a method to automatically generate transition distances for LOD, improving image stability and performance, is presented. Three different methods were tested, all measuring the change between two levels of detail using the spatial frequency. The methods were implemented as an optional pre-processing step in order to determine the transition distances from multiple view directions. During run-time, both view-direction-based selection and the furthest distance for each direction was ...

  13. LOD estimation from DORIS observations

    Science.gov (United States)

    Stepanek, Petr; Filler, Vratislav; Buday, Michal; Hugentobler, Urs

    2016-04-01

    The difference between the astronomically determined duration of the day and 86,400 seconds is called the length of day (LOD). The LOD can also be understood as the daily rate of the difference between Universal Time UT1, based on the Earth's rotation, and International Atomic Time TAI. The LOD is estimated using various satellite geodesy techniques such as GNSS and SLR, while the absolute UT1-TAI difference is precisely determined by VLBI. Contrary to other IERS techniques, LOD estimation using DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite) measurements did not achieve geodetic accuracy in the past, reaching only a precision of several ms per day. However, recent experiments performed by the IDS (International DORIS Service) analysis centre at Geodetic Observatory Pecny show the possibility of reaching an accuracy of around 0.1 ms per day when the cross-track harmonics in the satellite orbit model are not adjusted. The paper presents long-term LOD series determined from the DORIS solutions. The series are compared with C04 as the reference. Results are discussed in the context of the accuracy achieved with GNSS and SLR. Besides the multi-satellite DORIS solutions, LOD series from the individual DORIS satellite solutions are also analysed.
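
    For reference, the quantity being estimated reduces to a daily rate. A minimal sketch, assuming a daily series of UT1-TAI values in seconds, of the excess LOD and its RMS agreement with a reference series such as C04:

    ```python
    import numpy as np

    # Excess LOD is the (negative) daily rate of UT1-TAI; input in seconds,
    # output in milliseconds per day.
    def excess_lod_ms(ut1_tai_daily: np.ndarray) -> np.ndarray:
        return -np.diff(ut1_tai_daily) * 1e3

    # RMS agreement with a reference series, e.g. the IERS C04 values:
    def rms_ms(lod: np.ndarray, lod_ref: np.ndarray) -> float:
        return float(np.sqrt(np.mean((lod - lod_ref) ** 2)))
    ```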

  14. Two-point model for divertor transport

    International Nuclear Information System (INIS)

    Galambos, J.D.; Peng, Y.K.M.

    1984-04-01

    Plasma transport along divertor field lines was investigated using a two-point model. This treatment requires considerably less effort to find solutions to the transport equations than previously used one-dimensional (1-D) models and is useful for studying general trends. It can also be a valuable tool for benchmarking more sophisticated models. The model was used to investigate the possibility of operating in the so-called high-density, low-temperature regime.
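
    To indicate the flavor of such a model, below is a minimal sketch of the textbook two-point relations between upstream and target conditions (pressure balance, conduction-limited power balance, sheath heat flux). The constants and closure are generic illustrations, not the specific 1984 formulation.

    ```python
    import math

    # Textbook two-point SOL relations (illustrative closure): pressure
    # balance n_u*T_u = 2*n_t*T_t; conduction-limited power balance
    # T_u^3.5 = T_t^3.5 + 3.5*q_par*L/k0; sheath heat flux
    # q_par = gamma*n_t*cs*e*T_t. Temperatures in eV; constants are generic.
    E = 1.602e-19          # C
    MI = 2.0 * 1.67e-27    # deuterium ion mass, kg

    def solve_target(n_u, q_par, L, k0=2000.0, gamma=7.0, iters=200):
        T_t = 50.0  # eV, initial guess
        for _ in range(iters):
            T_u = (T_t**3.5 + 3.5 * q_par * L / k0) ** (1.0 / 3.5)
            n_t = n_u * T_u / (2.0 * T_t)
            cs = math.sqrt(2.0 * E * T_t / MI)
            T_t = 0.5 * T_t + 0.5 * q_par / (gamma * n_t * cs * E)  # damped
        return T_t, T_u, n_t

    print(solve_target(n_u=3e19, q_par=1e8, L=50.0))
    ```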

  15. Meteorological interpretation of transient LOD changes

    Science.gov (United States)

    Masaki, Y.

    2008-04-01

    The Earth's spin rate is mainly changed by zonal winds. For example, seasonal changes in global atmospheric circulation and episodic changes accompanying El Niños are clearly detected in the length-of-day (LOD). Sub-global to regional meteorological phenomena can also change the wind field; however, their effects on the LOD are uncertain because such LOD signals are expected to be subtle and transient. In our previous study (Masaki, 2006), we introduced atmospheric pressure gradients in the upper atmosphere in order to obtain a rough picture of the meteorological features that can change the LOD. In this presentation, we compare one-year LOD data with meteorological elements (winds, temperature, pressure, etc.) and make an attempt to link transient LOD changes with sub-global meteorological phenomena.

  16. Pragmatic Use of LOD - a Modular Approach

    DEFF Research Database (Denmark)

    Treldal, Niels; Vestergaard, Flemming; Karlshøj, Jan

    The concept of Level of Development (LOD) is a simple approach to specifying the requirements for the content of object-oriented models in a Building Information Modelling process. The concept has been implemented in many national and organization-specific variations and, in recent years, several ... and reliability of deliveries along with use-case-specific information requirements provides a pragmatic approach for a LOD concept. The proposed solution combines LOD requirement definitions with Information Delivery Manual-based use case requirements to match the specific needs identified for a LOD framework ...

  17. LOD-a-lot : A queryable dump of the LOD cloud

    NARCIS (Netherlands)

    Fernández, Javier D.; Beek, Wouter; Martínez-Prieto, Miguel A.; Arias, Mario

    2017-01-01

    LOD-a-lot democratizes access to the Linked Open Data (LOD) Cloud by serving more than 28 billion unique triples from 650K datasets over a single self-indexed file. This corpus can be queried online with a sustainable Linked Data Fragments interface, or downloaded and consumed locally: LOD-a-lot
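
    As a usage illustration, one way to consume such a self-indexed HDT file locally is sketched below, assuming the pyHDT package ("hdt" on PyPI) and a local copy of the dump. Both the package API and the file name are assumptions, not specified by the record.

    ```python
    # Hedged sketch, assuming the pyHDT package and a local copy of the dump.
    from hdt import HDTDocument

    doc = HDTDocument("lod-a-lot.hdt")  # hypothetical local file name
    triples, cardinality = doc.search_triples("", "", "")  # wildcard pattern
    print("matching triples:", cardinality)
    for s, p, o in triples:
        print(s, p, o)
        break
    ```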

  18. LOD wars: The affected-sib-pair paradigm strikes back!

    Energy Technology Data Exchange (ETDEWEB)

    Farrall, M. [Wellcome Trust Centre for Human Genetics, Oxford (United Kingdom)

    1997-03-01

    In a recent letter, Greenberg et al. aired their concerns that the affected-sib-pair (ASP) approach was becoming excessively popular, owing to misconceptions and ignorance of the properties and limitations of both the ASP and the classic LOD-score approaches. As an enthusiast of using the ASP approach to map susceptibility genes for multifactorial traits, I would like to contribute a few comments and explanatory notes in defense of the ASP paradigm. 18 refs.

  19. Systematic effects in LOD from SLR observations

    Science.gov (United States)

    Bloßfeld, Mathis; Gerstl, Michael; Hugentobler, Urs; Angermann, Detlef; Müller, Horst

    2014-09-01

    Besides the estimation of station coordinates and the Earth's gravity field, laser ranging observations to near-Earth satellites can be used to determine the rotation of the Earth. One parameter of this rotation is ΔLOD (excess Length Of Day), which describes the excess revolution time of the Earth w.r.t. 86,400 s. Due to correlations among the different parameter groups, it is difficult to obtain reliable estimates for all parameters. In the official ΔLOD products of the International Earth Rotation and Reference Systems Service (IERS), the ΔLOD information determined from laser ranging observations is excluded from the processing. In this paper, we study in detail the existing correlations between ΔLOD, the orbital node Ω, the even zonal gravity field coefficients, cross-track empirical accelerations and relativistic accelerations caused by the Lense-Thirring and de Sitter effects, using first-order Gaussian perturbation equations. We found discrepancies of up to 1.0 ms for polar orbits at an altitude of 500 km due to different a priori gravity field models, and of up to 40.0 ms if the gravity field coefficients are estimated using only observations to LAGEOS 1. If observations to LAGEOS 2 are included, reliable ΔLOD estimates can be achieved. Nevertheless, an impact of the a priori gravity field even on the multi-satellite ΔLOD estimates can be clearly identified. Furthermore, we investigate the effect of empirical cross-track accelerations and the effect of relativistic accelerations of near-Earth satellites on ΔLOD. A total effect of 0.0088 ms is caused by unmodeled Lense-Thirring and de Sitter terms. The partial derivatives of these accelerations w.r.t. the position and velocity of the satellite cause very small variations (0.1 μs) in ΔLOD.

  20. MCMC multilocus lod scores: application of a new approach.

    Science.gov (United States)

    George, Andrew W; Wijsman, Ellen M; Thompson, Elizabeth A

    2005-01-01

    On extended pedigrees with extensive missing data, the calculation of multilocus likelihoods for linkage analysis is often beyond the computational bounds of exact methods. Growing interest therefore surrounds the implementation of Monte Carlo estimation methods. In this paper, we demonstrate the speed and accuracy of a new Markov chain Monte Carlo method for the estimation of linkage likelihoods through an analysis of real data from a study of early-onset Alzheimer's disease. For those data sets where comparison with exact analysis is possible, we achieved up to a 100-fold increase in speed. Our approach is implemented in the program lm_bayes within the framework of the freely available MORGAN 2.6 package for Monte Carlo genetic analysis (http://www.stat.washington.edu/thompson/Genepi/MORGAN/Morgan.shtml).

  1. R-LODs: fast LOD-based ray tracing of massive models

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Sung-Eui; Lauterbach, Christian; Manocha, Dinesh

    2006-08-25

    We present a novel LOD (level-of-detail) algorithm to accelerate ray tracing of massive models. Our approach computes drastic simplifications of the model and the LODs are well integrated with the kd-tree data structure. We introduce a simple and efficient LOD metric to bound the error for primary and secondary rays. The LOD representation has small runtime overhead and our algorithm can be combined with ray coherence techniques and cache-coherent layouts to improve the performance. In practice, the use of LODs can alleviate aliasing artifacts and improve memory coherence. We implement our algorithm on both 32-bit and 64-bit machines and are able to achieve up to a 2.20 times improvement in frame rate when rendering models consisting of tens or hundreds of millions of triangles, with little loss in image quality.

  2. Comparison of pressure perception of static and dynamic two point ...

    African Journals Online (AJOL)

    ... the right and left index finger (p<0.05). Conclusion: Age and gender did not affect the perception of static and dynamic two point discrimination, while the limb side (left or right) affected the perception of static and dynamic two point discrimination. The index finger is also more sensitive to moving rather than static sensations.

  3. Frank: The LOD cloud at your fingertips?

    NARCIS (Netherlands)

    Beek, Wouter; Rietveld, Laurens

    2015-01-01

    Large-scale, algorithmic access to LOD Cloud data has been hampered by the absence of queryable endpoints for many datasets, a plethora of serialization formats, and an abundance of idiosyncrasies such as syntax errors. As of late, very large-scale — hundreds of thousands of documents, tens of

  4. Frank : The LOD cloud at your fingertips?

    NARCIS (Netherlands)

    Beek, Wouter; Rietveld, Laurens

    2015-01-01

    Large-scale, algorithmic access to LOD Cloud data has been hampered by the absence of queryable endpoints for many datasets, a plethora of serialization formats, and an abundance of idiosyncrasies such as syntax errors. As of late, very large-scale - hundreds of thousands of documents, tens of

  5. LOD lab : Scalable linked data processing

    NARCIS (Netherlands)

    Beek, Wouter; Rietveld, Laurens; Ilievski, F.; Schlobach, Stefan

    2017-01-01

    With tens if not hundreds of billions of logical statements, the Linked Open Data (LOD) is one of the biggest knowledge bases ever built. As such it is a gigantic source of information for applications in various domains, but also given its size an ideal test-bed for knowledge representation and

  6. Two point function for a simple general relativistic quantum model

    OpenAIRE

    Colosi, Daniele

    2007-01-01

    We study the quantum theory of a simple general relativistic quantum model of two coupled harmonic oscillators and compute the two-point function following a proposal first introduced in the context of loop quantum gravity.

  7. LOD 1 VS. LOD 2 - Preliminary Investigations Into Differences in Mobile Rendering Performance

    Science.gov (United States)

    Ellul, C.; Altenbuchner, J.

    2013-09-01

    The increasing availability, size and detail of 3D City Model datasets has led to a challenge when rendering such data on mobile devices. Understanding the limitations to the usability of such models on these devices is particularly important given the broadening range of applications - such as pollution or noise modelling, tourism, planning, solar potential - for which these datasets and resulting visualisations can be utilized. Much 3D City Model data is created by extrusion of 2D topographic datasets, resulting in what is known as Level of Detail (LoD) 1 buildings - with flat roofs. However, in the UK the National Mapping Agency (the Ordnance Survey, OS) is now releasing test datasets to Level of Detail (LoD) 2 - i.e. including roof structures. These datasets are designed to integrate with the LoD 1 datasets provided by the OS, and provide additional detail in particular on larger buildings and in town centres. The availability of such integrated datasets at two different Levels of Detail permits investigation into the impact of the additional roof structures (and hence the display of a more realistic 3D City Model) on rendering performance on a mobile device. This paper describes preliminary work carried out to investigate this issue, for the test area of the city of Sheffield (in the UK Midlands). The data is stored in a 3D spatial database as triangles and then extracted and served as a web-based data stream which is queried by an App developed on the mobile device (using the Android environment, Java and OpenGL for graphics). Initial tests have been carried out on two dataset sizes, for the city centre and a larger area, rendering the data onto a tablet to compare results. Results of 52 seconds for rendering LoD 1 data, and 72 seconds for LoD 1 mixed with LoD 2 data, show that the impact of LoD 2 is significant.

  8. Two-point entanglement near a quantum phase transition

    International Nuclear Information System (INIS)

    Chen, Han-Dong

    2007-01-01

    In this work, we study the two-point entanglement S(i, j), which measures the entanglement between two separated degrees of freedom (i, j) and the rest of the system, near a quantum phase transition. Away from the critical point, S(i, j) saturates with a characteristic length scale ξ_E as the distance |i - j| increases. The entanglement length ξ_E agrees with the correlation length. The universality and finite-size scaling of entanglement are demonstrated in a class of exactly solvable one-dimensional spin models. By connecting the two-point entanglement to correlation functions in the long-range limit, we argue that the predictive power of a two-point entanglement is universal as long as the two involved points are separated far enough.

  9. Two-Point Codes for the Generalised GK curve

    DEFF Research Database (Denmark)

    Barelli, Élise; Beelen, Peter; Datta, Mrinmoy

    2017-01-01

    We improve previously known lower bounds for the minimum distance of certain two-point AG codes constructed using a Generalized Giulietti–Korchmaros curve (GGK). Castellanos and Tizziotti recently described such bounds for two-point codes coming from the Giulietti–Korchmaros curve (GK). Our results completely cover and in many cases improve on their results, using different techniques, while also supporting any GGK curve. Our method builds on the order bound for AG codes: to enable this, we study certain Weierstrass semigroups. This allows an efficient algorithm for computing our improved bounds.

  10. Two-point correlation functions in inhomogeneous and anisotropic cosmologies

    International Nuclear Information System (INIS)

    Marcori, Oton H.; Pereira, Thiago S.

    2017-01-01

    Two-point correlation functions are ubiquitous tools of modern cosmology, appearing in disparate topics ranging from cosmological inflation to late-time astrophysics. When the background spacetime is maximally symmetric, invariance arguments can be used to fix the functional dependence of this function as the invariant distance between any two points. In this paper we introduce a novel formalism which fixes this functional dependence directly from the isometries of the background metric, thus allowing one to quickly assess the overall features of Gaussian correlators without resorting to the full machinery of perturbation theory. As an application we construct the CMB temperature correlation function in one inhomogeneous (namely, an off-center LTB model) and two spatially flat and anisotropic (Bianchi) universes, and derive their covariance matrices in the limit of almost Friedmannian symmetry. We show how the method can be extended to arbitrary N-point correlation functions and illustrate its use by constructing three-point correlation functions in some simple geometries.

  11. Two-point correlation functions in inhomogeneous and anisotropic cosmologies

    Energy Technology Data Exchange (ETDEWEB)

    Marcori, Oton H.; Pereira, Thiago S., E-mail: otonhm@hotmail.com, E-mail: tspereira@uel.br [Departamento de Física, Universidade Estadual de Londrina, 86057-970, Londrina PR (Brazil)

    2017-02-01

    Two-point correlation functions are ubiquitous tools of modern cosmology, appearing in disparate topics ranging from cosmological inflation to late-time astrophysics. When the background spacetime is maximally symmetric, invariance arguments can be used to fix the functional dependence of this function as the invariant distance between any two points. In this paper we introduce a novel formalism which fixes this functional dependence directly from the isometries of the background metric, thus allowing one to quickly assess the overall features of Gaussian correlators without resorting to the full machinery of perturbation theory. As an application we construct the CMB temperature correlation function in one inhomogeneous (namely, an off-center LTB model) and two spatially flat and anisotropic (Bianchi) universes, and derive their covariance matrices in the limit of almost Friedmannian symmetry. We show how the method can be extended to arbitrary N-point correlation functions and illustrate its use by constructing three-point correlation functions in some simple geometries.

  12. Quantum electrodynamics and light rays. [Two-point correlation functions

    Energy Technology Data Exchange (ETDEWEB)

    Sudarshan, E.C.G.

    1978-11-01

    Light is a quantum electrodynamic entity and hence bundles of rays must be describable in this framework. The duality in the description of elementary optical phenomena is demonstrated in terms of two-point correlation functions and in terms of collections of light rays. The generalizations necessary to deal with two-slit interference and diffraction by a rectangular slit are worked out and the usefulness of the notion of rays of darkness illustrated. 10 references.

  13. Geometric convergence of some two-point Pade approximations

    International Nuclear Information System (INIS)

    Nemeth, G.

    1983-01-01

    The geometric convergence of some two-point Padé approximations is investigated on the real positive axis and on certain infinite sets of the complex plane. Some theorems concerning the geometric convergence of Padé approximations are proved, and bounds on geometric convergence rates are given. The results may be interesting considering the applications both in numerical computations and in approximation theory. As a specific case, the numerical calculations connected with the plasma dispersion function may be performed. (D.Gy.)

  14. A two-point kinetic model for the PROTEUS reactor

    International Nuclear Information System (INIS)

    Dam, H. van.

    1995-03-01

    A two-point reactor kinetic model for the PROTEUS reactor is developed and the results are described in terms of frequency-dependent reactivity transfer functions for the core and the reflector. It is shown that at higher frequencies space-dependent effects occur which imply failure of the one-point kinetic model. In the modulus of the transfer functions these effects become apparent above a radian frequency of about 100 s^-1, whereas for the phase behaviour the deviation from a point model already starts at a radian frequency of 10 s^-1. (orig.)
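
    As a baseline for what "failure of the one-point kinetic model" means, here is a minimal sketch of the zero-power one-point reactivity-to-power transfer function with a single effective delayed-neutron group; the parameter values are generic illustrations, not PROTEUS data. Space-dependent (two-point) effects appear as deviations of the measured transfer functions from this form at high frequency.

    ```python
    import numpy as np

    # Zero-power one-point kinetics transfer function with one effective
    # delayed group: G(s) = 1 / (s * (Lambda + beta / (s + lam))), s = i*w.
    def one_point_tf(omega, Lambda=5e-5, beta=0.0076, lam=0.08):
        s = 1j * np.asarray(omega, dtype=float)
        return 1.0 / (s * (Lambda + beta / (s + lam)))

    w = np.logspace(0, 3, 4)  # rad/s
    G = one_point_tf(w)
    print(np.abs(G))
    print(np.degrees(np.angle(G)))
    ```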

  15. Second feature of the matter two-point function

    Science.gov (United States)

    Tansella, Vittorio

    2018-05-01

    We point out the existence of a second feature in the matter two-point function, besides the acoustic peak, due to the baryon-baryon correlation in the early Universe and positioned at twice the distance of the peak. We discuss how the existence of this feature is implied by the well-known heuristic argument that explains the baryon bump in the correlation function. A standard χ² analysis to estimate the detection significance of the second feature is mimicked. We conclude that, for realistic values of the baryon density, an SKA-like galaxy survey will not be able to detect this feature with standard correlation function analysis.

  16. Two-point density correlations of quasicondensates in free expansion

    DEFF Research Database (Denmark)

    Manz, S.; Bücker, R.; Betz, T.

    2010-01-01

    We measure the two-point density correlation function of freely expanding quasicondensates in the weakly interacting quasi-one-dimensional (1D) regime. While initially suppressed in the trap, density fluctuations emerge gradually during expansion as a result of initial phase fluctuations present...... in the trapped quasicondensate. Asymptotically, they are governed by the thermal coherence length of the system. Our measurements take place in an intermediate regime where density correlations are related to near-field diffraction effects and anomalous correlations play an important role. Comparison...

  17. The massless two-loop two-point function

    International Nuclear Information System (INIS)

    Bierenbaum, I.; Weinzierl, S.

    2003-01-01

    We consider the massless two-loop two-point function with arbitrary powers of the propagators and derive a representation from which we can obtain the Laurent expansion to any desired order in the dimensional regularization parameter ε. As a side product, we show that in the Laurent expansion of the two-loop integral only rational numbers and multiple zeta values occur. Our method of calculation obtains the two-loop integral as a convolution product of two primitive one-loop integrals. We comment on the generalization of this product structure to higher loop integrals. (orig.)

  18. Meta-data for a lot of LOD

    NARCIS (Netherlands)

    Rietveld, Laurens; Beek, Wouter; Hoekstra, Rinke; Schlobach, Stefan

    2017-01-01

    This paper introduces the LOD Laundromat meta-dataset, a continuously updated RDF meta-dataset that describes the documents crawled, cleaned and (re)published by the LOD Laundromat. This meta-dataset of over 110 million triples contains structural information for more than 650,000 documents (and

  19. The dynamic system corresponding to LOD and AAM.

    Science.gov (United States)

    Liu, Shida; Liu, Shikuo; Chen, Jiong

    2000-02-01

    Using the wavelet transform, the authors can reconstruct the 1-D map of a multifractal object. The wavelet transform of LOD and AAM shows that at the 20-year, annual, and 2-3-year scales, the jump points of LOD and AAM accord with each other very well, and their reconstructed 1-D map dynamical systems are also very similar.
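
    A minimal sketch of the kind of analysis described, assuming PyWavelets: locate the strongest transients ("jump points") of a series from continuous-wavelet-transform modulus maxima at a chosen scale, then compare the index sets for LOD and AAM. The scale, wavelet, and cutoff are illustrative choices.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    # A first-derivative-of-Gaussian wavelet responds strongly to jumps;
    # scale, wavelet, and the "top" cutoff are illustrative choices.
    def jump_points(series: np.ndarray, scale: float = 32.0, top: int = 5):
        coef, _ = pywt.cwt(series, [scale], "gaus1")
        strength = np.abs(coef[0])
        return set(np.argsort(strength)[-top:])

    # Jump points shared by both series at a given scale:
    # common = jump_points(lod_series) & jump_points(aam_series)
    ```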

  20. Interaction between two point-like charges in nonlinear electrostatics

    Energy Technology Data Exchange (ETDEWEB)

    Breev, A.I. [Tomsk State University, Tomsk (Russian Federation); Tomsk Polytechnic University, Tomsk (Russian Federation); Shabad, A.E. [P.N. Lebedev Physical Institute, Moscow (Russian Federation); Tomsk State University, Tomsk (Russian Federation)

    2018-01-15

    We consider two point-like charges in electrostatic interaction within the framework of a nonlinear model, associated with QED, that provides finiteness of their field energy. We find the common field of the two charges in a dipole-like approximation, where the separation between them R is much smaller than the observation distance r: with linear accuracy with respect to the ratio R/r, and in the opposite approximation, where R >> r, up to the term quadratic in the ratio r/R. The consideration proposes the law a + bR^{1/3} for the energy when the charges are close to one another, R → 0. This leads to a singularity of the force between them of the form R^{-2/3}, which is weaker than the Coulomb law, R^{-2}. (orig.)

  1. Interaction between two point-like charges in nonlinear electrostatics

    Science.gov (United States)

    Breev, A. I.; Shabad, A. E.

    2018-01-01

    We consider two point-like charges in electrostatic interaction within the framework of a nonlinear model, associated with QED, that provides finiteness of their field energy. We find the common field of the two charges in a dipole-like approximation, where the separation between them R is much smaller than the observation distance r: with linear accuracy with respect to the ratio R/r, and in the opposite approximation, where R ≫ r, up to the term quadratic in the ratio r/R. The consideration proposes the law a + bR^{1/3} for the energy when the charges are close to one another, R → 0. This leads to a singularity of the force between them of the form R^{-2/3}, which is weaker than the Coulomb law, R^{-2}.

  2. Fast and accurate computation of projected two-point functions

    Science.gov (United States)

    Grasshorn Gebhardt, Henry S.; Jeong, Donghui

    2018-01-01

    We present the two-point function from the fast and accurate spherical Bessel transformation (2-FAST) algorithm (our code is available at https://github.com/hsgg/twoFAST) for a fast and accurate computation of integrals involving one or two spherical Bessel functions. These types of integrals occur when projecting the galaxy power spectrum P(k) onto configuration space, ξ_ℓ^ν(r), or spherical harmonic space, C_ℓ(χ, χ'). First, we employ the FFTLog transformation of the power spectrum to divide the calculation into P(k)-dependent coefficients and P(k)-independent integrations of basis functions multiplied by spherical Bessel functions. We find analytical expressions for the latter integrals in terms of special functions, for which recursion provides a fast and accurate evaluation. The algorithm, therefore, circumvents direct integration of highly oscillating spherical Bessel functions.
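
    For contrast with the paper's fast algorithm, the sketch below writes out the projection integral it accelerates, evaluated by direct quadrature. This brute-force form is slow and fragile for large kr because of the oscillatory Bessel factor, which is precisely what 2-FAST circumvents; the grid and ℓ are illustrative.

    ```python
    import numpy as np
    from scipy.integrate import simpson
    from scipy.special import spherical_jn

    # Direct quadrature of xi_l(r) = int dk k^2/(2 pi^2) P(k) j_l(k r);
    # the slow baseline that 2-FAST replaces with an FFTLog decomposition.
    def xi_direct(r: float, k: np.ndarray, Pk: np.ndarray, ell: int = 0):
        integrand = k**2 / (2.0 * np.pi**2) * Pk * spherical_jn(ell, k * r)
        return float(simpson(integrand, x=k))
    ```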

  3. Two-point functions in (loop) quantum cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Calcagni, Gianluca; Oriti, Daniele [Max-Planck-Institute for Gravitational Physics (Albert Einstein Institute), Am Muehlenberg 1, D-14476 Golm (Germany); Gielen, Steffen [Max-Planck-Institute for Gravitational Physics (Albert Einstein Institute), Am Muehlenberg 1, D-14476 Golm (Germany); DAMTP, Centre for Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WA (United Kingdom)

    2011-07-01

    We discuss the path-integral formulation of quantum cosmology with a massless scalar field as a sum-over-histories of volume transitions, with particular but non-exclusive reference to loop quantum cosmology (LQC). Exploiting the analogy with the relativistic particle, we give a complete overview of the possible two-point functions, pointing out the choices involved in their definitions, deriving their vertex expansions and the composition laws they satisfy. We clarify the origin and relations of different quantities previously defined in the literature, in particular the tie between definitions using a group averaging procedure and those in a deparametrized framework. Finally, we draw some conclusions about the physics of a single quantum universe (where there exist superselection rules on positive- and negative-frequency sectors and different choices of inner product are physically equivalent) and multiverse field theories where the role of these sectors and the inner product are reinterpreted.

  4. Two-point functions in (loop) quantum cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Calcagni, Gianluca; Gielen, Steffen; Oriti, Daniele, E-mail: calcagni@aei.mpg.de, E-mail: gielen@aei.mpg.de, E-mail: doriti@aei.mpg.de [Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Am Muehlenberg 1, D-14476 Golm (Germany)

    2011-06-21

    The path-integral formulation of quantum cosmology with a massless scalar field as a sum-over-histories of volume transitions is discussed, with particular but non-exclusive reference to loop quantum cosmology. Exploiting the analogy with the relativistic particle, we give a complete overview of the possible two-point functions, pointing out the choices involved in their definitions, deriving their vertex expansions and the composition laws they satisfy. We clarify the origin and relations of different quantities previously defined in the literature, in particular the tie between definitions using a group averaging procedure and those in a deparametrized framework. Finally, we draw some conclusions about the physics of a single quantum universe (where there exist superselection rules on positive- and negative-frequency sectors and different choices of inner product are physically equivalent) and multiverse field theories where the role of these sectors and the inner product are reinterpreted.

  5. Two-point functions in (loop) quantum cosmology

    International Nuclear Information System (INIS)

    Calcagni, Gianluca; Gielen, Steffen; Oriti, Daniele

    2011-01-01

    The path-integral formulation of quantum cosmology with a massless scalar field as a sum-over-histories of volume transitions is discussed, with particular but non-exclusive reference to loop quantum cosmology. Exploiting the analogy with the relativistic particle, we give a complete overview of the possible two-point functions, pointing out the choices involved in their definitions, deriving their vertex expansions and the composition laws they satisfy. We clarify the origin and relations of different quantities previously defined in the literature, in particular the tie between definitions using a group averaging procedure and those in a deparametrized framework. Finally, we draw some conclusions about the physics of a single quantum universe (where there exist superselection rules on positive- and negative-frequency sectors and different choices of inner product are physically equivalent) and multiverse field theories where the role of these sectors and the inner product are reinterpreted.

  6. Two-point model for electron transport in EBT

    International Nuclear Information System (INIS)

    Chiu, S.C.; Guest, G.E.

    1980-01-01

    The electron transport in EBT is simulated by a two-point model corresponding to the central plasma and the edge. The central plasma is assumed to obey neoclassical collisionless transport. The edge plasma is assumed turbulent and modeled by Bohm diffusion. The steady-state temperatures and densities in both regions are obtained as functions of neutral influx and microwave power. It is found that as the neutral influx decreases and power increases, the edge density decreases while the core density increases. We conclude that if ring instability is responsible for the T-M mode transition, and if stability is correlated with cold electron density at the edge, it will depend sensitively on ambient gas pressure and microwave power

  7. Two-point correlation function for Dirichlet L-functions

    Science.gov (United States)

    Bogomolny, E.; Keating, J. P.

    2013-03-01

    The two-point correlation function for the zeros of Dirichlet L-functions at a height E on the critical line is calculated heuristically using a generalization of the Hardy-Littlewood conjecture for pairs of primes in arithmetic progression. The result matches the conjectured random-matrix form in the limit as E → ∞ and, importantly, includes finite-E corrections. These finite-E corrections differ from those in the case of the Riemann zeta-function, obtained in Bogomolny and Keating (1996 Phys. Rev. Lett. 77 1472), by certain finite products of primes which divide the modulus of the primitive character used to construct the L-function in question.

  8. Two-point correlation function for Dirichlet L-functions

    International Nuclear Information System (INIS)

    Bogomolny, E; Keating, J P

    2013-01-01

    The two-point correlation function for the zeros of Dirichlet L-functions at a height E on the critical line is calculated heuristically using a generalization of the Hardy–Littlewood conjecture for pairs of primes in arithmetic progression. The result matches the conjectured random-matrix form in the limit as E → ∞ and, importantly, includes finite-E corrections. These finite-E corrections differ from those in the case of the Riemann zeta-function, obtained in Bogomolny and Keating (1996 Phys. Rev. Lett. 77 1472), by certain finite products of primes which divide the modulus of the primitive character used to construct the L-function in question. (paper)

  9. Application of LOD Technology in German Libraries and Archives

    Directory of Open Access Journals (Sweden)

    Dong Jie

    2017-12-01

    Full Text Available [Purpose/significance] Linked Open Data (LOD) has been widely used in large industries, as well as non-profit organizations and government organizations. Libraries and archives are among the early adopters of LOD technology and promote the development of LOD. Germany is one of the developed countries in the libraries and archives field, and there are many successful cases of the application of LOD in libraries and archives. [Method/process] This paper analyzed the successful application of LOD technology in German libraries and archives by using the methods of document investigation, network survey and content analysis. [Result/conclusion] These cases reveal the relationship between libraries and archives and research topics in the traditional fields of computer science, such as artificial intelligence, databases and knowledge discovery. Summing up the characteristics and experience of German practice can provide reference value for the development of relevant practice in China.

  10. Two-point boundary correlation functions of dense loop models

    Directory of Open Access Journals (Sweden)

    Alexi Morin-Duchesne, Jesper Lykke Jacobsen

    2018-06-01

    Full Text Available We investigate six types of two-point boundary correlation functions in the dense loop model. These are defined as ratios $Z/Z^0$ of partition functions on the $m\times n$ square lattice, with the boundary condition for $Z$ depending on two points $x$ and $y$. We consider: the insertion of an isolated defect (a) and a pair of defects (b) in a Dirichlet boundary condition, the transition (c) between Dirichlet and Neumann boundary conditions, and the connectivity of clusters (d), loops (e) and boundary segments (f) in a Neumann boundary condition. For the model of critical dense polymers, corresponding to a vanishing loop weight ($\beta = 0$), we find determinant and pfaffian expressions for these correlators. We extract the conformal weights of the underlying conformal fields and find $\Delta = -\frac18$, $0$, $-\frac3{32}$, $\frac38$, $1$, $\tfrac{\theta}{\pi}(1+\tfrac{2\theta}{\pi})$, where $\theta$ encodes the weight of one class of loops for the correlator of type f. These results are obtained by analysing the asymptotics of the exact expressions, and by using the Cardy-Peschel formula in the case where $x$ and $y$ are set to the corners. For type b, we find a $\log|x-y|$ dependence from the asymptotics, and a $\ln(\ln n)$ term in the corner free energy. This is consistent with the interpretation of the boundary condition of type b as the insertion of a logarithmic field belonging to a rank-two Jordan cell. For the other values of $\beta = 2\cos\lambda$, we use the hypothesis of conformal invariance to predict the conformal weights and find $\Delta = \Delta_{1,2}$, $\Delta_{1,3}$, $\Delta_{0,\frac12}$, $\Delta_{1,0}$, $\Delta_{1,-1}$ and $\Delta_{\frac{2\theta}{\lambda}+1,\frac{2\theta}{\lambda}+1}$, extending the results of critical dense polymers. With the results for type f, we reproduce a Coulomb gas prediction for the valence bond entanglement entropy of Jacobsen and Saleur.

  11. Flow speed measurement using two-point collective light scattering

    International Nuclear Information System (INIS)

    Heinemeier, N.P.

    1998-09-01

    Measurements of turbulence in plasmas and fluids using the technique of collective light scattering have always been plagued by very poor spatial resolution. In 1994, a novel two-point collective light scattering system for the measurement of transport in a fusion plasma was proposed. This diagnostic method was designed for a great improvement of the spatial resolution, without sacrificing accuracy in the velocity measurement. The system was installed at the W7-AS stellarator in Garching, Germany, in 1996, and has been operating since. This master's thesis is an investigation of the possible application of this new method to the measurement of flow speeds in normal fluids, in particular air, although the results presented in this work have significance for the plasma measurements as well. The main goal of the project was the experimental verification of previous theoretical predictions. However, the theoretical considerations presented in the thesis show that the method can only be hoped to work for flows that are almost laminar and shearless, which makes it of very small practical interest. Furthermore, this result also implies that the diagnostic at W7-AS cannot be expected to give the results originally hoped for. (au)

  12. Flow speed measurement using two-point collective light scattering

    Energy Technology Data Exchange (ETDEWEB)

    Heinemeier, N.P

    1998-09-01

    Measurements of turbulence in plasmas and fluids using the technique of collective light scattering have always been plagued by very poor spatial resolution. In 1994, a novel two-point collective light scattering system for the measurement of transport in a fusion plasma was proposed. This diagnostic method was designed for a great improvement of the spatial resolution, without sacrificing accuracy in the velocity measurement. The system was installed at the W7-AS stellarator in Garching, Germany, in 1996, and has been operating since. This master's thesis is an investigation of the possible application of this new method to the measurement of flow speeds in normal fluids, in particular air, although the results presented in this work have significance for the plasma measurements as well. The main goal of the project was the experimental verification of previous theoretical predictions. However, the theoretical considerations presented in the thesis show that the method can only be hoped to work for flows that are almost laminar and shearless, which makes it of very small practical interest. Furthermore, this result also implies that the diagnostic at W7-AS cannot be expected to give the results originally hoped for. (au) 1 tab., 51 ills., 29 refs.

  13. Two-point functions in a holographic Kondo model

    Science.gov (United States)

    Erdmenger, Johanna; Hoyos, Carlos; O'Bannon, Andy; Papadimitriou, Ioannis; Probst, Jonas; Wu, Jackson M. S.

    2017-03-01

    We develop the formalism of holographic renormalization to compute two-point functions in a holographic Kondo model. The model describes a (0 + 1)-dimensional impurity spin of a gauged SU(N) interacting with a (1 + 1)-dimensional, large-N, strongly-coupled Conformal Field Theory (CFT). We describe the impurity using Abrikosov pseudo-fermions, and define an SU(N)-invariant scalar operator O built from a pseudo-fermion and a CFT fermion. At large N the Kondo interaction is of the form O^{\dagger}O, which is marginally relevant, and generates a Renormalization Group (RG) flow at the impurity. A second-order mean-field phase transition occurs in which O condenses below a critical temperature, leading to the Kondo effect, including screening of the impurity. Via holography, the phase transition is dual to holographic superconductivity in (1 + 1)-dimensional Anti-de Sitter space. At all temperatures, spectral functions of O exhibit a Fano resonance, characteristic of a continuum of states interacting with an isolated resonance. In contrast to Fano resonances observed for example in quantum dots, our continuum and resonance arise from a (0 + 1)-dimensional UV fixed point and RG flow, respectively. In the low-temperature phase, the resonance comes from a pole in the Green's function of the form −i⟨O⟩², which is characteristic of a Kondo resonance.

  14. Two-point functions in a holographic Kondo model

    Energy Technology Data Exchange (ETDEWEB)

    Erdmenger, Johanna [Institut für Theoretische Physik und Astrophysik, Julius-Maximilians-Universität Würzburg,Am Hubland, D-97074 Würzburg (Germany); Max-Planck-Institut für Physik (Werner-Heisenberg-Institut),Föhringer Ring 6, D-80805 Munich (Germany); Hoyos, Carlos [Department of Physics, Universidad de Oviedo, Avda. Calvo Sotelo 18, 33007, Oviedo (Spain); O’Bannon, Andy [STAG Research Centre, Physics and Astronomy, University of Southampton,Highfield, Southampton SO17 1BJ (United Kingdom); Papadimitriou, Ioannis [SISSA and INFN - Sezione di Trieste, Via Bonomea 265, I 34136 Trieste (Italy); Probst, Jonas [Rudolf Peierls Centre for Theoretical Physics, University of Oxford,1 Keble Road, Oxford OX1 3NP (United Kingdom); Wu, Jackson M.S. [Department of Physics and Astronomy, University of Alabama, Tuscaloosa, AL 35487 (United States)

    2017-03-07

    We develop the formalism of holographic renormalization to compute two-point functions in a holographic Kondo model. The model describes a (0+1)-dimensional impurity spin of a gauged SU(N) interacting with a (1+1)-dimensional, large-N, strongly-coupled Conformal Field Theory (CFT). We describe the impurity using Abrikosov pseudo-fermions, and define an SU(N)-invariant scalar operator O built from a pseudo-fermion and a CFT fermion. At large N the Kondo interaction is of the form O{sup †}O, which is marginally relevant, and generates a Renormalization Group (RG) flow at the impurity. A second-order mean-field phase transition occurs in which O condenses below a critical temperature, leading to the Kondo effect, including screening of the impurity. Via holography, the phase transition is dual to holographic superconductivity in (1+1)-dimensional Anti-de Sitter space. At all temperatures, spectral functions of O exhibit a Fano resonance, characteristic of a continuum of states interacting with an isolated resonance. In contrast to Fano resonances observed for example in quantum dots, our continuum and resonance arise from a (0+1)-dimensional UV fixed point and RG flow, respectively. In the low-temperature phase, the resonance comes from a pole in the Green’s function of the form −i〈O〉{sup 2}, which is characteristic of a Kondo resonance.

  15. GOOF: OCTOPUS error messages, ORDER, ORDERLIB, FLOE, CHAT, and LOD

    Energy Technology Data Exchange (ETDEWEB)

    Whitten, G.

    1977-07-10

    This is a compilation of the error messages returned by five parts of the Livermore timesharing system: the ORDER batch-processor, the ORDERLIB subroutine library, the FLOE operating system, the CHAT compiler, and the LOD loader.

  16. Enhanced LOD Concepts for Virtual 3D City Models

    Science.gov (United States)

    Benner, J.; Geiger, A.; Gröger, G.; Häfele, K.-H.; Löwner, M.-O.

    2013-09-01

    Virtual 3D city models contain digital three-dimensional representations of city objects like buildings, streets or technical infrastructure. Because the size and complexity of these models grow continuously, a Level of Detail (LoD) concept is indispensable: one that effectively supports partitioning a complete model into alternative models of different complexity and that provides metadata addressing the informational content, complexity and quality of each alternative model. After a short overview of various LoD concepts, this paper discusses the existing LoD concept of the CityGML standard for 3D city models and identifies a number of deficits. Based on this analysis, an alternative concept is developed and illustrated with several examples. It differentiates, first, between a Geometric Level of Detail (GLoD) and a Semantic Level of Detail (SLoD), and second, between the building interior and its exterior shell. Finally, a possible implementation of the new concept is demonstrated by means of a UML model.

  17. El Nino, La Nina and VLBI Measured LOD

    Science.gov (United States)

    Clark, Thomas A.; Gipson, J. M.; Ma, C.

    1998-01-01

    VLBI is one of the most important techniques for measuring Earth orientation parameters (EOP), and is unique in its ability to make high accuracy measurements of UT1 and its time derivative, which is related to changes in the length of day, conventionally called LOD. These measurements of EOP constrain geophysical models of the solid Earth, atmosphere and oceans. Changes in EOP are due either to external torques from gravitational forces, or to the exchange of angular momentum between the Earth, atmosphere and oceans. The effect of the external torques is strictly harmonic in nature, and is therefore easy to remove. We analyze an LOD time series derived from VLBI measurements with the goal of comparing it to predictions from AAM and various ENSO indices. Previous work by ourselves and other investigators demonstrated a high degree of coherence between atmospheric angular momentum (AAM) and EOP, and we continue to see this. As the angular momentum of the atmosphere increases, the rate of rotation of the Earth decreases, and vice versa. The signature of the ENSO is particularly strong. At its peak, the 1982-83 El Nino increased LOD by almost 1 ms. This was subsequently followed by a reduction in LOD of 0.75 ms. At its peak, in February of 1998, the 1997-98 El Nino increased LOD by 0.8 ms. As predicted at the 1998 Spring AGU, this has been followed by an abrupt decrease in LOD, currently -0.4 ms. At this time (August 1998) the current ENSO continues to develop in new and unexpected ways. We plan to update our analysis with all data available prior to the Fall AGU.

  18. Secular change of LOD caused by core evolution

    Science.gov (United States)

    Denis, C.; Rybicki, K. R.; Varga, P.

    2003-04-01

    Fossils and tidal deposits suggest that, on average, the Earth's despinning rate was five times smaller in the Proterozoic than in the Phanerozoic. This difference is probably due, for the most part, to the existence of a Proterozoic supercontinent. Nevertheless, core formation and core evolution should have compensated to some extent for the effect of tidal friction, by diminishing the Earth's moment of inertia. We have investigated quantitatively this contribution of the evolving core to the change of LOD. For the present epoch, we find that the solidification of the inner core causes a relative secular decrease of LOD of approximately 3 μs per century, whereas the macrodiffusion of iron oxides and sulfides from the D" into the outer core across the CMB (insofar as Majewski's theory holds) leads to a relative secular decrease of LOD of about 15 μs per century. On the other hand, the theory of slow core formation developed by Runcorn in the early 1960s as a by-product of his theory of mantle-wide convection leads to a relative secular decrease of LOD during most of the Proterozoic of about 0.25 ms per century. Although core formation is now widely assumed to have been a thermal runaway process that occurred shortly after the Earth itself had formed, Runcorn's theory of the growing core would nicely explain the observed palaeo-LOD curve. In any case, formation of the core implies, all in all, a relative decrease of LOD of typically 3 hours.
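
    The closing estimate can be checked with a one-line angular-momentum argument. Assuming the Earth's spin angular momentum L = CΩ is held fixed while core segregation changes the polar moment of inertia C (tidal torques set aside), a decrease of C shortens the day in proportion; a minimal consistency sketch, not the authors' full calculation:

        L = C\Omega = \mathrm{const}
        \quad\Rightarrow\quad
        \frac{\delta\Omega}{\Omega} = -\frac{\delta C}{C},
        \qquad
        \frac{\delta(\mathrm{LOD})}{\mathrm{LOD}} = \frac{\delta C}{C}

    The quoted total of about 3 hours then corresponds to δC/C ≈ 3/24 ≈ 13%, the same order as the reduction of C/(MR²) expected when a dense core segregates from an initially homogeneous planet (0.4 down to about 0.33).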

  19. Quadtree of TIN: a new algorithm of dynamic LOD

    Science.gov (United States)

    Zhang, Junfeng; Fei, Lifan; Chen, Zhen

    2009-10-01

    Currently, real-time visualization of large-scale digital elevation models mainly employs either regular GRID structures based on quadtrees or triangle-simplification methods based on the triangulated irregular network (TIN). Compared with GRID, TIN is a more refined means of representing the terrain surface in computer science. However, the data structure of the TIN model is complex, and it is difficult to realize view-dependent level-of-detail (LOD) representation quickly. GRID is a simple way to realize terrain LOD, but it produces a higher triangle count. A new algorithm, which takes full advantage of the merits of both methods, is presented in this paper. This algorithm combines TIN with a quadtree structure to realize view-dependent LOD control over irregular sampling point sets, and it tunes the level of detail through the viewpoint distance and the geometric error of the terrain. Experiments indicate that this approach can generate an efficient quadtree triangulation hierarchy over any irregular sampling point set and achieve dynamic, visual multi-resolution rendering of large-scale terrain in real time.
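
    The selection rule sketched above (keep detail where the viewpoint is near and the geometric error is large) is typically implemented as a per-node refinement test during a traversal of the quadtree. A hedged illustration of that idea; the node fields, the error metric and the tolerance tau are assumptions for this sketch, not details taken from the paper:

        import math

        def select_lod(node, viewpoint, tau, out):
            """Collect quadtree nodes to render at the current viewpoint.

            node.error    : geometric error of the node's simplified mesh
            node.center   : 3D centre of the node's terrain patch
            node.children : child nodes (empty for a leaf)
            """
            dist = math.dist(node.center, viewpoint)
            # A coarse node is acceptable when its geometric error, viewed
            # from this distance, falls below the tolerance; otherwise descend.
            if not node.children or node.error / max(dist, 1e-9) <= tau:
                out.append(node)
            else:
                for child in node.children:
                    select_lod(child, viewpoint, tau, out)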

  20. LOD Laundromat : Why the Semantic Web needs centralization (even if we don't like it)

    NARCIS (Netherlands)

    Beek, Wouter; Rietveld, Laurens; Schlobach, Stefan; van Harmelen, Frank

    2016-01-01

    LOD Laundromat proposes a centralized solution to today's Semantic Web problems. This approach adheres more closely to the original vision of a Web of Data, providing uniform access to a large and ever-increasing subcollection of the LOD Cloud.

  1. The research of selection model based on LOD in multi-scale display of electronic map

    Science.gov (United States)

    Zhang, Jinming; You, Xiong; Liu, Yingzhen

    2008-10-01

    This paper proposes a selection model based on LOD to aid the multi-scale display of electronic maps. The ratio of display scale to map scale is regarded as an LOD operator. Rules for setting the LOD operator (the categorization rule, classification rule, elementary rule and spatial geometry character rule) are also presented.
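
    With the LOD operator defined as the ratio of display scale to map scale, a feature-selection rule reduces to simple thresholding. A minimal sketch (the threshold values and class names are invented for illustration; the paper's rules are richer):

        # Scales are given by their denominators: 1:10,000 -> 10000.
        def lod_operator(display_scale_denom, map_scale_denom):
            # Displaying at 1:50,000 from a 1:10,000 source map gives 0.2.
            return map_scale_denom / display_scale_denom

        # Hypothetical per-class visibility thresholds on the LOD operator.
        THRESHOLDS = {"buildings": 0.5, "roads": 0.2, "rivers": 0.1}

        def visible_classes(op):
            return [cls for cls, t in THRESHOLDS.items() if op >= t]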

  2. LOD map--A visual interface for navigating multiresolution volume visualization.

    Science.gov (United States)

    Wang, Chaoli; Shen, Han-Wei

    2006-01-01

    In multiresolution volume visualization, a visual representation of level-of-detail (LOD) quality is important for examining, comparing, and validating different LOD selection algorithms. While traditional methods rely on the final rendered images for quality measurement, we introduce the LOD map: an alternative representation of LOD quality and a visual interface for navigating multiresolution data exploration. Our measure of LOD quality is based on the formulation of entropy from information theory. The measure takes into account the distortion and contribution of multiresolution data blocks. An LOD map is generated by mapping key LOD ingredients to a treemap representation. The ordered treemap layout is used for relatively stable updates of the LOD map when the view or LOD changes. This visual interface not only indicates the quality of LODs in an intuitive way, but also provides immediate suggestions for possible LOD improvement through visually striking features. It also allows us to compare different views and perform rendering budget control. A set of interactive techniques is proposed to make LOD adjustment a simple and easy task. We demonstrate the effectiveness and efficiency of our approach on large scientific and medical data sets.
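
    The entropy measure can be paraphrased as follows: each multiresolution block is assigned a weight combining its distortion and its contribution to the image, the weights are normalized into a probability distribution, and the entropy of that distribution scores the LOD selection as a whole. A hedged sketch of this idea (the exact weighting used in the paper may differ):

        import numpy as np

        def lod_entropy(distortion, contribution):
            """Entropy-based quality score for one level-of-detail selection.

            distortion   : per-block error from using a reduced-resolution block
            contribution : per-block importance to the rendered image
            """
            w = np.asarray(distortion, float) * np.asarray(contribution, float)
            p = w / w.sum()
            p = p[p > 0]                       # treat 0*log(0) as 0
            return float(-(p * np.log2(p)).sum())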

  3. The Partition of Multi-Resolution LOD Based on Qtm

    Science.gov (United States)

    Hou, M.-L.; Xing, H.-Q.; Zhao, X.-S.; Chen, J.

    2011-08-01

    The partition hierarchy of the Quaternary Triangular Mesh (QTM) determines the accuracy of spatial analysis and applications based on QTM. To resolve the problem that the partition hierarchy of QTM is limited by the level of the computer hardware, a new method, multi-resolution LOD (Level of Detail) based on QTM, is discussed in this paper. This method makes the resolution of the cells vary with the viewpoint position by partitioning the QTM cells and selecting the particular area according to the viewpoint; by dealing with the cracks caused by different subdivision levels, it satisfies the requirement of locally unlimited partitioning.

  4. THE PARTITION OF MULTI-RESOLUTION LOD BASED ON QTM

    Directory of Open Access Journals (Sweden)

    M.-L. Hou

    2012-08-01

    The partition hierarchy of the Quaternary Triangular Mesh (QTM) determines the accuracy of spatial analysis and applications based on QTM. To resolve the problem that the partition hierarchy of QTM is limited by the level of the computer hardware, a new method, multi-resolution LOD (Level of Detail) based on QTM, is discussed in this paper. This method makes the resolution of the cells vary with the viewpoint position by partitioning the QTM cells and selecting the particular area according to the viewpoint; by dealing with the cracks caused by different subdivision levels, it satisfies the requirement of locally unlimited partitioning.

  5. LOD First Estimates In 7406 SLR San Juan Argentina Station

    Science.gov (United States)

    Pacheco, A.; Podestá, R.; Yin, Z.; Adarvez, S.; Liu, W.; Zhao, L.; Alvis Rojas, H.; Actis, E.; Quinteros, J.; Alacoria, J.

    2015-10-01

    In this paper we show results derived from satellite observations at the San Juan SLR station of the Felix Aguilar Astronomical Observatory (OAFA). The Satellite Laser Ranging (SLR) telescope was installed in early 2006, in accordance with an international cooperation agreement between the San Juan National University (UNSJ) and the Chinese Academy of Sciences (CAS). The SLR has been in successful operation since 2011, using NAOC SLR software for the data processing. This program was designed to calculate satellite orbits and station coordinates; in this work, however, it was used to determine LOD (Length Of Day) time series and the Earth's rotation speed.

  6. Towards an Editable, Versionized LOD Service for Library Data

    Directory of Open Access Journals (Sweden)

    Felix Ostrowski

    2013-02-01

    The North Rhine-Westphalian Library Service Center (hbz) launched its LOD service lobid.org in August 2010 and has since been continuously improving the underlying conversion processes, data models and software. The present paper first explains the background and motivation for developing lobid.org. It then describes the underlying software framework, Phresnel, which is written in PHP and provides presentation and editing capabilities for RDF data based on the Fresnel Display Vocabulary for RDF. The paper gives an overview of the current state of Phresnel development and discusses the technical challenges encountered. Finally, possible prospects for further developing Phresnel are outlined.

  7. Office microlaparoscopic ovarian drilling (OMLOD) versus conventional laparoscopic ovarian drilling (LOD) for women with polycystic ovary syndrome.

    Science.gov (United States)

    Salah, Imaduldin M

    2013-02-01

    This was a prospective controlled study comparing the beneficial effects of office microlaparoscopic ovarian drilling (OMLOD) under augmented local anesthesia, a new treatment option, with those of ovarian drilling using the conventional 10-mm laparoscope (laparoscopic ovarian drilling, LOD) under general anesthesia. The study included 60 anovulatory women with polycystic ovary syndrome (PCOS) who underwent OMLOD (study group) and 60 anovulatory PCOS women who underwent conventional LOD with a 10-mm laparoscope under general anesthesia (comparison group). Transvaginal ultrasound scans and blood sampling to measure the serum concentrations of LH, FSH, testosterone and androstenedione were performed before and after the procedure. Intraoperative and postoperative pain scores were evaluated during the office microlaparoscopic procedure, as was the number of women who needed additional analgesia. Women undergoing OMLOD showed good intraoperative and postoperative pain scores. The number of patients discharged within 2 h after the office procedure was significantly higher, and most patients needed no postoperative analgesia. The LH:FSH ratio, the mean serum concentrations of LH and testosterone, and the free androgen index decreased significantly after both OMLOD and LOD. The mean ovarian volume decreased significantly (P < 0.05) a year after both OMLOD and LOD; there were no significant differences in these results between the two procedures. Augmented local anesthesia allows outpatient bilateral ovarian drilling by microlaparoscopy without general anesthesia. The high pregnancy rate, the simplicity of the method and the faster discharge time offer a new option for patients with PCOS who are resistant to clomiphene citrate. Moreover, ovarian drilling could be performed simultaneously during routine diagnostic microlaparoscopy and integrated into the fertility workup of

  8. Three- and two-point one-loop integrals in heavy particle effective theories

    International Nuclear Information System (INIS)

    Bouzas, A.O.

    2000-01-01

    We give a complete analytical computation of three- and two-point loop integrals occurring in heavy particle theories, involving a velocity change, for arbitrary real values of the external masses and residual momenta. (orig.)

  9. Some exact results for the two-point function of an integrable quantum field theory

    International Nuclear Information System (INIS)

    Creamer, D.B.; Thacker, H.B.; Wilkinson, D.

    1981-02-01

    The two-point correlation function for the quantum nonlinear Schroedinger (delta-function gas) model is studied. An infinite series representation for this function is derived using the quantum inverse scattering formalism. For the case of zero temperature, the infinite coupling (c → infinity) result of Jimbo, Miwa, Mori and Sato is extended to give an exact expression for the order-1/c correction to the two-point function in terms of a Painleve transcendent of the fifth kind.

  10. Solving fuzzy two-point boundary value problem using fuzzy Laplace transform

    OpenAIRE

    Ahmad, Latif; Farooq, Muhammad; Ullah, Saif; Abdullah, Saleem

    2014-01-01

    A natural way to model dynamic systems under uncertainty is to use fuzzy boundary value problems (FBVPs) and related uncertain systems. In this paper we use the fuzzy Laplace transform to find the solution of two-point boundary value problems under generalized Hukuhara differentiability. We illustrate the method on the well-known two-point boundary value problem for the Schrodinger equation, and on a homogeneous boundary value problem. Consequently, we investigate the solutions of FBVPs under as a ne...

  11. Weighted combination of LOD values split into frequency windows

    Science.gov (United States)

    Fernandez, L. I.; Gambis, D.; Arias, E. F.

    In this analysis a one-day combined time series of LOD (length-of-day) estimates is presented. We use individual data series derived by 7 GPS and 3 SLR analysis centers, which routinely contributed to the IERS database over a recent 27-month period (Jul 1996 - Oct 1998). The result is compared to the multi-technique combined series C04 produced by the Central Bureau of the IERS, which is commonly used as a reference for the study of Earth rotation variations. The Frequency Windows Combined Series procedure yields a time series that is close to C04 but shows an amplitude difference, which might explain the evident periodic behavior present in the differences between these two combined series. This method could be useful for generating a new time series to serve as a reference in studies of high-frequency variations of the Earth's rotation.
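
    A frequency-windowed combination of this kind can be sketched with an FFT: each input LOD series is split into a few pass-bands, the per-band components are averaged with technique-dependent weights, and the bands are summed back together. The band edges and weights below are illustrative assumptions, not the values used in the analysis:

        import numpy as np

        def band_split(x, bands, dt=1.0):
            """Split a series (sampled every dt days) into frequency windows."""
            X = np.fft.rfft(x)
            f = np.fft.rfftfreq(len(x), d=dt)      # cycles per day
            return [np.fft.irfft(np.where((f >= lo) & (f < hi), X, 0.0),
                                 n=len(x))
                    for lo, hi in bands]

        def combine_lod(series, weights, bands, dt=1.0):
            """Weighted, band-by-band combination of several LOD series.

            series  : equal-length LOD series (e.g. from GPS and SLR centres)
            weights : weights[i][b] for series i in band b (sum to 1 per band)
            """
            combined = np.zeros(len(series[0]))
            for i, x in enumerate(series):
                for b, part in enumerate(band_split(x, bands, dt)):
                    combined += weights[i][b] * part
            return combined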

  12. Mistakes and Pitfalls Associated with Two-Point Compression Ultrasound for Deep Vein Thrombosis

    Directory of Open Access Journals (Sweden)

    Tony Zitek, MD

    2016-03-01

    Introduction: Two-point compression ultrasound is purportedly a simple and accurate means to diagnose proximal lower extremity deep vein thrombosis (DVT), but the pitfalls of this technique have not been fully elucidated. The objective of this study is to determine the accuracy of emergency medicine resident-performed two-point compression ultrasound, and to determine what technical errors are commonly made by novice ultrasonographers using this technique. Methods: This was a prospective diagnostic test assessment of a convenience sample of adult emergency department (ED) patients suspected of having a lower extremity DVT. After brief training on the technique, residents performed two-point compression ultrasounds on enrolled patients. Subsequently a radiology department ultrasound was performed and used as the gold standard. Residents were instructed to save videos of their ultrasounds for technical analysis. Results: Overall, 288 two-point compression ultrasound studies were performed. There were 28 cases that were deemed to be positive for DVT by radiology ultrasound. Among these 28, 16 were identified by the residents with two-point compression. Among the 260 cases deemed to be negative for DVT by radiology ultrasound, 10 were thought to be positive by the residents using two-point compression. This led to a sensitivity of 57.1% (95% CI [38.8-75.5]) and a specificity of 96.1% (95% CI [93.8-98.5]) for resident-performed two-point compression ultrasound. This corresponds to a positive predictive value of 61.5% (95% CI [42.8-80.2]) and a negative predictive value of 95.4% (95% CI [92.9-98.0]). The positive likelihood ratio is 14.9 (95% CI [7.5-29.5]) and the negative likelihood ratio is 0.45 (95% CI [0.29-0.68]). Video analysis revealed that in four cases the resident did not identify a DVT because the thrombus was isolated to the superficial femoral vein (SFV), which is not evaluated by two-point compression. Moreover, the video analysis revealed that the
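
    The quoted test characteristics follow directly from the reported counts (28 radiology-positive cases with 16 detected; 260 radiology-negative cases with 10 false positives), as this check reproduces:

        tp, fn = 16, 12          # 28 DVTs on radiology ultrasound, 16 caught
        fp, tn = 10, 250         # 260 negatives, 10 false alarms

        sens = tp / (tp + fn)            # 0.571 -> 57.1%
        spec = tn / (tn + fp)            # 0.961 -> 96.1%
        ppv  = tp / (tp + fp)            # 0.615 -> 61.5%
        npv  = tn / (tn + fn)            # 0.954 -> 95.4%
        lr_pos = sens / (1 - spec)       # ~14.9
        lr_neg = (1 - sens) / spec       # ~0.45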

  13. Two-point functions and logarithmic boundary operators in boundary logarithmic conformal field theories

    International Nuclear Information System (INIS)

    Ishimoto, Yukitaka

    2004-01-01

    Amongst conformal field theories, there exist logarithmic conformal field theories such as the c_{p,1} models. We have investigated c_{p,q} models with a boundary in search of logarithmic theories and have found logarithmic solutions of two-point functions in the context of the Coulomb gas picture. We have also found the relations between coefficients in the two-point functions and correlation functions of logarithmic boundary operators, and have confirmed the solutions in [hep-th/0003184]. Other two-point functions and boundary operators have also been studied in the free boson construction of boundary CFT with SU(2)_k symmetry in regard to logarithmic theories. This paper is based on a part of a D. Phil. thesis [hep-th/0312160]. (author)

  14. Some exact results for the two-point function of an integrable quantum field theory

    International Nuclear Information System (INIS)

    Creamer, D.B.; Thacker, H.B.; Wilkinson, D.

    1981-01-01

    The two-point correlation function for the quantum nonlinear Schroedinger (one-dimensional delta-function gas) model is studied. An infinite-series representation for this function is derived using the quantum inverse-scattering formalism. For the case of zero temperature, the infinite-coupling (c→infinity) result of Jimbo, Miwa, Mori, and Sato is extended to give an exact expression for the order-1/c correction to the two-point function in terms of a Painleve transcendent of the fifth kind

  15. TLS for generating multi-LOD of 3D building model

    International Nuclear Information System (INIS)

    Akmalia, R; Setan, H; Majid, Z; Suwardhi, D; Chong, A

    2014-01-01

    Terrestrial Laser Scanners (TLS) are widely used to capture three-dimensional (3D) objects for various applications. Developments in 3D modelling have also led people to visualize the environment in 3D. Visualization of objects in a city environment in 3D can be useful for many applications, but different applications require different kinds of 3D models. Since buildings are important objects, CityGML defines a standard for 3D building models at four different levels of detail (LOD). In this research, the advantages of TLS for capturing buildings and the modelling process for the resulting point cloud are explored. TLS is used to capture all the building details in order to generate multiple LODs. In previous works, this task usually involved the integration of several sensors; here, however, the point cloud from TLS is processed to generate the LOD3 model, and LOD2 and LOD1 are then generalized from the resulting LOD3 model. The result of this research is a guiding process for generating the multi-LOD 3D building model, starting from LOD3, using TLS. Lastly, the visualization of the multi-LOD model is also shown.

  16. An Application to the Prediction of LOD Change Based on General Regression Neural Network

    Science.gov (United States)

    Zhang, X. H.; Wang, Q. J.; Zhu, J. J.; Zhang, H.

    2011-07-01

    Traditional prediction of the LOD (length of day) change was based on linear models, such as the least squares model and the autoregressive technique. Due to the complex non-linear features of the LOD variation, the performance of linear-model predictors is not fully satisfactory. This paper applies a non-linear neural network, the general regression neural network (GRNN) model, to forecast the LOD change; the results are analyzed and compared with those obtained with the back-propagation neural network and other models. The comparison shows that the performance of the GRNN model in the prediction of the LOD change is efficient and feasible.
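
    A general regression neural network is, at heart, Nadaraya-Watson kernel regression: the prediction is a Gaussian-weighted average of the training targets, governed by a single smoothing parameter sigma. A compact sketch of such a predictor applied to lagged LOD values; the embedding length p and sigma are illustrative choices, not the paper's settings:

        import numpy as np

        def grnn_predict(X_train, y_train, x, sigma):
            """GRNN (Specht 1991): kernel-weighted mean of training targets."""
            d2 = np.sum((X_train - x) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * sigma ** 2))
            return float(np.dot(w, y_train) / w.sum())

        def forecast_next(lod, p=10, sigma=0.1):
            """Predict the next LOD value from the previous p values."""
            lod = np.asarray(lod, dtype=float)
            X = np.array([lod[i:i + p] for i in range(len(lod) - p)])
            y = lod[p:]                  # target: the value after each window
            x_new = lod[-p:]             # most recent window, target unknown
            return grnn_predict(X, y, x_new, sigma)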

  17. TLS for generating multi-LOD of 3D building model

    Science.gov (United States)

    Akmalia, R.; Setan, H.; Majid, Z.; Suwardhi, D.; Chong, A.

    2014-02-01

    Terrestrial Laser Scanners (TLS) are widely used to capture three-dimensional (3D) objects for various applications. Developments in 3D modelling have also led people to visualize the environment in 3D. Visualization of objects in a city environment in 3D can be useful for many applications, but different applications require different kinds of 3D models. Since buildings are important objects, CityGML defines a standard for 3D building models at four different levels of detail (LOD). In this research, the advantages of TLS for capturing buildings and the modelling process for the resulting point cloud are explored. TLS is used to capture all the building details in order to generate multiple LODs. In previous works, this task usually involved the integration of several sensors; here, however, the point cloud from TLS is processed to generate the LOD3 model, and LOD2 and LOD1 are then generalized from the resulting LOD3 model. The result of this research is a guiding process for generating the multi-LOD 3D building model, starting from LOD3, using TLS. Lastly, the visualization of the multi-LOD model is also shown.

  18. Infinite-component conformal fields. Spectral representation of the two-point function

    International Nuclear Information System (INIS)

    Zaikov, R.P.; Tcholakov, V.

    1975-01-01

    The infinite-component conformal fields (with respect to the stability subgroup) are considered. The spectral representation of the conformally invariant two-point function is obtained. This function is nonvanishing also for one "fundamental" and one infinite-component field

  19. Conservation laws for a system of two point masses in general relativity

    International Nuclear Information System (INIS)

    Damour, Thibaut; Deruelle, Nathalie

    1981-01-01

    We study the symmetries of the generalized Lagrangian of two point masses in the post-post-Newtonian approximation of General Relativity. We deduce, via Noether's theorem, conservation laws for energy, linear and angular momentum, as well as a generalization of the center-of-mass theorem.

  20. The finite temperature density matrix and two-point correlations in the antiferromagnetic XXZ chain

    Science.gov (United States)

    Göhmann, Frank; Hasenclever, Nils P.; Seel, Alexander

    2005-10-01

    We derive finite temperature versions of integral formulae for the two-point correlation functions in the antiferromagnetic XXZ chain. The derivation is based on the summation of density matrix elements characterizing a finite chain segment of length m. On this occasion we also supply a proof of the basic integral formula for the density matrix presented in an earlier publication.

  1. A New Numerical Algorithm for Two-Point Boundary Value Problems

    OpenAIRE

    Guo, Lihua; Wu, Boying; Zhang, Dazhi

    2014-01-01

    We present a new numerical algorithm for two-point boundary value problems. We first present the exact solution in the form of a series and then prove that the n-term numerical solution converges uniformly to the exact solution. Furthermore, we establish the numerical stability and carry out an error analysis. The numerical results show the effectiveness of the proposed algorithm.
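
    The abstract's algorithm is specific (a series solution with a uniform-convergence proof), but for orientation, a linear two-point boundary value problem such as u'' + u = 0, u(a) = alpha, u(b) = beta is routinely handled by central differences and a single tridiagonal solve. The sketch below is that generic textbook method, shown purely for contrast; it is not the authors' algorithm:

        import numpy as np

        def solve_bvp_linear(a, b, alpha, beta, n):
            """Solve u'' + u = 0, u(a)=alpha, u(b)=beta on n interior nodes."""
            t = np.linspace(a, b, n + 2)
            h = t[1] - t[0]
            # Discretization: u[i-1] + (h^2 - 2) u[i] + u[i+1] = 0.
            A = (np.diag(np.full(n, h * h - 2.0))
                 + np.diag(np.ones(n - 1), 1)
                 + np.diag(np.ones(n - 1), -1))
            rhs = np.zeros(n)
            rhs[0] -= alpha             # boundary values move to the RHS
            rhs[-1] -= beta
            u = np.linalg.solve(A, rhs)
            return t, np.concatenate([[alpha], u, [beta]])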

  2. Holographic two-point functions for 4d log-gravity

    NARCIS (Netherlands)

    Johansson, Niklas; Naseh, Ali; Zojer, Thomas

    We compute holographic one- and two-point functions of critical higher-curvature gravity in four dimensions. The two most important operators are the stress tensor and its logarithmic partner, sourced by ordinary massless and by logarithmic non-normalisable gravitons, respectively. In addition, the

  3. Generation of arbitrary two-point correlated directed networks with given modularity

    International Nuclear Information System (INIS)

    Zhou Jie; Xiao Gaoxi; Wong, Limsoon; Fu Xiuju; Ma, Stefan; Cheng, Tee Hiang

    2010-01-01

    In this Letter, we introduce measures of correlation in directed networks and develop an efficient algorithm for generating directed networks with arbitrary two-point correlation. Furthermore, a method is proposed for adjusting community structure in directed networks without changing the correlation. Effectiveness of both methods is verified by numerical results.

  4. Implementation of a terrain LoD algorithm (Implementace algoritmu LoD terénu)

    OpenAIRE

    Radil, Přemek

    2012-01-01

    This thesis deals with the implementation of the terrain LoD visualization algorithm Seamless Patches for GPU-Based Terrain Rendering as an extension of the Coin3D library. It presents the techniques by which this algorithm displays large terrain datasets. The entire terrain is composed of patches stored in a hierarchical structure. The patch hierarchy is traversed at run time, and active patches are generated from it based on the observer's position. Each patch consists of predefined tiles and connecting...

  5. Proposal for a new LOD and multi-representation concept for CityGML

    NARCIS (Netherlands)

    Löwner, Marc-O; Gröger, Gerhard; Benner, Joachim; Biljecki, F.; Nagel, Claus; Dimopoulou, E.; van Oosterom, P.

    2016-01-01

    The Open Geospatial Consortium (OGC) CityGML standard offers a Level of Detail (LoD) concept that enables the representation of CityGML features from a very detailed to a less detailed description. Due to a rising application variety, the current LoD concept seems to be too inflexible. Here, we

  6. An LOD with improved breakdown voltage in full-frame CCD devices

    Science.gov (United States)

    Banghart, Edmund K.; Stevens, Eric G.; Doan, Hung Q.; Shepherd, John P.; Meisenzahl, Eric J.

    2005-02-01

    In full-frame image sensors, lateral overflow drain (LOD) structures are typically formed along the vertical CCD shift registers to provide a means for preventing charge blooming in the imager pixels. In a conventional LOD structure, the n-type LOD implant is made through the thin gate dielectric stack in the device active area and adjacent to the thick field oxidation that isolates the vertical CCD columns of the imager. In this paper, a novel LOD structure is described in which the n-type LOD impurities are placed directly under the field oxidation and are, therefore, electrically isolated from the gate electrodes. By reducing the electrical fields that cause breakdown at the silicon surface, this new structure permits a larger amount of n-type impurities to be implanted for the purpose of increasing the LOD conductivity. As a consequence of the improved conductance, the LOD width can be significantly reduced, enabling the design of higher resolution imaging arrays without sacrificing charge capacity in the pixels. Numerical simulations with MEDICI of the LOD leakage current are presented that identify the breakdown mechanism, while three-dimensional solutions to Poisson's equation are used to determine the charge capacity as a function of pixel dimension.

  7. Are LOD and LOQ Reliable Parameters for Sensitivity Evaluation of Spectroscopic Methods?

    Science.gov (United States)

    Ershadi, Saba; Shayanfar, Ali

    2018-03-22

    The limit of detection (LOD) and the limit of quantification (LOQ) are common parameters for assessing the sensitivity of analytical methods. In this study, the LOD and LOQ of previously reported terbium-sensitized analysis methods were calculated by different approaches, and the results were compared with the sensitivity parameter [lower limit of quantification (LLOQ)] of the U.S. Food and Drug Administration guidelines. The details of the calibration curve and the standard deviation of blank samples of three different terbium-sensitized luminescence methods for the quantification of mycophenolic acid, enrofloxacin, and silibinin were used for the calculation of LOD and LOQ. A comparison of the LOD and LOQ values calculated by the various approaches with the LLOQ shows considerable differences. These significant differences should be considered in the sensitivity evaluation of spectroscopic methods.
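
    For reference, the calibration-based definitions that such comparisons usually start from are LOD = 3.3*sigma/S and LOQ = 10*sigma/S, with sigma the standard deviation of the blank (or of the intercept) and S the slope of the calibration curve; these are the standard ICH-style formulas, quoted here as background rather than taken from the paper:

        def lod_loq(sigma_blank, slope):
            """ICH-style detection and quantification limits.

            sigma_blank : standard deviation of the blank response
            slope       : calibration-curve slope (response per unit conc.)
            """
            lod = 3.3 * sigma_blank / slope
            loq = 10.0 * sigma_blank / slope
            return lod, loq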

  8. One-Step Leapfrog LOD-BOR-FDTD Algorithm with CPML Implementation

    Directory of Open Access Journals (Sweden)

    Yi-Gang Wang

    2016-01-01

    An unconditionally stable one-step leapfrog locally one-dimensional finite-difference time-domain (LOD-FDTD) algorithm for bodies of revolution (BOR) is presented. The equations of the proposed algorithm are obtained by algebraic manipulation of those used in the conventional LOD-BOR-FDTD algorithm. The equations for the z-direction electric and magnetic fields in the proposed algorithm require special treatment. The new algorithm achieves higher computational efficiency while preserving the properties of the conventional LOD-BOR-FDTD algorithm. Moreover, the convolutional perfectly matched layer (CPML) is introduced into the one-step leapfrog LOD-BOR-FDTD algorithm. The equation of the one-step leapfrog CPML is concise. Numerical results show that its reflection error is small. It can be concluded that a similar CPML scheme can also easily be applied to the one-step leapfrog LOD-FDTD algorithm in the Cartesian coordinate system.

  9. Holographic two-point functions for Janus interfaces in the D1/D5 CFT

    Energy Technology Data Exchange (ETDEWEB)

    Chiodaroli, Marco [Department of Physics and Astronomy, Uppsala University, SE-75108 Uppsala (Sweden); Estes, John [Department of Physics, Long Island University,1 University Plaza, Brooklyn, NY 11201 (United States); Korovin, Yegor [Max-Planck-Institut für Gravitationsphysik, Albert-Einstein-Institut, Am Mühlenberg 1, 14476 Golm (Germany)

    2017-04-26

    This paper investigates scalar perturbations in the top-down supersymmetric Janus solutions dual to conformal interfaces in the D1/D5 CFT, finding analytic closed-form solutions. We obtain an explicit representation of the bulk-to-bulk propagator and extract the two-point correlation function of the dual operator with itself, whose form is not fixed by symmetry alone. We give an expression involving the sum of conformal blocks associated with the bulk-defect operator product expansion and briefly discuss finite-temperature extensions. To our knowledge, this is the first computation of a two-point function which is not completely determined by symmetry for a fully-backreacted, top-down holographic defect.

  10. Duality of two-point functions for confined non-relativistic quark-antiquark systems

    International Nuclear Information System (INIS)

    Fishbane, P.M.; Gasiorowicz, S.G.; Kaus, P.

    1985-01-01

    An analog to the scattering matrix describes the spectrum and high-energy behavior of confined systems. We show that for non-relativistic systems this S-matrix is identical to a two-point function which transparently describes the bound states for all angular momenta. Confined systems can thus be described in a dual fashion. This result makes it possible to study the modification of linear trajectories (originating in a long-range confining potential) due to short-range forces which are unknown except for the way in which they modify the asymptotic behavior of the two-point function. A type of effective range expansion is one way to calculate the energy shifts. 9 refs

  11. Mean density and two-point correlation function for the CfA redshift survey slices

    International Nuclear Information System (INIS)

    De Lapparent, V.; Geller, M.J.; Huchra, J.P.

    1988-01-01

    The effects of large-scale inhomogeneities on the determination of the mean number density and the two-point spatial correlation function were investigated for two complete slices of the extension of the Center for Astrophysics (CfA) redshift survey (de Lapparent et al., 1986). It was found that the mean galaxy number density for the two strips is uncertain by 25 percent, more so than previously estimated. The large uncertainty in the mean density introduces substantial uncertainty into the determination of the two-point correlation function, particularly at large scales; thus, for the 12-deg slice of the CfA redshift survey, the amplitude of the correlation function at intermediate scales is uncertain by a factor of 2. The large uncertainties in the correlation functions might reflect the lack of a fair sample. 45 references
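
    The sensitivity to the assumed mean density can be made explicit: pair counts scale with the square of the number density, so normalizing them with a mis-specified density biases the estimated correlation function multiplicatively. Schematically (an illustration of the stated error propagation, not the authors' exact analysis):

        1 + \xi_{\mathrm{est}}(r)
        = \left(\frac{\bar{n}_{\mathrm{true}}}{\bar{n}_{\mathrm{assumed}}}\right)^{2}
          \left[1 + \xi_{\mathrm{true}}(r)\right]

    so a 25 percent density error rescales 1 + ξ by (1.25)² ≈ 1.56, which at scales where ξ ≲ 1 readily changes the inferred amplitude of ξ by a factor of order 2.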

  12. Existence and uniqueness for a two-point interface boundary value problem

    Directory of Open Access Journals (Sweden)

    Rakhim Aitbayev

    2013-10-01

    We obtain easily verifiable sufficient conditions for the existence and uniqueness of piecewise smooth solutions of a linear two-point boundary-value problem with general interface conditions. The coefficients of the differential equation may have jump discontinuities at the interface point. As an example, the conditions obtained are applied to a problem with typical interface conditions such as perfect contact, non-perfect contact, and flux jump conditions.

  13. Two-point method uncertainty during control and measurement of cylindrical element diameters

    Science.gov (United States)

    Glukhov, V. I.; Shalay, V. V.; Radev, H.

    2018-04-01

    The article addresses the urgent problem of the reliability of measurements of the geometric specifications of technical products. The purpose of the article is to improve the quality of the control of part linear sizes by the two-point measurement method. The task of the article is to investigate extended methodical uncertainties in measuring the linear sizes of cylindrical elements. The investigation method is geometric modeling of the shape and location deviations of the element surfaces in a rectangular coordinate system. The studies were carried out for elements of various service use, taking into account their informativeness, corresponding to the kinematic pair classes of theoretical mechanics and the number of degrees of freedom constrained by the element in its datum function. Cylindrical elements with informativeness of 4, 2, 1 and 0 (zero) were investigated. The uncertainties in two-point measurements were estimated by comparing the results of linear dimension measurements with the maximum and minimum functional diameters of the element material. Methodical uncertainty arises when cylindrical elements with maximum informativeness have shape deviations of the cut and curvature types, and it arises when measuring the element's average size for all types of shape deviations. The two-point measurement method cannot take into account the location deviations of a dimensional element, so its use for elements with informativeness less than the maximum creates unacceptable methodical uncertainties in measurements of the maximum, minimum and average linear dimensions. Similar methodical uncertainties also exist in the arbitration control of the linear dimensions of cylindrical elements by limiting two-point gauges.

  14. On one two-point BVP for the fourth order linear ordinary differential equation

    Czech Academy of Sciences Publication Activity Database

    Mukhigulashvili, Sulkhan; Manjikashvili, M.

    2017-01-01

    Vol. 24, No. 2 (2017), pp. 265-275 ISSN 1072-947X Institutional support: RVO:67985840 Keywords: fourth order linear ordinary differential equations * two-point boundary value problems Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 0.290, year: 2016 https://www.degruyter.com/view/j/gmj.2017.24.issue-2/gmj-2016-0077/gmj-2016-0077.xml

  15. A model for the two-point velocity correlation function in turbulent channel flow

    International Nuclear Information System (INIS)

    Sahay, A.; Sreenivasan, K.R.

    1996-01-01

    A relatively simple analytical expression is presented to approximate the equal-time, two-point, double-velocity correlation function in turbulent channel flow. To assess the accuracy of the model, we perform the spectral decomposition of the integral operator having the model correlation function as its kernel. Comparisons of the empirical eigenvalues and eigenfunctions with those constructed from direct numerical simulations data show good agreement. copyright 1996 American Institute of Physics

  16. On one two-point BVP for the fourth order linear ordinary differential equation

    Czech Academy of Sciences Publication Activity Database

    Mukhigulashvili, Sulkhan; Manjikashvili, M.

    2017-01-01

    Vol. 24, No. 2 (2017), pp. 265-275 ISSN 1072-947X Institutional support: RVO:67985840 Keywords: fourth order linear ordinary differential equations * two-point boundary value problems Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 0.290, year: 2016 https://www.degruyter.com/view/j/gmj.2017.24.issue-2/gmj-2016-0077/gmj-2016-0077.xml

  17. An integral constraint for the evolution of the galaxy two-point correlation function

    International Nuclear Information System (INIS)

    Peebles, P.J.E.; Groth, E.J.

    1976-01-01

    Under some conditions an integral over the galaxy two-point correlation function, xi(x,t), evolves with the expansion of the universe in a simple manner easily computed from linear perturbation theory. This provides a useful constraint on the possible evolution of xi(x,t) itself. We test the integral constraint with both an analytic model and numerical N-body simulations for the evolution of irregularities in an expanding universe. Some applications are discussed. (orig.)

  18. Comparison of Optimization and Two-point Methods in Estimation of Soil Water Retention Curve

    Science.gov (United States)

    Ghanbarian-Alavijeh, B.; Liaghat, A. M.; Huang, G.

    2009-04-01

    The soil water retention curve (SWRC) is one of the soil hydraulic properties whose direct measurement is time-consuming and expensive. Since its measurement is unavoidable in environmental studies, e.g. investigations of unsaturated hydraulic conductivity and solute transport, this study attempts to predict the soil water retention curve from two measured points. Using the Cresswell and Paydar (1996) method (the two-point method) and an optimization method developed in this study on the basis of two points of the SWRC, the parameters of the Tyler and Wheatcraft (1990) model (fractal dimension and air entry value) were estimated; water contents at different matric potentials were then estimated and compared with their measured values (n=180). For each method, we used both 3 and 1500 kPa (case 1) and 33 and 1500 kPa (case 2) as the two points of the SWRC. The calculated RMSE values showed that for the Cresswell and Paydar (1996) method there is no significant difference between case 1 and case 2, although the RMSE value in case 2 (2.35) was slightly lower than in case 1 (2.37). The results also showed that the optimization method developed in this study had significantly lower RMSE values for cases 1 (1.63) and 2 (1.33) than the Cresswell and Paydar (1996) method.
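
    Under the Tyler and Wheatcraft (1990) model, θ(ψ) = θs (ψa/ψ)^(3-D) for ψ ≥ ψa, two measured points determine both parameters in closed form. A sketch of that inversion, assuming the saturated water content θs is known (an assumption of this illustration, e.g. taken from porosity):

        import math

        def fit_tyler_wheatcraft(psi1, theta1, psi2, theta2, theta_s):
            """Fractal dimension D and air-entry value psi_a from two points."""
            # theta1/theta2 = (psi2/psi1)**(3 - D)  ->  solve for D
            D = 3.0 - math.log(theta1 / theta2) / math.log(psi2 / psi1)
            # theta1 = theta_s * (psi_a/psi1)**(3 - D)  ->  solve for psi_a
            psi_a = psi1 * (theta1 / theta_s) ** (1.0 / (3.0 - D))
            return D, psi_a

    With the two points at 33 and 1500 kPa this matches the paper's "case 2" setup; how the study's own optimization method uses the same two points is not detailed in the abstract.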

  19. Gauge-fixing parameter dependence of two-point gauge-variant correlation functions

    International Nuclear Information System (INIS)

    Zhai, C.

    1996-01-01

    The gauge-fixing parameter ξ dependence of two-point gauge-variant correlation functions is studied for QED and QCD. We show that, in three Euclidean dimensions, or for four-dimensional thermal gauge theories, the usual procedure of getting a general covariant gauge-fixing term by averaging over a class of covariant gauge-fixing conditions leads to a nontrivial gauge-fixing parameter dependence in gauge-variant two-point correlation functions (e.g., fermion propagators). This nontrivial gauge-fixing parameter dependence modifies the large-distance behavior of the two-point correlation functions by introducing additional exponentially decaying factors. These factors are the origin of the gauge dependence encountered in some perturbative evaluations of the damping rates and the static chromoelectric screening length in a general covariant gauge. To avoid this modification of the long-distance behavior introduced by performing the average over a class of covariant gauge-fixing conditions, one can either choose a vanishing gauge-fixing parameter or apply an unphysical infrared cutoff. copyright 1996 The American Physical Society

  20. Influence of LOD variations on seismic energy release

    Science.gov (United States)

    Riguzzi, F.; Krumm, F.; Wang, K.; Kiszely, M.; Varga, P.

    2009-04-01

    Tidal friction causes significant time variations of geodynamical parameters, among them the geometrical flattening. The axial despinning of the Earth due to tidal friction, through the change of flattening, generates incremental meridional and azimuthal stresses. The stress pattern in an incompressible elastic upper mantle and crust is symmetric about the equator and has its inflection points at the critical latitude close to ±45°. Consequently, the distribution of seismic energy released by strong, shallow-focus earthquakes should also have sharp maxima at this latitude. To investigate the influence of length of day (LOD) variations on earthquake activity, an earthquake catalogue of the strongest seismic events (M>7.0) was compiled for the period 1900-2007. It is shown with the use of this catalogue that for the studied time interval the catalogue is complete and contains the seismic events responsible for more than 90% of the released seismic energy. Study of the catalogue for earthquakes M>7.0 shows that the seismic energy discharged by the strongest seismic events has significant maxima at ±45°, which makes it probable that the seismic activity of our planet is influenced by an external component, i.e. by tidal friction, which acts through the variation it causes in the hydrostatic figure of the Earth. The distribution along latitude of earthquake numbers and energies was also investigated for global linear tectonic structures, such as mid-ocean ridges and subduction zones. It can be shown that the number of shallow-focus shocks has a distribution along latitude similar to that of the linear tectonic structures. This means that the position of the foci of seismic events is mainly controlled by tectonic activity.

  1. A similarity hypothesis for the two-point correlation tensor in a temporally evolving plane wake

    Science.gov (United States)

    Ewing, D. W.; George, W. K.; Moser, R. D.; Rogers, M. M.

    1995-01-01

    The analysis demonstrated that the governing equations for the two-point velocity correlation tensor in the temporally evolving wake admit similarity solutions, which include the similarity solutions for the single-point moments as a special case. The resulting equations for the similarity solutions include two constants, beta and Re(sub sigma), that are ratios of three characteristic time scales of processes in the flow: a viscous time scale, a time scale characteristic of the spread rate of the flow, and a characteristic time scale of the mean strain rate. The values of these ratios depend on the initial conditions of the flow and are most likely measures of the coherent structures in the initial conditions. The occurrence of these constants in the governing equations for the similarity solutions indicates that the solutions will, in general, only be the same for two flows if the two constants are equal (and hence the coherent structures in the flows are related). The comparisons between the predictions of the similarity hypothesis and the data presented here and elsewhere indicate that the similarity solutions for the two-point correlation tensor provide a good approximation for those motions that are not significantly affected by the boundary conditions caused by the finite extent of real flows. Thus, the two-point similarity hypothesis provides a useful tool for both numerical and physical experimentalists that can be used to examine how the finite extent of real flows affects the evolution of the different scales of motion in the flow.

  2. A priori bounds for solutions of two-point boundary value problems using differential inequalities

    International Nuclear Information System (INIS)

    Vidossich, G.

    1979-01-01

    Two-point boundary value problems for systems of differential equations are studied with a new approach based on differential inequalities of first order. This leads to the following results: (i) one-sided conditions are enough, in the sense that the inner product is substituted for the norm; (ii) the upper bound exists for practically any kind of equation and boundary value problem if the interval is sufficiently small, since it depends on the Peano existence theorem; (iii) the bound seems convenient when the equation has some singularity in t as well as when singular problems are considered. (author)

  3. Use of Green's functions in the numerical solution of two-point boundary value problems

    Science.gov (United States)

    Gallaher, L. J.; Perlin, I. E.

    1974-01-01

    This study investigates the use of Green's functions in the numerical solution of the two-point boundary value problem. The first part deals with the role of the Green's function in solving both linear and nonlinear second order ordinary differential equations with boundary conditions and systems of such equations. The second part describes procedures for the numerical construction of Green's functions and considers briefly the conditions for their existence. Finally, there is a description of some numerical experiments using nonlinear problems for which the known existence, uniqueness or convergence theorems do not apply. Examples here include some problems in finding rendezvous orbits of the restricted three-body system.

  4. On application of the S-matrix two-point function to nuclear data evaluation

    International Nuclear Information System (INIS)

    Igarasi, S.

    1992-01-01

    A statistical model calculation using the S-matrix two-point function (STF) was tried. The results were compared with those calculated with the Hauser-Feshbach formula (HF) with and without resonance level-width fluctuation corrections (WFC). The STF gave almost the same cross sections as those calculated using Moldauer's degrees of freedom for the χ²-distributions (MCD). The effect of the WFC on the final states in the continuum was also studied using the HF with WFC of the MCD and of the Porter-Thomas distribution (PTD). The HF with the MCD is recommended for practical calculation of the cross sections. (orig.)

  5. Futures market efficiency diagnostics via temporal two-point correlations. Russian market case study

    OpenAIRE

    Kopytin, Mikhail; Kazantsev, Evgeniy

    2013-01-01

    Using a two-point correlation technique, we study the emergence of market efficiency in the emergent Russian futures market by focusing on lagged correlations. The correlation strength of leader-follower effects in the lagged inter-market correlations on the hourly time frame is seen to be significant initially (2009-2011) but gradually goes down, as the erstwhile leader instruments -- crude oil, the USD/RUB exchange rate, and the Russian stock market index -- seem to lose their leader status. An i...
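
    The leader-follower diagnostic amounts to computing the correlation of one return series with another at non-zero lags and asking whether leader-past versus follower-present correlations dominate the reverse ordering. A minimal sketch (equal-length hourly return series and a small lag window are illustrative assumptions):

        import numpy as np

        def lagged_corr(x, y, max_lag):
            """Correlation of x[t - lag] with y[t] for each lag in the window.

            Large correlations at positive lags suggest that x leads y.
            """
            x = (x - x.mean()) / x.std()
            y = (y - y.mean()) / y.std()
            out = {}
            for lag in range(-max_lag, max_lag + 1):
                if lag >= 0:
                    a, b = x[:len(x) - lag], y[lag:]
                else:
                    a, b = x[-lag:], y[:len(y) + lag]
                out[lag] = float(np.mean(a * b))
            return out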

  6. Two-point boundary value and Cauchy formulations in an axisymmetrical MHD equilibrium problem

    International Nuclear Information System (INIS)

    Atanasiu, C.V.; Subbotin, A.A.

    1999-01-01

    In this paper we present two equilibrium solvers for axisymmetrical toroidal configurations, both based on the expansion in poloidal angle method. The first one has been conceived as a two-point boundary value solver in a system of coordinates with straight field lines, while the second one uses a well-conditioned Cauchy formulation of the problem in a general curvilinear coordinate system. In order to check the capability of our moment methods to describe equilibrium accurately, a comparison of the moment solutions with analytical solutions obtained for a Solov'ev equilibrium has been performed. (author)

  7. Cycles, scaling and crossover phenomenon in length of the day (LOD) time series

    Science.gov (United States)

    Telesca, Luciano

    2007-06-01

    The dynamics of the temporal fluctuations of the length of day (LOD) time series from January 1, 1962 to November 2, 2006 were investigated. The power spectrum of the whole time series reveals annual, semi-annual, decadal and daily oscillatory behaviors, correlated with oceanic-atmospheric processes and interactions. The scaling behavior was analyzed by using detrended fluctuation analysis (DFA), which revealed two different scaling regimes separated by a crossover timescale at approximately 23 days. A flicker-noise process can describe the dynamics of the LOD series over intermediate and long timescales, while Brownian dynamics characterizes the LOD time series over short timescales.
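
    Detrended fluctuation analysis, as used here, integrates the demeaned series, removes a local polynomial trend in windows of size s, and reads scaling exponents from the slope of log F(s) against log s; a crossover such as the ~23-day one appears as a kink between two slopes. A compact DFA-1 sketch:

        import numpy as np

        def dfa_fluctuation(x, scales):
            """Return the DFA-1 fluctuation function F(s) for each scale s."""
            y = np.cumsum(x - np.mean(x))         # integrated profile
            F = []
            for s in scales:
                n_seg = len(y) // s
                resid = []
                t = np.arange(s)
                for i in range(n_seg):
                    seg = y[i * s:(i + 1) * s]
                    coef = np.polyfit(t, seg, 1)  # linear detrend per window
                    resid.append(np.mean((seg - np.polyval(coef, t)) ** 2))
                F.append(np.sqrt(np.mean(resid)))
            return np.array(F)

    The scaling exponent is the slope of log F(s) versus log s; fitting separate slopes on either side of the crossover timescale distinguishes the flicker-noise regime from the Brownian one.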

  8. Two-point concrete resistivity measurements: interfacial phenomena at the electrode–concrete contact zone

    International Nuclear Information System (INIS)

    McCarter, W J; Taha, H M; Suryanto, B; Starrs, G

    2015-01-01

    AC impedance spectroscopy measurements are used to critically examine the end-to-end (two-point) testing technique employed in evaluating the bulk electrical resistivity of concrete. In particular, this paper focusses on the interfacial contact region between the electrode and the specimen and on the influence of the contacting medium and measurement frequency on the impedance response. Two-point and four-point electrode configurations were compared, and modelling of the impedance response was undertaken to identify and quantify the contribution of the electrode-specimen contact region to the measured impedance. Measurements are presented in both Bode and Nyquist formats to aid interpretation. Concrete mixes conforming to BS EN 206-1 and BS 8500-1 were investigated, including concretes containing the supplementary cementitious materials fly ash and ground granulated blast-furnace slag. A measurement protocol is presented for the end-to-end technique in terms of test frequency and electrode-specimen contacting medium in order to minimize the electrode-specimen interfacial effect and ensure correct measurement of the bulk resistivity. (paper)

  9. Intrinsic strength of sodium borosilicate glass fibers by using a two-point bending technique

    International Nuclear Information System (INIS)

    Nishikubo, Y; Yoshida, S; Sugawara, T; Matsuoka, J

    2011-01-01

    Flaws existing on a glass surface can be divided into two types, extrinsic and intrinsic. Whereas the extrinsic flaws are generated during processing and use, the intrinsic flaws are regarded as structural defects resulting from thermal fluctuations. It is known that the extrinsic flaws determine glass strength, but the effects of the intrinsic flaws on glass strength are still unclear. Since the averaged bond strength and the intrinsic flaws can both be expected to affect the intrinsic strength, the intrinsic strength of glass should depend on the glass composition. In this study, the intrinsic failure strains of glass fibers with the compositions 20Na2O-40xB2O3-(80-40x)SiO2 (mol%, x = 0, 0.5, 1.0, 1.5) were measured using a two-point bending technique. The failure strength was estimated from the failure strain and the Young's modulus of the glass. It is found that the two-point bending strength of the glass fibers decreases with increasing B2O3 content. The effects of the glass composition on the intrinsic strength are discussed in terms of elastic and inelastic deformation behaviors prior to fracture.

  10. Non-equilibrium scalar two point functions in AdS/CFT

    International Nuclear Information System (INIS)

    Keränen, Ville; Kleinert, Philipp

    2015-01-01

    In the first part of the paper, we discuss different versions of the AdS/CFT dictionary out of equilibrium. We show that the Skenderis-van Rees prescription and the “extrapolate” dictionary are equivalent at the level of “in-in” two point functions of free scalar fields in arbitrary asymptotically AdS spacetimes. In the second part of the paper, we calculate two point correlation functions in dynamical spacetimes using the “extrapolate” dictionary. These calculations are performed for conformally coupled scalar fields in examples of spacetimes undergoing gravitational collapse, the AdS2-Vaidya spacetime and the AdS3-Vaidya spacetime, which allow us to address the problem of thermalization following a quench in the boundary field theory. The computation of the correlators is formulated as an initial value problem in the bulk spacetime. Finally, we compare our results for AdS3-Vaidya to results in the previous literature obtained using the geodesic approximation and we find qualitative agreement.

  11. Non-equilibrium scalar two point functions in AdS/CFT

    Energy Technology Data Exchange (ETDEWEB)

    Keränen, Ville [Rudolf Peierls Centre for Theoretical Physics, University of Oxford,1 Keble Road, Oxford OX1 3NP (United Kingdom); Kleinert, Philipp [Rudolf Peierls Centre for Theoretical Physics, University of Oxford,1 Keble Road, Oxford OX1 3NP (United Kingdom); Merton College, University of Oxford,Merton Street, Oxford OX1 4JD (United Kingdom)

    2015-04-22

    In the first part of the paper, we discuss different versions of the AdS/CFT dictionary out of equilibrium. We show that the Skenderis-van Rees prescription and the “extrapolate” dictionary are equivalent at the level of “in-in” two point functions of free scalar fields in arbitrary asymptotically AdS spacetimes. In the second part of the paper, we calculate two point correlation functions in dynamical spacetimes using the “extrapolate” dictionary. These calculations are performed for conformally coupled scalar fields in examples of spacetimes undergoing gravitational collapse, the AdS2-Vaidya spacetime and the AdS3-Vaidya spacetime, which allow us to address the problem of thermalization following a quench in the boundary field theory. The computation of the correlators is formulated as an initial value problem in the bulk spacetime. Finally, we compare our results for AdS3-Vaidya to results in the previous literature obtained using the geodesic approximation and we find qualitative agreement.

  12. A two-point diagnostic for the H II galaxy Hubble diagram

    Science.gov (United States)

    Leaf, Kyle; Melia, Fulvio

    2018-03-01

    A previous analysis of starburst-dominated H II galaxies and H II regions has demonstrated a statistically significant preference for the Friedmann-Robertson-Walker cosmology with zero active mass, known as the Rh = ct universe, over Λ cold dark matter (ΛCDM) and its related dark-matter parametrizations. In this paper, we employ a two-point diagnostic with these data to present a complementary statistical comparison of Rh = ct with Planck ΛCDM. Our two-point diagnostic compares, in a pairwise fashion, the difference between the distance modulus measured at two redshifts with that predicted by each cosmology. Our results support the conclusion drawn by a previous comparative analysis demonstrating that Rh = ct is statistically preferred over Planck ΛCDM. But we also find that the reported errors in the H II measurements may not be purely Gaussian, perhaps due to partial contamination by non-Gaussian systematic effects. The use of H II galaxies and H II regions as standard candles may be improved even further with better handling of the systematics in these sources.
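
    The pairwise construction lends itself to a compact implementation. A minimal sketch follows, with synthetic stand-ins for the measured and model distance moduli; a real analysis would also weight each pair by its measurement uncertainty.

      import numpy as np
      from itertools import combinations

      def two_point_deltas(mu_obs, mu_model):
          # Pairwise differences of distance moduli: observation minus model.
          return np.array([(mu_obs[i] - mu_obs[j]) - (mu_model[i] - mu_model[j])
                           for i, j in combinations(range(len(mu_obs)), 2)])

      rng = np.random.default_rng(1)
      mu_model = rng.uniform(38.0, 46.0, size=50)      # stand-in predictions
      mu_obs = mu_model + rng.normal(0.0, 0.3, size=50)
      d = two_point_deltas(mu_obs, mu_model)
      print(d.mean(), d.std())   # a favoured model leaves deltas centred on zero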

  13. Forecasting irregular variations of UT1-UTC and LOD data caused by ENSO

    Science.gov (United States)

    Niedzielski, T.; Kosek, W.

    2008-04-01

    The research focuses on the prediction of LOD and UT1-UTC time series up to one year into the future, with particular emphasis on improving predictions during El Niño and La Niña events. The polynomial-harmonic least-squares model is applied to fit the deterministic function to the LOD data. The stochastic residuals, computed as the difference between the LOD data and the polynomial-harmonic model, reveal extreme values driven by El Niño or La Niña. These peaks are modeled by a stochastic bivariate autoregressive prediction, which exploits the auto- and cross-correlations between LOD and the axial component of the atmospheric angular momentum. This technique allows one to derive more accurate predictions than purely univariate forecasts, particularly during El Niño/La Niña events.
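
    A bivariate autoregressive forecast of this kind can be sketched with a vector autoregression. The series below are synthetic stand-ins for the LOD residuals and the axial atmospheric angular momentum component, not real EOP or AAM data.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.api import VAR

      rng = np.random.default_rng(0)
      n = 600
      aam = np.sin(2*np.pi*np.arange(n)/50.0) + 0.3*rng.standard_normal(n)
      lod = 0.8*np.r_[np.zeros(5), aam[:-5]] + 0.2*rng.standard_normal(n)
      data = pd.DataFrame({"lod": lod, "aam": aam})

      fit = VAR(data).fit(maxlags=15, ic="aic")    # bivariate AR, order by AIC
      fcst = fit.forecast(data.values[-fit.k_ar:], steps=30)
      print(fcst[:3])   # joint LOD/AAM forecasts exploit the cross-correlation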

  14. PROPOSAL FOR A NEW LOD AND MULTI-REPRESENTATION CONCEPT FOR CITYGML

    Directory of Open Access Journals (Sweden)

    M.-O. Löwner

    2016-10-01

    The Open Geospatial Consortium (OGC) CityGML standard offers a Level of Detail (LoD) concept that enables the representation of CityGML features from a very detailed to a less detailed description. Due to the rising variety of applications, the current LoD concept appears too inflexible. Here, we present a multi-representation concept (MRC) that enables a user-defined definition of LoDs. Because CityGML is an international standard, official profiles of the MRC are proposed. However, encoding the defined profiles reveals many problems, including mapping the conceptual model to the normative encoding, missing technologies, and so on. Therefore, we propose to use the MRC as a meta-model for the further definition of an LoD concept for CityGML 3.0.

  15. Application of General Regression Neural Network to the Prediction of LOD Change

    Science.gov (United States)

    Zhang, Xiao-Hong; Wang, Qi-Jie; Zhu, Jian-Jun; Zhang, Hao

    2012-01-01

    Traditional methods for predicting the change in length of day (LOD change) are mainly based on linear models, such as the least-squares model and the autoregression model. However, LOD change comprises complicated non-linear factors, and the prediction performance of linear models is often unsatisfactory. Thus, a non-linear neural network, the general regression neural network (GRNN) model, is applied to the prediction of LOD change, and the result is compared with the predicted results obtained from the BP (back propagation) neural network model and other models. The comparison shows that the application of the GRNN to the prediction of LOD change is highly effective and feasible.
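
    A GRNN is, at its core, a Gaussian-kernel weighted average of training targets (the Nadaraya-Watson form), which makes a compact sketch possible. The lag-window embedding and the smoothing parameter below are illustrative assumptions, not the paper's configuration.

      import numpy as np

      def grnn_predict(X, y, Xq, sigma=0.3):
          # GRNN output: Gaussian-kernel weighted average of training targets.
          d2 = ((Xq[:, None, :] - X[None, :, :])**2).sum(axis=2)
          w = np.exp(-d2 / (2.0 * sigma**2))
          return (w @ y) / w.sum(axis=1)

      # Toy use: predict the next value of a series from p past values.
      rng = np.random.default_rng(0)
      series = np.sin(np.arange(400)/20.0) + 0.05*rng.standard_normal(400)
      p = 10
      X = np.array([series[i:i+p] for i in range(len(series)-p)])
      y = series[p:]
      print(grnn_predict(X[:-1], y[:-1], X[-1:]))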

  16. The Use of Daily Geodetic UT1 and LOD Data in the Optimal Estimation of UT1 and LOD With the JPL Kalman Earth Orientation Filter

    Science.gov (United States)

    Freedman, A. P.; Steppe, J. A.

    1995-01-01

    The Jet Propulsion Laboratory Kalman Earth Orientation Filter (KEOF) uses several of the available Earth rotation data sets to generate optimally interpolated UT1 and LOD series to support spacecraft navigation. This paper compares the use of various data sets within KEOF.

  17. Solving inverse two-point boundary value problems using collage coding

    Science.gov (United States)

    Kunze, H.; Murdock, S.

    2006-08-01

    The method of collage coding, with its roots in fractal imaging, is the central tool in a recently established rigorous framework for solving inverse initial value problems for ordinary differential equations (Kunze and Vrscay 1999 Inverse Problems 15 745-70). We extend these ideas to solve the following inverse problem: given a function u(x) on [A, B] (which may be the interpolation of data points), determine a two-point boundary value problem on [A, B] which admits u(x) as a solution as closely as desired. The solution of such inverse problems may be useful in parameter estimation or determination of potential functional forms of the underlying differential equation. We discuss ways to improve results, including the development of a partitioning scheme. Several examples are considered.

  18. Comments on the comparison of global methods for linear two-point boundary value problems

    International Nuclear Information System (INIS)

    de Boor, C.; Swartz, B.

    1977-01-01

    A more careful count of the operations involved in solving the linear system associated with collocation of a two-point boundary value problem using rough splines reverses results recently reported by others in this journal. In addition, it is observed that the use of the technique of ''condensation of parameters'' can decrease the computer storage required. Furthermore, the use of a particular highly localized basis can also reduce the setup time when the mesh is irregular. Finally, operation counts are roughly estimated for the solution of certain linear systems associated with two competing collocation methods, namely collocation with smooth splines and collocation of the equivalent first-order system with continuous piecewise polynomials.

  19. Reconstruction of the 3D representative volume element from the generalized two-point correlation function

    International Nuclear Information System (INIS)

    Staraselski, Y; Brahme, A; Inal, K; Mishra, R K

    2015-01-01

    This paper presents the first application of three-dimensional (3D) cross-correlation microstructure reconstruction implemented for a representative volume element (RVE) to facilitate the microstructure engineering of materials. This has been accomplished by developing a new methodology for reconstructing 3D microstructure using experimental two-dimensional electron backscatter diffraction data. The proposed methodology is based on the analytical representation of the generalized form of the two-point correlation function, the distance-disorientation function (DDF). Microstructure reconstruction is accomplished by extending simulated annealing techniques to perform a three-term reconstruction with minimization of the DDF. The new 3D microstructure reconstruction algorithm is employed to determine the 3D RVE containing all of the relevant microstructure information for accurately computing the mechanical response of solids, especially when local microstructural variations influence the global response of the material, as in the case of fracture initiation. (paper)

  20. Asymptotic behaviour of two-point functions in multi-species models

    Directory of Open Access Journals (Sweden)

    Karol K. Kozlowski

    2016-05-01

    We extract the long-distance asymptotic behaviour of two-point correlation functions in massless quantum integrable models containing multi-species excitations. For such a purpose, we extend to these models the method of a large-distance regime re-summation of the form factor expansion of correlation functions. The key feature of our analysis is a technical hypothesis on the large-volume behaviour of the form factors of local operators in such models. We check the validity of this hypothesis on the example of the SU(3)-invariant XXX magnet by means of the determinant representations for the form factors of local operators in this model. Our approach confirms the structure of the critical exponents obtained previously for numerous models solvable by the nested Bethe Ansatz.

  1. Implementation of the Two-Point Angular Correlation Function on a High-Performance Reconfigurable Computer

    Directory of Open Access Journals (Sweden)

    Volodymyr V. Kindratenko

    2009-01-01

    We present a parallel implementation of an algorithm for calculating the two-point angular correlation function as applied in the field of computational cosmology. The algorithm has been specifically developed for a reconfigurable computer. Our implementation utilizes a microprocessor and two reconfigurable processors on a dual-MAP SRC-6 system. The two reconfigurable processors are used as two application-specific co-processors. Two independent computational kernels are simultaneously executed on the reconfigurable processors, while data pre-fetching from disk and initial data pre-processing are executed on the microprocessor. The overall end-to-end algorithm execution speedup achieved by this implementation is over 90× compared to a sequential implementation of the algorithm executed on a single 2.8 GHz Intel Xeon microprocessor.
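
    The computational kernel being accelerated here is pair counting. The following minimal brute-force Python sketch shows the underlying estimator in its simple DD/RR form; it illustrates the statistic, not the paper's hardware pipeline.

      import numpy as np

      def pair_separations(ra, dec):
          # All pairwise angular separations (radians) via unit vectors.
          ra, dec = np.radians(ra), np.radians(dec)
          v = np.stack([np.cos(dec)*np.cos(ra),
                        np.cos(dec)*np.sin(ra),
                        np.sin(dec)], axis=1)
          cosd = np.clip(v @ v.T, -1.0, 1.0)
          i, j = np.triu_indices(len(ra), k=1)
          return np.arccos(cosd[i, j])

      def w_theta(sep_dd, sep_rr, bins, n_d, n_r):
          # Natural estimator (DD/RR - 1) with normalized pair counts.
          dd, _ = np.histogram(sep_dd, bins=bins)
          rr, _ = np.histogram(sep_rr, bins=bins)
          return (n_r*(n_r - 1)) / (n_d*(n_d - 1)) * dd / rr - 1.0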

  2. Two-point discrimination and kinesthetic sense disorders in productive age individuals with carpal tunnel syndrome.

    Science.gov (United States)

    Wolny, Tomasz; Saulicz, Edward; Linek, Paweł; Myśliwiec, Andrzej

    2016-06-16

    The aim of this study was to evaluate two-point discrimination (2PD) sense and kinesthetic sense dysfunctions in carpal tunnel syndrome (CTS) patients compared with a healthy group. The 2PD sense, muscle force, and kinesthetic differentiation (KD) of strength; the range of motion in radiocarpal articulation; and KD of motion were assessed. The 2PD sense assessment showed significantly higher values in all the examined fingers in the CTS group than in those in the healthy group (p<0.01). There was a significant difference in the percentage value of error in KD of pincer and cylindrical grip (p<0.01) as well as in KD of flexion and extension movement in the radiocarpal articulation (p<0.01) between the studied groups. There are significant differences in the 2PD sense and KD of strength and movement between CTS patients compared with healthy individuals.

  3. The Nielsen identities for the two-point functions of QED and QCD

    International Nuclear Information System (INIS)

    Breckenridge, J.C.; Sasketchewan Univ., Saskatoon, SK; Lavelle, M.J.; Steele, T.G.; Sasketchewan Univ., Saskatoon, SK

    1995-01-01

    We consider the Nielsen identities for the two-point functions of full QCD and QED in the class of Lorentz gauges. For pedagogical reasons the identities are first derived in QED to demonstrate the gauge independence of the photon self-energy, and of the electron mass shell. In QCD we derive the general identity and hence the identities for the quark, gluon and ghost propagators. The explicit contributions to the gluon and ghost identities are calculated to one-loop order, and then we show that the quark identity requires that in on-shell schemes the quark mass renormalisation must be gauge independent. Furthermore, we obtain formal solutions for the gluon self-energy and ghost propagator in terms of the gauge dependence of other, independent Green functions. (orig.)

  4. Logarithmic two-point correlation functions from a z=2 Lifshitz model

    International Nuclear Information System (INIS)

    Zingg, T.

    2014-01-01

    The Einstein-Proca action is known to have asymptotically locally Lifshitz spacetimes as classical solutions. For dynamical exponent z=2, two-point correlation functions for fluctuations around such a geometry are derived analytically. It is found that the retarded correlators are stable in the sense that all quasinormal modes are situated in the lower half-plane of complex frequencies. Correlators in the longitudinal channel exhibit features that are reminiscent of a structure usually obtained in field theories that are logarithmic, i.e. contain an indecomposable but non-diagonalizable highest weight representation. This provides further evidence for conjecturing the model at hand as a candidate for a gravity dual of a logarithmic field theory with anisotropic scaling symmetry

  5. Two-point resistance of a resistor network embedded on a globe.

    Science.gov (United States)

    Tan, Zhi-Zhong; Essam, J W; Wu, F Y

    2014-07-01

    We consider the problem of two-point resistance in an (m-1) × n resistor network embedded on a globe, a geometry topologically equivalent to an m × n cobweb with its boundary collapsed into one single point. We deduce a concise formula for the resistance between any two nodes on the globe using a method of direct summation pioneered by one of us [Z.-Z. Tan, L. Zhou, and J. H. Yang, J. Phys. A: Math. Theor. 46, 195202 (2013)]. This method is contrasted with the Laplacian matrix approach formulated also by one of us [F. Y. Wu, J. Phys. A: Math. Gen. 37, 6653 (2004)], which is difficult to apply to the geometry of a globe. Our analysis gives the result in the form of a single summation.
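
    The Laplacian matrix approach mentioned above has a compact numerical counterpart that is handy for checking closed-form results on small networks: R_ij = G_ii + G_jj - 2*G_ij, where G is the pseudoinverse of the network Laplacian. A sketch on a 4-node ring of unit resistors follows (an illustrative network, not the globe geometry of the paper).

      import numpy as np

      def two_point_resistance(adj, i, j):
          # adj[i, j] holds the conductance between nodes i and j.
          L = np.diag(adj.sum(axis=1)) - adj
          G = np.linalg.pinv(L)
          return G[i, i] + G[j, j] - 2.0*G[i, j]

      ring = np.array([[0, 1, 0, 1],
                       [1, 0, 1, 0],
                       [0, 1, 0, 1],
                       [1, 0, 1, 0]], dtype=float)
      print(two_point_resistance(ring, 0, 2))   # 1.0: two 2-ohm paths in parallel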

  6. Solving Singular Two-Point Boundary Value Problems Using Continuous Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Omar Abu Arqub

    2012-01-01

    In this paper, the continuous genetic algorithm is applied to the solution of singular two-point boundary value problems, where smooth solution curves are used throughout the evolution of the algorithm to obtain the required nodal values. The proposed technique may be considered a variation of the finite difference method in the sense that each of the derivatives is replaced by an appropriate difference quotient approximation. This approach has several advantages: it can be applied without any limitation on the nature of the problem, the type of singularity, or the number of mesh points. Numerical examples are included to demonstrate the accuracy, applicability, and generality of the presented technique. The results reveal that the algorithm is very effective, straightforward, and simple.

  7. Analysis on signal properties due to concurrent leaks at two points in water supply pipelines

    International Nuclear Information System (INIS)

    Lee, Young Sup

    2015-01-01

    Intelligent leak detection is an essential component of an underground water supply pipeline network, such as a smart water grid system. In such a network, numerous leak detection sensors, installed at regular distances, are needed to cover all of the pipelines in a specific area. It is also necessary to determine the existence of any leak and to estimate its location within a short time after it occurs. In this study, the leak signal properties and the feasibility of leak location detection were investigated for concurrent leaks occurring at two points in a pipeline. The straight-line distance between the two leak sensors in the 100A-sized cast-iron pipeline was 315.6 m, and their signals were measured with one leak and with two concurrent leaks. Each leak location was determined after analyzing the frequency properties and cross-correlation of the measured signals.
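
    The localization step rests on cross-correlating the two sensor signals to estimate the arrival-time difference. A minimal sketch under the usual assumptions (known wave speed c, leak located between the sensors); the sign convention should be verified against a known test leak.

      import numpy as np

      def locate_leak(s1, s2, fs, L, c):
          # tau = t1 - t2: positive when the leak noise reaches sensor 2 first.
          s1, s2 = s1 - s1.mean(), s2 - s2.mean()
          xc = np.correlate(s1, s2, mode="full")
          tau = (np.argmax(xc) - (len(s2) - 1)) / fs
          # With d1 + d2 = L and tau = (d1 - d2)/c, the leak sits at:
          return 0.5 * (L + c*tau)   # distance from sensor 1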

  8. Fast Computation of the Two-Point Correlation Function in the Age of Big Data

    Science.gov (United States)

    Pellegrino, Andrew; Timlin, John

    2018-01-01

    We present a new code which quickly computes the two-point correlation function for large sets of astronomical data. This code combines the ease of use of Python with the speed of parallel shared libraries written in C. We include the capability to compute the auto- and cross-correlation statistics, and allow the user to calculate the three-dimensional and angular correlation functions. Additionally, the code automatically divides the user-provided sky masks into contiguous subsamples of similar size, using the HEALPix pixelization scheme, for the purpose of resampling. Errors are computed using jackknife and bootstrap resampling in a way that adds negligible extra runtime, even with many subsamples. We demonstrate speed comparable to other clustering codes and verify the code's accuracy against known analytic results.

  9. Analysis on signal properties due to concurrent leaks at two points in water supply pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young Sup [Dept. of Embedded Systems Engineering, Incheon National University, Incheon (Korea, Republic of)

    2015-02-15

    Intelligent leak detection is an essential component of an underground water supply pipeline network, such as a smart water grid system. In such a network, numerous leak detection sensors, installed at regular distances, are needed to cover all of the pipelines in a specific area. It is also necessary to determine the existence of any leak and to estimate its location within a short time after it occurs. In this study, the leak signal properties and the feasibility of leak location detection were investigated for concurrent leaks occurring at two points in a pipeline. The straight-line distance between the two leak sensors in the 100A-sized cast-iron pipeline was 315.6 m, and their signals were measured with one leak and with two concurrent leaks. Each leak location was determined after analyzing the frequency properties and cross-correlation of the measured signals.

  10. Mutual information as a two-point correlation function in stochastic lattice models

    International Nuclear Information System (INIS)

    Müller, Ulrich; Hinrichsen, Haye

    2013-01-01

    In statistical physics entropy is usually introduced as a global quantity which expresses the amount of information that would be needed to specify the microscopic configuration of a system. However, for lattice models with infinitely many possible configurations per lattice site it is also meaningful to introduce entropy as a local observable that describes the information content of a single lattice site. Likewise, the mutual information between two sites can be interpreted as a two-point correlation function which quantifies how much information a lattice site has about the state of another one and vice versa. Studying a particular growth model we demonstrate that the mutual information exhibits scaling properties that are consistent with the established phenomenological scaling picture. (paper)
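
    As a concrete estimator of the quantity discussed above, the mutual information between the states of two sites can be computed from a joint histogram over sampled configurations. A minimal numpy sketch, where the binning and the correlated test data are assumptions:

      import numpy as np

      def mutual_information(a, b, bins=16):
          pxy, _, _ = np.histogram2d(a, b, bins=bins)
          pxy = pxy / pxy.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          mask = pxy > 0
          return np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask]))

      rng = np.random.default_rng(0)
      a = rng.normal(size=20000)
      b = 0.6*a + 0.8*rng.normal(size=20000)   # correlated "site" values
      print(mutual_information(a, b))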

  11. Applying inversion to construct planar, rational spirals that satisfy two-point G2 Hermite data

    CERN Document Server

    Kurnosenko, A

    2010-01-01

    A method of two-point G2 Hermite interpolation with spirals is proposed. To construct a sought-for curve, inversion is applied to an arc of some other spiral. To illustrate the method, inversions of a parabola are considered in detail. The resulting curve is a rational curve of degree four. The method allows the matching of a wide range of boundary conditions, including those which require an inflection. Although not all G2 Hermite data can be matched with a spiral generated from a parabolic arc, introducing one intermediate G2 datum solves the problem. Expanding the method by involving arcs of other spirals is also discussed.

  12. Two-point paraxial traveltime formula for inhomogeneous isotropic and anisotropic media: Tests of accuracy

    KAUST Repository

    Waheed, Umair bin; Psencik, Ivan; Cerveny, Vlastislav; Iversen, Einar; Alkhalifah, Tariq Ali

    2013-01-01

    On several simple models of isotropic and anisotropic media, we have studied the accuracy of the two-point paraxial traveltime formula designed for the approximate calculation of the traveltime between points S' and R' located in the vicinity of points S and R on a reference ray. The reference ray may be situated in a 3D inhomogeneous isotropic or anisotropic medium with or without smooth curved interfaces. The two-point paraxial traveltime formula has the form of the Taylor expansion of the two-point traveltime with respect to spatial Cartesian coordinates up to quadratic terms at points S and R on the reference ray. The constant term and the coefficients of the linear and quadratic terms are determined from quantities obtained from ray tracing and linear dynamic ray tracing along the reference ray. The use of linear dynamic ray tracing allows the evaluation of the quadratic terms in arbitrarily inhomogeneous media and, as shown by examples, it extends the region of accurate results around the reference ray between S and R (and even outside this interval) obtained with the linear terms only. Although the formula may be used for very general 3D models, we concentrated on simple 2D models of smoothly inhomogeneous isotropic and anisotropic (~8% and ~20% anisotropy) media only. In tests in which we estimated two-point traveltimes between a shifted source and a system of shifted receivers, we found that the formula may yield more accurate results than the numerical solution of an eikonal-based differential equation. The tests also indicated that the accuracy of the formula depends primarily on the length and the curvature of the reference ray and only weakly on anisotropy. The greater the curvature of the reference ray, the narrower the vicinity in which the formula yields accurate results.

  13. Two-point paraxial traveltime formula for inhomogeneous isotropic and anisotropic media: Tests of accuracy

    KAUST Repository

    Waheed, Umair bin

    2013-09-01

    On several simple models of isotropic and anisotropic media, we have studied the accuracy of the two-point paraxial traveltime formula designed for the approximate calculation of the traveltime between points S' and R' located in the vicinity of points S and R on a reference ray. The reference ray may be situated in a 3D inhomogeneous isotropic or anisotropic medium with or without smooth curved interfaces. The two-point paraxial traveltime formula has the form of the Taylor expansion of the two-point traveltime with respect to spatial Cartesian coordinates up to quadratic terms at points S and R on the reference ray. The constant term and the coefficients of the linear and quadratic terms are determined from quantities obtained from ray tracing and linear dynamic ray tracing along the reference ray. The use of linear dynamic ray tracing allows the evaluation of the quadratic terms in arbitrarily inhomogeneous media and, as shown by examples, it extends the region of accurate results around the reference ray between S and R (and even outside this interval) obtained with the linear terms only. Although the formula may be used for very general 3D models, we concentrated on simple 2D models of smoothly inhomogeneous isotropic and anisotropic (~8% and ~20% anisotropy) media only. In tests in which we estimated two-point traveltimes between a shifted source and a system of shifted receivers, we found that the formula may yield more accurate results than the numerical solution of an eikonal-based differential equation. The tests also indicated that the accuracy of the formula depends primarily on the length and the curvature of the reference ray and only weakly on anisotropy. The greater the curvature of the reference ray, the narrower the vicinity in which the formula yields accurate results.

  14. Two-Point Incremental Forming with Partial Die: Theory and Experimentation

    Science.gov (United States)

    Silva, M. B.; Martins, P. A. F.

    2013-04-01

    This paper proposes a new level of understanding of two-point incremental forming (TPIF) with partial die by means of a combined theoretical and experimental investigation. The theoretical developments include an innovative extension of the analytical model for rotational symmetric single point incremental forming (SPIF), originally developed by the authors, to address the influence of the major operating parameters of TPIF and to successfully explain the differences in formability between SPIF and TPIF. The experimental work comprised the mechanical characterization of the material and the determination of its formability limits at necking and fracture by means of circle grid analysis and benchmark incremental sheet forming tests. Results show the adequacy of the proposed analytical model to handle the deformation mechanics of SPIF and TPIF with partial die and demonstrate that neck formation is suppressed in TPIF, so that traditional forming limit curves are inapplicable to describe failure and must be replaced by fracture forming limits derived from ductile damage mechanics. The overall geometric accuracy of sheet metal parts produced by TPIF with partial die is found to be better than that of parts fabricated by SPIF due to smaller elastic recovery upon unloading.

  15. Dynamics of Two Point Vortices in an External Compressible Shear Flow

    Science.gov (United States)

    Vetchanin, Evgeny V.; Mamaev, Ivan S.

    2017-12-01

    This paper is concerned with a system of equations that describes the motion of two point vortices in a flow possessing constant uniform vorticity and perturbed by an acoustic wave. The system is shown to have both regular and chaotic regimes of motion. In addition, simple and chaotic attractors are found in the system. Attention is given to bifurcations of fixed points of a Poincaré map which lead to the appearance of these regimes. It is shown that, in the case where the total vortex strength changes, the "reversible pitchfork" bifurcation is a typical scenario of emergence of asymptotically stable fixed and periodic points. As a result of this bifurcation, a saddle point, a stable point and an unstable point of the same period emerge from an elliptic point of some period. By constructing and analyzing charts of dynamical regimes and bifurcation diagrams, we show that a cascade of period-doubling bifurcations is a typical scenario of transition to chaos in the system under consideration.

  16. An analytical approximation scheme to two-point boundary value problems of ordinary differential equations

    International Nuclear Information System (INIS)

    Boisseau, Bruno; Forgacs, Peter; Giacomini, Hector

    2007-01-01

    A new (algebraic) approximation scheme to find global solutions of two-point boundary value problems of ordinary differential equations (ODEs) is presented. The method is applicable to both linear and nonlinear (coupled) ODEs whose solutions are analytic near one of the boundary points. It is based on replacing the original ODEs by a sequence of auxiliary first-order polynomial ODEs with constant coefficients. The coefficients in the auxiliary ODEs are uniquely determined from the local behaviour of the solution in the neighbourhood of one of the boundary points. The problem of obtaining the parameters of the global (connecting) solutions, analytic at one of the boundary points, reduces to finding the appropriate zeros of algebraic equations. The power of the method is illustrated by computing the approximate values of the 'connecting parameters' for a number of nonlinear ODEs arising in various problems in field theory. We treat in particular the static and rotationally symmetric global vortex, the skyrmion, the Abrikosov-Nielsen-Olesen vortex, as well as the 't Hooft-Polyakov magnetic monopole. The total energy of the skyrmion and of the monopole is also computed by the new method. We also consider some ODEs coming from the exact renormalization group. The ground-state energy level of the anharmonic oscillator is also computed for arbitrary coupling strengths with good precision. (fast track communication)

  17. Assessing Performance of Multipurpose Reservoir System Using Two-Point Linear Hedging Rule

    Science.gov (United States)

    Sasireka, K.; Neelakantan, T. R.

    2017-07-01

    Reservoir operation is one of the important fields of water resource management. Innovative techniques in water resource management aim at optimizing the available water and decreasing the environmental impact of water utilization on the natural environment. In the operation of a multi-reservoir system, efficient regulation of releases to satisfy demands for various purposes, such as domestic supply, irrigation and hydropower, can increase the benefit from the reservoir as well as significantly reduce flood damage. Hedging rules are an emerging technique in reservoir operation that reduces the severity of droughts by accepting a number of smaller shortages. The key objective of this paper is to maximize the minimum power production and to improve the reliability of water supply for municipal and irrigation purposes by using a hedging rule. In this paper, a Type II two-point linear hedging rule is applied to improve the operation of the Bargi reservoir in the Narmada basin in India. The results obtained from simulation of the hedging rule are compared with results from the standard operating policy; they show that the application of the hedging rule significantly improved the reliability of water supply, the reliability of irrigation releases, and firm power production.
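
    For orientation, a two-point linear hedging policy can be sketched as follows. This is an illustrative parameterization only, not the paper's exact Type II rule or the Bargi reservoir parameters.

      def two_point_hedging(available, demand, trigger, recovery):
          # Full delivery above `recovery`, release of all available water
          # below `trigger`, and linear rationing between the two points.
          if available >= recovery:
              return demand
          if available <= trigger:
              return available
          frac = (available - trigger) / (recovery - trigger)
          return trigger + frac * (demand - trigger)

      # Example: demand 100 with hedging between 60 and 160 units of water.
      for a in (40, 80, 120, 200):
          print(a, two_point_hedging(a, 100, 60, 160))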

  18. The association between gas and galaxies - II. The two-point correlation function

    Science.gov (United States)

    Wilman, R. J.; Morris, S. L.; Jannuzi, B. T.; Davé, R.; Shone, A. M.

    2007-02-01

    We measure the two-point correlation function, ξAG, between galaxies and quasar absorption-line systems at z < 1, for H I absorbers with column densities NHI > 10^17 cm^-2. For C IV absorbers, the peak strength of ξAG is roughly comparable to that of H I absorbers with NHI > 10^16.5 cm^-2, consistent with the finding that the C IV absorbers are associated with strong H I absorbers. We do not reproduce the differences reported by Chen et al. between 1D ξAG measurements using galaxy subsamples of different spectral types. However, the full impact on the measurements of systematic differences in our samples is hard to quantify. We compare the observations with smoothed particle hydrodynamical (SPH) simulations and discover that in the observations ξAG is more concentrated at the smallest separations than in the simulations. The latter also display a `finger of god' elongation of ξAG along the line of sight (LOS) in redshift space, which is absent from our data but similar to that found by Ryan-Weber for the cross-correlation of quasar absorbers and HI-emission-selected galaxies. The physical origin of these `fingers of god' is unclear, and we thus highlight several possible areas for further investigation.

  19. The variants of an LOD of a 3D building model and their influence on spatial analyses

    Science.gov (United States)

    Biljecki, Filip; Ledoux, Hugo; Stoter, Jantien; Vosselman, George

    2016-06-01

    The level of detail (LOD) of a 3D city model indicates the model's grade and usability. However, there exist multiple valid variants of each LOD. As a consequence, the LOD concept is inconclusive as an instruction for the acquisition of 3D city models. For instance, the top surface of an LOD1 block model may be modelled at the eaves of a building or at its ridge height. Such variants, which we term geometric references, are often overlooked and are usually not documented in the metadata. Furthermore, the influence of a particular geometric reference on the performance of a spatial analysis is not known. In response to this research gap, we investigate a variety of LOD1 and LOD2 geometric references that are commonly employed, and perform numerical experiments to investigate their relative difference when used as input for different spatial analyses. We consider three use cases (estimation of the area of the building envelope, building volume, and shadows cast by buildings), and compute the deviations in a Monte Carlo simulation. The experiments, carried out with procedurally generated models, indicate that two 3D models representing the same building at the same LOD, but modelled according to different geometric references, may yield substantially different results when used in a spatial analysis. The outcome of our experiments also suggests that the geometric reference may have a bigger influence than the LOD, since an LOD1 with a specific geometric reference may yield a more accurate result than when using LOD2 models.

  20. Linked open data creating knowledge out of interlinked data : results of the LOD2 project

    CERN Document Server

    Bryl, Volha; Tramp, Sebastian

    2014-01-01

    Linked Open Data (LOD) is a pragmatic approach for realizing the Semantic Web vision of making the Web a global, distributed, semantics-based information system. This book presents an overview of the results of the research project “LOD2 -- Creating Knowledge out of Interlinked Data”. LOD2 is a large-scale integrating project co-funded by the European Commission within the FP7 Information and Communication Technologies Work Programme. Commencing in September 2010, this 4-year project comprised leading Linked Open Data research groups, companies, and service providers from across 11 European countries and South Korea. The aim of this project was to advance the state of the art in research and development in four key areas relevant for Linked Data, namely 1. RDF data management; 2. the extraction, creation, and enrichment of structured RDF data; 3. the interlinking and fusion of Linked Data from different sources; and 4. the authoring, exploration and visualization of Linked Data.

  1. Medium- and Long-term Prediction of LOD Change by the Leap-step Autoregressive Model

    Science.gov (United States)

    Wang, Qijie

    2015-08-01

    The accuracy of medium- and long-term prediction of length-of-day (LOD) change based on the combined least-squares and autoregressive (LS+AR) model deteriorates gradually. The leap-step autoregressive (LSAR) model can significantly reduce the edge effect of the observation sequence; in particular, it greatly improves the resolution of the signal's low-frequency components and can therefore improve prediction performance. In this work, the LSAR model is used to forecast LOD change. The LOD series from EOP 08 C04, provided by the IERS, is modeled by both the LSAR and AR models, and the results of the two models are analyzed and compared. When the prediction length is 10-30 days, the accuracy improvement is less than 10%. When the prediction length exceeds 30 days, the accuracy improves markedly, with a maximum improvement of around 19%. The results show that the LSAR model has higher prediction accuracy and stability in medium- and long-term prediction.
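
    The LS+AR baseline that the LSAR model builds on is easy to sketch: a least-squares fit of a trend plus annual and semi-annual harmonics, followed by an AR model on the residuals. Roughly speaking, the leap-step variant would form the AR model on residuals sampled at leap steps; the lag order and periods below are assumptions, not the paper's settings.

      import numpy as np
      from statsmodels.tsa.ar_model import AutoReg

      def design(t, periods=(365.24, 182.62)):
          cols = [np.ones_like(t), t]
          for p in periods:              # annual and semi-annual terms
              cols += [np.sin(2*np.pi*t/p), np.cos(2*np.pi*t/p)]
          return np.column_stack(cols)

      def ls_ar_forecast(lod, steps, lags=30):
          t = np.arange(len(lod), dtype=float)
          coef, *_ = np.linalg.lstsq(design(t), lod, rcond=None)
          resid = lod - design(t) @ coef     # stochastic part for the AR model
          ar = AutoReg(resid, lags=lags).fit()
          t_new = np.arange(len(lod), len(lod) + steps, dtype=float)
          return design(t_new) @ coef + ar.predict(start=len(resid),
                                                   end=len(resid) + steps - 1)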

  2. Medium- and Long-term Prediction of LOD Change with the Leap-step Autoregressive Model

    Science.gov (United States)

    Liu, Q. B.; Wang, Q. J.; Lei, M. F.

    2015-09-01

    It is known that the accuracy of medium- and long-term prediction of changes of length of day (LOD) based on the combined least-squares and autoregressive (LS+AR) model decreases gradually. The leap-step autoregressive (LSAR) model is more accurate and stable in medium- and long-term prediction; therefore, it is used to forecast the LOD changes in this work. The LOD series from EOP 08 C04, provided by IERS (International Earth Rotation and Reference Systems Service), is then used to compare the effectiveness of the LSAR and traditional AR methods. The predicted series resulting from the two models show that the prediction accuracy of the LSAR model is better than that of the AR model in medium- and long-term prediction.

  3. Secular changes of LOD associated with a growth of the inner core

    Science.gov (United States)

    Denis, C.; Rybicki, K. R.; Varga, P.

    2006-05-01

    From recent estimates of the age of the inner core based on the theory of the thermal evolution of the core, we estimate that the present-day growth of the inner core may contribute to the observed overall secular increase of LOD, caused mainly by tidal friction (1.72 ms per century), with a relative decrease of 2 to 7 μs per century. Another, albeit much less plausible, hypothesis is that crystallization of the inner core does not produce any change of LOD, but instead makes the inner core rotate differentially with respect to the outer core and mantle.

  4. Application of LOD technology to the economic residence GIS for industry and commerce administration

    Science.gov (United States)

    Song, Yongjun; Feng, Xuezhi; Zhao, Shuhe; Yin, Haiwei; Li, Yulin; Cui, Hongxia; Zhang, Hui; Zhong, Quanbao

    2007-06-01

    LOD technology has an impact on the multi-scale representation of spatial databases. This paper takes advantage of LOD technology to represent multi-scale geographical data and to establish the exchange of multi-scale electronic maps, so that the details of geographic features such as points, lines and polygons are displayed progressively more clearly as the display scale is enlarged. This makes it convenient for the personnel of all offices of industry and commerce administration to label the locations of corporations and enterprises.

  5. Status and Prospects for Combined GPS LOD and VLBI UT1 Measurements

    Science.gov (United States)

    Senior, K.; Kouba, J.; Ray, J.

    2010-01-01

    A Kalman filter was developed to combine VLBI estimates of UT1-TAI with biased length of day (LOD) estimates from GPS. The VLBI results are the analyses of the NASA Goddard Space Flight Center group from 24-hr multi-station observing sessions several times per week and the nearly daily 1-hr single-baseline sessions. Daily GPS LOD estimates from the International GNSS Service (IGS) are combined with the VLBI UT1-TAI by modeling the natural excitation of LOD as the integral of a white noise process (i.e., as a random walk) and the UT1 variations as the integration of LOD, similar to the method described by Morabito et al. (1988). To account for GPS technique errors, which express themselves mostly as temporally correlated biases in the LOD measurements, a Gauss-Markov model has been added to assimilate the IGS data, together with a fortnightly sinusoidal term to capture errors in the IGS treatments of tidal effects. Evaluated against independent atmospheric and oceanic axial angular momentum (AAM + OAM) excitations and compared to other UT1/LOD combinations, ours performs best overall in terms of lowest RMS residual and highest correlation with (AAM + OAM) over sliding intervals down to 3 d. The IERS 05C04 and Bulletin A combinations show strong high-frequency smoothing and other problems. Until modified, the JPL SPACE series suffered in the high frequencies from not including any GPS-based LODs. We find, surprisingly, that further improvements are possible in the Kalman filter combination by selective rejection of some VLBI data. The best combined results are obtained by excluding all the 1-hr single-baseline UT1 data as well as those 24-hr UT1 measurements with formal errors greater than 5 μs (about 18% of the multi-baseline sessions). A rescaling of the VLBI formal errors, rather than rejection, was not an effective strategy. These results suggest that the UT1 errors of the 1-hr and weaker 24-hr VLBI sessions are non-Gaussian and more heterogeneous than expected
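
    The core state-space structure described above (UT1 as the integral of an LOD random walk) fits in a few lines. The sketch below is schematic in its signs, units and noise levels, and omits the Gauss-Markov bias state used for the GPS LOD data.

      import numpy as np

      dt = 1.0                                   # days
      F = np.array([[1.0, -dt], [0.0, 1.0]])     # UT1 integrates -LOD excess
      Q = np.diag([0.0, 1e-4])                   # random-walk LOD excitation

      def step(x, P, z=None, H=None, R=None):
          x, P = F @ x, F @ P @ F.T + Q          # predict
          if z is not None:                      # update (VLBI UT1 or GPS LOD)
              S = H @ P @ H.T + R
              K = P @ H.T @ np.linalg.inv(S)
              x, P = x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P
          return x, P

      x, P = np.zeros(2), np.eye(2)
      # A VLBI UT1 point uses H = [[1, 0]]; a GPS LOD point uses H = [[0, 1]].
      x, P = step(x, P, z=np.array([0.12]), H=np.array([[1.0, 0.0]]),
                  R=np.array([[1e-4]]))
      print(x)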

  6. Aspects Of 40- to 50-Day Oscillations In LOD And AAM

    Science.gov (United States)

    Dickey, Jean O.; Marcus, Steven L.; Ghil, Michael

    1992-01-01

    Report presents study of fluctuations in rotation of Earth, focusing on irregular intraseasonal oscillations in length of day (LOD) and atmospheric angular momentum (AAM) with periods varying from 40 to 50 days. Study draws upon and extends results of prior research.

  7. Exploring the Processes of Generating LOD (0-2) CityGML Models in Greater Municipality of Istanbul

    Science.gov (United States)

    Buyuksalih, I.; Isikdag, U.; Zlatanova, S.

    2013-08-01

    3D models of cities, visualised and explored in 3D virtual environments, have been available for several years. Currently, a large number of impressive, realistic 3D models are regularly presented at scientific, professional and commercial events. One of the most promising developments is the OGC standard CityGML. CityGML is an object-oriented model that supports 3D geometry and thematic semantics, attributes and relationships, and offers advanced options for realistic visualization. One of the very attractive characteristics of the model is the support of 5 levels of detail (LOD), starting from a less accurate 2.5D model (LOD0) and ending with a very detailed indoor model (LOD4). Different local government offices and municipalities have different needs when utilizing CityGML models, and the process of model generation depends on local and domain-specific needs. Although the processes (i.e., the tasks and activities) for generating the models differ depending on the utilization purpose, there are also some common tasks (i.e., common denominator processes) in the generation of CityGML models. This paper focuses on defining the common tasks in the generation of LOD (0-2) CityGML models and representing them in a formal way with process modeling diagrams.

  8. LOD-A-lot: A single-file enabler for data science

    NARCIS (Netherlands)

    Beek, Wouter; Fernández, Javier D.; Verborgh, Ruben

    2017-01-01

    Many data scientists make use of Linked Open Data (LOD) as a huge interconnected knowledge base represented in RDF. However, the distributed nature of the information and the lack of a scalable approach to manage and consume such Big Semantic Data make it difficult and expensive to conduct

  9. A power study of bivariate LOD score analysis of a complex trait and fear/discomfort with strangers.

    Science.gov (United States)

    Ji, Fei; Lee, Dayoung; Mendell, Nancy Role

    2005-12-30

    Complex diseases are often reported along with disease-related traits (DRT). Sometimes investigators consider both disease and DRT phenotypes separately and sometimes they consider individuals as affected if they have either the disease or the DRT, or both. We propose instead to consider the joint distribution of the disease and the DRT and do a linkage analysis assuming a pleiotropic model. We evaluated our results through analysis of the simulated datasets provided by Genetic Analysis Workshop 14. We first conducted univariate linkage analysis of the simulated disease, Kofendrerd Personality Disorder and one of its simulated associated traits, phenotype b (fear/discomfort with strangers). Subsequently, we considered the bivariate phenotype, which combined the information on Kofendrerd Personality Disorder and fear/discomfort with strangers. We developed a program to perform bivariate linkage analysis using an extension to the Elston-Stewart peeling method of likelihood calculation. Using this program we considered the microsatellites within 30 cM of the gene pleiotropic for this simulated disease and DRT. Based on 100 simulations of 300 families we observed excellent power to detect linkage within 10 cM of the disease locus using the DRT and the bivariate trait.

  10. LOD BIM Element specification for Railway Turnout Systems Risk Mitigation using the Information Delivery Manual

    Science.gov (United States)

    Gigante-Barrera, Ángel; Dindar, Serdar; Kaewunruen, Sakdirat; Ruikar, Darshan

    2017-10-01

    Railway turnouts are complex systems designed using complex geometries and grades, which makes them difficult to manage in terms of risk prevention. This feature poses a substantial peril to rail users, as it is considered a cause of derailment. In addition, derailment leads to financial losses due to operational downtime and monetary compensation in cases of death or injury. These are fundamental drivers for mitigating risks arising from poor risk management during design. Prevention through Design (PtD) is a process that introduces tacit knowledge from industry professionals during the design process. There is evidence that Building Information Modelling (BIM) can help to mitigate risk from the inception of the project. BIM is considered an Information System (IS) where tacit knowledge can be stored and retrieved from a digital database, making it easy to take prompt decisions as information is ready to be analysed. BIM at the model element level entails working with 3D elements and embedded data, therefore adding a layer of complexity to the management of information along the different stages of the project and across different disciplines. In order to overcome this problem, the industry has created a framework for model progression specification named Level of Development (LOD). The paper presents an IDM-based framework for design risk mitigation through code validation using the LOD. This effort resulted in risk datasets which describe, graphically and non-graphically, a rail turnout as the model progresses, thus permitting their inclusion within risk information systems. The assignment of an LOD construct to a set of data requires specialised management and process-related expertise. Furthermore, the selection of a set of LOD constructs requires a purpose-based analysis. Therefore, a framework for LOD construct implementation within the IDM for code checking is required for the industry to progress in this particular field.

  11. Spin-k/2-spin-k/2 SU(2) two-point functions on the torus

    International Nuclear Information System (INIS)

    Kirsch, Ingo; Kucharski, Piotr

    2012-11-01

    We discuss a class of two-point functions on the torus of primary operators in the SU(2) Wess-Zumino-Witten model at integer level k. In particular, we construct an explicit expression for the current blocks of the spin-k/2-spin-k/2 torus two-point functions for all k. We first examine the factorization limits of the proposed current blocks and test their monodromy properties. We then prove that the current blocks solve the corresponding Knizhnik-Zamolodchikov-like differential equations using the method of Mathur, Mukhi and Sen.

  12. Spin-k/2-spin-k/2 SU(2) two-point functions on the torus

    Energy Technology Data Exchange (ETDEWEB)

    Kirsch, Ingo [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Gruppe Theorie; Kucharski, Piotr [Warsaw Univ. (Poland). Inst. of Theoretical Physics

    2012-11-15

    We discuss a class of two-point functions on the torus of primary operators in the SU(2) Wess-Zumino-Witten model at integer level k. In particular, we construct an explicit expression for the current blocks of the spin-k/2-spin-k/2 torus two-point functions for all k. We first examine the factorization limits of the proposed current blocks and test their monodromy properties. We then prove that the current blocks solve the corresponding Knizhnik-Zamolodchikov-like differential equations using the method of Mathur, Mukhi and Sen.

  13. EXISTENCE OF POSITIVE SOLUTION TO TWO-POINT BOUNDARY VALUE PROBLEM FOR A SYSTEM OF SECOND ORDER ORDINARY DIFFERENTIAL EQUATIONS

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    In this paper, we consider a two-point boundary value problem for a system of second-order ordinary differential equations. Under some conditions, we show the existence of a positive solution to the system of second-order ordinary differential equations.

  14. Unique solvability of some two-point boundary value problems for linear functional differential equations with singularities

    Czech Academy of Sciences Publication Activity Database

    Rontó, András; Samoilenko, A. M.

    2007-01-01

    Roč. 41, - (2007), s. 115-136 ISSN 1512-0015 R&D Projects: GA ČR(CZ) GA201/06/0254 Institutional research plan: CEZ:AV0Z10190503 Keywords : two-point problem * functional differential equation * singular boundary problem Subject RIV: BA - General Mathematics

  15. Critical two-point functions and the lace expansion for spread-out high-dimensional percolation and related models

    NARCIS (Netherlands)

    Hara, T.; Hofstad, van der R.W.; Slade, G.

    2003-01-01

    We consider spread-out models of self-avoiding walk, bond percolation, lattice trees and bond lattice animals on $\mathbb{Z}^d$, having long finite-range connections, above their upper critical dimensions $d=4$ (self-avoiding walk), $d=6$ (percolation) and $d=8$ (trees and animals). The two-point

  16. LOD+: Augmenting LOD with Skeletons

    OpenAIRE

    Lange, Benoit; Rodriguez, Nancy

    2010-01-01

    Until now, computer graphics researchers have tried to solve visualization problems introduced by the size of meshes. Modern tools produce large models, and hardware is not able to render them at full resolution. For example, the Digital Michelangelo project extracted a model with more than one billion polygons. Hardware has become more and more powerful, but meshes have also become more and more complex. To solve this issue, people have worked on many solut...

  17. Strategy for determination of LOD and LOQ values--some basic aspects.

    Science.gov (United States)

    Uhrovčík, Jozef

    2014-02-01

    The paper is devoted to the evaluation of limit of detection (LOD) and limit of quantification (LOQ) values in the concentration domain using 4 different approaches, namely the 3σ and 10σ approaches, the ULA2 approach, the PBA approach, and the MDL approach. Brief theoretical analyses of all the above-mentioned approaches are given, together with directions for their practical use. Calculations and correct calibration design are exemplified using electrothermal atomic absorption spectrometry for the determination of lead in a drinking water sample. These validation parameters reached 1.6 μg L^-1 (LOD) and 5.4 μg L^-1 (LOQ) using the 3σ and 10σ approaches. To obtain relevant values of analyte concentration, the influence of calibration design and measurement methodology was examined. The most preferred technique proved to be preconcentration of the analyte on the surface of the graphite cuvette (boost cycle).
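
    The 3σ/10σ arithmetic reduces to a calibration slope and the standard deviation of blank measurements. A worked sketch with invented numbers, not the paper's data:

      import numpy as np

      conc = np.array([0.0, 2.0, 4.0, 8.0, 16.0])             # ug/L standards
      signal = np.array([0.002, 0.021, 0.040, 0.081, 0.160])  # instrument response
      slope, intercept = np.polyfit(conc, signal, 1)

      blank = np.array([0.0019, 0.0023, 0.0017, 0.0021, 0.0020,
                        0.0018, 0.0022, 0.0020, 0.0019, 0.0021])
      s_blank = blank.std(ddof=1)

      lod = 3*s_blank / slope       # 3-sigma limit of detection
      loq = 10*s_blank / slope      # 10-sigma limit of quantification
      print(lod, loq)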

  18. Estimation of the POD function and the LOD of a qualitative microbiological measurement method.

    Science.gov (United States)

    Wilrich, Cordula; Wilrich, Peter-Theodor

    2009-01-01

    Qualitative microbiological measurement methods, in which the measurement results are either 0 (microorganism not detected) or 1 (microorganism detected), are discussed. The performance of such a measurement method is described by its probability of detection (POD) as a function of the contamination (CFU/g or CFU/mL) of the test material, or by the LOD(p), i.e., the contamination that is detected (measurement result 1) with a specified probability p. A complementary log-log model was used to statistically estimate these performance characteristics. An intralaboratory experiment for the detection of Listeria monocytogenes in various food matrixes illustrates the method. The estimate of LOD50% is compared with the Spearman-Kaerber method.
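
    A complementary log-log POD curve can be fitted by maximum likelihood and inverted for LOD(p). A sketch with a hypothetical spiking experiment (the counts are invented for illustration):

      import numpy as np
      from scipy.optimize import minimize

      # POD(c) = 1 - exp(-exp(a + b*log c)).
      def fit_pod(conc, detected, trials):
          def nll(p):
              a, b = p
              pod = 1.0 - np.exp(-np.exp(a + b*np.log(conc)))
              pod = np.clip(pod, 1e-12, 1.0 - 1e-12)
              return -np.sum(detected*np.log(pod)
                             + (trials - detected)*np.log(1.0 - pod))
          return minimize(nll, x0=np.array([0.0, 1.0]), method="Nelder-Mead").x

      def lod_p(a, b, p=0.5):
          # Invert POD(c) = p for the contamination level c.
          return np.exp((np.log(-np.log(1.0 - p)) - a) / b)

      conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0])   # CFU/g spiking levels
      detected = np.array([2, 6, 14, 19, 20])       # positives out of 20 tubes
      a, b = fit_pod(conc, detected, trials=20)
      print(lod_p(a, b, 0.5))                       # LOD50% estimate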

  19. Efficient Simplification Methods for Generating High Quality LODs of 3D Meshes

    Institute of Scientific and Technical Information of China (English)

    Muhammad Hussain

    2009-01-01

    Two simplification algorithms are proposed for the automatic decimation of polygonal models and for generating their LODs. Each algorithm orders vertices according to their priority values and then removes them iteratively. To set the priority value of each vertex, a new measure of geometric fidelity is introduced that exploits the normal field of the vertex's one-ring neighborhood and reflects its local geometric features well. After a vertex is selected, other measures of geometric distortion, based on normal field deviation and a distance measure, decide which of the edges incident on the vertex is to be collapsed to remove it. The collapsed edge is substituted with a new vertex whose position is found by minimizing the local quadric error measure. A comparison with state-of-the-art algorithms reveals that the proposed algorithms are simple to implement, are computationally more efficient, generate LODs of better quality, and preserve salient features even after drastic simplification. The methods are useful for applications such as 3D computer games and virtual reality, where the focus is on fast running time, reduced memory overhead, and high-quality LODs.

  20. Improvement of LOD in Fluorescence Detection with Spectrally Nonuniform Background by Optimization of Emission Filtering.

    Science.gov (United States)

    Galievsky, Victor A; Stasheuski, Alexander S; Krylov, Sergey N

    2017-10-17

    The limit of detection (LOD) in analytical instruments with fluorescence detection can be improved by reducing the noise of the optical background. Efficiently reducing optical background noise in systems with a spectrally nonuniform background requires complex optimization of the emission filter, the main element of spectral filtration. Here, we introduce a filter-optimization method which utilizes an expression for the signal-to-noise ratio (SNR) as a function of (i) all noise components (dark, shot, and flicker), (ii) the emission spectrum of the analyte, (iii) the emission spectrum of the optical background, and (iv) the transmittance spectrum of the emission filter. In essence, the noise components and the emission spectra are determined experimentally and substituted into the expression. This leaves a single variable, the transmittance spectrum of the filter, which is optimized numerically by maximizing the SNR. Maximizing the SNR provides an accurate way of filter optimization, whereas the previously used approach based on maximizing the signal-to-background ratio (SBR) is an approximation that can lead to a much poorer LOD, specifically in the detection of fluorescently labeled biomolecules. The proposed filter-optimization method will be an indispensable tool for developing new, and improving existing, fluorescence-detection systems aiming at an ultimately low LOD.
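
    The optimization itself can be sketched numerically: treat the per-band transmittance as a bounded vector and maximize the SNR expression. The noise model below (constant dark variance, shot noise proportional to collected light, flicker proportional to the squared collected background) is a schematic assumption standing in for the paper's measured components.

      import numpy as np
      from scipy.optimize import minimize

      def neg_snr(T, sig, bkg, v_dark=1.0, k_flicker=1e-3):
          s = T @ sig                      # collected analyte signal
          shot = T @ (sig + bkg)           # shot-noise variance
          flicker = k_flicker * (T @ bkg)**2
          return -s / np.sqrt(v_dark + shot + flicker)

      n = 64
      lam = np.linspace(500.0, 700.0, n)
      sig = np.exp(-0.5*((lam - 560.0)/15.0)**2)            # analyte emission
      bkg = 0.5 + 0.5*np.exp(-0.5*((lam - 640.0)/30.0)**2)  # structured background
      res = minimize(neg_snr, x0=np.full(n, 0.5), args=(sig, bkg),
                     bounds=[(0.0, 1.0)]*n, method="L-BFGS-B")
      print(res.x.round(2))   # optimized per-band transmittance profile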

  1. A Microfluidic Lab-on-a-Disc (LOD) for Antioxidant Activities of Plant Extracts

    Directory of Open Access Journals (Sweden)

    Nurhaslina Abd Rahman

    2018-03-01

    Antioxidants are important substances that can fight the deterioration caused by free radicals and can easily oxidize when exposed to light. There are many methods to measure the antioxidant activity in a biological sample; the 2,2-diphenyl-1-picrylhydrazyl (DPPH) antioxidant activity test is one of the simplest. Despite its simplicity, the organic solvent used to dilute DPPH evaporates easily and degrades with light exposure and time, so it needs to be used at the earliest convenient time prior to the experiment. To overcome this issue, a rapid and closed system for antioxidant activity is required. In this paper, we introduce the Lab-on-a-Disc (LoD) method, which integrates the DPPH antioxidant activity test on a microfluidic compact disc (CD). We used ascorbic acid, quercetin, Areca catechu, Polygonum minus, and Syzygium polyanthum plant extracts to compare the results of our proposed LoD method with the conventional method. In contrast to the laborious conventional method, the proposed method offers rapid analysis and simple determination of antioxidants. This LoD method for antioxidant activity in plants could be a platform for the further development of antioxidant assays.

  2. Fault feature analysis of cracked gear based on LOD and analytical-FE method

    Science.gov (United States)

    Wu, Jiateng; Yang, Yu; Yang, Xingkai; Cheng, Junsheng

    2018-01-01

    At present, there are two main approaches to gear fault diagnosis. One is model-based gear dynamic analysis; the other is signal-based gear vibration diagnosis. In this paper, a method for fault feature analysis of gear cracks is presented, which combines the advantages of dynamic modeling and signal processing. Firstly, a new time-frequency analysis method called local oscillatory-characteristic decomposition (LOD) is proposed, which has the attractive feature of extracting fault characteristics efficiently and accurately. Secondly, an analytical-finite element (analytical-FE) method, called the assist-stress intensity factor (assist-SIF) gear contact model, is put forward to calculate the time-varying mesh stiffness (TVMS) under different crack states. Based on a dynamic model of the gear system with 6 degrees of freedom, the dynamic simulation response was obtained for different tooth crack depths. For the dynamic model, the corresponding relation between the characteristic parameters and the degree of the tooth crack is established under a specific condition. On the basis of the methods mentioned above, a novel gear tooth root crack diagnosis method which combines the LOD with the analytical-FE is proposed. Furthermore, empirical mode decomposition (EMD) and ensemble empirical mode decomposition (EEMD) are contrasted with the LOD using gear crack fault vibration signals. The analysis results indicate that the proposed method performs effectively and feasibly for the tooth crack stiffness calculation and gear tooth crack fault diagnosis.

  3. LOD-based clustering techniques for efficient large-scale terrain storage and visualization

    Science.gov (United States)

    Bao, Xiaohong; Pajarola, Renato

    2003-05-01

    Large multi-resolution terrain data sets are usually stored out-of-core. To visualize terrain data at interactive frame rates, the data needs to be organized on disk, loaded into main memory part by part, then rendered efficiently. Many main-memory algorithms have been proposed for efficient vertex selection and mesh construction. Organization of terrain data on disk is quite difficult because the error, the triangulation dependency and the spatial location of each vertex all need to be considered. Previous terrain clustering algorithms did not consider the per-vertex approximation error of individual terrain data sets. Therefore, the vertex sequences on disk are exactly the same for any terrain. In this paper, we propose a novel clustering algorithm which introduces the level-of-detail (LOD) information to terrain data organization to map multi-resolution terrain data to external memory. In our approach the LOD parameters of the terrain elevation points are reflected during clustering. The experiments show that dynamic loading and paging of terrain data at varying LOD is very efficient and minimizes page faults. Additionally, the preprocessing of this algorithm is very fast and works from out-of-core.
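
    A minimal sketch of the core idea, under assumed data structures (arrays of vertex coordinates and per-vertex approximation errors; the paper's actual clustering, dependency handling, and paging logic are more involved): vertices are keyed by a coarse LOD band derived from their error and by a spatial tile, then laid out contiguously on disk in that order.

```python
import numpy as np

def lod_clusters(x, y, err, tile=256.0, bands=(0.5, 2.0, 8.0)):
    """Group terrain vertices into disk clusters keyed by (LOD band, tile).

    x, y  : vertex coordinates
    err   : per-vertex approximation error (drives the vertex's LOD level)
    tile  : spatial tile size; bands: error thresholds separating LOD levels
    """
    level = np.digitize(err, bands)              # coarse LOD level per vertex
    key = np.stack([level,
                    (x // tile).astype(int),
                    (y // tile).astype(int)], axis=1)
    # Sort vertices by (level, tile) so each cluster is a contiguous disk run,
    # letting the renderer page in only the tiles/LODs it needs.
    order = np.lexsort((key[:, 2], key[:, 1], key[:, 0]))
    return key, order

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1024, 1000), rng.uniform(0, 1024, 1000)
err = rng.exponential(2.0, 1000)
key, order = lod_clusters(x, y, err)
print(key[order][:5])
```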

  4. Effects of lodoxamide (LOD), disodium cromoglycate (DSCG) and N-acetyl-aspartyl-glutamate sodium salt (NAAGA) on ocular active anaphylaxis.

    Science.gov (United States)

    Goldschmidt, P; Luyckx, J

    1996-04-01

    LOD, DSCG and NAAGA eye drops were evaluated on experimentally induced ocular active anaphylaxis in guinea pigs. Twelve animals per group were sensitized with egg albumin i.p. and challenged on the surface of the eye 14 days later. Starting two days before the challenge, animals were treated with LOD, DSCG or NAAGA 4 times a day. Permeability indexes were calculated after intracardiac injection of Evans Blue. No effect on ocular active anaphylaxis was found with either LOD or DSCG. NAAGA was able to significantly reduce blood-eye permeability indexes.

  5. Renormalization group summation, spectrality constraints, and coupling constant analyticity for phenomenological applications of two-point correlators in QCD

    International Nuclear Information System (INIS)

    Pivovarov, A.A.

    2003-01-01

    The analytic structure in the strong coupling constant that emerges for some observables in QCD after duality averaging of renormalization-group-improved amplitudes is discussed, and the validity of the infrared renormalon hypothesis for the determination of this structure is critically reexamined. A consistent description of peculiar features of perturbation theory series related to hypothetical infrared renormalons and the corresponding power corrections is considered. It is shown that perturbation theory series for the spectral moments of two-point correlators of hadronic currents in QCD can be explicitly summed to all orders using a definition of the moments that avoids integration through the infrared region in momentum space. Such a definition of the moments relies on the analytic properties of two-point correlators in the momentum variable, which allow the integration contour to be shifted into the complex plane of the momentum. For definiteness, the explicit case of gluonic current correlators is discussed in detail.
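
    For orientation, a contour-type moment definition of the kind alluded to here can be written, in one standard (sketched, not necessarily the paper's exact) normalization, as

        M_n(s₀) = (1/2πi) ∮_{|s|=s₀} ds s^(−(n+1)) Π(s),

    where Π(s) is the two-point correlator. Because Π is analytic away from the physical cut, the contour can be kept at |s| = s₀ and deformed in the complex s-plane, so the integration never passes through the infrared region of the momentum variable.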

  6. Dynamical pairwise entanglement and two-point correlations in the three-ligand spin-star structure

    Science.gov (United States)

    Motamedifar, M.

    2017-10-01

    We consider the three-ligand spin-star structure with homogeneous Heisenberg interactions (XXX-3LSSS) in the framework of dynamical pairwise entanglement. It is shown that the time evolution of the central qubit "one-particle" state (COPS) brings about the generation of quantum W states at periodic time instants. On the contrary, W states cannot be generated from the time evolution of a ligand "one-particle" state (LOPS). We also investigate the dynamical behavior of two-point quantum correlations as well as the expectation values of the different spin components for each element in the XXX-3LSSS. It is found that when a W state is generated, the same value of the concurrence between any two arbitrary qubits arises from the xx and yy two-point quantum correlations. By contrast, the zz quantum correlation between any two qubits vanishes at these time instants.
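
    For reference, the three-qubit W state whose periodic generation is reported here is the usual symmetric single-excitation state

        |W⟩ = (|100⟩ + |010⟩ + |001⟩)/√3,

    i.e., one excitation coherently shared among three qubits with equal amplitudes.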

  7. On two-point boundary correlations in the six-vertex model with domain wall boundary conditions

    Science.gov (United States)

    Colomo, F.; Pronko, A. G.

    2005-05-01

    The six-vertex model with domain wall boundary conditions on an N × N square lattice is considered. The two-point correlation function describing the probability of having two vertices in a given state at opposite (top and bottom) boundaries of the lattice is calculated. It is shown that this two-point boundary correlator is expressible in a very simple way in terms of the one-point boundary correlators of the model on N × N and (N - 1) × (N - 1) lattices. In alternating sign matrix (ASM) language this result implies that the doubly refined x-enumerations of ASMs are just appropriate combinations of the singly refined ones.

  8. Two-point anchoring of a lanthanide-binding peptide to a target protein enhances the paramagnetic anisotropic effect

    International Nuclear Information System (INIS)

    Saio, Tomohide; Ogura, Kenji; Yokochi, Masashi; Kobashigawa, Yoshihiro; Inagaki, Fuyuhiko

    2009-01-01

    Paramagnetic lanthanide ions fixed in a protein frame induce several paramagnetic effects such as pseudo-contact shifts and residual dipolar couplings. These effects provide long-range distance and angular information for proteins and, therefore, are valuable in protein structural analysis. However, until recently this approach had been restricted to metal-binding proteins; it has now become applicable to non-metalloproteins through the use of a lanthanide-binding tag. Here we report a lanthanide-binding peptide tag anchored via two points to the target protein. Compared to conventional single-point attached tags, the two-point linked tag provides two- to threefold stronger anisotropic effects. Though there is slight residual mobility of the lanthanide-binding tag, the present tag provides a higher anisotropic paramagnetic effect.

  9. Expanded uncertainty associated with determination of isotope enrichment factors: Comparison of two point calculation and Rayleigh-plot.

    Science.gov (United States)

    Julien, Maxime; Gilbert, Alexis; Yamada, Keita; Robins, Richard J; Höhener, Patrick; Yoshida, Naohiro; Remaud, Gérald S

    2018-01-01

    The enrichment factor (ε) is a common way to express isotope effects (IEs) associated with a phenomenon. Many studies determine ε using a Rayleigh plot, which needs multiple data points. More recent articles describe an alternative method using the Rayleigh equation that allows the determination of ε using only one experimental point, but this method is often subject to controversy. However, a calculation method using two points (one experimental point and one at t₀) should lead to the same results because the calculation is derived from the Rayleigh equation. Still, it is frequently asked what the valid domain of use of this two-point calculation is. The primary aim of the present work is a systematic comparison of results obtained with these two methodologies and the determination of the conditions required for the valid calculation of ε. In order to evaluate the efficiency of the two approaches, the expanded uncertainty (U) associated with determining ε has been calculated using experimental data from three published articles. The second objective of the present work is to describe how to determine the expanded uncertainty (U) associated with determining ε. Comparative methodologies using both the Rayleigh plot and the two-point calculation are detailed, and it is clearly demonstrated that calculation of ε using a single data point can give the same result as a Rayleigh plot provided one strict condition is respected: that the experimental value is measured at a small fraction of unreacted substrate (f < 30%). This study will help stable isotope users to present their results in the more rigorous expression ε ± U, and therefore to better define the significance of experimental results prior to interpretation. Capsule: The enrichment factor can be determined by two different methods, and calculating the associated expanded uncertainty allows its significance to be checked. Copyright © 2017 Elsevier B.V. All rights reserved.
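
    The relation behind both methods is the Rayleigh fractionation equation, written here in a common notation (a background sketch, not the paper's derivation):

        ln(R_f / R₀) = ε ln f   ⇒   ε = ln(R_f / R₀) / ln f,

    where R_f and R₀ are the isotope ratios of the substrate at remaining fraction f and at t₀, respectively. A Rayleigh plot regresses ln(R_f/R₀) against ln f over many sampling points, whereas the two-point calculation evaluates the ratio directly from a single experimental point together with the t₀ point.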

  10. Cosmological model-independent test of ΛCDM with two-point diagnostic by the observational Hubble parameter data

    Science.gov (United States)

    Cao, Shu-Lei; Duan, Xiao-Wei; Meng, Xiao-Lei; Zhang, Tong-Jie

    2018-04-01

    Aiming at exploring the nature of dark energy (DE), we use forty-three observational Hubble parameter data (OHD) spanning a wide redshift range, binned to reduce the noise of the individual measurements. The binning methods turn out to be promising and are considered to be robust. By applying the two-point diagnostic to the binned data, we find that although the best-fit values of Omh^2 fluctuate as the continuous redshift intervals change, on average they are consistent with a constant within the 1σ confidence interval. Therefore, we conclude that the ΛCDM model cannot be ruled out.
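
    For context, the two-point diagnostic used with such data is conventionally defined (standard background, not quoted from the paper) as

        Omh²(z_i; z_j) = [h²(z_i) − h²(z_j)] / [(1 + z_i)³ − (1 + z_j)³],

    where h(z) = H(z)/(100 km s⁻¹ Mpc⁻¹). For flat ΛCDM this quantity equals the constant Ω_m h² for every pair (z_i, z_j), so any statistically significant variation with the chosen pair would disfavor a cosmological constant.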

  11. On the solution of two-point linear differential eigenvalue problems. [numerical technique with application to Orr-Sommerfeld equation]

    Science.gov (United States)

    Antar, B. N.

    1976-01-01

    A numerical technique is presented for locating the eigenvalues of two point linear differential eigenvalue problems. The technique is designed to search for complex eigenvalues belonging to complex operators. With this method, any domain of the complex eigenvalue plane could be scanned and the eigenvalues within it, if any, located. For an application of the method, the eigenvalues of the Orr-Sommerfeld equation of the plane Poiseuille flow are determined within a specified portion of the c-plane. The eigenvalues for alpha = 1 and R = 10,000 are tabulated and compared for accuracy with existing solutions.
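
    The sketch below is not the paper's search technique but a compact modern way to do the same job on a toy two-point boundary-value problem: discretize the operator, solve the resulting generalized eigenvalue problem, and keep only the eigenvalues falling inside a specified window of the complex c-plane.

```python
import numpy as np
from scipy.linalg import eig

def eigs_in_window(A, B, re_range, im_range):
    """Solve the generalized problem A v = c B v from a discretized two-point
    boundary-value problem and keep eigenvalues inside a rectangle of the
    complex c-plane, mimicking 'scan a domain, locate what is inside'."""
    c = eig(A, B, right=False)
    keep = ((c.real >= re_range[0]) & (c.real <= re_range[1]) &
            (c.imag >= im_range[0]) & (c.imag <= im_range[1]))
    return np.sort_complex(c[keep])

# Toy two-point problem: u'' = -c u, u(0) = u(1) = 0, second-order differences.
n = 200
h = 1.0 / (n + 1)
main = np.full(n, 2.0) / h ** 2
off = np.full(n - 1, -1.0) / h ** 2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
B = np.eye(n)
# Exact eigenvalues are (k*pi)^2: 9.87, 39.5, 88.8, ...
print(eigs_in_window(A, B, (0, 100), (-1, 1)))
```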

  12. Scaling behaviour of the correlation length for the two-point correlation function in the random field Ising chain

    Energy Technology Data Exchange (ETDEWEB)

    Lange, Adrian; Stinchcombe, Robin [Theoretical Physics, University of Oxford, Oxford (United Kingdom)

    1996-07-07

    We study the general behaviour of the correlation length ζ(kT; h) for the two-point correlation function of the local fields in an Ising chain with binary distributed fields. At zero field it is shown that ζ is the same as the zero-field correlation length for the spin-spin correlation function. For the field-dominated behaviour of ζ we find an exponent for the power-law divergence which is smaller than the exponent for the spin-spin correlation length. The entire behaviour of the correlation length can be described by a single crossover scaling function involving the new critical exponent. (author)

  13. ANIMATION STRATEGIES FOR SMOOTH TRANSFORMATIONS BETWEEN DISCRETE LODS OF 3D BUILDING MODELS

    Directory of Open Access Journals (Sweden)

    M. Kada

    2016-06-01

    The cartographic 3D visualization of urban areas has experienced tremendous progress over the last years. An increasing number of applications operate interactively in real time and thus require advanced techniques to improve the quality and time response of dynamic scenes. The main focus of this article is on strategies for smooth transformation between two discrete levels of detail (LOD) of 3D building models that are represented as restricted triangle meshes. Because the operation order determines the geometrical and topological properties of the transformation process as well as its visual perception by a human viewer, three different strategies are proposed and subsequently analyzed. The simplest one orders transformation operations by the length of the edges to be collapsed, while the other two strategies introduce a general transformation direction in the form of a moving plane. This plane either pushes the nodes that need to be removed, e.g. during the transformation of a detailed LOD model to a coarser one, towards the main building body, or triggers the edge collapse operations used as transformation paths for the cartographic generalization.

  15. A comparison of LOD and UT1-UTC forecasts by different combined prediction techniques

    Science.gov (United States)

    Kosek, W.; Kalarus, M.; Johnson, T. J.; Wooden, W. H.; McCarthy, D. D.; Popiński, W.

    Stochastic prediction techniques including autocovariance, autoregressive, autoregressive moving average, and neural networks were applied to the UT1-UTC and length-of-day (LOD) International Earth Rotation and Reference Systems Service (IERS) EOPC04 time series to evaluate the capabilities of each method. All known effects such as leap seconds and solid Earth zonal tides were first removed from the observed values of UT1-UTC and LOD. Two combination procedures were applied to predict the resulting LODR time series: 1) the combination of least-squares (LS) extrapolation with a stochastic prediction method, and 2) the combination of discrete wavelet transform (DWT) filtering and a stochastic prediction method. The results of the combination of the LS extrapolation with different stochastic prediction techniques were compared with the results of the UT1-UTC prediction method currently used by the IERS Rapid Service/Prediction Centre (RS/PC). It was found that the prediction accuracy depends on the starting prediction epochs, and that for the combined forecast methods, the mean prediction errors for 1 to about 70 days in the future are of the same order as those of the method used by the IERS RS/PC.
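
    A minimal sketch of the first combination procedure (LS extrapolation plus a stochastic prediction of the LS residuals), using an ordinary autoregressive model for the residual step; the synthetic series, the chosen periods, and the lag order are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def ls_plus_ar_forecast(t, x, horizon, periods=(365.25, 182.625), lags=30):
    """Combined LS + AR forecast of an LOD-like series: fit a least-squares
    model (trend + annual + semiannual harmonics), extrapolate it, and add
    an autoregressive forecast of the LS residuals."""
    def design(tt):
        cols = [np.ones_like(tt), tt]
        for p in periods:
            cols += [np.sin(2 * np.pi * tt / p), np.cos(2 * np.pi * tt / p)]
        return np.column_stack(cols)

    A = design(t)
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    resid = x - A @ coef

    # Extrapolate the deterministic LS part, forecast the stochastic part.
    tf = t[-1] + np.arange(1, horizon + 1)
    ls_part = design(tf) @ coef
    ar_part = AutoReg(resid, lags=lags).fit().forecast(horizon)
    return ls_part + ar_part

# Synthetic stand-in for an LODR series (days vs. milliseconds).
t = np.arange(3000.0)
x = (1.5 + 0.3 * np.sin(2 * np.pi * t / 365.25)
     + 0.1 * np.cos(2 * np.pi * t / 182.625)
     + np.cumsum(np.random.default_rng(1).normal(0, 0.005, t.size)))
print(ls_plus_ar_forecast(t, x, horizon=10))
```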

  16. Highly sensitive lactate biosensor by engineering chitosan/PVI-Os/CNT/LOD network nanocomposite.

    Science.gov (United States)

    Cui, Xiaoqiang; Li, Chang Ming; Zang, Jianfeng; Yu, Shucong

    2007-06-15

    A novel chitosan/PVI-Os (polyvinylimidazole-Os)/CNT (carbon nanotube)/LOD (lactate oxidase) network nanocomposite was constructed on a gold electrode for the detection of lactate. The composite was nanoengineered by selecting matched material components and optimizing the composition ratio to produce a superior lactate sensor. Positively charged chitosan and PVI-Os were used as the matrix and the mediator to immobilize the negatively charged LOD and to enhance the electron transfer, respectively. CNTs were introduced as the essential component in the composite for the network nanostructure. FESEM (field emission scanning electron microscopy) and electrochemical characterization demonstrated that the CNTs behaved as a cross-linker networking PVI and chitosan due to their nanoscaled and negatively charged nature. This significantly improved the conductivity, stability and electroactivity for the detection of lactate. The standard deviation of the sensor was greatly reduced from 19.6% (without CNTs in the composite) to 4.9% by the addition of CNTs. Under optimized conditions the sensitivity and detection limit of the lactate sensor were 19.7 µA mM⁻¹ cm⁻² and 5 µM, respectively. The sensitivity was remarkably improved in comparison to recently reported values of 0.15-3.85 µA mM⁻¹ cm⁻². This novel nanoengineering approach of selecting matched components to form a network nanostructure could be extended to other enzyme biosensors, and has broad potential applications in diagnostics, life science and food analysis.

  17. CA-LOD: Collision Avoidance Level of Detail for Scalable, Controllable Crowds

    Science.gov (United States)

    Paris, Sébastien; Gerdelan, Anton; O'Sullivan, Carol

    The new wave of computer-driven entertainment technology throws audiences and game players into massive virtual worlds where entire cities are rendered in real time. Computer-animated characters run through inner-city streets teeming with pedestrians, all fully rendered with 3D graphics, animations and particle effects, and linked to 3D sound effects to produce more realistic and immersive computer-hosted entertainment experiences than ever before. Computing all of this detail at once is enormously computationally expensive, and game designers, as a rule, have sacrificed behavioural realism in favour of better graphics. In this paper we propose a new Collision Avoidance Level of Detail (CA-LOD) algorithm that allows games to support huge crowds in real time with the appearance of more intelligent behaviour. We propose two collision avoidance models used for two different CA-LODs: a fuzzy steering model focusing on performance, and a geometric steering model to obtain the best realism. Mixing these approaches yields thousands of autonomous characters in real time, resulting in a scalable but still controllable crowd.

  18. Two-point active microrheology in a viscous medium exploiting a motional resonance excited in dual-trap optical tweezers

    Science.gov (United States)

    Paul, Shuvojit; Kumar, Randhir; Banerjee, Ayan

    2018-04-01

    Two-point microrheology measurements from widely separated colloidal particles approach the bulk viscosity of the host medium more reliably than corresponding single-point measurements. In addition, active microrheology offers the advantage of enhanced signal to noise over passive techniques. Recently, we reported the observation of a motional resonance induced in a probe particle in dual-trap optical tweezers when the control particle was driven externally [Paul et al., Phys. Rev. E 96, 050102(R) (2017), 10.1103/PhysRevE.96.050102]. We now demonstrate that the amplitude and phase characteristics of the motional resonance can be used as a sensitive tool for active two-point microrheology to measure the viscosity of a viscous fluid. Thus, we measure the viscosity of viscous liquids from both the amplitude and the phase response of the resonance, and demonstrate that the zero crossing of the phase response of the probe particle with respect to the external drive is superior to the amplitude response for measuring viscosity at large particle separations. We compare our viscosity measurements with those from a commercial rheometer and obtain agreement within ∼1%. The method can be extended to viscoelastic materials, where the frequency dependence of the resonance may provide further accuracy for active microrheological measurements.

  19. Exact two-point resistance, and the simple random walk on the complete graph minus N edges

    International Nuclear Information System (INIS)

    Chair, Noureddine

    2012-01-01

    An analytical approach is developed to obtain the exact expressions for the two-point resistance and the total effective resistance of the complete graph minus N edges of the opposite vertices. These expressions are written in terms of certain numbers that we introduce, which we call the Bejaia and the Pisa numbers; these numbers are the natural generalizations of the bisected Fibonacci and Lucas numbers. The correspondence between random walks and the resistor networks is then used to obtain the exact expressions for the first passage and mean first passage times on this graph. - Highlights: ► We obtain exact formulas for the two-point resistance of the complete graph minus N edges. ► We obtain also the total effective resistance of this graph. ► We modified Schwatt’s formula on trigonometrical power sum to suit our computations. ► We introduced the generalized bisected Fibonacci and Lucas numbers: the Bejaia and the Pisa numbers. ► The first passage and mean first passage times of the random walks have exact expressions.

  1. Feasibility of the Two-Point Method for Determining the One-Repetition Maximum in the Bench Press Exercise.

    Science.gov (United States)

    García-Ramos, Amador; Haff, Guy Gregory; Pestaña-Melero, Francisco Luis; Pérez-Castilla, Alejandro; Rojas, Francisco Javier; Balsalobre-Fernández, Carlos; Jaric, Slobodan

    2017-09-05

    This study compared the concurrent validity and reliability of previously proposed generalized group equations for estimating the bench press (BP) one-repetition maximum (1RM) with an individualized load-velocity relationship modelled with a two-point method. Thirty men (BP 1RM relative to body mass: 1.08 ± 0.18 kg·kg⁻¹) performed two incremental loading tests in the concentric-only BP exercise and another two in the eccentric-concentric BP exercise to assess their actual 1RM and load-velocity relationships. A high velocity (≈1 m·s⁻¹) and a low velocity (≈0.5 m·s⁻¹) were selected from their load-velocity relationships to estimate the 1RM from generalized group equations and through an individual linear model obtained from the two velocities. The directly measured 1RM was highly correlated with all predicted 1RMs (r range: 0.847-0.977). The generalized group equations systematically underestimated the actual 1RM when predicted from the concentric-only BP (P < 0.001; effect size [ES] range: 0.15-0.94), but overestimated it when predicted from the eccentric-concentric BP (P < 0.001; ES range: 0.36-0.98). Conversely, a low systematic bias (range: −2.3 to 0.5 kg) and random errors (range: 3.0-3.8 kg), no heteroscedasticity of errors (r² range: 0.053-0.082), and trivial ES (range: −0.17 to 0.04) were observed when the prediction was based on the two-point method. Although all examined methods reported the 1RM with high reliability (CV ≤ 5.1%; ICC ≥ 0.89), the direct method was the most reliable (CV < 2.0%; ICC ≥ 0.98). The quick, fatigue-free, and practical two-point method was able to predict the BP 1RM with high reliability and practically perfect validity, and we therefore recommend its use over generalized group equations.
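
    The arithmetic of the two-point method itself is a straight line through two (load, velocity) pairs, extrapolated to the velocity expected at 1RM. The sketch below illustrates it with hypothetical numbers; the minimal-velocity threshold v_1rm is an assumed placeholder, not a value from the study.

```python
import numpy as np

def two_point_1rm(loads, velocities, v_1rm=0.17):
    """Two-point method: fit the individual load-velocity line through two
    (load, velocity) pairs and extrapolate to the velocity expected at 1RM.
    v_1rm is a hypothetical minimal-velocity threshold; the appropriate
    value depends on the exercise and protocol."""
    (l1, l2), (v1, v2) = loads, velocities
    slope = (l2 - l1) / (v2 - v1)      # kg per (m/s), negative slope
    intercept = l1 - slope * v1        # load = slope * velocity + intercept
    return slope * v_1rm + intercept

# Hypothetical measurements: ~1 m/s at 40 kg and ~0.5 m/s at 70 kg.
print(f"estimated 1RM = {two_point_1rm((40.0, 70.0), (1.00, 0.50)):.1f} kg")
```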

  2. Job Demands, Burnout, and Teamwork in Healthcare Professionals Working in a General Hospital that Was Analysed At Two Points in Time

    Science.gov (United States)

    Mijakoski, Dragan; Karadzhinska-Bislimovska, Jovanka; Stoleski, Sasho; Minov, Jordan; Atanasovska, Aneta; Bihorac, Elida

    2018-01-01

    AIM: The purpose of the paper was to assess job demands, burnout, and teamwork in healthcare professionals (HPs) working in a general hospital that was analysed at two points in time with a time lag of three years. METHODS: Time 1 respondents (N = 325) were HPs who participated during the first wave of data collection (2011). Time 2 respondents (N = 197) were HPs from the same hospital who responded at Time 2 (2014). Job demands, burnout, and teamwork were measured with the Hospital Experience Scale, the Maslach Burnout Inventory, and the Hospital Survey on Patient Safety Culture, respectively. RESULTS: Significantly higher scores of emotional exhaustion (21.03 vs. 15.37, t = 5.1) and significantly increased job demands were found at Time 2. Teamwork levels were similar at both points in time (Time 1 = 3.84 vs. Time 2 = 3.84, t = 0.043, p = 0.97). CONCLUSION: The actual longitudinal study revealed significantly higher mean values of emotional exhaustion and depersonalization in 2014, which could be explained by the significantly increased job demands between the analysed points in time. PMID:29731948

  3. Job Demands, Burnout, and Teamwork in Healthcare Professionals Working in a General Hospital that Was Analysed At Two Points in Time.

    Science.gov (United States)

    Mijakoski, Dragan; Karadzhinska-Bislimovska, Jovanka; Stoleski, Sasho; Minov, Jordan; Atanasovska, Aneta; Bihorac, Elida

    2018-04-15

    The purpose of the paper was to assess job demands, burnout, and teamwork in healthcare professionals (HPs) working in a general hospital that was analysed at two points in time with a time lag of three years. Time 1 respondents (N = 325) were HPs who participated during the first wave of data collection (2011). Time 2 respondents (N = 197) were HPs from the same hospital who responded at Time 2 (2014). Job demands, burnout, and teamwork were measured with the Hospital Experience Scale, the Maslach Burnout Inventory, and the Hospital Survey on Patient Safety Culture, respectively. Significantly higher scores of emotional exhaustion (21.03 vs. 15.37, t = 5.1) and higher job demands were found at Time 2. Teamwork levels were similar at both points in time (Time 1 = 3.84 vs. Time 2 = 3.84, t = 0.043, p = 0.97). The actual longitudinal study revealed significantly higher mean values of emotional exhaustion and depersonalization in 2014, which could be explained by the significantly increased job demands between the analysed points in time.

  4. Automated Reconstruction of Building LoDs from Airborne LiDAR Point Clouds Using an Improved Morphological Scale Space

    Directory of Open Access Journals (Sweden)

    Bisheng Yang

    2016-12-01

    Reconstructing building models at different levels of detail (LoDs) from airborne laser scanning point clouds is urgently needed for wide application, as this method can balance the user's requirements against economic costs. Previous methods reconstruct building LoDs from the finest 3D building models rather than from point clouds, resulting in heavy costs and inflexible adaptivity. The scale space is a sound theory for the multi-scale representation of an object from a coarser level to a finer level. Therefore, this paper proposes a novel method to reconstruct buildings at different LoDs from airborne Light Detection and Ranging (LiDAR) point clouds based on an improved morphological scale space. The proposed method first extracts building candidate regions following the separation of ground and non-ground points. For each building candidate region, the proposed method generates a scale space by iteratively applying the improved morphological reconstruction with increasing scale, and constructs the corresponding topological relationship graphs (TRGs) across scales. Secondly, the proposed method robustly extracts building points by using features based on the TRG. Finally, the proposed method reconstructs each building at different LoDs according to the TRG. The experiments demonstrate that the proposed method robustly extracts buildings with details (e.g., door eaves and roof furniture), illustrates good performance in distinguishing buildings from vegetation or other objects, and automatically reconstructs building LoDs from the finest building points.
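
    A toy sketch of the scale-space stage, assuming a rasterized height field (the paper works on point clouds and uses its own improved reconstruction operator): repeated opening-by-reconstruction with a growing structuring element removes small roof details first while preserving the main building body, which is exactly the coarse-to-fine ordering that a TRG then tracks.

```python
import numpy as np
from skimage.morphology import erosion, disk, reconstruction

def morphological_scale_space(height, radii=(1, 2, 4, 8)):
    """Build a coarse-to-fine scale space of a height field by
    opening-by-reconstruction with growing structuring elements: small
    protrusions (roof furniture, dormers) vanish first, while the main
    building body survives to coarser scales."""
    levels = []
    current = height.astype(float)
    for r in radii:
        seed = erosion(current, disk(r))                 # suppress details
        current = reconstruction(seed, current, method='dilation')
        levels.append(current.copy())                    # one scale level
    return levels

# Toy height field: a flat roof with a small superstructure on top.
h = np.zeros((64, 64))
h[16:48, 16:48] = 10.0      # main building body
h[28:32, 28:32] = 13.0      # small roof detail, removed at coarser scales
for i, lvl in enumerate(morphological_scale_space(h)):
    print(f"scale {i}: max height = {lvl.max():.1f}")
```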

  5. Alien calculus and a Schwinger-Dyson equation: two-point function with a nonperturbative mass scale

    Science.gov (United States)

    Bellon, Marc P.; Clavier, Pierre J.

    2018-02-01

    Starting from the Schwinger-Dyson equation and the renormalization group equation for the massless Wess-Zumino model, we compute the dominant nonperturbative contributions to the anomalous dimension of the theory, which are related by alien calculus to singularities of the Borel transform at integer points. The sum of these dominant contributions has an analytic expression. When applied to the two-point function, this analysis gives a tame evolution in the deep Euclidean domain at this approximation level, casting doubt on the arguments for the triviality of quantum field theories with positive β-function. On the other side, we have a singularity of the propagator for timelike momenta of the order of the renormalization-group-invariant scale of the theory, which has a nonperturbative relationship with the renormalization point of the theory. None of these results seems to have an interpretation in terms of a semiclassical analysis of a Feynman path integral.

  6. Singularity Processing Method of Microstrip Line Edge Based on LOD-FDTD

    Directory of Open Access Journals (Sweden)

    Lei Li

    2014-01-01

    In order to improve accuracy and efficiency in analyzing microstrip structures, a singularity processing method is proposed theoretically and experimentally based on the fundamental locally one-dimensional finite-difference time-domain (LOD-FDTD) scheme with second-order temporal accuracy (denoted FLOD2-FDTD). The proposed method can greatly improve the performance of FLOD2-FDTD even when the conductor is embedded into more than half of the cell by the coordinate transformation. The experimental results show that the proposed method achieves higher accuracy when the time step size is less than or equal to 5 times that allowed by the Courant-Friedrichs-Lewy (CFL) condition. In comparison with previously reported methods, the proposed method for calculating the electromagnetic field near the microstrip line edge not only improves efficiency, but also provides higher accuracy.

  7. GENERATION OF MULTI-LOD 3D CITY MODELS IN CITYGML WITH THE PROCEDURAL MODELLING ENGINE RANDOM3DCITY

    Directory of Open Access Journals (Sweden)

    F. Biljecki

    2016-09-01

    The production and dissemination of semantic 3D city models is rapidly increasing, benefiting a growing number of use cases. However, their availability in multiple LODs and in the CityGML format is still problematic in practice. This hinders applications and experiments where multi-LOD datasets are required as input, for instance, to determine the performance of different LODs in a spatial analysis. An alternative approach to obtaining 3D city models is to generate them with procedural modelling, which is – as we discuss in this paper – well suited as a method to source multi-LOD datasets useful for a number of applications. However, procedural modelling has not yet been employed for this purpose. Therefore, we have developed RANDOM3DCITY, an experimental procedural modelling engine for generating synthetic datasets of buildings and other urban features. The engine is designed to produce models in CityGML and does so in multiple LODs. Besides the generation of multiple geometric LODs, we implement the realisation of multiple levels of spatiosemantic coherence, geometric reference variants, and indoor representations. As a result of their permutations, each building can be generated in 392 different CityGML representations, an unprecedented number of modelling variants of the same feature. The datasets produced by RANDOM3DCITY are suited for several applications, as we show in this paper with documented uses. The developed engine is available under an open-source licence on GitHub at http://github.com/tudelft3d/Random3Dcity.

  8. Differences in two-point discrimination and sensory threshold in the blind between braille and text reading: a pilot study.

    Science.gov (United States)

    Noh, Ji-Woong; Park, Byoung-Sun; Kim, Mee-Young; Lee, Lim-Kyu; Yang, Seung-Min; Lee, Won-Deok; Shin, Yong-Sub; Kang, Ji-Hye; Kim, Ju-Hyun; Lee, Jeong-Uk; Kwak, Taek-Yong; Lee, Tae-Hyun; Kim, Ju-Young; Kim, Junghwan

    2015-06-01

    [Purpose] This study investigated two-point discrimination (TPD) and the electrical sensory threshold of the blind to define the effect of using Braille on the tactile and electrical senses. [Subjects and Methods] Twenty-eight blind participants were divided equally into a text-reading and a Braille-reading group. We measured tactile sensory and electrical thresholds using the TPD method and a transcutaneous electrical nerve stimulator. [Results] The left palm TPD values were significantly different between the groups. The values of the electrical sensory threshold in the left hand, the electrical pain threshold in the left hand, and the electrical pain threshold in the right hand were significantly lower in the Braille group than in the text group. [Conclusion] These findings make it difficult to explain the difference in tactility between groups, excluding both palms. However, our data show that using Braille can enhance development of the sensory median nerve in the blind, particularly in terms of the electrical sensory and pain thresholds.

  9. Theoretical assessment of the disparity in the electrostatic forces between two point charges and two conductive spheres of equal radii

    Science.gov (United States)

    Kolikov, Kiril

    2016-11-01

    Coulomb's formula for the force FC of electrostatic interaction between two point charges is well known. In reality, however, interactions occur not between point charges but between charged bodies of certain geometric form, size and physical structure. This leads to deviation of the estimated force FC from the real force F of electrostatic interaction, imposing the task of evaluating the disparity. In the present paper the problem is solved theoretically for two charged conductive spheres of equal radii and arbitrary electric charges. An assessment of the deviation is given as a function of the ratio of the distance R between the spheres' centers to the sum of their radii. For this purpose, relations between FC and F derived in a preceding work of ours are employed to generalize the Coulomb interactions. At relatively short distances between the spheres, the Coulomb force FC, estimated as if induced by charges situated at the centers of the spheres, differs significantly from the real force F of interaction between the spheres. For both zero and non-zero net charge we prove that, with increasing distance between the two spheres, the force F decreases rapidly, virtually to zero, i.e., it appears to be a short-range force.
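
    The point-charge baseline against which F is compared is the familiar Coulomb expression (stated here for reference):

        FC = (1/4πε₀) q₁q₂ / R²,

    while for conductive spheres the mutual polarization of the surface charge, describable by series of image charges, makes F deviate from FC increasingly strongly as R approaches the sum of the radii.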

  10. A modified two-point titration method for the determination of volatile fatty acids in anaerobic systems.

    Science.gov (United States)

    Mu, Zhe-Xuan; He, Chuan-Shu; Jiang, Jian-Kai; Zhang, Jie; Yang, Hou-Yun; Mu, Yang

    2018-04-10

    The volatile fatty acids (VFA) concentration plays an important role in the rapid start-up and stable operation of anaerobic reactors, so it is essential to develop a simple and accurate method to monitor the VFA concentration in anaerobic systems. In the present work, a modified two-point titration method was developed to determine the VFA concentration. The results show that VFA concentrations in standard solutions estimated by the titration method coincided well with those measured by gas chromatography, with all relative errors lower than 5.5%. Compared with the phosphate, ammonium and sulfide subsystems, the effect of bicarbonate on the accuracy of the developed method was relatively significant. When the bicarbonate concentration varied from 0 to 8 mmol/L, the relative errors increased from 1.2% to 30% for a VFA concentration of 1 mmol/L, but remained within 2.0% for a VFA concentration of 5 mmol/L. In addition, the VFA composition affected the accuracy of the titration method to some extent. The developed titration method was further proven effective with practical effluents from a lab-scale anaerobic reactor under organic shock loadings and from an unstable full-scale anaerobic reactor. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. THE ANISOTROPIC TWO-POINT CORRELATION FUNCTIONS OF THE NONLINEAR TRACELESS TIDAL FIELD IN THE PRINCIPAL-AXIS FRAME

    International Nuclear Information System (INIS)

    Lee, Jounghun; Hahn, Oliver; Porciani, Cristiano

    2009-01-01

    Galaxies on the largest scales of the universe are observed to be embedded in the filamentary cosmic web, which is shaped by the nonlinear tidal field. As an efficient tool to quantitatively describe the statistics of this cosmic web, we present the anisotropic two-point correlation functions of the nonlinear traceless tidal field in the principal-axis frame, which are measured using numerical data from an N-body simulation. We show that both the nonlinear density and traceless tidal fields are more strongly correlated along the directions perpendicular to the eigenvectors associated with the largest eigenvalues of the local tidal field. The correlation length scale of the traceless tidal field is found to be ∼20 h⁻¹ Mpc, which is much larger than that of the density field, ∼5 h⁻¹ Mpc. We also provide analytic fitting formulae for the anisotropic correlation functions of the traceless tidal field, which turn out to be in excellent agreement with the numerical results. We expect that our numerical results and analytical formulae will be useful to disentangle cosmological information from the filamentary network of the large-scale structures.

  12. Derivation of Pal-Bell equations for two-point reactors, and its application to correlation measurements at KUCA

    International Nuclear Information System (INIS)

    Murata, Naoyuki; Yamane, Yoshihiro; Nishina, Kojiro; Shiroya, Seiji; Kanda, Keiji.

    1980-01-01

    A probability is defined for an event in which m neutrons exist at time t_f in core I of a coupled-core system, originating from a neutron injected into core I at an earlier time t; we call it P_{I,I,m}(t_f | t). Similarly, P_{I,II,m}(t_f | t) is defined as the probability for m neutrons to exist in core II of the system at time t_f, originating from a neutron injected into core I at time t. A system of coupled equations is then derived for the generating functions G_{I,j}(z, t_f | t) = Σ_m P_{I,j,m}(t_f | t) z^m, where j = I, II. By similar procedures, equations are derived for the generating functions associated with the joint probability of the following events: a given combination of numbers of neutrons is detected during a given series of detection time intervals by a detector inserted in one of the cores. The above two kinds of systems of equations can be regarded as a two-point version of Pal-Bell's equations. As an application of these formulations, analysis formulae for correlation measurements, namely (1) the Feynman-alpha experiment and (2) the Rossi-alpha experiment of Orndoff type, are derived, and their feasibility is verified by experiments carried out at KUCA. (author)
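
    For orientation, the single-point Feynman-alpha analysis formula that such two-point generating functions generalize has the familiar form (standard background result, not quoted from the paper):

        Y(T) = σ²(T)/N̄(T) − 1 = Y∞ [1 − (1 − e^(−αT))/(αT)],

    where Y(T) is the excess of the variance-to-mean ratio of detector counts in a gate of width T and α is the prompt-neutron decay constant; fitting measured Y(T) against gate width yields α.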

  13. Use of digital image analysis to estimate fluid permeability of porous materials: Application of two-point correlation functions

    International Nuclear Information System (INIS)

    Berryman, J.G.; Blair, S.C.

    1986-01-01

    Scanning electron microscope images of cross sections of several porous specimens have been digitized and analyzed using image processing techniques. The porosity and specific surface area may be estimated directly from measured two-point spatial correlation functions. The measured values of porosity and image specific surface were combined with known values of electrical formation factors to estimate fluid permeability using one version of the Kozeny-Carman empirical relation. For glass bead samples with measured permeability values in the range of a few darcies, our estimates agree well (±10-20%) with the measurements. For samples of Ironton-Galesville sandstone with a permeability in the range of hundreds of millidarcies, our best results agree with the laboratory measurements again within about 20%. For Berea sandstone with still lower permeability (tens of millidarcies), our predictions from the images agree within 10-30%. Best results for the sandstones were obtained by using the porosities obtained at magnifications of about 100× (since less resolution and better statistics are required) and the image specific surface obtained at magnifications of about 500× (since greater resolution is required)
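
    A minimal sketch of the measurement underlying this approach: the radially averaged two-point correlation S2(r) of a binary pore image, computed by FFT autocorrelation under assumed periodic boundaries. S2(0) gives the porosity; the paper's specific-surface estimate additionally uses the slope of S2 near the origin, and permeability then follows from a Kozeny-Carman relation (not reproduced here).

```python
import numpy as np

def two_point_correlation(img):
    """Radially averaged two-point correlation S2(r) of a binary (pore = 1)
    image via FFT autocorrelation; S2(0) is the porosity, and the slope of
    S2 at r -> 0 is proportional to the specific surface area."""
    f = np.fft.fft2(img)
    corr = np.fft.ifft2(f * np.conj(f)).real / img.size   # <I(x) I(x+r)>
    corr = np.fft.fftshift(corr)                          # zero lag at center
    cy, cx = np.array(corr.shape) // 2
    y, x = np.indices(corr.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    radial = np.bincount(r.ravel(), corr.ravel()) / np.bincount(r.ravel())
    return radial[:cx]                                    # S2 as function of r

rng = np.random.default_rng(0)
img = (rng.random((256, 256)) < 0.3).astype(float)   # toy 30%-porosity medium
s2 = two_point_correlation(img)
print(f"porosity ~ {s2[0]:.3f}, S2(1) = {s2[1]:.3f}")
```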

  14. Hierarchical random additive process and logarithmic scaling of generalized high order, two-point correlations in turbulent boundary layer flow

    Science.gov (United States)

    Yang, X. I. A.; Marusic, I.; Meneveau, C.

    2016-06-01

    Townsend [Townsend, The Structure of Turbulent Shear Flow (Cambridge University Press, Cambridge, UK, 1976)] hypothesized that the logarithmic region in high-Reynolds-number wall-bounded flows consists of space-filling, self-similar attached eddies. Invoking this hypothesis, we express streamwise velocity fluctuations in the inertial layer in high-Reynolds-number wall-bounded flows as a hierarchical random additive process (HRAP): u_z⁺ = Σ_{i=1..N_z} a_i. Here u is the streamwise velocity fluctuation, + indicates normalization in wall units, z is the wall-normal distance, and the a_i are independent, identically distributed random additives, each of which is associated with an attached eddy in the wall-attached hierarchy. The number of random additives is N_z ∼ ln(δ/z), where δ is the boundary layer thickness and ln is the natural logarithm. Due to its simplified structure, such a process leads to predictions of the scaling behaviors for various turbulence statistics in the logarithmic layer. Besides reproducing the known logarithmic scaling of moments, structure functions, and the two-point correlation function ⟨u_z(x) u_z(x+r)⟩, new logarithmic laws in generalized higher-order two-point statistics can be derived using the HRAP formalism. Supporting empirical evidence for the logarithmic scaling in such statistics is found from the Melbourne High Reynolds Number Boundary Layer Wind Tunnel measurements. We also show that, at high Reynolds numbers, the above-mentioned new logarithmic laws can be derived by assuming the arrival of an attached eddy at a generic point in the flow field to be a Poisson process [Woodcock and Marusic, Phys. Fluids 27, 015104 (2015), 10.1063/1.4905301]. Taken together, the results provide new evidence supporting the essential ingredients of the attached eddy hypothesis to describe streamwise velocity fluctuations of large, momentum-transporting eddies in wall-bounded turbulence, while observed deviations suggest the need for further extensions of the model.
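
    The additive structure makes the logarithmic variance law easy to see in a few lines: summing N_z ∼ ln(δ/z) i.i.d. unit-variance additives gives var(u⁺) ∼ ln(δ/z). A Monte Carlo sketch follows; Gaussian additives are an illustrative assumption, since the HRAP argument does not require a particular distribution.

```python
import numpy as np

rng = np.random.default_rng(42)
delta = 1.0                                        # boundary layer thickness
heights = np.array([0.01, 0.02, 0.05, 0.1, 0.2])   # wall-normal distances z

# HRAP sketch: u+ at height z is a sum of N_z ~ ln(delta/z) i.i.d. additives,
# one per attached eddy in the hierarchy; the variance then grows like
# ln(delta/z), i.e. the classical logarithmic law for <u'^2>.
for z in heights:
    n_z = max(1, int(round(np.log(delta / z))))
    u = rng.normal(0.0, 1.0, size=(200_000, n_z)).sum(axis=1)
    print(f"z/delta = {z:5.2f}  N_z = {n_z}  var(u+) = {u.var():5.2f}"
          f"  ln(delta/z) = {np.log(delta / z):5.2f}")
```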

  15. Differences of Cutaneous Two-Point Discrimination Thresholds Among Students in Different Years of a Chiropractic Program.

    Science.gov (United States)

    Dane, Andrew B; Teh, Elaine; Reckelhoff, Kenneth E; Ying, Pee Kui

    2017-09-01

    The aim of this study was to investigate whether there were differences in the two-point discrimination (2-PD) of fingers among students at different stages of a chiropractic program. This study measured 2-PD thresholds of the dominant and nondominant index finger and dominant and nondominant forearm in groups of students in a 4-year chiropractic program at the International Medical University in Kuala Lumpur, Malaysia. Measurements were made using digital calipers mounted on a modified weighing scale. Group comparisons were made among students for each year of the program (years 1, 2, 3, and 4). Analysis of the 2-PD threshold for differences among the year groups was performed with analysis of variance. The mean 2-PD threshold of the index finger was lower (i.e., more precise) in the students who were in the higher year groups: dominant-hand mean values were 2.93 ± 0.04 mm in year 1 and 1.69 ± 0.02 mm in year 4. There were significant differences at finger sites (P < .05) among all year groups compared with year 1. There were no significant differences measured at the dominant forearm between any year groups (P = .08). The nondominant fingers of year groups 1, 2, and 4 showed better 2-PD than the dominant finger. There was a significant difference (P = .005) between the nondominant (1.93 ± 1.15) and dominant (2.27 ± 1.14) fingers when all groups were combined (n = 104). The results of this study demonstrated that the finger 2-PD of the chiropractic students later in the program was more precise than that of students earlier in the program. Copyright © 2017. Published by Elsevier Inc.

  16. Preferred chewing side-dependent two-point discrimination and cortical activation pattern of tactile tongue sensation.

    Science.gov (United States)

    Minato, Akiko; Ono, Takashi; Miyamoto, Jun J; Honda, Ei-ichi; Kurabayashi, Tohru; Moriyama, Keiji

    2009-10-12

    Although tactile feedback from the tongue should contribute to habitual chewing, it is unclear how the sensation of the tongue and its projection to the central nervous system differ with regard to the preferred chewing side (PCS). The purpose of this study was to investigate (1) whether the sensory threshold of the tongue differed according to the side and (2) whether the pattern of hemispheric cortical activation by tactile tongue stimulation differed, with special attention to the PCS. Twelve healthy adults participated in the study. The PCS was determined with a mandibular kinesiograph. In the behavioral study, the mean thresholds for two-point discrimination (TPD) in the anterior, canine and posterior regions on both sides of the tongue, and those between PCS and non-PCS in each region were statistically compared. In the functional magnetic resonance imaging study, tactile stimulation was delivered to either side of the tongue with acrylic balls via a mandibular splint. The runs were measured with a T2*-weighted gradient echo-type echo planar imaging sequence in a 1.5T scanner. Activated voxel numbers in the bilateral primary somatosensory cortex (S1) were statistically compared. The threshold of TPD increased in the order of the anterior, canine and posterior regions. Moreover, this threshold was significantly smaller on the PCS than on the non-PCS in both the canine and posterior regions. Moreover, the number of activated voxels in S1 contralateral to the PCS was significantly greater than that in S1 contralateral to the non-PCS. The present study shows that the PCS is associated with asymmetric tactile sensation and cortical activation of the tongue. The sensory acuity of the tongue on the PCS may play an important role in functional coupling between the jaw and tongue to maximize the efficiency of chewing.

  17. Prediction of UT1-UTC, LOD and AAM χ3 by combination of least-squares and multivariate stochastic methods

    Science.gov (United States)

    Niedzielski, Tomasz; Kosek, Wiesław

    2008-02-01

    This article presents the application of a multivariate prediction technique for predicting universal time (UT1-UTC), length of day (LOD) and the axial component of atmospheric angular momentum (AAM χ3). The multivariate predictions of LOD and UT1-UTC are generated by means of the combination of (1) least-squares (LS) extrapolation of models for the annual, semiannual, 18.6-year and 9.3-year oscillations and for the linear trend, and (2) multivariate autoregressive (MAR) stochastic prediction of the LS residuals (LS + MAR). The MAR technique enables the use of the AAM χ3 time series as the explanatory variable for the computation of LOD or UT1-UTC predictions. In order to evaluate the performance of this approach, two other prediction schemes are also applied: (1) LS extrapolation, and (2) a combination of LS extrapolation and univariate autoregressive (AR) prediction of the LS residuals (LS + AR). The multivariate predictions of the AAM χ3 data, however, are computed as a combination of the extrapolation of the LS model for the annual and semiannual oscillations and the LS + MAR. The AAM χ3 predictions are also compared with LS extrapolation and LS + AR prediction. It is shown that the predictions of LOD and UT1-UTC based on LS + MAR taking into account the axial component of AAM are more accurate than the predictions of LOD and UT1-UTC based on LS extrapolation or on LS + AR. In particular, the UT1-UTC predictions based on LS + MAR during El Niño/La Niña events exhibit considerably smaller prediction errors than those calculated by means of LS or LS + AR. The AAM χ3 time series is predicted using LS + MAR with higher accuracy than applying LS extrapolation itself in the case of medium-term predictions (up to 100 days in the future). However, the predictions of AAM χ3 reveal the best accuracy for LS + AR.
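
    A condensed sketch of the multivariate residual step (the MAR part of LS + MAR), with synthetic stand-ins for the LS residual series of LOD and AAM χ3; the lag order, the sample series, and the assumed 3-day lead of AAM over LOD are illustrative assumptions only.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Sketch of the multivariate (MAR) step: jointly model the LS residuals of
# LOD and AAM chi3 so that AAM information improves the LOD forecast.
rng = np.random.default_rng(7)
n = 2000
aam = np.convolve(rng.normal(0, 1, n + 50), np.ones(50) / 50, mode="valid")[:n]
lod = 0.8 * np.roll(aam, 3) + rng.normal(0, 0.05, n)   # LOD lags AAM by ~3 days

data = pd.DataFrame({"lod_resid": lod, "aam_resid": aam})
res = VAR(data).fit(maxlags=10, ic="aic")              # multivariate AR fit
forecast = res.forecast(data.values[-res.k_ar:], steps=10)
print(forecast[:, 0])                                  # 10-day LOD residual path
```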

  18. Tidal influence through LOD variations on the temporal distribution of earthquake occurrences

    Science.gov (United States)

    Varga, P.; Gambis, D.; Bizouard, Ch.; Bus, Z.; Kiszely, M.

    2006-10-01

    Stresses generated by the body tides are very small at the depth of crustal earthquakes (∼10² N/m²). The maximum value of the lunisolar stress within the depth range of earthquakes is 10³ N/m² (at a depth of about 600 km). Surface loads due to oceanic tides in coastal areas are ∼10⁴ N/m². These influences are, however, too small to affect the outbreak time of seismic events. The authors show that the effect on the time distribution of seismic activity of the ΔLOD generated by zonal tides, in the case of the Mf, Mm, Ssa and Sa tidal constituents, can be much more effective in triggering earthquakes. According to this approach, the tides do not trigger seismic events directly but act through the length-of-day variations they generate. That is the reason why a correlation between the lunisolar effect and seismic activity exists for the zonal tides but not for the tesseral and sectorial tides.

  19. Visualizing whole-brain DTI tractography with GPU-based Tuboids and LoD management.

    Science.gov (United States)

    Petrovic, Vid; Fallon, James; Kuester, Falko

    2007-01-01

    Diffusion Tensor Imaging (DTI) of the human brain, coupled with tractography techniques, enable the extraction of large-collections of three-dimensional tract pathways per subject. These pathways and pathway bundles represent the connectivity between different brain regions and are critical for the understanding of brain related diseases. A flexible and efficient GPU-based rendering technique for DTI tractography data is presented that addresses common performance bottlenecks and image-quality issues, allowing interactive render rates to be achieved on commodity hardware. An occlusion query-based pathway LoD management system for streamlines/streamtubes/tuboids is introduced that optimizes input geometry, vertex processing, and fragment processing loads, and helps reduce overdraw. The tuboid, a fully-shaded streamtube impostor constructed entirely on the GPU from streamline vertices, is also introduced. Unlike full streamtubes and other impostor constructs, tuboids require little to no preprocessing or extra space over the original streamline data. The supported fragment processing levels of detail range from texture-based draft shading to full raycast normal computation, Phong shading, environment mapping, and curvature-correct text labeling. The presented text labeling technique for tuboids provides adaptive, aesthetically pleasing labels that appear attached to the surface of the tubes. Furthermore, an occlusion query aggregating and scheduling scheme for tuboids is described that reduces the query overhead. Results for a tractography dataset are presented, and demonstrate that LoD-managed tuboids offer benefits over traditional streamtubes both in performance and appearance.

  20. Interannual variations in length-of-day (LOD) as a tool to assess climate variability and climate change

    Science.gov (United States)

    Lehmann, E.

    2016-12-01

    On interannual time scales the atmosphere significantly affects fluctuations in the geodetic quantity of length-of-day (LOD). This effect is directly proportional to perturbations in the relative angular momentum of the atmosphere (AAM) computed from zonal winds. During El Niño events tropospheric westerlies increase due to elevated sea surface temperatures (SST) in the Pacific, inducing peak anomalies in relative AAM and, correspondingly, in LOD. However, El Niño events affect LOD variations with differing strength, and the causes of this varying effect are not yet clear. Here, we investigate the LOD-El Niño relationship in the 20th and 21st century (1982-2100) to determine whether the quantity of LOD can be used as a geophysical tool to assess variability and change in a future climate. In our analysis we applied a windowed discrete Fourier transform to all de-seasonalized data to remove climatic signals outside of the El Niño frequency band. LOD (data: IERS) was related in space and time to relative AAM and SSTs (data: ERA-40 reanalysis, IPCC ECHAM05-OM1 20C, A1B). Results from mapped Pearson correlation coefficients and time-frequency behavior analysis identified a teleconnection pattern that we term the EN≥65%-index. The EN≥65%-index prescribes a significant change in length-of-day variation of +65% or more, related to (1) SST anomalies of >2 °C in the Pacific Niño region (160°E-80°W, 5°S-5°N), (2) corresponding stratospheric warming anomalies of the quasi-biennial oscillation (QBO), and (3) strong westerly winds in the lower equatorial stratosphere. In our analysis we show that the coupled atmosphere-ocean conditions prescribed in the EN≥65%-index apply to the extreme El Niño events of 1982/83 and 1997/98, and to 75% of all El Niño events in the last third of the 21st century. In that period the EN≥65%-index describes a projected altered base state of the equatorial Pacific that shows almost continuous El Niño conditions under climate warming.

  1. Fat suppression strategies in MR imaging of breast cancer at 3.0 T. Comparison of the two-point Dixon technique and the frequency selective inversion method

    International Nuclear Information System (INIS)

    Kaneko Mikami, Wakako; Kazama, Toshiki; Sato, Hirotaka

    2013-01-01

    The purpose of this study was to compare two fat suppression methods in contrast-enhanced MR imaging of breast cancer at 3.0 T: the two-point Dixon method and the frequency selective inversion method. Forty female patients with breast cancer underwent contrast-enhanced three-dimensional T1-weighted MR imaging at 3.0 T. Both the two-point Dixon method and the frequency selective inversion method were applied. Quantitative analyses of the residual fat signal-to-noise ratio and the contrast-to-noise ratio (CNR) of lesion-to-breast parenchyma, lesion-to-fat, and parenchyma-to-fat were performed. Qualitative analyses of the uniformity of fat suppression, image contrast, and the visibility of breast lesions and axillary metastatic adenopathy were performed. The residual fat signal-to-noise ratio was significantly lower with the two-point Dixon method (P<0.001). All CNR values were significantly higher with the two-point Dixon method (P<0.001 and P=0.001, respectively). According to the qualitative analysis, both the uniformity of fat suppression and image contrast with the two-point Dixon method were significantly higher (P<0.001 and P=0.002, respectively). Visibility of breast lesions and metastatic adenopathy was significantly better with the two-point Dixon method (P<0.001 and P=0.03, respectively). The two-point Dixon method suppressed the fat signal more potently and improved contrast and visibility of the breast lesions and axillary adenopathy. (author)

  2. TopFed: TCGA tailored federated query processing and linking to LOD.

    Science.gov (United States)

    Saleem, Muhammad; Padmanabhuni, Shanmukha S; Ngomo, Axel-Cyrille Ngonga; Iqbal, Aftab; Almeida, Jonas S; Decker, Stefan; Deus, Helena F

    2014-01-01

    The Cancer Genome Atlas (TCGA) is a multidisciplinary, multi-institutional effort to catalogue genetic mutations responsible for cancer using genome analysis techniques. One of the aims of this project is to create a comprehensive and open repository of cancer-related molecular analysis, to be exploited by bioinformaticians towards advancing cancer knowledge. However, devising bioinformatics applications to analyse such a large dataset is still challenging, as it often requires downloading large archives and parsing the relevant text files, making it difficult to enable the virtual data integration needed to collect the critical covariates necessary for analysis. We address these issues by transforming the TCGA data into the Semantic Web standard Resource Description Format (RDF), linking it to relevant datasets in the Linked Open Data (LOD) cloud, and further proposing an efficient data distribution strategy to host the resulting 20.4 billion triples via several SPARQL endpoints. Having the TCGA data distributed across multiple SPARQL endpoints, we enable biomedical scientists to query and retrieve information from these endpoints through a TCGA-tailored federated SPARQL query processing engine named TopFed. We compare TopFed with the well-established federation engine FedX in terms of source selection and query execution time, using 10 different federated SPARQL queries with varying requirements. Our evaluation results show that TopFed selects on average less than half of the sources (with 100% recall) with a query execution time equal to one third of that of FedX. With TopFed, we aim to offer biomedical scientists a single point of access through which distributed TCGA data can be accessed in unison. We believe the proposed system can greatly help researchers in the biomedical domain to carry out their research effectively with TCGA, as the amount and diversity of data exceeds the ability of local resources to handle its retrieval and analysis.
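
    As a rough illustration of the single-point-of-access idea, the snippet below queries one SPARQL endpoint with the Python SPARQLWrapper package. The endpoint URL and the vocabulary IRIs are placeholders of ours, not TopFed's actual deployment or TCGA's RDF schema.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# hypothetical endpoint; a TopFed-style engine would fan the query out over many
endpoint = SPARQLWrapper("http://example.org/tcga/sparql")
endpoint.setQuery("""
    SELECT ?patient ?value WHERE {
        ?patient a <http://example.org/tcga/Patient> ;           # placeholder class
                 <http://example.org/tcga/methylationValue> ?value .
    } LIMIT 10
""")
endpoint.setReturnFormat(JSON)

# run the query and print each binding returned by the endpoint
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["patient"]["value"], row["value"]["value"])
```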

  3. 3D BUILDING MODELING IN LOD2 USING THE CITYGML STANDARD

    Directory of Open Access Journals (Sweden)

    D. Preka

    2016-10-01

    Over the last decade, scientific research has been increasingly focused on the third dimension in all fields, especially in sciences related to geographic information, the visualization of natural phenomena and the visualization of the complex urban reality. The field of 3D visualization has achieved rapid development and dynamic progress, especially in urban applications, while the technical restrictions on the use of 3D information tend to subside due to advancements in technology. A variety of 3D modeling techniques and standards has already been developed, and they are gaining traction in a wide range of applications. One such modern standard is CityGML, which is open and allows for sharing and exchanging of 3D city models. Within the scope of this study, key issues for the 3D modeling of spatial objects and cities are considered, specifically the key elements and capabilities of the CityGML standard, which is used to produce a 3D model of the 14 buildings that constitute a block in the municipality of Kaisariani, Athens, in Level of Detail 2 (LoD2), as well as the corresponding relational database. The proposed tool is based upon the 3DCityDB package in tandem with a geospatial database (PostgreSQL with the PostGIS 2.0 extension). The latter allows for the execution of complex queries regarding the spatial distribution of data, as illustrated below. The system is implemented in order to facilitate a real-life scenario in a suburb of Athens.
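
    To illustrate the kind of spatial query such a PostgreSQL/PostGIS back end permits, here is a sketch using psycopg2. The table and column names, credentials and coordinates are assumptions for illustration (SRID 2100 is the Greek Grid, plausible for Athens data), not the paper's actual 3DCityDB schema.

```python
import psycopg2

# placeholder credentials for a local 3DCityDB-style database
conn = psycopg2.connect(dbname="citydb", user="postgres", password="secret")
with conn, conn.cursor() as cur:
    # find building footprints within a radius of a point, largest first
    cur.execute("""
        SELECT b.id, ST_Area(b.geom) AS footprint_m2
        FROM building_footprints AS b            -- hypothetical table
        WHERE ST_DWithin(b.geom,
                         ST_SetSRID(ST_MakePoint(%s, %s), 2100), %s)
        ORDER BY footprint_m2 DESC;
    """, (480000.0, 4202000.0, 250.0))           # point and radius in metres
    for building_id, area in cur.fetchall():
        print(building_id, round(area, 1))
```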

  4. A multi-center field study of two point-of-care tests for circulating Wuchereria bancrofti antigenemia in Africa.

    Directory of Open Access Journals (Sweden)

    Cédric B Chesnais

    2017-09-01

    The Global Programme to Eliminate Lymphatic Filariasis uses point-of-care tests for circulating filarial antigenemia (CFA) to map endemic areas and to monitor and evaluate the success of mass drug administration (MDA) programs. We compared the performance of the reference BinaxNOW Filariasis card test (ICT, introduced in 1997) with the Alere Filariasis Test Strip (FTS, introduced in 2013) in 5 endemic study sites in Africa. The tests were compared prior to MDA in two study sites (Congo and Côte d'Ivoire) and in three sites that had received MDA (DRC and 2 sites in Liberia). Data were analyzed with regard to % positivity, % agreement, and heterogeneity. Models evaluated potential effects of age, gender, and blood microfilaria (Mf) counts in individuals, and effects of endemicity and history of MDA at the village level, as potential factors linked to the higher sensitivity of the FTS. Lastly, we assessed relationships between CFA scores and Mf in pre- and post-MDA settings. Paired test results were available for 3,682 individuals. Antigenemia rates were 8% and 22% higher by FTS than by ICT in pre-MDA and post-MDA sites, respectively. FTS/ICT ratios were higher in areas with low infection rates. The probability of having microfilaremia was much higher in persons with CFA scores >1 in untreated areas. However, this was not true in post-MDA settings. This study has provided extensive new information on the performance of the FTS compared to the ICT in Africa, and it has confirmed the increased sensitivity of the FTS reported in prior studies. Variability in FTS/ICT was related in part to endemicity level, history of MDA, and perhaps to the medications used for MDA. These results suggest that the FTS should be superior to the ICT for mapping, for transmission assessment surveys, and for post-MDA surveillance.

  5. Change detection on LOD 2 building models with very high resolution spaceborne stereo imagery

    Science.gov (United States)

    Qin, Rongjun

    2014-10-01

    Due to the fast development of the urban environment, the need for efficient maintenance and updating of 3D building models is ever increasing. Change detection is an essential step to spot the changed areas for data (map/3D model) updating and urban monitoring. Traditional methods based on 2D images are no longer suitable for change detection at building scale, owing to the increased spectral variability of the building roofs and the larger perspective distortion of very high resolution (VHR) imagery. Change detection in 3D is increasingly being investigated using airborne laser scanning data or matched Digital Surface Models (DSM), but little work has addressed change detection on 3D city models with VHR images, which is more informative but also more complicated. This is due to the fact that 3D models are abstracted geometric representations of the urban reality, while VHR images record everything. In this paper, a novel method is proposed to detect changes directly on LOD (Level of Detail) 2 building models with VHR spaceborne stereo images from a different date, with particular focus on addressing the special characteristics of the 3D models. In the first step, the 3D building models are projected onto a raster grid, encoded with building object, terrain object, and planar faces. The DSM is extracted from the stereo imagery by hierarchical semi-global matching (SGM). In the second step, a multi-channel change indicator is extracted between the 3D models and stereo images, considering the inherent geometric consistency (IGC), height difference, and texture similarity for each planar face. Each channel of the indicator is then clustered with the Self-Organizing Map (SOM), with "change", "non-change" and "uncertain change" status labeled through a voting strategy. The "uncertain changes" are then determined with a Markov Random Field (MRF) analysis considering the geometric relationship between faces. In the third step, buildings are

  6. Wavelet analysis of interannual LOD, AAM, and ENSO: 1997-98 El Niño and 1998-99 La Niña signals

    Science.gov (United States)

    Zhou, Y. H.; Zheng, D. W.; Liao, X. H.

    2001-05-01

    On the basis of the data series of the length of day (LOD), the atmospheric angular momentum (AAM) and the Southern Oscillation Index (SOI) for January 1970-June 1999, the relationship among interannual LOD, AAM, and the El Niño/Southern Oscillation (ENSO) is analyzed by the wavelet transform method. The results suggest that they have similar time-varying spectral structures. The signals of the 1997-98 El Niño and 1998-99 La Niña events can be detected from the LOD or AAM data.
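
    A continuous wavelet transform of the kind used here can be sketched with PyWavelets. The series below is synthetic (a stand-in for a monthly, mean-removed LOD record); the scale range and the ENSO frequency band are our assumptions for illustration.

```python
import numpy as np
import pywt

fs = 12.0                                   # samples per year (monthly data)
t = np.arange(0, 30, 1 / fs)                # 30 years
lod = np.sin(2 * np.pi * t / 4.0)           # stand-in ~4-yr ENSO-band signal

scales = np.arange(4, 128)
# Morlet continuous wavelet transform; freqs come back in cycles per year
coef, freqs = pywt.cwt(lod, scales, "morl", sampling_period=1 / fs)
power = np.abs(coef) ** 2                   # time-frequency power, rows = scales

# keep only rows whose frequency falls in a 2-7 yr "ENSO band"
enso_band = power[(freqs > 1 / 7.0) & (freqs < 1 / 2.0), :]
print(enso_band.shape, freqs.min(), freqs.max())
```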

  7. Dark Energy Survey Year 1 Results: Methodology and Projections for Joint Analysis of Galaxy Clustering, Galaxy Lensing, and CMB Lensing Two-point Functions

    Energy Technology Data Exchange (ETDEWEB)

    Giannantonio, T.; et al.

    2018-02-14

    Optical imaging surveys measure both the galaxy density and the gravitational lensing-induced shear fields across the sky. Recently, the Dark Energy Survey (DES) collaboration used a joint fit to two-point correlations between these observables to place tight constraints on cosmology (DES Collaboration et al. 2017). In this work, we develop the methodology to extend the DES Collaboration et al. (2017) analysis to include cross-correlations of the optical survey observables with gravitational lensing of the cosmic microwave background (CMB) as measured by the South Pole Telescope (SPT) and Planck. Using simulated analyses, we show how the resulting set of five two-point functions increases the robustness of the cosmological constraints to systematic errors in galaxy lensing shear calibration. Additionally, we show that contamination of the SPT+Planck CMB lensing map by the thermal Sunyaev-Zel'dovich effect is a potentially large source of systematic error for two-point function analyses, but show that it can be reduced to acceptable levels in our analysis by masking clusters of galaxies and imposing angular scale cuts on the two-point functions. The methodology developed here will be applied to the analysis of data from the DES, the SPT, and Planck in a companion work.

  8. The Gaussian cell two-point 'energy-like' equation : application to large-scale galaxy redshift and peculiar motion surveys

    NARCIS (Netherlands)

    Zaroubi, S; Branchini, E

    2005-01-01

    We introduce a simple linear equation relating the line-of-sight peculiar-velocity and density contrast correlation functions. The relation, which we call the Gaussian cell two-point 'energy-like' equation, is valid at the distant-observer limit and requires Gaussian smoothed fields. In the variance

  9. A recoding scheme for X-linked and pseudoautosomal loci to be used with computer programs for autosomal LOD-score analysis.

    Science.gov (United States)

    Strauch, Konstantin; Baur, Max P; Wienker, Thomas F

    2004-01-01

    We present a recoding scheme that allows for a parametric multipoint X-chromosomal linkage analysis of dichotomous traits in the context of a computer program for autosomes that can use trait models with imprinting. Furthermore, with this scheme, it is possible to perform a joint multipoint analysis of X-linked and pseudoautosomal loci. It is required that (1) the marker genotypes of all female nonfounders are available and that (2) there are no male nonfounders who have daughters in the pedigree. The second requirement does not apply if the trait locus is pseudoautosomal. The X-linked marker loci are recoded by adding a dummy allele to the males' hemizygous genotypes. For modelling an X-linked trait locus, five different liability classes are defined, in conjunction with a paternal imprinting model for male nonfounders. The formulation aims at the mapping of a diallelic trait locus relative to an arbitrary number of codominant markers with known genetic distances, in cases where a program for a genuine X-chromosomal analysis is not available.

  10. Modulation of the SSTA decadal variation on ENSO events and relationships of SSTA With LOD,SOI, etc

    Science.gov (United States)

    Liao, D. C.; Zhou, Y. H.; Liao, X. H.

    2007-01-01

    Interannual and decadal components of the length of day (LOD), the Southern Oscillation Index (SOI) and the sea surface temperature anomaly (SSTA) in the Niño regions are extracted by band-pass filtering and used to study the modulation of the SSTA on ENSO events. Results show that besides the interannual components, the decadal components of the SSTA have a strong impact on the monitoring and representation of ENSO events. When ENSO events are strong, the modulation of the decadal components of the SSTA tends to prolong the lifetime of the events and enlarge the extreme anomalies of the SST, while ENSO events that are too weak to be detected by the interannual components of the SSTA alone can still be detected with the help of the modulation of the SSTA decadal components. The study further draws attention to the relationships of the SSTA interannual and decadal components with those of LOD and SOI, with the sea level pressure anomalies (SLPA) and the trade wind anomalies (TWA) in the tropical Pacific, and with the axial components of the atmospheric angular momentum (AAM) and oceanic angular momentum (OAM). Results of the squared coherence and coherence phases among them reveal close connections between the SSTA and almost all of the parameters mentioned above on interannual time scales, while on decadal time scales there are significant connections between the SSTA and SOI, SLPA, TWA, χ3w and χ3w+v, and slightly weaker connections between the SSTA and LOD, χ3pib and χ3bp.
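
    A squared-coherence estimate of the kind referred to above can be computed with scipy. The sketch below uses synthetic monthly series in place of the SSTA and LOD data; the segment length and sampling rate are arbitrary demo choices.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
n, fs = 600, 12.0                            # 50 years of monthly values
common = np.sin(2 * np.pi * np.arange(n) / 48.0)   # shared ~4-yr signal
ssta = common + 0.5 * rng.standard_normal(n)
lod = 0.8 * common + 0.5 * rng.standard_normal(n)

# Cxy is the magnitude-squared coherence on a frequency grid in cycles/year
f, Cxy = coherence(ssta, lod, fs=fs, nperseg=256)
print(f[np.argmax(Cxy)], Cxy.max())          # frequency of peak coherence
```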

  11. Visualizing dynamic geosciences phenomena using an octree-based view-dependent LOD strategy within virtual globes

    Science.gov (United States)

    Li, Jing; Wu, Huayi; Yang, Chaowei; Wong, David W.; Xie, Jibo

    2011-09-01

    Geoscientists build dynamic models to simulate various natural phenomena for a better understanding of our planet. Interactive visualizations of these geoscience models and their outputs through virtual globes on the Internet can help the public understand the dynamic phenomena related to the Earth more intuitively. However, challenges arise when the volume of four-dimensional (4D) data, 3D in space plus time, is huge for rendering. Datasets loaded from geographically distributed data servers require synchronization between ingesting and rendering data. Also, the visualization capability of display clients varies significantly in such an online visualization environment; some may not have high-end graphics cards. To enhance the efficiency of visualizing dynamic volumetric data in virtual globes, this paper proposes a systematic framework in which an octree-based multiresolution data structure is implemented to organize time-series 3D geospatial data for use in virtual globe environments. This framework includes a view-dependent continuous level of detail (LOD) strategy formulated as a synchronized part of the virtual globe rendering process. Through the octree-based data retrieval process, the LOD strategy enables the rendering of the 4D simulation at a consistent and acceptable frame rate. To demonstrate the capabilities of this framework, data of a simulated dust storm event are rendered in World Wind, an open-source virtual globe. The rendering performances with and without the octree-based LOD strategy are compared. The experimental results show that using the proposed data structure and processing strategy significantly enhances the visualization performance when rendering dynamic geospatial phenomena in virtual globes.
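
    A toy version of the view-dependent selection idea: descend the octree and stop refining once a node's size-to-distance ratio (a crude screen-space error proxy) falls below a threshold. The node structure and threshold are our assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class OctreeNode:
    center: tuple                                   # (x, y, z) of the bounding cube
    size: float                                     # cube edge length (world units)
    children: list = field(default_factory=list)    # up to 8 child nodes

def select_lod(node, eye, k=0.05, out=None):
    """Collect the nodes to render for viewpoint `eye`."""
    if out is None:
        out = []
    d = max(1e-6, sum((c - e) ** 2 for c, e in zip(node.center, eye)) ** 0.5)
    if node.size / d < k or not node.children:      # fine enough, or a leaf
        out.append(node)                            # render at this resolution
    else:
        for child in node.children:                 # refine: recurse into children
            select_lod(child, eye, k, out)
    return out

root = OctreeNode((0, 0, 0), 8.0,
                  [OctreeNode((i, i, i), 4.0) for i in (-2, 2)])
print(len(select_lod(root, eye=(20.0, 0.0, 0.0))))  # -> 2 leaves selected
```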

  12. Allegheny County Walk Scores

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Walk Score measures the walkability of any address using a patented system developed by the Walk Score company. For each 2010 Census Tract centroid, Walk Score...

  13. On the two-point correlation functions for the U_q[SU(2)]-invariant spin one-half Heisenberg chain at roots of unity

    International Nuclear Information System (INIS)

    Hinrichsen, H.; Scheunert, M.

    1993-10-01

    Using U_q[SU(2)] tensor calculus we compute the two-point scalar operators (TPSOs); their averages over the ground state give the two-point correlation functions. The TPSOs are identified as elements of the Temperley-Lieb algebra, and a recurrence relation is given for them. We have not attempted to derive the analytic expressions for the correlation functions in the general case but have obtained some partial results. For q = exp(iπ/3), all correlation functions are (trivially) zero; for q = exp(iπ/4), they are related in the continuum to the correlation functions of left-handed and right-handed Majorana fields in the half plane coupled by the boundary condition. In the case q = exp(iπ/6), one gets the correlation functions of Mittag's and Stephen's parafermions for the three-state Potts model. A diagrammatic approach to computing correlation functions is also presented. (orig.)

  14. Comparison of apparent diffusion coefficients (ADCs) between two-point and multi-point analyses using high-B-value diffusion MR imaging

    International Nuclear Information System (INIS)

    Kubo, Hitoshi; Maeda, Masayuki; Araki, Akinobu

    2001-01-01

    We evaluated the accuracy of calculating apparent diffusion coefficients (ADCs) using high-B-value diffusion images. Echo planar diffusion-weighted MR images were obtained at 1.5 tesla in five standard locations in six subjects using gradient strengths corresponding to B values from 0 to 3000 s/mm^2. Estimation of ADCs was made using two methods: a nonlinear regression model using measurements from a full set of B values (multi-point method) and linear estimation using B values of 0 and max only (two-point method). A high correlation between the two methods was noted (r=0.99), and the mean percentage differences were -0.53% and 0.53% in phantom and human brain, respectively. These results suggest there is little error in estimating ADCs calculated by the two-point technique using high-B-value diffusion MR images. (author)
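
    The two estimators compared above reduce to a few lines of numpy: a log-linear least-squares fit over all B values versus the two-point closed form ADC = ln(S0/Sb)/b using only b = 0 and b = max. The signal below is synthetic and noiseless, so both recover the true value; with noise they would differ slightly, as the study quantifies.

```python
import numpy as np

b = np.array([0, 500, 1000, 1500, 2000, 2500, 3000], dtype=float)  # s/mm^2
true_adc = 0.8e-3                                                  # mm^2/s
signal = 100.0 * np.exp(-b * true_adc)                             # mono-exponential decay

# multi-point: slope of ln(S) against b over all B values
slope, _ = np.polyfit(b, np.log(signal), 1)
adc_multi = -slope

# two-point: b = 0 and b = max only
adc_two = np.log(signal[0] / signal[-1]) / b[-1]

print(adc_multi, adc_two)   # both ~0.8e-3 here
```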

  15. LOD lab: Experiments at LOD scale

    NARCIS (Netherlands)

    Rietveld, Laurens; Beek, Wouter; Schlobach, Stefan

    2015-01-01

    Contemporary Semantic Web research is in the business of optimizing algorithms for only a handful of datasets such as DBpedia, BSBM, DBLP and only a few more. This means that current practice does not generally take the true variety of Linked Data into account. With hundreds of thousands of datasets

  16. The Little Ice Age was 1.0-1.5 °C cooler than current warm period according to LOD and NAO

    Science.gov (United States)

    Mazzarella, Adriano; Scafetta, Nicola

    2018-02-01

    We study the yearly values of the length of day (LOD, 1623-2016) and its link to the zonal index (ZI, 1873-2003), the Northern Atlantic oscillation index (NAO, 1659-2000) and the global sea surface temperature (SST, 1850-2016). LOD is herein assumed to be mostly the result of the overall circulations occurring within the ocean-atmospheric system. We find that LOD is negatively correlated with the global SST and with both the integral function of ZI and NAO, which are labeled as IZI and INAO. A first result is that LOD must be driven by a climatic change induced by an external (e.g. solar/astronomical) forcing since internal variability alone would have likely induced a positive correlation among the same variables because of the conservation of the Earth's angular momentum. A second result is that the high correlation among the variables implies that the LOD and INAO records can be adopted as global proxies to reconstruct past climate change. Tentative global SST reconstructions since the seventeenth century suggest that around 1700, that is during the coolest period of the Little Ice Age (LIA), SST could have been about 1.0-1.5 °C cooler than the 1950-1980 period. This estimated LIA cooling is greater than what some multiproxy global climate reconstructions suggested, but it is in good agreement with other more recent climate reconstructions including those based on borehole temperature data.
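
    The IZI/INAO construction described above is simple enough to state in code: integrate the mean-removed index, then correlate it with LOD. The sketch below uses synthetic series and an invented coupling strength; variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
nao = rng.standard_normal(300)                       # yearly NAO index stand-in
inao = np.cumsum(nao - nao.mean())                   # integral function of NAO (INAO)
lod = -0.02 * inao + 0.1 * rng.standard_normal(300)  # toy negatively coupled LOD

r = np.corrcoef(lod, inao)[0, 1]                     # Pearson correlation
print(round(r, 2))                                   # strongly negative, as reported
```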

  17. Differential equations for correlators on the torus: Two-point correlation function of isospin-1 primary fields in the k=3 SU(2) WZW theory

    International Nuclear Information System (INIS)

    Durganandini, P.

    1990-01-01

    We systematize the procedure developed by Mathur, Mukhi and Sen to derive differential equations for correlators in rational conformal field theories on the torus in those cases when it is necessary to study not only the leading-order but also the nonleading behaviour of the solutions in the asymptotic limit Im τ → ∞, Im z → ∞. As an illustration, we derive the differential equation for the two-point correlator of the isospin-1 primary fields in the k=3 SU(2) WZW model on the torus. (orig.)

  18. Assessing the effect of the relative atmospheric angular momentum (AAM) on length-of-day (LOD) variations under climate warming

    Science.gov (United States)

    Lehmann, E.; Hansen, F.; Ulbrich, U.; Nevir, P.; Leckebusch, G. C.

    2009-04-01

    While most studies on model-projected future climate warming discuss climatological quantities, this study investigates the response of the relative atmospheric angular momentum (AAM) to climate warming for the 21st century and discusses its possible effects on future length-of-day variations. Following the derivation of the dynamic relation between atmosphere and solid earth by Barnes et al. (Proc. Roy. Soc., 1985), this study relates the axial atmospheric excitation function χ3 to changes in length-of-day that are proportional to variations in zonal winds. On interannual time scales, changes in the relative AAM (ERA-40 reanalyses) are well correlated with observed length-of-day (LOD, IERS EOP C04) variability (r=0.75). The El Niño-Southern Oscillation (ENSO) is a prominent coupled ocean-atmosphere phenomenon causing global climate variability on interannual time scales. Correspondingly, changes in observed LOD relate to ENSO through observed strong wind anomalies. This study investigates the varying effect of AAM anomalies on observed LOD by relating AAM variations to ENSO teleconnections (sea surface temperatures, SSTs) and the Pacific North America (PNA) oscillation for the 20th and 21st century. The differently strong effect of strong El Niño events (explained variance 71%-98%) on the present-time (1962-2000) observed LOD-AAM relation can be attributed to variations in the location and strength of the jet streams in the upper troposphere. Correspondingly, the relation between AAM and SSTs in the NIÑO 3.4 region also varies, with explained variances between 15% and 73%. Recent coupled ocean-atmosphere projections of future climate warming suggest changes in the frequency and amplitude of ENSO events. Since changes in the relative AAM indicate shifts in large-scale atmospheric circulation patterns due to climate change, AAM-ENSO relations are assessed in coupled atmosphere-ocean (ECHAM5-OM1) climate warming projections (A1B) for the 21st century. A strong rise (+31%) in

  19. Combination of length-of-day (LOD) values according to frequency windows

    Science.gov (United States)

    Fernández, L. I.; Arias, E. F.; Gambis, D.

    The concept of a combined solution rests on the fact that the time series derived from the different space-geodetic techniques are quite dissimilar. The main, easily detectable differences between the series are their different sampling intervals, time spans and quality. The data cover a recent period of 27 months (July 1996 - October 1998). Length-of-day (LOD) estimates produced by 10 operational centres of the IERS (International Earth Rotation Service) from GPS (Global Positioning System) and SLR (Satellite Laser Ranging) were used. The combined time series obtained in this way was compared with the multi-technique combined EOP (Earth Orientation Parameters) solution derived by the IERS (C04). The noise behaviour of LOD for all techniques proved to be frequency dependent (Vondrak, 1998). For this reason, the data series were divided into frequency windows after removal of biases and trends. Different weight factors were then assigned to each window, discriminated by technique. Finally, these partially combined solutions were merged to obtain the final combined solution, as sketched below. We know that the best combined solution will have a precision lower than that of the data series from which it originated. Even so, the importance of a reliable combined EOP series, that is, one of acceptable precision and free of evident systematic errors, lies in the need for a reference EOP database for the study of the geophysical phenomena that drive variations in the Earth's rotation.
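
    A rough sketch of the frequency-window combination: band-split each centred LOD series with an FFT mask, weight each band per technique, and sum the bands back. The bands, weights and input series below are invented placeholders, not the study's values.

```python
import numpy as np

def band_split(x, fs, bands):
    """Return one band-filtered copy of x per (lo, hi) frequency window."""
    X = np.fft.rfft(x - x.mean())
    f = np.fft.rfftfreq(x.size, d=1 / fs)
    out = []
    for lo, hi in bands:
        mask = (f >= lo) & (f < hi)
        out.append(np.fft.irfft(X * mask, n=x.size))
    return out

fs = 1.0                                          # daily sampling
bands = [(0.0, 0.01), (0.01, 0.1), (0.1, 0.5)]    # example frequency windows (cpd)
rng = np.random.default_rng(2)
gps = rng.standard_normal(512)                    # stand-ins for per-technique LOD series
slr = rng.standard_normal(512)
weights = {"gps": [0.7, 0.6, 0.8], "slr": [0.3, 0.4, 0.2]}  # per-band weights

combined = np.zeros(512)
for i, (g, s) in enumerate(zip(band_split(gps, fs, bands),
                               band_split(slr, fs, bands))):
    combined += weights["gps"][i] * g + weights["slr"][i] * s
print(combined.shape)
```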

  20. The Zhongshan Score

    Science.gov (United States)

    Zhou, Lin; Guo, Jianming; Wang, Hang; Wang, Guomin

    2015-01-01

    In the zero-ischemia era of nephron-sparing surgery (NSS), a new anatomic classification system (ACS) is needed to adjust to these new surgical techniques. We devised a novel and simple ACS and compared it with the RENAL and PADUA scores to predict the risk of NSS outcomes. We retrospectively evaluated 789 patients who underwent NSS with available imaging between January 2007 and July 2014. Demographic and clinical data were assessed. The Zhongshan (ZS) score consisted of three parameters. The RENAL, PADUA, and ZS scores were each divided into three groups, that is, high, moderate, and low scores. For operative time (OT), significant differences were seen between any two groups of the ZS score and the PADUA score (all P < 0.05), whereas RENAL showed no significant difference between moderate and high complexity in OT, WIT, estimated blood loss, and increase in SCr. Compared with patients with a low ZS score, those with a high or moderate score had an 8.1-fold or 3.3-fold higher risk of surgical complications, respectively (all P < 0.05). For the RENAL score, patients with a high or moderate score had a 5.7-fold or 1.9-fold higher risk of surgical complications, respectively (all P < 0.05); similar trends held for the PADUA score. The ZS score could be used to reflect surgical complexity and predict the risk of surgical complications in patients undergoing NSS. PMID:25654399

  1. Using Parameters of Dynamic Pulse Function for 3D Modeling in LoD3 Based on Random Textures

    Science.gov (United States)

    Alizadehashrafi, B.

    2015-12-01

    The pulse function (PF) is a technique based on a procedural preprocessing system to generate a computerized virtual photo of a façade within a fixed-size square (Alizadehashrafi et al., 2009; Musliman et al., 2010). The Dynamic Pulse Function (DPF) is an enhanced version of PF which can create the final photo proportional to the real geometry. This avoids distortion when the computerized photo is projected onto the generated 3D model (Alizadehashrafi and Rahman, 2013). The challenge addressed in this paper is obtaining a 3D model in LoD3 rather than LoD2. In the DPF-based technique, the geometries of the windows and doors are saved in an XML schema that has no connection with the 3D model in LoD2 and CityGML format. In this research, the parameters of the Dynamic Pulse Function are utilized via the Ruby programming language in Trimble SketchUp to generate the windows and doors (exact position and depth) automatically in LoD3, based on the same DPF concept. The advantage of this technique is the automatic generation of a huge number of similar geometries, e.g. windows, by utilizing the DPF parameters along with defining entities and window layers. If the SKP file is converted to CityGML via FME software or CityGML plugins, the 3D model contains the semantic database about the entities and window layers, which can connect the CityGML to MySQL (Alizadehashrafi and Baig, 2014). The concept behind DPF is to use logical operations to project the texture onto the background image, dynamically proportional to the real geometry. The projection is based on two dynamic pulses, vertical and horizontal, starting from the upper-left corner of the background wall and running in the down and right directions, respectively, in the image coordinate system. A logical one/zero at the intersections of the two pulses projects/does not project the texture onto the background image, as sketched below. It is possible to define
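
    The "two pulses" idea can be illustrated in numpy: a vertical and a horizontal pulse train mark the façade rows and columns, and their logical AND selects where the window texture is projected. All sizes below are arbitrary demo values, not the paper's parameters.

```python
import numpy as np

wall_h, wall_w = 120, 200          # façade raster size (pixels)
win_h, win_w, gap = 20, 15, 10     # window size and spacing

rows = np.zeros(wall_h, dtype=bool)
cols = np.zeros(wall_w, dtype=bool)
for r in range(gap, wall_h - win_h, win_h + gap):   # vertical pulse train
    rows[r:r + win_h] = True
for c in range(gap, wall_w - win_w, win_w + gap):   # horizontal pulse train
    cols[c:c + win_w] = True

# logical one at a row/column intersection => project the window texture there
project = np.logical_and.outer(rows, cols)
print(project.shape, int(project.sum()), "pixels receive window texture")
```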

  2. Automated two-point Dixon screening for the evaluation of hepatic steatosis and siderosis: comparison with R2*-relaxometry and chemical shift-based sequences

    Energy Technology Data Exchange (ETDEWEB)

    Henninger, B.; Rauch, S.; Schocke, M.; Jaschke, W.; Kremser, C. [Medical University of Innsbruck, Department of Radiology, Innsbruck (Austria); Zoller, H. [Medical University of Innsbruck, Department of Internal Medicine, Innsbruck (Austria); Kannengiesser, S. [Siemens AG, Healthcare Sector, MR Applications Development, Erlangen (Germany); Zhong, X. [Siemens Healthcare, MR R and D Collaborations, Atlanta, GA (United States); Reiter, G. [Siemens AG, Healthcare Sector, MR R and D Collaborations, Graz (Austria)

    2015-05-01

    To evaluate the automated two-point Dixon screening sequence for the detection and estimated quantification of hepatic iron and fat compared with standard sequences as a reference. One hundred and two patients with suspected diffuse liver disease were included in this prospective study. The following MRI protocol was used: 3D-T1-weighted opposed- and in-phase gradient echo with two-point Dixon reconstruction and dual-ratio signal discrimination algorithm ("screening" sequence); fat-saturated, multi-gradient-echo sequence with 12 echoes; gradient-echo T1 FLASH opposed- and in-phase. Bland-Altman plots were generated and correlation coefficients were calculated to compare the sequences. The screening sequence diagnosed fat in 33, iron in 35 and a combination of both in 4 patients. Correlation between R2* values of the screening sequence and the standard relaxometry was excellent (r = 0.988). A slightly lower correlation (r = 0.978) was found between the fat fraction of the screening sequence and the standard sequence. Bland-Altman revealed systematically lower R2* values obtained from the screening sequence and higher fat fraction values obtained with the standard sequence, with a rather high variability in agreement. The screening sequence is a promising method for fast diagnosis of the predominant liver disease. It is capable of estimating the amount of hepatic fat and iron comparably to standard methods. (orig.)
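
    The arithmetic at the heart of two-point Dixon water-fat separation is shown below, assuming perfectly registered in-phase (IP) and opposed-phase (OP) magnitude images; real reconstructions, including the dual-ratio discrimination used here, also resolve phase errors and water/fat swaps, which this sketch ignores.

```python
import numpy as np

def dixon_two_point(ip, op):
    """Water and fat images from in-phase and opposed-phase magnitude data."""
    water = 0.5 * (ip + op)             # IP = W + F, OP = W - F
    fat = 0.5 * (ip - op)
    with np.errstate(divide="ignore", invalid="ignore"):
        fat_fraction = np.where(ip > 0, fat / ip, 0.0)
    return water, fat, fat_fraction

ip = np.array([[100.0, 80.0], [60.0, 40.0]])   # toy 2x2 "images"
op = np.array([[60.0, 80.0], [20.0, 40.0]])
_, _, ff = dixon_two_point(ip, op)
print(ff)   # 0.2 and ~0.33 where fat is present, 0 elsewhere
```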

  3. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series

    Science.gov (United States)

    Wilson, Robert M.

    2001-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 ± 1, implying that approximately 7 ± 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
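
    The quoted probabilities follow from the Poisson identity P(at least one event) = 1 - exp(-rate). The decadal rates below are back-solved by us from the stated probabilities (and the ~7 events/decade estimate for VEI>=4), not taken from the paper's tables.

```python
from math import exp

rates_per_decade = {"VEI>=4": 7.0, "VEI>=5": 0.67, "VEI>=6": 0.20}  # assumed rates
for cls, lam in rates_per_decade.items():
    p_one_or_more = 1.0 - exp(-lam)        # Poisson: P(N >= 1) = 1 - e^(-lambda)
    print(f"{cls}: P(>=1 in 10 yr) = {p_one_or_more:.2f}")
# -> ~1.00, ~0.49, ~0.18, matching the >99%, 49% and 18% figures above
```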

  4. Automated two-point Dixon screening for the evaluation of hepatic steatosis and siderosis: comparison with R2*-relaxometry and chemical shift-based sequences

    International Nuclear Information System (INIS)

    Henninger, B.; Rauch, S.; Schocke, M.; Jaschke, W.; Kremser, C.; Zoller, H.; Kannengiesser, S.; Zhong, X.; Reiter, G.

    2015-01-01

    To evaluate the automated two-point Dixon screening sequence for the detection and estimated quantification of hepatic iron and fat compared with standard sequences as a reference. One hundred and two patients with suspected diffuse liver disease were included in this prospective study. The following MRI protocol was used: 3D-T1-weighted opposed- and in-phase gradient echo with two-point Dixon reconstruction and dual-ratio signal discrimination algorithm ("screening" sequence); fat-saturated, multi-gradient-echo sequence with 12 echoes; gradient-echo T1 FLASH opposed- and in-phase. Bland-Altman plots were generated and correlation coefficients were calculated to compare the sequences. The screening sequence diagnosed fat in 33, iron in 35 and a combination of both in 4 patients. Correlation between R2* values of the screening sequence and the standard relaxometry was excellent (r = 0.988). A slightly lower correlation (r = 0.978) was found between the fat fraction of the screening sequence and the standard sequence. Bland-Altman revealed systematically lower R2* values obtained from the screening sequence and higher fat fraction values obtained with the standard sequence, with a rather high variability in agreement. The screening sequence is a promising method for fast diagnosis of the predominant liver disease. It is capable of estimating the amount of hepatic fat and iron comparably to standard methods. (orig.)

  5. Determination of Oversulphated Chondroitin Sulphate and Dermatan Sulphate in unfractionated heparin by 1H-NMR - Collaborative study for quantification and analytical determination of LoD.

    Science.gov (United States)

    McEwen, I; Mulloy, B; Hellwig, E; Kozerski, L; Beyer, T; Holzgrabe, U; Wanko, R; Spieser, J-M; Rodomonte, A

    2008-12-01

    Oversulphated Chondroitin Sulphate (OSCS) and Dermatan Sulphate (DS) in unfractionated heparins can be identified by nuclear magnetic resonance spectrometry (NMR). The limit of detection (LoD) of OSCS is 0.1% relative to the heparin content. This LoD is obtained at a signal-to-noise ratio (S/N) of 2000:1 of the heparin methyl signal. Quantification is best obtained by comparing peak heights of the OSCS and heparin methyl signals. Reproducibility of less than 10% relative standard deviation (RSD) has been obtained. The accuracy of quantification was good.

  6. Geophysical excitation of LOD/UT1 estimated from the output of the global circulation models of the atmosphere - ERA-40 reanalysis and of the ocean - OMCT

    Science.gov (United States)

    Korbacz, A.; Brzeziński, A.; Thomas, M.

    2008-04-01

    We use new estimates of the global atmospheric and oceanic angular momenta (AAM, OAM) to study their influence on LOD/UT1. The AAM series was calculated from the output fields of the atmospheric general circulation model ERA-40 reanalysis. The OAM series is an outcome of a global ocean model OMCT simulation driven by global fields of atmospheric parameters from the ERA-40 reanalysis. The excitation data cover the period between 1963 and 2001. Our calculations concern atmospheric and oceanic effects in LOD/UT1 over periods between 20 days and decades. Results are compared to those derived from the alternative AAM/OAM data sets.

  7. The financial analysis of the sport centre Loděnice Troja, FTVS UK

    OpenAIRE

    Ocman, Josef

    2010-01-01

    Title: The financial analysis of the sport centre Loděnice Troja FTVS UK. Annotation: The main goal of the project is to assess the profitability and utilization of the sport centre Loděnice Troja FTVS UK on the basis of an evaluation of the economy of the department and its subdepartments. The goal is achieved by an analysis of accounting data with the help of financial analysis methods. The project was drawn up at the request of the management of FTVS UK. Keywords: Financial analysis, municipal firm, ratio, calculation

  8. Two-point vs multipoint sample collection for the analysis of energy expenditure by use of the doubly labeled water method

    International Nuclear Information System (INIS)

    Welle, S.

    1990-01-01

    Energy expenditure over a 2-wk period was determined by the doubly labeled water (2H2(18)O) method in nine adults. When daily samples were analyzed, energy expenditure was 2859 ± 453 kcal/d (mean ± SD); when only the first and last time points were considered, the mean calculated energy expenditure was not significantly different (2947 ± 430 kcal/d). An analysis of theoretical cases in which isotope flux is not constant indicates that the multipoint method can cause errors in the calculation of average isotope fluxes, but these are generally small. Simulations of the effect of analytical error indicate that increasing the number of replicates on two points reduces the impact of technical errors more effectively than does performing single analyses on multiple samples. It appears that generally there is no advantage to collecting frequent samples when the 2H2(18)O method is used to estimate energy expenditure in adult humans.
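
    The two sampling schemes can be contrasted in miniature: an isotope elimination rate from a log-linear fit to daily enrichments (multipoint) versus one computed from the first and last samples only (two-point). The data below are synthetic, with an arbitrary rate and noise level.

```python
import numpy as np

days = np.arange(15.0)                    # daily sampling over ~2 weeks
k_true = 0.12                             # elimination rate, 1/day (assumed)
rng = np.random.default_rng(3)
enrich = 100.0 * np.exp(-k_true * days) * (1 + 0.01 * rng.standard_normal(15))

# multipoint: slope of ln(enrichment) against time over all samples
k_multi = -np.polyfit(days, np.log(enrich), 1)[0]
# two-point: first and last samples only
k_two = np.log(enrich[0] / enrich[-1]) / (days[-1] - days[0])

print(round(k_multi, 4), round(k_two, 4))  # nearly identical, as the study found
```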

  9. Dynamics of single photon transport in a one-dimensional waveguide two-point coupled with a Jaynes-Cummings system

    KAUST Repository

    Wang, Yuwen

    2016-09-22

    We study the dynamics of an ultrafast single photon pulse in a one-dimensional waveguide two-point coupled with a Jaynes-Cummings system. We find that for any single photon input the transmissivity depends periodically on the separation between the two coupling points. For a pulse containing many plane wave components it is almost impossible to suppress transmission, especially when the width of the pulse is less than 20 times the period. In contrast to plane wave input, the waveform of the pulse can be modified by controlling the coupling between the waveguide and Jaynes-Cummings system. Tailoring of the waveform is important for single photon manipulation in quantum informatics.

  10. Automated two-point Dixon screening for the evaluation of hepatic steatosis and siderosis: comparison with R2*-relaxometry and chemical shift-based sequences.

    Science.gov (United States)

    Henninger, B; Zoller, H; Rauch, S; Schocke, M; Kannengiesser, S; Zhong, X; Reiter, G; Jaschke, W; Kremser, C

    2015-05-01

    To evaluate the automated two-point Dixon screening sequence for the detection and estimated quantification of hepatic iron and fat compared with standard sequences as a reference. One hundred and two patients with suspected diffuse liver disease were included in this prospective study. The following MRI protocol was used: 3D-T1-weighted opposed- and in-phase gradient echo with two-point Dixon reconstruction and dual-ratio signal discrimination algorithm ("screening" sequence); fat-saturated, multi-gradient-echo sequence with 12 echoes; gradient-echo T1 FLASH opposed- and in-phase. Bland-Altman plots were generated and correlation coefficients were calculated to compare the sequences. The screening sequence diagnosed fat in 33, iron in 35 and a combination of both in 4 patients. Correlation between R2* values of the screening sequence and the standard relaxometry was excellent (r = 0.988). A slightly lower correlation (r = 0.978) was found between the fat fraction of the screening sequence and the standard sequence. Bland-Altman revealed systematically lower R2* values obtained from the screening sequence and higher fat fraction values obtained with the standard sequence, with a rather high variability in agreement. The screening sequence is a promising method for fast diagnosis of the predominant liver disease. It is capable of estimating the amount of hepatic fat and iron comparably to standard methods. • MRI plays a major role in the clarification of diffuse liver disease. • The screening sequence was introduced for the assessment of diffuse liver disease. • It is a fast and automated algorithm for the evaluation of hepatic iron and fat. • It is capable of estimating the amount of hepatic fat and iron.

  11. How to score questionnaires

    NARCIS (Netherlands)

    Hofstee, W.K.B.; Ten Berge, J.M.F.; Hendriks, A.A.J.

    The standard practice in scoring questionnaires consists of adding item scores and standardizing these sums. We present a set of alternative procedures, consisting of (a) correcting for the acquiescence variance that disturbs the structure of the questionnaire; (b) establishing item weights through

  12. SCORE - A DESCRIPTION.

    Science.gov (United States)

    SLACK, CHARLES W.

    Reinforcement and role-reversal techniques are used in the SCORE project, a low-cost program of delinquency prevention for hard-core teenage street-corner boys. Committed to the belief that the boys have the potential for ethical behavior, the SCORE worker follows B.F. Skinner's theory of operant conditioning and reinforces the delinquent's good…

  13. Establishing an Appropriate Level of Detail (LoD) for a Building Information Model (BIM) - West Block, Parliament Hill, Ottawa, Canada

    Science.gov (United States)

    Fai, S.; Rafeiro, J.

    2014-05-01

    In 2011, Public Works and Government Services Canada (PWGSC) embarked on a comprehensive rehabilitation of the historically significant West Block of Canada's Parliament Hill. With over 17 thousand square meters of floor space, the West Block is one of the largest projects of its kind in the world. As part of the rehabilitation, PWGSC is working with the Carleton Immersive Media Studio (CIMS) to develop a building information model (BIM) that can serve as a maintenance and life-cycle management tool once construction is completed. The scale and complexity of the model have presented many challenges. One of these challenges is determining appropriate levels of detail (LoD). While still a matter of debate in the development of international BIM standards, LoD is further complicated in the context of heritage buildings because we must reconcile the LoD of the BIM with that used in the documentation process (terrestrial laser scan and photogrammetric survey data). In this paper, we discuss our work to date on establishing appropriate LoD within the West Block BIM that will best serve the end use. To facilitate this, we have developed a single parametric model for Gothic pointed arches that can be used for over seventy-five unique window types present in the West Block. Using the AEC (CAN) BIM as a reference, we have developed a workflow to test each of these window types at three distinct levels of detail. We have found that the parametric Gothic arch significantly reduces the amount of time necessary to develop scenarios to test appropriate LoD.

  14. 75 FR 53730 - Culturally Significant Object Imported for Exhibition Determinations: “The Roman Mosaic from Lod...

    Science.gov (United States)

    2010-09-01

    ...Notice is hereby given of the following determinations: Pursuant to the authority vested in me by the Act of October 19, 1965 (79 Stat. 985; 22 U.S.C. 2459), Executive Order 12047 of March 27, 1978, the Foreign Affairs Reform and Restructuring Act of 1998 (112 Stat. 2681, et seq.; 22 U.S.C. 6501 note, et seq.), Delegation of Authority No. 234 of October 1, 1999, and Delegation of Authority No. 236-3 of August 28, 2000, I hereby determine that the object to be included in the exhibition ``The Roman Mosaic from Lod, Israel,'' imported from abroad for temporary exhibition within the United States, is of cultural significance. The object is imported pursuant to a loan agreement with the foreign owner or custodian. I also determine that the exhibition or display of the exhibit object at the Metropolitan Museum of Art, New York, New York, from on or about September 28, 2010, until on or about April 3, 2011, the Legion of Honor Museum, San Francisco, California, from on or about April 23, 2011, until on or about July 24, 2011, and at possible additional exhibitions or venues yet to be determined, is in the national interest. I have ordered that Public Notice of these Determinations be published in the Federal Register.

  15. The Bandim tuberculosis score

    DEFF Research Database (Denmark)

    Rudolf, Frauke; Joaquim, Luis Carlos; Vieira, Cesaltina

    2013-01-01

    Background: This study was carried out in Guinea-Bissau's capital, Bissau, among inpatients and outpatients attending for tuberculosis (TB) treatment within the study area of the Bandim Health Project, a Health and Demographic Surveillance Site. Our aim was to assess the variability between 2 physicians in performing the Bandim tuberculosis score (TBscore), a clinical severity score for pulmonary TB (PTB), and to compare it to the Karnofsky performance score (KPS). Method: From December 2008 to July 2009 we assessed the TBscore and the KPS of 100 PTB patients at inclusion in the TB cohort and...

  16. Comparison of clinical outcomes of multi-point umbrella suturing and single purse suturing with two-point traction after procedure for prolapse and hemorrhoids (PPH) surgery.

    Science.gov (United States)

    Jiang, Huiyong; Hao, Xiuyan; Xin, Ying; Pan, Youzhen

    2017-11-01

    To compare the clinical outcomes of multipoint umbrella suturing and single-purse suturing with two-point traction after the procedure for prolapse and hemorrhoids (PPH) for the treatment of mixed hemorrhoids. Ninety patients were randomly divided into a PPH plus single-purse suture group (Group A) and a PPH plus multipoint umbrella suture group (Group B). All operations were performed by an experienced surgeon. Operation time, width of the specimen, extent of hemorrhoid retraction, postoperative pain, postoperative bleeding, and length of hospitalization were recorded and compared. Statistical analysis was conducted by t-test and χ2 test. There were no significant differences in sex, age, course of disease, or degree of hemorrhoid prolapse between the two groups. The operative time in Group A was significantly shorter than that in Group B (P < 0.05), while postoperative pain and hemorrhoid core retraction were significantly lower in Group B (P < 0.05); no significant difference in the remaining outcomes (P > 0.05 for all comparisons) was observed. The multipoint umbrella suture showed better clinical outcomes because of its targeted suturing according to the extent of hemorrhoid prolapse.

  17. Volleyball Scoring Systems.

    Science.gov (United States)

    Calhoun, William; Dargahi-Noubary, G. R.; Shi, Yixun

    2002-01-01

    The widespread interest in sports in our culture provides an excellent opportunity to catch students' attention in mathematics and statistics classes. One mathematically interesting aspect of volleyball, which can be used to motivate students, is the scoring system. (MM)

  18. Revisiting van der Waals-like behavior of f(R) AdS black holes via the two-point correlation function

    Directory of Open Access Journals (Sweden)

    Jie-Xiong Mo

    2017-05-01

    Van der Waals-like behavior of f(R) AdS black holes is revisited via the two-point correlation function, which is dual to the geodesic length in the bulk. The equation of motion constrained by the boundary condition is solved numerically, and the effects of both the boundary region size and f(R) gravity are probed. Moreover, an analogous specific heat related to δL is introduced. It is shown that the T-δL graphs of f(R) AdS black holes exhibit reverse van der Waals-like behavior just as the T-S graphs do. A free energy analysis is carried out to determine the first-order phase transition temperature T*, and the unstable branch in the T-δL curve is removed by a bar T = T*. It is shown that the first-order phase transition temperature is the same at least to the order of 10^-10 for different choices of the parameter b, although the values of the free energy vary with b. Our result further supports the earlier finding that charged f(R) AdS black holes behave much like RN-AdS black holes. We also check the analogous equal-area law numerically and find that the relative errors for both the cases θ_0 = 0.1 and θ_0 = 0.2 are small enough. The fitting functions between log|T - T_c| and log|δL - δL_c| for both cases are also obtained. It is shown that the slope is around 3, implying that the critical exponent is about 2/3. This result is in accordance with those in the earlier literature on the specific heat related to the thermal entropy or entanglement entropy.

  19. Comparative MR study of hepatic fat quantification using single-voxel proton spectroscopy, two-point Dixon and three-point IDEAL.

    Science.gov (United States)

    Kim, Hyeonjin; Taksali, Sara E; Dufour, Sylvie; Befroy, Douglas; Goodman, T Robin; Petersen, Kitt Falk; Shulman, Gerald I; Caprio, Sonia; Constable, R Todd

    2008-03-01

    Hepatic fat fraction (HFF) was measured in 28 lean/obese humans by single-voxel proton spectroscopy (MRS), a two-point Dixon (2PD), and a three-point iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL) method (3PI). For the lean, obese, and total subject groups, the range of HFF measured by MRS was 0.3-3.5% (1.1 ± 1.4%), 0.3-41.5% (11.7 ± 12.1%), and 0.3-41.5% (10.1 ± 11.6%), respectively. For the same groups, the HFF measured by 2PD was -6.3-2.2% (-2.0 ± 3.7%), -2.4-42.9% (12.9 ± 13.8%), and -6.3-42.9% (10.5 ± 13.7%), respectively, and for 3PI they were 7.9-12.8% (10.1 ± 2.0%), 11.1-49.3% (22.0 ± 12.2%), and 7.9-49.3% (20.0 ± 11.8%), respectively. The HFF measured by MRS was highly correlated with those measured by 2PD (r = 0.954, P < 0.001). The accuracy of diagnosing fatty liver with the MRI methods ranged from 68-93% for 2PD and 64-89% for 3PI. Our study demonstrates that the apparent HFF measured by the MRI methods can significantly vary depending on the choice of water-fat separation methods and sequences. Such variability may limit the clinical application of the MRI methods, particularly when a diagnosis of early fatty liver needs to be performed. Therefore, protocol-specific establishment of cutoffs for liver fat content may be necessary.

  20. A Comparative MR Study of Hepatic Fat Quantification Using Single-voxel Proton Spectroscopy, Two-point Dixon and Three-point IDEAL

    Science.gov (United States)

    Kim, Hyeonjin; Taksali, Sara E.; Dufour, Sylvie; Befroy, Douglas; Goodman, T. Robin; Petersen, Kitt Falk; Shulman, Gerald I.; Caprio, Sonia; Constable, R. Todd

    2009-01-01

    Hepatic fat fraction (HFF) was measured in 28 lean/obese humans by single-voxel proton spectroscopy (MRS), a two-point Dixon (2PD) and a three-point iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL) method (3PI). For the lean, obese and total subject groups, the range of HFF measured by MRS was 0.3–3.5% (1.1±1.4%), 0.3–41.5% (11.7±12.1%), and 0.3–41.5% (10.1±11.6%), respectively. For the same groups, the HFF measured by 2PD was −6.3–2.2% (−2.0±3.7%), −2.4–42.9% (12.9±13.8%), and −6.3–42.9% (10.5±13.7%), respectively, and for 3PI they were 7.9–12.8% (10.1±2.0%), 11.1–49.3% (22.0±12.2%), and 7.9–49.3% (20.0±11.8%), respectively. The HFF measured by MRS was highly correlated with those measured by 2PD (r=0.954, p < 0.001). The accuracy of diagnosing fatty liver with the MRI methods ranged from 75–93% for 2PD and 79–89% for 3PI. Our study demonstrates that the apparent HFF measured by the MRI methods can significantly vary depending on the choice of water-fat separation methods and sequences. Such variability may limit the clinical application of the MRI methods, particularly when a diagnosis of early fatty liver needs to be performed. Therefore, protocol-specific establishment of cutoffs for liver fat content may be necessary. PMID:18306404

  1. Comparison of sensitivity to artificial spectral errors and multivariate LOD in NIR spectroscopy - Determining the performance of miniaturizations on melamine in milk powder.

    Science.gov (United States)

    Henn, Raphael; Kirchler, Christian G; Grossgut, Maria-Elisabeth; Huck, Christian W

    2017-05-01

    This study compared three commercially available spectrometers, two of them miniaturized, in terms of their ability to predict melamine in milk powder (infant formula). All spectra were split into calibration and validation sets using the Kennard-Stone and Duplex algorithms in comparison. For each instrument the three best-performing PLSR models were constructed using SNV and Savitzky-Golay derivatives. The best RMSEP values were 0.28 g/100 g, 0.33 g/100 g and 0.27 g/100 g for the NIRFlex N-500, the microPHAZIR and the microNIR2200, respectively. Furthermore, the multivariate LOD interval [LOD_min, LOD_max] was calculated for all the PLSR models, revealing significant differences among the spectrometers, with values of 0.20 g/100 g - 0.27 g/100 g, 0.28 g/100 g - 0.54 g/100 g and 0.44 g/100 g - 1.01 g/100 g for the NIRFlex N-500, the microPHAZIR and the microNIR2200, respectively. To assess the robustness of all models, white noise, baseline shift, multiplicative effects, spectral shrink and stretch, stray light and spectral shift were artificially introduced. Monitoring the RMSEP as a function of the perturbation gave an indication of the robustness of the models and helped to compare the performance of the spectrometers, as sketched below. Without the additional information from the LOD calculations one could falsely assume that all the spectrometers perform equally well, which is not the case when the multivariate evaluation and robustness data are considered.
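
    A minimal sketch of the robustness test, under assumed data: fit a PLS regression, then watch the RMSEP grow as white noise of increasing amplitude perturbs the validation spectra. Random arrays stand in for the NIR spectra of melamine-spiked milk powder; component count and noise levels are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(4)
X = rng.standard_normal((80, 300))            # 80 "spectra", 300 wavelengths
y = 0.5 * X[:, 40] + 0.3 * X[:, 120] + 0.05 * rng.standard_normal(80)
Xc, yc, Xv, yv = X[:60], y[:60], X[60:], y[60:]   # calibration / validation split

pls = PLSRegression(n_components=5).fit(Xc, yc)
for noise in (0.0, 0.05, 0.1, 0.2):           # increasing artificial white noise
    Xp = Xv + noise * rng.standard_normal(Xv.shape)
    rmsep = mean_squared_error(yv, pls.predict(Xp).ravel()) ** 0.5
    print(f"noise {noise:.2f}: RMSEP {rmsep:.3f}")   # RMSEP as function of perturbation
```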

  2. Instant MuseScore

    CERN Document Server

    Shinn, Maxwell

    2013-01-01

    Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. Instant MuseScore is written in an easy-to-follow format, packed with illustrations that will help you get started with this music composition software. This book is for musicians who would like to learn how to notate music digitally with MuseScore. Readers should already have some knowledge of musical terminology; however, no prior experience with music notation software is necessary.

  3. The Bayesian Score Statistic

    NARCIS (Netherlands)

    Kleibergen, F.R.; Kleijn, R.; Paap, R.

    2000-01-01

    We propose a novel Bayesian test under a (noninformative) Jeffreys' prior specification. We check whether the fixed scalar value of the so-called Bayesian Score Statistic (BSS) under the null hypothesis is a plausible realization from its known and standardized distribution under the alternative. Unlike

  4. South African Scoring System

    African Journals Online (AJOL)

    2014-11-18

    Nov 18, 2014 ... for 80% (SASS score) and 75% (NOT) of the variation in the regression model. Consequently, SASS ... further investigation: spatial analyses of macroinvertebrate assemblages; and the use of structural and functional metrics. Keywords: .... conductivity levels was assessed using multiple linear regression.

  5. Developing Scoring Algorithms

    Science.gov (United States)

    We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.

  6. Credit scoring methods

    Czech Academy of Sciences Publication Activity Database

    Vojtek, Martin; Kočenda, Evžen

    2006-01-01

    Roč. 56, 3-4 (2006), s. 152-167 ISSN 0015-1920 R&D Projects: GA ČR GA402/05/0931 Institutional research plan: CEZ:AV0Z70850503 Keywords : banking sector * credit scoring * discrimination analysis Subject RIV: AH - Economics Impact factor: 0.190, year: 2006 http://journal.fsv.cuni.cz/storage/1050_s_152_167.pdf

  7. Credit scoring for individuals

    Directory of Open Access Journals (Sweden)

    Maria DIMITRIU

    2010-12-01

    Full Text Available Lending money to different borrowers is profitable, but risky. The profits come from the interest rate and the fees earned on the loans. Banks do not want to make loans to borrowers who cannot repay them. Even if banks do not intend to make bad loans, over time some of them can become bad. For instance, as a result of the recent financial crisis, the capability of many borrowers to repay their loans was affected, with many of them going into default. That is why it is important for the bank to monitor its loans. The purpose of this paper is to focus on the main issues of credit scoring. Accordingly, we present in this paper the scoring model of an important Romanian bank. Based on this credit scoring model, and taking into account the latest lending requirements of the National Bank of Romania, we developed an assessment tool in Excel for retail loans, which is presented in the case study.

  8. College Math Assessment: SAT Scores vs. College Math Placement Scores

    Science.gov (United States)

    Foley-Peres, Kathleen; Poirier, Dawn

    2008-01-01

    Many colleges and universities use SAT math scores or math placement tests to place students in the appropriate math course. This study compares the use of math placement scores and SAT scores for 188 freshman students. The students' grades and faculty observations were analyzed to determine if the SAT scores and/or college math assessment scores…

  9. Estimating NHL Scoring Rates

    OpenAIRE

    Buttrey, Samuel E.; Washburn, Alan R.; Price, Wilson L.; Operations Research

    2011-01-01

    The article of record as published may be located at http://dx.doi.org/10.2202/1559-0410.1334 We propose a model to estimate the rates at which NHL teams score and yield goals. In the model, goals occur as if from a Poisson process whose rate depends on the two teams playing, the home-ice advantage, and the manpower (power-play, short-handed) situation. Data on all the games from the 2008-2009 season was downloaded and processed into a form suitable for the analysis. The model...

  10. The International Bleeding Risk Score

    DEFF Research Database (Denmark)

    Laursen, Stig Borbjerg; Laine, L.; Dalton, H.

    2017-01-01

    The International Bleeding Risk Score: A New Risk Score that can Accurately Predict Mortality in Patients with Upper GI-Bleeding.

  11. A Prognostic Scoring Tool for Cesarean Organ/Space Surgical Site Infections: Derivation and Internal Validation.

    Science.gov (United States)

    Assawapalanggool, Srisuda; Kasatpibal, Nongyao; Sirichotiyakul, Supatra; Arora, Rajin; Suntornlimsiri, Watcharin

    Organ/space surgical site infections (SSIs) are serious complications after cesarean delivery. However, no scoring tool to predict these complications has yet been developed. This study sought to develop and validate a prognostic scoring tool for cesarean organ/space SSIs. Data on cases and non-cases of cesarean organ/space SSI between January 1, 2007 and December 31, 2012 from a tertiary care hospital in Thailand were analyzed. Stepwise multivariable logistic regression was used to select the best predictor combination, and the coefficients were transformed into a risk scoring tool. The positive likelihood ratio for each risk category and the area under the receiver operating characteristic (AUROC) curve were analyzed on total scores. Internal validation using bootstrap re-sampling was performed to test reproducibility. The predictors of 243 organ/space SSIs from 4,988 eligible cesarean delivery cases comprised the presence of foul-smelling amniotic fluid (four points), vaginal examination five or more times before incision (two points), wound class III or greater (two points), being referred from a local setting (two points), hemoglobin less than 11 g/dL (one point), and ethnic minority (one point). The likelihood ratios of cesarean organ/space SSIs, with 95% confidence intervals, for the low (total score of 0-1 point), medium (total score of 2-5 points), and high risk (total score of ≥6 points) categories were 0.11 (0.07-0.19), 1.03 (0.89-1.18), and 13.25 (10.87-16.14), respectively. The AUROCs of the derivation and validation data were comparable (87.57% versus 86.08%; p = 0.418). This scoring tool showed high predictive ability for cesarean organ/space SSIs on the derivation data, and reproducibility was demonstrated on internal validation. It could assist practitioners in prioritizing patient care and management depending on risk category, and in decreasing SSI rates in cesarean deliveries.
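
    The point weights and risk cut-offs above translate directly into a small scoring function. A minimal sketch using the values from the abstract; the function and argument names are invented for illustration.

        # Scoring tool as described in the abstract; weights and cut-offs from the text.
        def cesarean_ssi_score(foul_amniotic_fluid, vaginal_exams_ge5, wound_class_ge3,
                               referred_from_local, hemoglobin_lt11, ethnic_minority):
            score = 0
            score += 4 if foul_amniotic_fluid else 0   # foul-smelling amniotic fluid
            score += 2 if vaginal_exams_ge5 else 0     # >=5 vaginal exams before incision
            score += 2 if wound_class_ge3 else 0       # wound class III or greater
            score += 2 if referred_from_local else 0   # referred from a local setting
            score += 1 if hemoglobin_lt11 else 0       # hemoglobin < 11 g/dL
            score += 1 if ethnic_minority else 0       # ethnic minority
            return score

        def risk_category(score):
            if score <= 1:
                return "low"       # LR+ 0.11 (95% CI 0.07-0.19)
            if score <= 5:
                return "medium"    # LR+ 1.03 (95% CI 0.89-1.18)
            return "high"          # LR+ 13.25 (95% CI 10.87-16.14)

        s = cesarean_ssi_score(True, True, False, False, True, False)  # example patient
        print(s, risk_category(s))  # -> 7 high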

  12. Do Test Scores Buy Happiness?

    Science.gov (United States)

    McCluskey, Neal

    2017-01-01

    Since at least the enactment of No Child Left Behind in 2002, standardized test scores have served as the primary measures of public school effectiveness. Yet, such scores fail to measure the ultimate goal of education: maximizing happiness. This exploratory analysis assesses nation level associations between test scores and happiness, controlling…

  13. Predicting occupational personality test scores.

    Science.gov (United States)

    Furnham, A; Drakeley, R

    2000-01-01

    The relationship between students' actual test scores and their self-estimated scores on the Hogan Personality Inventory (HPI; R. Hogan & J. Hogan, 1992), an omnibus personality questionnaire, was examined. Despite being given descriptive statistics and explanations of each of the dimensions measured, the students tended to overestimate their scores; yet all correlations between actual and estimated scores were positive and significant. Correlations between self-estimates and actual test scores were highest for sociability, ambition, and adjustment (r = .62 to r = .67). The results are discussed in terms of employers' use and abuse of personality assessment for job recruitment.

  14. Differences of wells scores accuracy, caprini scores and padua scores in deep vein thrombosis diagnosis

    Science.gov (United States)

    Gatot, D.; Mardia, A. I.

    2018-03-01

    Deep vein thrombosis (DVT) is venous thrombosis of the lower limbs. Diagnosis is made using venography or compression ultrasound; however, these examinations are not yet available in some health facilities. Therefore, many scoring systems have been developed for the diagnosis of DVT. Scoring methods are practical and safe to use, as well as efficient and effective in terms of treatment and costs. The existing scoring systems are the Wells, Caprini, and Padua scores. There have been many studies comparing the accuracy of these scores, but not in Medan. We therefore conducted a comparative study of the Wells, Caprini, and Padua scores in Medan. An observational, analytical, case-control study was conducted to perform diagnostic tests on the Wells, Caprini, and Padua scores to predict the risk of DVT. The study was performed at H. Adam Malik Hospital in Medan. Of a total of 72 subjects, 39 (54.2%) were men, and the mean age was 53.14 years. The Wells, Caprini, and Padua scores had sensitivities of 80.6%, 61.1%, and 50%, respectively; specificities of 80.6%, 66.7%, and 75%, respectively; and accuracies of 87.5%, 64.3%, and 65.7%, respectively. The Wells score has better sensitivity, specificity, and accuracy than the Caprini and Padua scores in diagnosing DVT.
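
    For reference, the reported figures are the usual 2x2-table quantities. A minimal sketch of that arithmetic; the cell counts below are hypothetical, not the study's data.

        # Sensitivity, specificity, and accuracy from a 2x2 confusion table.
        def diagnostic_metrics(tp, fp, fn, tn):
            sensitivity = tp / (tp + fn)               # true positives among diseased
            specificity = tn / (tn + fp)               # true negatives among non-diseased
            accuracy = (tp + tn) / (tp + fp + fn + tn)
            return sensitivity, specificity, accuracy

        # e.g. a score judged against the reference standard in a hypothetical sample
        sens, spec, acc = diagnostic_metrics(tp=29, fp=7, fn=7, tn=29)
        print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, accuracy {acc:.1%}")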

  15. Testing Local Independence between Two Point Processes

    DEFF Research Database (Denmark)

    Allard, Denis; Brix, Anders; Chadæuf, Joël

    2001-01-01

    Independence test, Inhomogeneous point processes, Local test, Monte Carlo, Nonstationary, Rotations, Spatial pattern, Tiger bush

  16. [Propensity score matching in SPSS].

    Science.gov (United States)

    Huang, Fuqiang; DU, Chunlin; Sun, Menghui; Ning, Bing; Luo, Ying; An, Shengli

    2015-11-01

    To implement propensity score matching in the PS Matching module of SPSS and interpret the analysis results. R, a plug-in linking R with the corresponding version of SPSS, and a propensity score matching package were installed. A PS Matching module was added to the SPSS interface, and its use was demonstrated with test data. Score estimation and nearest-neighbor matching were achieved with the PS Matching module, and the results of qualitative and quantitative statistical description and evaluation were presented in matching graphs. Propensity score matching can be accomplished conveniently using SPSS software.
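
    The SPSS module itself is point-and-click, but the steps it automates can be sketched in a few lines. Below is a rough Python analogue, on synthetic data, of what propensity score matching does: estimate scores by logistic regression, then greedily match each treated subject to the nearest-score control. All names and data are illustrative.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n = 200
        covariates = rng.normal(size=(n, 3))                  # e.g. age, sex, severity
        treated = rng.binomial(1, 1 / (1 + np.exp(-covariates[:, 0])))  # depends on X

        # Step 1: propensity score = P(treated | covariates)
        ps = LogisticRegression().fit(covariates, treated).predict_proba(covariates)[:, 1]

        # Step 2: greedy 1:1 nearest-neighbour matching on the score, without replacement
        treated_idx = np.flatnonzero(treated == 1)
        control_pool = list(np.flatnonzero(treated == 0))
        pairs = []
        for t in treated_idx:
            if not control_pool:
                break                                          # control pool exhausted
            j = min(control_pool, key=lambda c: abs(ps[c] - ps[t]))  # closest control
            pairs.append((t, j))
            control_pool.remove(j)
        print(f"{len(pairs)} matched pairs")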

  17. [Prognostic scores for pulmonary embolism].

    Science.gov (United States)

    Junod, Alain

    2016-03-23

    Nine prognostic scores for pulmonary embolism (PE), based on retrospective and prospective studies published between 2000 and 2014, have been analyzed and compared. Most of them aim at identifying low-risk PE cases in order to validate their ambulatory care. Important differences exist between these scores in the outcomes considered: global mortality, PE-specific mortality, other complications, and the sizes of the low-risk groups. The most popular score appears to be the PESI and its simplified version. Few good-quality studies have tested the applicability of these scores to PE outpatient care, although this approach is already becoming common in medical practice.

  18. D-score: a search engine independent MD-score.

    Science.gov (United States)

    Vaudel, Marc; Breiter, Daniela; Beck, Florian; Rahnenführer, Jörg; Martens, Lennart; Zahedi, René P

    2013-03-01

    While peptides carrying PTMs are routinely identified in gel-free MS, the localization of the PTMs onto the peptide sequences remains challenging. Search engine scores of secondary peptide matches have been used in different approaches in order to infer the quality of site inference, by penalizing the localization whenever the search engine similarly scored two candidate peptides with different site assignments. In the present work, we show how the estimation of posterior error probabilities for peptide candidates allows the estimation of a PTM score called the D-score, for multiple search engine studies. We demonstrate the applicability of this score to three popular search engines: Mascot, OMSSA, and X!Tandem, and evaluate its performance using an already published high resolution data set of synthetic phosphopeptides. For those peptides with phosphorylation site inference uncertainty, the number of spectrum matches with correctly localized phosphorylation increased by up to 25.7% when compared to using Mascot alone, although the actual increase depended on the fragmentation method used. Since this method relies only on search engine scores, it can be readily applied to the scoring of the localization of virtually any modification at no additional experimental or in silico cost. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Trends in Classroom Observation Scores

    Science.gov (United States)

    Casabianca, Jodi M.; Lockwood, J. R.; McCaffrey, Daniel F.

    2015-01-01

    Observations and ratings of classroom teaching and interactions collected over time are susceptible to trends in both the quality of instruction and rater behavior. These trends have potential implications for inferences about teaching and for study design. We use scores on the Classroom Assessment Scoring System-Secondary (CLASS-S) protocol from…

  20. Quadratic prediction of factor scores

    NARCIS (Netherlands)

    Wansbeek, T

    1999-01-01

    Factor scores are naturally predicted by means of their conditional expectation given the indicators y. Under normality this expectation is linear in y, but in general it is an unknown function of y. It is discussed that under nonnormality factor scores can be more precisely predicted by a quadratic function of y.

  1. The Machine Scoring of Writing

    Science.gov (United States)

    McCurry, Doug

    2010-01-01

    This article provides an introduction to the kind of computer software that is used to score student writing in some high stakes testing programs, and that is being promoted as a teaching and learning tool to schools. It sketches the state of play with machines for the scoring of writing, and describes how these machines work and what they do.…

  2. Matching score based face recognition

    NARCIS (Netherlands)

    Boom, B.J.; Beumer, G.M.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2006-01-01

    Accurate face registration is of vital importance to the performance of a face recognition algorithm. We propose a new method: matching score based face registration, which searches for optimal alignment by maximizing the matching score output of a classifier as a function of the different

  3. Modelling sequentially scored item responses

    NARCIS (Netherlands)

    Akkermans, W.

    2000-01-01

    The sequential model can be used to describe the variable resulting from a sequential scoring process. In this paper two more item response models are investigated with respect to their suitability for sequential scoring: the partial credit model and the graded response model. The investigation is

  4. From Rasch scores to regression

    DEFF Research Database (Denmark)

    Christensen, Karl Bang

    2006-01-01

    Rasch models provide a framework for measurement and modelling latent variables. Having measured a latent variable in a population, a comparison of groups will often be of interest. For this purpose the use of observed raw scores will often be inadequate because these lack interval scale properties. This paper compares two approaches to group comparison: linear regression models using estimated person locations as outcome variables and latent regression models based on the distribution of the score.

  5. 3D Space Shift from CityGML LoD3-Based Multiple Building Elements to a 3D Volumetric Object

    Directory of Open Access Journals (Sweden)

    Shen Ying

    2017-01-01

    Full Text Available In contrast with photorealistic visualizations, urban landscape applications, and building information systems (BIM), 3D volumetric presentations highlight specific calculations and applications of 3D building elements for 3D city planning and 3D cadastres. Knowing the precise volumetric quantities and the 3D boundary locations of 3D building spaces is a vital index which must remain constant during data processing because the values are related to space occupation, tenure, taxes, and valuation. To meet these requirements, this paper presents a five-step algorithm for performing a 3D building space shift. This algorithm is used to convert multiple building elements into a single 3D volumetric building object while maintaining the precise volume of the 3D space and without changing the 3D locations or displacing the building boundaries. As examples, this study used input data and building elements based on City Geography Markup Language (CityGML) LoD3 models. This paper presents a method for 3D urban space and 3D property management with the goal of constructing a 3D volumetric object for an integral building using CityGML objects, by fusing the geometries of various building elements. The resulting objects possess true 3D geometry that can be represented by solid geometry and saved to a CityGML file for effective use in 3D urban planning and 3D cadastres.

  6. Re-Scoring the Game’s Score

    DEFF Research Database (Denmark)

    Gasselseder, Hans-Peter

    2014-01-01

    This study explores immersive presence as well as emotional valence and arousal in the context of dynamic and non-dynamic music scores in the 3rd person action-adventure video game genre while also considering relevant personality traits of the player. 60 subjects answered self-report questionnaires ...-temporal alignment in the resulting emotional congruency of nondiegetic music. Whereas imaginary aspects of immersive presence are systemically affected by the presentation of dynamic music, sensory spatial aspects show higher sensitivity towards the arousal potential of the music score. It is argued

  7. Analysis of LOD (local stormwater management) measures

    OpenAIRE

    Kunduraci, Meltem

    2016-01-01

    Changing climatic conditions and increasing urbanization lead to greater flood damage in urban areas. Extreme precipitation events occur more often and with greater intensity. Development with impervious surfaces prevents infiltration into the ground, and the natural attenuation of stormwater is reduced. This results in increasing loads on the existing drainage system. In many places the capacity of the drainage network is overloaded and unable to handle the stormwater volumes produced by cloudbursts. Local stormwater management, or...

  8. Two-point versus multiple-point geostatistics: the ability of geostatistical methods to capture complex geobodies and their facies associations—an application to a channelized carbonate reservoir, southwest Iran

    International Nuclear Information System (INIS)

    Hashemi, Seyyedhossein; Javaherian, Abdolrahim; Ataee-pour, Majid; Khoshdel, Hossein

    2014-01-01

    Facies models try to explain facies architectures, which have a primary control on the subsurface heterogeneities and the fluid flow characteristics of a given reservoir. In the process of facies modeling, geostatistical methods are implemented to integrate different sources of data into a consistent model. The facies models should describe facies interactions and the shape and geometry of the geobodies as they occur in reality. Two distinct categories of geostatistical techniques are two-point and multiple-point (geo)statistics (MPS). In this study, both of the aforementioned categories were applied to generate facies models. A sequential indicator simulation (SIS) and a truncated Gaussian simulation (TGS) represented the two-point geostatistical methods, and a single normal equation simulation (SNESIM) was selected as the MPS representative. The dataset from an extremely channelized carbonate reservoir located in southwest Iran was applied to these algorithms to analyze their performance in reproducing complex curvilinear geobodies. The SNESIM algorithm needs consistent training images (TI) in which all possible facies architectures that are present in the area are included. The TI model was founded on data acquired from modern occurrences. These analogues delivered vital information about the possible channel geometries and facies classes that are typically present in such environments. The MPS results were conditioned to both soft and hard data. Soft facies probabilities were acquired from a neural network workflow in which seismic-derived attributes were used as the input data. Furthermore, MPS realizations were conditioned to hard data to guarantee the exact positioning and continuity of the channel bodies. A geobody extraction workflow was implemented to extract the most certain parts of the channel bodies from the seismic data. These extracted parts of the channel bodies were applied to the simulation workflow as hard data.

  9. Skin scoring in systemic sclerosis

    DEFF Research Database (Denmark)

    Zachariae, Hugh; Bjerring, Peter; Halkier-Sørensen, Lars

    1994-01-01

    Forty-one patients with systemic sclerosis were investigated with a new and simple skin score method measuring the degree of thickening and pliability in seven regions together with area involvement in each region. The highest values were, as expected, found in diffuse cutaneous systemic sclerosis (type III SS) and the lowest in limited cutaneous systemic sclerosis (type I SS) with no lesions extending above wrists and ankles. A positive correlation was found to the aminoterminal propeptide of type III procollagen, a serological marker for synthesis of type III collagen. The skin score

  10. Comparison of 3D two-point Dixon and standard 2D dual-echo breath-hold sequences for detection and quantification of fat content in renal angiomyolipoma

    International Nuclear Information System (INIS)

    Rosenkrantz, Andrew B.; Raj, Sean; Babb, James S.; Chandarana, Hersh

    2012-01-01

    Purpose: To assess the utility of a 3D two-point Dixon sequence with water–fat decomposition for quantification of fat content of renal angiomyolipoma (AML). Methods: 84 patients underwent renal MRI including a 2D in-and-opposed-phase (IP and OP) sequence and a 3D two-point Dixon sequence that generates four image sets [IP, OP, water-only (WO), and fat-only (FO)] within one breath-hold. Two radiologists reviewed 2D and 3D images during separate sessions to identify fat-containing renal masses measuring at least 1 cm. For identified lesions subsequently confirmed to represent AML, ROIs were placed at matching locations on 2D and 3D images and used to calculate the 2D and 3D SI index [(SI_IP − SI_OP)/SI_IP] and the 3D fat fraction (FF) [SI_FO/(SI_FO + SI_WO)]. The 2D and 3D SI index were compared with 3D FF using Pearson correlation coefficients. Results: 41 AMLs were identified in 6 patients. While all were identified using the 3D sequence, 39 were identified using the 2D sequence, with the remaining 2 AMLs retrospectively visible on 2D images but measuring under 1 cm. Among 32 AMLs with a 3D FF of over 50%, both the 2D and 3D SI index showed a statistically significant inverse correlation with 3D FF (2D SI index: r = −0.63, p = 0.0010; 3D SI index: r = −0.97, p index, is not limited by ambiguity of water or fat dominance. This may assist clinical management of AML given evidence that fat content predicts embolization response.
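
    The two ROI-based quantities defined in the abstract are simple arithmetic on ROI mean signal intensities. A worked sketch; the ROI values below are hypothetical.

        # Dual-echo signal-intensity index and Dixon fat fraction from ROI means.
        def si_index(si_ip, si_op):
            """SI index = (SI_IP - SI_OP) / SI_IP from in-/opposed-phase ROI means."""
            return (si_ip - si_op) / si_ip

        def fat_fraction(si_fo, si_wo):
            """FF = SI_FO / (SI_FO + SI_WO) from fat-only/water-only ROI means."""
            return si_fo / (si_fo + si_wo)

        print(si_index(si_ip=420.0, si_op=180.0))     # ~0.571
        print(fat_fraction(si_fo=310.0, si_wo=95.0))  # ~0.765, a fat-dominant lesion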

  11. Predicting Retear after Repair of Full-Thickness Rotator Cuff Tear: Two-Point Dixon MR Imaging Quantification of Fatty Muscle Degeneration-Initial Experience with 1-year Follow-up.

    Science.gov (United States)

    Nozaki, Taiki; Tasaki, Atsushi; Horiuchi, Saya; Ochi, Junko; Starkey, Jay; Hara, Takeshi; Saida, Yukihisa; Yoshioka, Hiroshi

    2016-08-01

    Purpose To determine the degree of preoperative fatty degeneration within muscles, postoperative longitudinal changes in fatty degeneration, and differences in fatty degeneration between patients with full-thickness supraspinatus tears who do and those who do not experience a retear after surgery. Materials and Methods This prospective study had institutional review board approval and was conducted in accordance with the Committee for Human Research. Informed consent was obtained. Fifty patients with full-thickness supraspinatus tears (18 men, 32 women; mean age, 67.0 years ± 8.0; age range, 41-91 years) were recruited. The degrees of preoperative and postoperative fatty degeneration were quantified by using a two-point Dixon magnetic resonance (MR) imaging sequence; two radiologists measured the mean signal intensity on in-phase [S(In)] and fat [S(Fat)] images. Estimates of fatty degeneration were calculated with "fat fraction" values by using the formula S(Fat)/S(In) within the supraspinatus, infraspinatus, and subscapularis muscles at baseline preoperative and at postoperative 1-year follow-up MR imaging. Preoperative fat fractions in the failed-repair group and the intact-repair group were compared by using the Mann-Whitney U test. Results The preoperative fat fractions in the supraspinatus muscle were significantly higher in the failed-repair group than in the intact-repair group (37.0% vs 19.5%, P muscle tended to progress at 1 year postoperatively in only the failed-repair group. Conclusion MR imaging quantification of preoperative fat fractions by using a two-point Dixon sequence within the rotator cuff muscles may be a viable method for predicting postoperative retear. (©) RSNA, 2016.

  12. The persistence of depression score

    NARCIS (Netherlands)

    Spijker, J.; de Graaf, R.; Ormel, J.; Nolen, W. A.; Grobbee, D. E.; Burger, H.

    2006-01-01

    Objective: To construct a score that allows prediction of major depressive episode (MDE) persistence in individuals with MDE using determinants of persistence identified in previous research. Method: Data were derived from 250 subjects from the general population with new MDE according to DSM-III-R.

  13. Score distributions in information retrieval

    NARCIS (Netherlands)

    Arampatzis, A.; Robertson, S.; Kamps, J.

    2009-01-01

    We review the history of modeling score distributions, focusing on the mixture of normal-exponential by investigating the theoretical as well as the empirical evidence supporting its use. We discuss previously suggested conditions which valid binary mixture models should satisfy, such as the

  14. Developing Scoring Algorithms (Earlier Methods)

    Science.gov (United States)

    We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.

  15. Validity of the Water Hammer Formula for Determining Regional Aortic Pulse Wave Velocity: Comparison of One-Point and Two-Point (Foot-to-Foot) Measurements Using a Multisensor Catheter in Human.

    Science.gov (United States)

    Hanya, Shizuo

    2013-01-01

    Lack of high-fidelity simultaneous measurements of pressure and flow velocity in the aorta has impeded the direct validation of the water-hammer formula for estimating regional aortic pulse wave velocity (AO-PWV1) and has restricted the study of beat-to-beat changes in AO-PWV1 under varying physiological conditions in man. Aortic pulse wave velocity was derived using two methods in 15 normotensive subjects: 1) the conventional two-point (foot-to-foot) method (AO-PWV2) and 2) a one-point method (AO-PWV1) in which the pressure–velocity loop (PV-loop) was analyzed based on the water-hammer formula, using simultaneous measurements of flow velocity (Vm) and pressure (Pm) at the same site in the proximal aorta with a multisensor catheter. AO-PWV1 was calculated from the slope of the linear regression line between Pm and Vm where wave reflection (Pb) was at a minimum in early systole in the PV-loop, using the water-hammer formula PWV1 = (Pm/Vm)/ρ, where ρ is the blood density. AO-PWV2 was calculated using the conventional two-point measurement method as the distance/traveling time of the wave between 2 sites for measuring P in the proximal aorta. Beat-to-beat alterations of AO-PWV1 in relationship to aortic pressure, and the linearity of the initial part of the PV-loop during a Valsalva maneuver, were also assessed in one subject. The initial part of the loop became steeper in association with the beat-to-beat increase in diastolic pressure in phase 4 of the Valsalva maneuver. The linearity of the initial part of the PV-loop was maintained consistently during the maneuver. Flow velocity vs. pressure in the proximal aorta was highly linear during early systole, with Pearson's coefficients ranging from 0.9954 to 0.9998. The average values of AO-PWV1 and AO-PWV2 were 6.3 ± 1.2 and 6.7 ± 1.3 m/s, respectively. The regression line of AO-PWV1 on AO-PWV2 was y = 0.95x + 0.68 (r = 0.93, p < 0.001). This study concluded that the water-hammer formula (one-point method) provides
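
    The one-point method reduces to fitting a line to early-systolic pressure versus flow velocity and dividing the slope by blood density. A minimal sketch on synthetic samples; the numbers are illustrative, not the study's recordings.

        import numpy as np

        rho = 1060.0                                   # blood density, kg/m^3
        true_pwv = 6.3                                 # m/s, used only to make toy data

        v = np.linspace(0.0, 0.5, 25)                  # early-systolic flow velocity, m/s
        p = rho * true_pwv * v + np.random.default_rng(2).normal(scale=20.0, size=v.size)  # Pa

        slope, intercept = np.polyfit(v, p, 1)         # dP/dV in Pa per (m/s)
        pwv_one_point = slope / rho                    # water-hammer formula: PWV = (dP/dV)/rho
        print(f"estimated AO-PWV1 = {pwv_one_point:.2f} m/s")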

  16. Combining Teacher Assessment Scores with External Examination ...

    African Journals Online (AJOL)

    Combining Teacher Assessment Scores with External Examination Scores for Certification: Comparative Study of Four Statistical Models. ... University entrance examination scores in mathematics were obtained for a subsample of 115 ...

  17. Scoring System Improvements to Three Leadership Predictors

    National Research Council Canada - National Science Library

    Dela

    1997-01-01

    .... The modified scoring systems were evaluated by rescoring responses randomly selected from the sample which had been scored according to the scoring systems originally developed for the leadership research...

  18. Interpreting force concept inventory scores: Normalized gain and SAT scores

    Directory of Open Access Journals (Sweden)

    Jeffrey J. Steinert

    2007-05-01

    Full Text Available Preinstruction SAT scores and normalized gains (G) on the force concept inventory (FCI) were examined for individual students in interactive engagement (IE) courses in introductory mechanics at one high school (N=335) and one university (N=292), and strong, positive correlations were found for both populations (r=0.57 and r=0.46, respectively). These correlations are likely due to the importance of cognitive skills and abstract reasoning in learning physics. The larger correlation coefficient for the high school population may be a result of the much shorter time interval between taking the SAT and studying mechanics, because the SAT may provide a more current measure of abilities when high school students begin the study of mechanics than it does for college students, who begin mechanics years after the test is taken. In prior research a strong correlation between FCI G and scores on Lawson’s Classroom Test of Scientific Reasoning for students from the same two schools was observed. Our results suggest that, when interpreting class average normalized FCI gains and comparing different classes, it is important to take into account the variation of students’ cognitive skills, as measured either by the SAT or by Lawson’s test. While Lawson’s test is not commonly given to students in most introductory mechanics courses, SAT scores provide a readily available alternative means of taking account of students’ reasoning abilities. Knowing the students’ cognitive level before instruction also allows one to alter instruction or to use an intervention designed to improve students’ cognitive level.

  19. Interpreting force concept inventory scores: Normalized gain and SAT scores

    Directory of Open Access Journals (Sweden)

    Vincent P. Coletta

    2007-05-01

    Full Text Available Preinstruction SAT scores and normalized gains (G) on the force concept inventory (FCI) were examined for individual students in interactive engagement (IE) courses in introductory mechanics at one high school (N=335) and one university (N=292), and strong, positive correlations were found for both populations (r=0.57 and r=0.46, respectively). These correlations are likely due to the importance of cognitive skills and abstract reasoning in learning physics. The larger correlation coefficient for the high school population may be a result of the much shorter time interval between taking the SAT and studying mechanics, because the SAT may provide a more current measure of abilities when high school students begin the study of mechanics than it does for college students, who begin mechanics years after the test is taken. In prior research a strong correlation between FCI G and scores on Lawson’s Classroom Test of Scientific Reasoning for students from the same two schools was observed. Our results suggest that, when interpreting class average normalized FCI gains and comparing different classes, it is important to take into account the variation of students’ cognitive skills, as measured either by the SAT or by Lawson’s test. While Lawson’s test is not commonly given to students in most introductory mechanics courses, SAT scores provide a readily available alternative means of taking account of students’ reasoning abilities. Knowing the students’ cognitive level before instruction also allows one to alter instruction or to use an intervention designed to improve students’ cognitive level.

  20. Blind Grid Scoring Record No. 290

    National Research Council Canada - National Science Library

    Overbay, Larry; Robitaille, George

    2005-01-01

    ...) utilizing the APG Standardized UXO Technology Demonstration Site Blind Grid. Scoring Records have been coordinated by Larry Overbay and the Standardized UXO Technology Demonstration Site Scoring Committee...

  1. Blind Grid Scoring Record No. 293

    National Research Council Canada - National Science Library

    Overbay, Larry; Robitaille, George; Archiable, Robert; Fling, Rick; McClung, Christina

    2005-01-01

    ...) utilizing the YPG Standardized UXO Technology Demonstration Site Blind Grid. Scoring Records have been coordinated by Larry Overbay and the Standardized UXO Technology Demonstration Site Scoring Committee...

  2. Open Field Scoring Record No. 298

    National Research Council Canada - National Science Library

    Overbay, Jr., Larry; Robitaille, George; Fling, Rick; McClung, Christina

    2005-01-01

    ...) utilizing the APG Standardized UXO Technology Demonstration Site Open Field. Scoring Records have been coordinated by Larry Overbay and the Standardized UXO Technology Demonstration Site Scoring Committee...

  3. Open Field Scoring Record No. 299

    National Research Council Canada - National Science Library

    Overbay, Larry; Robitaille, George

    2005-01-01

    ...) utilizing the YPG Standardized UXO Technology Demonstration Site Open Field. Scoring Records have been coordinated by Larry Overbay and the standardized UXO Technology Demonstration Site Scoring Committee...

  4. LOD-climate Links: how the 2015-2016 El Niño Lengthened the Day by 0.8 ms, and Possible Rotational Forcing of Multidecadal Temperature Changes

    Science.gov (United States)

    Lambert, S. B.; de Viron, O.; Marcus, S.

    2016-12-01

    El Niño events are generally accompanied by significant changes in the Earth's length-of-day (LOD) that can be explained by two approaches. Considering the angular momentum conservation of the system composed of the solid Earth and the atmosphere, ENSO events are accompanied by a strengthening of the subtropical jet streams and, therefore, a decrease of the Earth's rotation rate. Using the torque approach, the low pressure field of the Eastern Pacific, which is close to high mountain ranges along the western American coasts, creates a negative torque of the atmosphere on the solid Earth which tends to slow down the Earth's rotation. The large 1983 event was associated with a lengthening of the day of about 1 ms. During the 2015-2016 winter season, a major ENSO event occurred, classified as very strong by meteorological agencies. This central Pacific event, for which the Nino 3.4 index was as high as in 1983, was also concurrent with positive phases of the PDO, NAO, and AAO. It coincided with an excursion of the LOD as large as 0.8 ms over a few weeks, reaching its maximum around the 2016 New Year. We evaluate the mountain and friction torques responsible for the Earth's rotation variations during the winter season and compare them to the mean situation and to the previous strong ENSO events of 1983 and 1998. In particular, we notice that the contribution from the American mountain ranges is close to the value of 1983. The weaker LOD excursion comes from a nonexistent torque over the Himalayas, a weaker contribution from Europe, and a noticeable positive contribution from Antarctica. On longer time scales, core-generated ms-scale LOD excursions are found to precede Northern Hemisphere surface and global SST fluctuations by nearly a decade; although the cause of this apparent rotational effect is not known, reported correlations of LOD and tidal-orbital forcing with surface and submarine volcanic activity offer prospects to explain these observations in a core-to-climate chain of causality.

  5. Interval Coded Scoring: a toolbox for interpretable scoring systems

    Directory of Open Access Journals (Sweden)

    Lieven Billiet

    2018-04-01

    Full Text Available Over the last decades, clinical decision support systems have been gaining importance. They help clinicians to make effective use of the overload of available information to obtain correct diagnoses and appropriate treatments. However, their power often comes at the cost of a black box model which cannot be interpreted easily. This interpretability is of paramount importance in a medical setting with regard to trust and (legal) responsibility. In contrast, existing medical scoring systems are easy to understand and use, but they are often a simplified rule-of-thumb summary of previous medical experience rather than a well-founded system based on available data. Interval Coded Scoring (ICS) connects these two approaches, exploiting the power of sparse optimization to derive scoring systems from training data. The presented toolbox interface makes this theory easily applicable to both small and large datasets. It contains two possible problem formulations based on linear programming or elastic net. Both allow one to construct a model for a binary classification problem and establish risk profiles that can be used for future diagnosis. All of this requires only a few lines of code. ICS differs from standard machine learning through its model, consisting of interpretable main effects and interactions. Furthermore, insertion of expert knowledge is possible because the training can be semi-automatic. This allows end users to make a trade-off between complexity and performance based on cross-validation results and expert knowledge. Additionally, the toolbox offers an accessible way to assess classification performance via accuracy and the ROC curve, whereas the calibration of the risk profile can be evaluated via a calibration curve. Finally, the colour-coded model visualization has particular appeal if one wants to apply ICS manually on new observations, as well as for validation by experts in the specific application domains. The validity and applicability

  6. Exploring a Source of Uneven Score Equity across the Test Score Range

    Science.gov (United States)

    Huggins-Manley, Anne Corinne; Qiu, Yuxi; Penfield, Randall D.

    2018-01-01

    Score equity assessment (SEA) refers to an examination of population invariance of equating across two or more subpopulations of test examinees. Previous SEA studies have shown that score equity may be present for examinees scoring at particular test score ranges but absent for examinees scoring at other score ranges. No studies to date have…

  7. Comparison of T1-weighted 2D TSE, 3D SPGR, and two-point 3D Dixon MRI for automated segmentation of visceral adipose tissue at 3 Tesla.

    Science.gov (United States)

    Fallah, Faezeh; Machann, Jürgen; Martirosian, Petros; Bamberg, Fabian; Schick, Fritz; Yang, Bin

    2017-04-01

    To evaluate and compare conventional T1-weighted 2D turbo spin echo (TSE), T1-weighted 3D volumetric interpolated breath-hold examination (VIBE), and two-point 3D Dixon-VIBE sequences for automatic segmentation of visceral adipose tissue (VAT) volume at 3 Tesla by measuring and compensating for errors arising from intensity nonuniformity (INU) and partial volume effects (PVE). The body trunks of 28 volunteers with body mass index values ranging from 18 to 41.2 kg/m² (30.02 ± 6.63 kg/m²) were scanned at 3 Tesla using three imaging techniques. Automatic methods were applied to reduce INU and PVE and to segment VAT. The automatically segmented VAT volumes obtained from all acquisitions were then statistically and objectively evaluated against the manually segmented (reference) VAT volumes. Comparing the reference volumes with the VAT volumes automatically segmented over the uncorrected images showed that INU led to an average relative volume difference of -59.22 ± 11.59, 2.21 ± 47.04, and -43.05 ± 5.01 % for the TSE, VIBE, and Dixon images, respectively, while PVE led to average differences of -34.85 ± 19.85, -15.13 ± 11.04, and -33.79 ± 20.38 %. After signal correction, differences of -2.72 ± 6.60, 34.02 ± 36.99, and -2.23 ± 7.58 % were obtained between the reference and the automatically segmented volumes. A paired-sample two-tailed t test revealed no significant difference between the reference and automatically segmented VAT volumes of the corrected TSE (p = 0.614) and Dixon (p = 0.969) images, but showed a significant VAT overestimation using the corrected VIBE images. Under similar imaging conditions and spatial resolution, automatically segmented VAT volumes obtained from the corrected TSE and Dixon images agreed with each other and with the reference volumes. These results demonstrate the efficacy of the signal correction methods and the similar accuracy of TSE and Dixon imaging for automatic volumetry of VAT at 3 Tesla.

  8. Pain flare following external beam radiotherapy and meaningful change in pain scores in the treatment of bone metastases

    International Nuclear Information System (INIS)

    Chow, Edward; Ling, Alison; Davis, Lori; Panzarella, Tony; Danjoux, Cyril

    2005-01-01

    Background and purpose: To examine the incidence of pain flare following external beam radiotherapy and to determine what constitutes a meaningful change in pain scores in the treatment of bone metastases. Patients and methods: Patients with bone metastases treated with external beam radiotherapy were asked to score their pain on a scale of 0-10 before the treatment (baseline), daily during the treatment, and for 10 days after completion of external beam radiation. Pain flare was defined as either a two-point increase from baseline on the 0-10 pain scale with no decrease in analgesic intake, or a 25% increase in analgesic intake (measured in daily oral morphine equivalent) with no decrease in pain score. To distinguish pain flare from progression of pain, we required the pain score and analgesic intake to return to baseline levels after the increase/flare. Patients were also asked to indicate if their pain changed during that time compared to the pre-treatment level. The change in pain score was compared with patient perception. Results: Eighty-eight patients were evaluated in this study. There were 49 male and 39 female patients with a median age of 70 years. Twelve of 88 patients (14%) had pain flare on day 1. The overall incidence of pain flare during the study period ranged from 2 to 16%. A total of 797 pain scorings were obtained. Patients perceived an improvement in pain when their self-reported pain score decreased by at least two points. Conclusions: Our study confirms the occurrence of pain flare following external beam radiotherapy in the treatment of bone metastases. Further studies are required to predict who is at risk for flare, so that appropriate measures can be taken to alleviate it. The finding on the meaningful change in pain scores supports the investigator-defined partial response used in some clinical trials.
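
    The flare definition can be written as a predicate on a single day's observations; a minimal sketch with invented argument names (OMED = daily oral morphine equivalent). Note that the trial additionally required pain score and analgesic intake to return to baseline afterwards, which a single-day check cannot capture.

        def is_pain_flare(baseline_pain, pain_today, baseline_omed, omed_today):
            # criterion 1: >=2-point pain rise (0-10 scale) with no decrease in analgesics
            flare_by_pain = (pain_today - baseline_pain) >= 2 and omed_today >= baseline_omed
            # criterion 2: >=25% rise in analgesic intake with no decrease in pain score
            flare_by_analgesia = omed_today >= 1.25 * baseline_omed and pain_today >= baseline_pain
            return flare_by_pain or flare_by_analgesia

        print(is_pain_flare(4, 6, 60, 60))   # True: two-point rise, analgesics unchanged
        print(is_pain_flare(4, 3, 60, 90))   # False: analgesics up but pain decreased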

  9. Linkage between company scores and stock returns

    Directory of Open Access Journals (Sweden)

    Saban Celik

    2017-12-01

    Full Text Available Previous studies on company scores conducted at firm level generally concluded that there exists a positive relation between company scores and stock returns. Motivated by these studies, this study examines the relationship between company scores (corporate governance, economic, environmental, and social scores) and stock returns, both in portfolio-level analysis and in firm-level cross-sectional regressions. In the portfolio-level analysis, stocks are sorted based on each company score and quintile portfolios are formed with different levels of company scores. Then, the existence and significance of the difference in raw returns and risk-adjusted returns between the portfolios with the extreme company scores (portfolio 10 and portfolio 1) is tested. In addition, firm-level cross-sectional regressions are performed to examine the significance of company score effects with control variables. While the portfolio-level analysis indicates that there is no significant relation between company scores and stock returns, the firm-level analysis indicates that economic, environmental, and social scores have effects on stock returns; however, the significance and direction of these effects change depending on the control variables included in the cross-sectional regression.
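
    The portfolio-level test amounts to sorting stocks into score quintiles and comparing the extreme portfolios. A toy sketch on synthetic data; the significance test on the spread is omitted, and all numbers are invented.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 500
        scores = rng.uniform(0, 100, n)                     # e.g. environmental score per stock
        returns = 0.0005 * scores + rng.normal(0, 0.05, n)  # toy cross-section of returns

        # assign quintile 0..4 by the 20/40/60/80% score quantiles
        quintile = np.digitize(scores, np.quantile(scores, [0.2, 0.4, 0.6, 0.8]))
        top = returns[quintile == 4].mean()                 # highest-score portfolio
        bottom = returns[quintile == 0].mean()              # lowest-score portfolio
        print(f"top-minus-bottom mean return: {top - bottom:.4f}")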

  10. Cardiovascular risk scores for coronary atherosclerosis.

    Science.gov (United States)

    Yalcin, Murat; Kardesoglu, Ejder; Aparci, Mustafa; Isilak, Zafer; Uz, Omer; Yiginer, Omer; Ozmen, Namik; Cingozbay, Bekir Yilmaz; Uzun, Mehmet; Cebeci, Bekir Sitki

    2012-10-01

    The objective of this study was to compare frequently used cardiovascular risk scores in predicting the presence of coronary artery disease (CAD) and 3-vessel disease. In 350 consecutive patients (218 men and 132 women) who underwent coronary angiography, the cardiovascular risk level was determined using the Framingham Risk Score (FRS), the Modified Framingham Risk Score (MFRS), the Prospective Cardiovascular Münster (PROCAM) score, and the Systematic Coronary Risk Evaluation (SCORE). The area under the curve for receiver operating characteristic curves showed that the FRS had more predictive value than the other scores for CAD (area under curve, 0.76, P ... The risk scores (FRS, MFRS, PROCAM, and SCORE) may predict the presence and severity of coronary atherosclerosis. The FRS had better predictive value than the other scores.

  11. Interobserver variability of the neurological optimality score

    NARCIS (Netherlands)

    Monincx, W. M.; Smolders-de Haas, H.; Bonsel, G. J.; Zondervan, H. A.

    1999-01-01

    To assess the interobserver reliability of the neurological optimality score. The neurological optimality score of 21 full term healthy, neurologically normal newborn infants was determined by two well trained observers. The interclass correlation coefficient was 0.31. Kappa for optimality (score of

  12. Semiparametric score level fusion: Gaussian copula approach

    NARCIS (Netherlands)

    Susyanyo, N.; Klaassen, C.A.J.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2015-01-01

    Score level fusion is an appealing method for combining multi-algorithm, multi-representation, and multi-modality biometrics due to its simplicity. Often, scores are assumed to be independent, but even for dependent scores, according to the Neyman-Pearson lemma, the likelihood ratio is the

  13. An Objective Fluctuation Score for Parkinson's Disease

    Science.gov (United States)

    Horne, Malcolm K.; McGregor, Sarah; Bergquist, Filip

    2015-01-01

    Introduction: Establishing the presence and severity of fluctuations is important in managing Parkinson's disease, yet there is no reliable, objective means of doing this. In this study we have evaluated a Fluctuation Score derived from variations in dyskinesia and bradykinesia scores produced by an accelerometry-based system. Methods: The Fluctuation Score was produced by summing the interquartile ranges of the bradykinesia and dyskinesia scores produced every 2 minutes between 0900 and 1800 for at least 6 days by the accelerometry-based system and expressing it as an algorithm. Results: This score could distinguish between fluctuating and non-fluctuating patients with high sensitivity and selectivity and was significantly lower following activation of deep brain stimulators. The scores following deep brain stimulation lay in a band just above the score separating fluctuators from non-fluctuators, suggesting a range representing adequate motor control. When compared with control subjects, the scores of newly diagnosed patients show a loss of fluctuation with onset of PD. The score was calculated in subjects whose duration of disease was known, and this showed that newly diagnosed patients soon develop higher scores which either fall under or within the range representing adequate motor control or instead go on to develop more severe fluctuations. Conclusion: The Fluctuation Score described here promises to be a useful tool for identifying patients whose fluctuations are progressing and may require therapeutic changes. It also shows promise as a useful research tool. Further studies are required to more accurately identify therapeutic targets and ranges. PMID:25928634
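
    On a simplified reading of the Methods, the score sums the interquartile ranges of the two 2-minute score series; the sketch below works under that assumption (the published algorithm's exact scaling may differ), and the data are synthetic.

        import numpy as np

        def fluctuation_score(bradykinesia, dyskinesia):
            # sum of the interquartile ranges of the two score series
            iqr = lambda x: np.subtract(*np.percentile(x, [75, 25]))
            return iqr(bradykinesia) + iqr(dyskinesia)

        rng = np.random.default_rng(4)
        bk = rng.normal(25, 8, 270)   # ~270 two-minute epochs between 09:00 and 18:00
        dk = rng.normal(10, 3, 270)
        print(f"Fluctuation Score: {fluctuation_score(bk, dk):.1f}")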

  14. Breaking of scored tablets : a review

    NARCIS (Netherlands)

    van Santen, E; Barends, D M; Frijlink, H W

    The literature was reviewed regarding advantages, problems and performance indicators of score lines. Scored tablets provide dose flexibility, ease of swallowing and may reduce the costs of medication. However, many patients are confronted with scored tablets that are broken unequally and with

  15. Validation of Automated Scoring of Science Assessments

    Science.gov (United States)

    Liu, Ou Lydia; Rios, Joseph A.; Heilman, Michael; Gerard, Libby; Linn, Marcia C.

    2016-01-01

    Constructed response items can both measure the coherence of student ideas and serve as reflective experiences to strengthen instruction. We report on new automated scoring technologies that can reduce the cost and complexity of scoring constructed-response items. This study explored the accuracy of c-rater-ML, an automated scoring engine…

  16. David Lodge's "Small World" and "Paradise News" as Adventure Novels

    OpenAIRE

    Kaušakīte, Inessa

    2007-01-01

    The aim of the master's thesis "David Lodge's 'Small World' and 'Paradise News' as Adventure Novels" is to draw attention to the various characteristics of the adventure genre in the works of this contemporary English novelist. The thesis also reveals the character traits of the hero of the postmodern adventure novel. The first chapter highlights the most important features of the classic adventure novel of 16th-century Spanish literature. This chapter examines the circumstances that helped the genre develop in England. The second chapter turns to the fate of David Lodge, who as...

  17. Assessment of average of normals (AON) procedure for outlier-free datasets including qualitative values below limit of detection (LoD): an application within tumor markers such as CA 15-3, CA 125, and CA 19-9.

    Science.gov (United States)

    Usta, Murat; Aral, Hale; Mete Çilingirtürk, Ahmet; Kural, Alev; Topaç, Ibrahim; Semerci, Tuna; Hicri Köseoğlu, Mehmet

    2016-11-01

    Average of normals (AON) is a quality control procedure that is sensitive only to systematic errors that can occur in an analytical process in which patient test results are used. The aim of this study was to develop an alternative model for applying the AON quality control procedure to datasets that include qualitative values below the limit of detection (LoD). The reported patient test results for tumor markers, such as CA 15-3, CA 125, and CA 19-9, analyzed by two instruments, were retrieved from the information system over a period of 5 months, using calibrator and control materials with the same lot numbers. The median, as a measure of central tendency, and the median absolute deviation (MAD), as a measure of dispersion, were used for the complementary model of the AON quality control procedure. The u_bias values, determined for the bias component of the measurement uncertainty, were partially linked to the percentages of the daily median values of the test results that fall within the control limits. The results for these tumor markers, for which the lower limits of the reference intervals are not medically important for clinical diagnosis and management, showed that the AON quality control procedure, using the MAD around the median, can be applied to datasets including qualitative values below the LoD.
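
    A minimal sketch of such a robust AON chart: daily medians of reported results tracked against limits built from the MAD of historical daily medians, with below-LoD values floored at the LoD. The LoD value, the k multiplier, and the flooring rule are assumptions for illustration, not the paper's exact model.

        import numpy as np

        LOD = 2.0                                       # assay limit of detection (assumed units)

        def daily_median_flag(results, center, mad, k=3.0):
            x = np.where(np.asarray(results) < LOD, LOD, results)  # floor censored values
            med = np.median(x)
            return med, abs(med - center) > k * mad     # flag possible systematic error

        history = np.array([5.1, 4.8, 5.3, 5.0, 4.9, 5.2])  # past daily medians
        center = np.median(history)
        mad = np.median(np.abs(history - center))
        print(daily_median_flag([1.4, 4.2, 5.9, 6.1, 5.0, 4.7], center, mad))  # within limits here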

  18. Oswestry Disability Index scoring made easy.

    Science.gov (United States)

    Mehra, A; Baker, D; Disney, S; Pynsent, P B

    2008-09-01

    Low back pain affects up to 80% of the population at some time during their active life. Questionnaires are available to help measure pain and disability. The Oswestry Disability Index (ODI) is the most commonly used outcome measure for low back pain. The aim of this study was to see if training in completing the ODI forms improved scoring accuracy. The last 100 ODI forms completed in a hospital's spinal clinic were reviewed retrospectively and errors in the scoring were identified. Staff members involved in scoring the questionnaire were made aware of the errors and the correct method of scoring was explained. A chart was created with all possible scores to aid the staff with scoring. A prospective audit on 50 questionnaires was subsequently performed. The retrospective study showed that 33 of the 100 forms had been scored incorrectly. All questionnaires in which one or more sections were not completed by the patient were scored incorrectly. A scoring chart was developed and staff training was implemented. This reduced the error rate to 14% in the prospective audit. Clinicians applying outcome measures should read the appropriate literature to ensure they understand the scoring system. Staff must then be given adequate training in the application of the questionnaires.
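
    For context, the standard ODI arithmetic (well known, though not spelled out in this abstract): each of the 10 sections is scored 0-5, and skipped sections are dropped from the denominator, which is precisely the step the audit found was most often mishandled.

        def oswestry_index(section_scores):
            """section_scores: list of 0-5 integers, None for sections left blank."""
            answered = [s for s in section_scores if s is not None]
            if not answered:
                raise ValueError("no sections completed")
            # percentage disability, denominator scaled to the sections actually answered
            return 100.0 * sum(answered) / (5 * len(answered))

        print(oswestry_index([3, 2, 4, 1, 0, 5, 2, 3, 1, 2]))     # all 10 sections: 46.0
        print(oswestry_index([3, 2, 4, None, 0, 5, 2, 3, 1, 2]))  # one skipped: ~48.9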

  19. Combination of scoring schemes for protein docking

    Directory of Open Access Journals (Sweden)

    Schomburg Dietmar

    2007-08-01

    Full Text Available Abstract Background Docking algorithms are developed to predict in which orientation two proteins are likely to bind under natural conditions. The currently used methods usually consist of a sampling step followed by a scoring step. We developed a weighted geometric correlation based on optimised atom-specific weighting factors and combined it with our previously published amino-acid-specific scoring and with a comprehensive SVM-based scoring function. Results The scoring with the atom-specific weighting factors yields better results than the amino-acid-specific scoring. In combination with SVM-based scoring functions, the percentage of complexes for which a near-native structure can be predicted within the top 100 ranks increased from 14% with the geometric scoring to 54% with the combination of all scoring functions. Especially for the enzyme-inhibitor complexes, the results of the ranking are excellent. For half of these complexes a near-native structure can be predicted within the first 10 proposed structures, and for more than 86% of all enzyme-inhibitor complexes within the first 50 predicted structures. Conclusion We were able to develop a combination of different scoring schemes which considers a series of previously described and some new scoring criteria, yielding a remarkable improvement of prediction quality.

  20. Forecasting the value of credit scoring

    Science.gov (United States)

    Saad, Shakila; Ahmad, Noryati; Jaffar, Maheran Mohd

    2017-08-01

    Nowadays, the credit scoring system plays an important role in the banking sector. This process is important in assessing the creditworthiness of customers requesting credit from banks or other financial institutions. Usually, credit scoring is used when customers send an application for credit facilities. Based on the score from credit scoring, the bank will be able to segregate the "good" clients from the "bad" clients. However, in most cases the score is useful at that specific time only and cannot be used to forecast the creditworthiness of the same applicant after that. Hence, the bank will not know if "good" clients will always be good, or whether "bad" clients may become "good" clients after a certain time. To fill this gap, this study proposes an equation to forecast the credit score of potential borrowers at a certain time, using the historical scores related to the assumption. The Mean Absolute Percentage Error (MAPE) is used to measure the accuracy of the forecast scoring. Results show the forecast scores are highly accurate compared with the actual credit scores.
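
    The accuracy measure named here, MAPE, is standard: the mean of |actual − forecast| / actual, expressed in percent. A minimal sketch with fabricated scores (the study's data are not reproduced):

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    pairs = list(zip(actual, forecast))
    return 100.0 / len(pairs) * sum(abs((a - f) / a) for a, f in pairs)

# Illustrative credit scores: actual vs. forecast at a later review date.
actual = [620, 710, 580, 660, 700]
forecast = [605, 725, 590, 640, 695]
print(f"MAPE = {mape(actual, forecast):.2f}%")  # a small value => accurate forecast
```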

  1. Development of the siriraj clinical asthma score.

    Science.gov (United States)

    Vichyanond, Pakit; Veskitkul, Jittima; Rienmanee, Nuanphong; Pacharn, Punchama; Jirapongsananuruk, Orathai; Visitsunthorn, Nualanong

    2013-09-01

    Acute asthmatic attack in children commonly occurs despite the introduction of effective controllers such as inhaled corticosteroids and leukotriene modifiers. Treatment of an acute asthmatic attack requires proper evaluation of attack severity and appropriate selection of medical therapy. In children, measurement of lung function is difficult during an acute attack, and thus clinical asthma scoring may aid the physician in making further decisions regarding treatment and admission. We enrolled 70 children with acute asthmatic attack with an age range from 1 to 12 years (mean ± SD = 51.5 ± 31.8 months) into the study. Twelve selected asthma severity items were assessed by 2 independent observers prior to administration of salbutamol nebulization (up to 3 doses at 20-minute intervals). The decision for further therapy and admission was made by the emergency department physician. Three different scoring systems were constructed from the items with the best validity. Sensitivity, specificity and accuracy of these scores were assessed. Inter-rater reliability was assessed for each score. A review of previous scoring systems was also conducted and reported. Three severity items had poor validity, i.e., cyanosis, depressed cerebral function, and I:E ratio (p > 0.05). Three items had poor inter-rater reliability, i.e., breath sound quality, air entry, and I:E ratio. These items were omitted and three new clinical scores were constructed from the remaining items. The clinical scoring system comprising retractions, dyspnea, O2 saturation, respiratory rate and wheezing (score range 0-10) gave the best accuracy and inter-rater reliability and was chosen for clinical use: the Siriraj Clinical Asthma Score (SCAS). A clinical asthma score that is simple, relatively easy to administer and with good validity and variability is essential for treatment of acute asthma in children. Several good candidate scores have been introduced in the past. We described the development of the Siriraj Clinical Asthma Score (SCAS) in
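
    Under the assumption — consistent with the stated 0-10 range over five items — that each SCAS item is graded 0-2, a minimal scoring sketch might look like this; the per-item grading scheme is an assumption, not detail taken from the study:

```python
# Each of the five SCAS items is assumed here to be graded 0-2
# (five items, total range 0-10, as the abstract states).
SCAS_ITEMS = ("retractions", "dyspnea", "o2_saturation",
              "respiratory_rate", "wheezing")

def scas_total(grades):
    """Sum the five item grades after range-checking each against 0-2."""
    for item in SCAS_ITEMS:
        if not 0 <= grades[item] <= 2:
            raise ValueError(f"{item} must be graded 0-2")
    return sum(grades[item] for item in SCAS_ITEMS)

print(scas_total({"retractions": 1, "dyspnea": 2, "o2_saturation": 1,
                  "respiratory_rate": 1, "wheezing": 2}))  # 7
```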

  2. A diagnostic scoring system for myxedema coma.

    Science.gov (United States)

    Popoveniuc, Geanina; Chandra, Tanu; Sud, Anchal; Sharma, Meeta; Blackman, Marc R; Burman, Kenneth D; Mete, Mihriye; Desale, Sameer; Wartofsky, Leonard

    2014-08-01

    To develop diagnostic criteria for myxedema coma (MC), a decompensated state of extreme hypothyroidism with a high mortality rate if untreated, in order to facilitate its early recognition and treatment. The frequencies of characteristics associated with MC were assessed retrospectively in patients from our institutions in order to derive a semiquantitative diagnostic point scale that was further applied to selected patients whose data were retrieved from the literature. Logistic regression analysis was used to test the predictive power of the score. Receiver operating characteristic (ROC) curve analysis was performed to test the discriminative power of the score. Of the 21 patients examined, 7 were reclassified as not having MC (non-MC), and they were used as controls. The scoring system included a composite of alterations of thermoregulatory, central nervous, cardiovascular, gastrointestinal, and metabolic systems, and the presence or absence of a precipitating event. All 14 of our MC patients had a score of ≥60, whereas 6 of 7 non-MC patients had scores of 25 to 50. A total of 16 of 22 MC patients whose data were retrieved from the literature had a score ≥60, and 6 of these 22 patients scored between 45 and 55. The odds ratio per unit increase in the score, treated as a continuum, was 1.09 (95% confidence interval [CI], 1.01 to 1.16; P = .019); a score of 60 identified coma with an odds ratio of 1.22. The area under the ROC curve was 0.88 (95% CI, 0.65 to 1.00), and a score of 60 had 100% sensitivity and 85.71% specificity. A score ≥60 in the proposed scoring system is potentially diagnostic for MC, whereas scores between 45 and 59 could classify patients at risk for MC.

  3. A Comparison of Two Scoring Methods for an Automated Speech Scoring System

    Science.gov (United States)

    Xi, Xiaoming; Higgins, Derrick; Zechner, Klaus; Williamson, David

    2012-01-01

    This paper compares two alternative scoring methods--multiple regression and classification trees--for an automated speech scoring system used in a practice environment. The two methods were evaluated on two criteria: construct representation and empirical performance in predicting human scores. The empirical performance of the two scoring models…
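
    Although the record is truncated, the comparison it describes is straightforward to mock up: fit both model families to feature/score pairs and compare their agreement with human scores. The features and data below are fabricated; the paper's features and models are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 4))  # e.g., hypothetical fluency/pronunciation features
human = X @ np.array([1.0, 0.5, 0.3, 0.2]) + rng.normal(0, 0.5, 300)

# Train on the first 200 responses, evaluate on the remaining 100.
reg = LinearRegression().fit(X[:200], human[:200])
tree = DecisionTreeRegressor(max_depth=4).fit(X[:200], human[:200])

for name, model in (("multiple regression", reg), ("classification tree", tree)):
    r = np.corrcoef(model.predict(X[200:]), human[200:])[0, 1]
    print(f"{name}: correlation with human scores = {r:.3f}")
```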

  4. [The diagnostic scores for deep venous thrombosis].

    Science.gov (United States)

    Junod, A

    2015-08-26

    Seven diagnostic scores for deep venous thrombosis (DVT) of the lower limbs are analyzed and compared. Two features make this exercise difficult: the problem of distal DVT and of its proximal extension, and the status of patients, whether out- or in-patients. The most popular score is the Wells score (1997), modified in 2003. It includes one subjective element based on clinical judgment. The Primary Care score (2005), less known, has similar properties but uses only objective data. The present trend is to combine clinical scores with D-dimer testing to rule out, with good sensitivity, the probability of DVT. For upper limb DVT, the Constans score (2008) is available, which can also be coupled with D-dimer testing (Kleinjan).
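
    For concreteness, here are the 2003 modified Wells items as they are commonly tabulated (one point each, minus two when an alternative diagnosis is at least as likely); treat the item wording and the ≥2 "DVT likely" threshold as assumptions to verify against the original papers:

```python
# Commonly cited form of the 2003 modified Wells items for lower-limb DVT.
WELLS_ITEMS = [
    "active cancer",
    "paralysis, paresis, or recent limb immobilization",
    "recently bedridden >3 days or major surgery within 12 weeks",
    "localized tenderness along the deep venous system",
    "entire leg swollen",
    "calf swelling >3 cm vs. the asymptomatic side",
    "pitting edema confined to the symptomatic leg",
    "collateral superficial (non-varicose) veins",
    "previously documented DVT",
]

def wells_score(findings, alternative_dx_likely):
    """One point per positive item; -2 if an alternative diagnosis is likely."""
    score = sum(1 for item in WELLS_ITEMS if findings.get(item, False))
    return score - 2 if alternative_dx_likely else score

s = wells_score({"entire leg swollen": True,
                 "pitting edema confined to the symptomatic leg": True}, False)
print(s, "-> DVT likely" if s >= 2 else "-> DVT unlikely")
```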

  5. Scoring an Abstract Contemporary Silent Film

    OpenAIRE

    Frost, Crystal

    2014-01-01

    I composed an original digital audio film score with full sound design for a contemporary silent film called Apple Tree. The film is highly conceptual and interpretive and required a very involved, intricate score to successfully tell the story. In the process of scoring this film, I learned new ways to convey an array of contrasting emotions through music and sound. After analyzing the film's emotional journey, I determined that six defining emotions were the foundation on which to build an ...

  6. The FAt Spondyloarthritis Spine Score (FASSS)

    DEFF Research Database (Denmark)

    Pedersen, Susanne Juhl; Zhao, Zheng; Lambert, Robert Gw

    2013-01-01

    Studies have shown that fat lesions follow resolution of inflammation in the spine of patients with axial spondyloarthritis (SpA). Fat lesions at vertebral corners have also been shown to predict development of new syndesmophytes. Therefore, scoring of fat lesions in the spine may constitute both an important measure of treatment efficacy as well as a surrogate marker for new bone formation. The aim of this study was to develop and validate a new scoring method for fat lesions in the spine, the Fat SpA Spine Score (FASSS), which in contrast to the existing scoring method addresses the localization......

  7. Recognition Using Classification and Segmentation Scoring

    National Research Council Canada - National Science Library

    Kimball, Owen; Ostendorf, Mari; Rohlicek, Robin

    1992-01-01

    .... We describe an approach to connected word recognition that allows the use of segmental information through an explicit decomposition of the recognition criterion into classification and segmentation scoring...

  8. Equating error in observed-score equating

    NARCIS (Netherlands)

    van der Linden, Willem J.

    2006-01-01

    Traditionally, error in equating observed scores on two versions of a test is defined as the difference between the transformations that equate the quantiles of their distributions in the sample and population of test takers. But it is argued that if the goal of equating is to adjust the scores of

  9. Correlating continuous assessment scores to junior secondary ...

    African Journals Online (AJOL)

    This study investigated the relationship between continuous assessment scores and junior secondary school certificate examination(JSCE) final scores in Imo State. A sample of four hundred students were purposively selected from thirty eight thousand students who took the 1997 JSCE in Imo State. The data used were ...

  10. Summary of Score Changes (in other Tests).

    Science.gov (United States)

    Cleary, T. Anne; McCandless, Sam A.

    Scholastic Aptitude Test (SAT) scores have declined during the last 14 years. Similar score declines have been observed in many different testing programs, many groups, and tested areas. The declines, while not large in any given year, have been consistent over time, area, and group. The period around 1965 is critical for the interpretation of…

  11. More Issues in Observed-Score Equating

    Science.gov (United States)

    van der Linden, Wim J.

    2013-01-01

    This article is a response to the commentaries on the position paper on observed-score equating by van der Linden (this issue). The response focuses on the more general issues in these commentaries, such as the nature of the observed scores that are equated, the importance of test-theory assumptions in equating, the necessity to use multiple…

  12. Clinical scoring scales in thyroidology: A compendium

    Directory of Open Access Journals (Sweden)

    Sanjay Kalra

    2011-01-01

    Full Text Available This compendium brings together traditional as well as contemporary scoring and grading systems used for the screening and diagnosis of various thyroid diseases, dysfunctions, and complications. The article discusses scores used to help diagnose hypo- and hyperthyroidism, to grade and manage goiter and ophthalmopathy, and to assess the risk of thyroid malignancy.

  13. Semiparametric Copula Models for Biometric Score Level

    NARCIS (Netherlands)

    Caselli, M.

    2016-01-01

    In biometric recognition systems, biometric samples (images of faces, fingerprints, voices, gaits, etc.) of people are compared and classifiers (matchers) indicate the level of similarity between any pair of samples by a score. If two samples of the same person are compared, a genuine score is

  14. Intelligence Score Profiles of Female Juvenile Offenders

    Science.gov (United States)

    Werner, Shelby Spare; Hart, Kathleen J.; Ficke, Susan L.

    2016-01-01

    Previous studies have found that male juvenile offenders typically obtain low scores on measures of intelligence, often with a pattern of higher scores on measures of nonverbal relative to verbal tasks. The research on the intelligence performance of female juvenile offenders is limited. This study explored the Wechsler Intelligence Scale for…

  15. [The use of scores in general medicine].

    Science.gov (United States)

    Huber, Ursula; Rösli, Andreas; Ballmer, Peter E; Rippin, Sarah Jane

    2013-10-01

    Scores are tools to combine complex information into a numerical value. In General Medicine, there are scores to assist in making diagnoses and prognoses, scores to assist therapeutic decision making and to evaluate therapeutic results, and scores to help physicians when informing and advising patients. We review six of the scoring systems that have the greatest utility for the General Physician in hospital-based care and in General Practice. The Nutritional Risk Screening (NRS 2002) tool is designed to identify hospital patients in danger of malnutrition. The aim is to improve the nutritional status of these patients. The CURB-65 score predicts 30-day mortality in patients with community acquired pneumonia. Patients with a low score can be considered for home treatment, patients with an elevated score require hospitalisation, and those with a high score should be treated as having severe pneumonia; treatment in the intensive care unit should be considered. The IAS-AGLA score of the Working Group on Lipids and Atherosclerosis of the Swiss Society of Cardiology calculates the 10-year risk of a myocardial infarction for people living in Switzerland. The working group makes recommendations for preventative treatment according to the calculated risk status. The Body Mass Index, calculated by dividing body weight in kilograms by height in meters squared and then banded into weight categories, is used to classify people as underweight, of normal weight, overweight, or obese. The prognostic value of this classification is discussed. The Mini-Mental State Examination allows the physician to assess important cognitive functions in a simple and standardised form. The Glasgow Coma Scale is used to classify the level of consciousness in patients with head injury. It can be used for triage and correlates with prognosis.
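
    The BMI arithmetic described above is easy to make concrete; the adult cut-offs used below are the standard WHO ones:

```python
def bmi(weight_kg, height_m):
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def who_category(b):
    # Standard WHO adult cut-offs: <18.5 underweight, 18.5-24.9 normal,
    # 25-29.9 overweight, >=30 obese.
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal weight"
    if b < 30:
        return "overweight"
    return "obese"

b = bmi(82.0, 1.75)
print(f"BMI {b:.1f}: {who_category(b)}")  # BMI 26.8: overweight
```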

  16. THE EFFICIENCY OF TENNIS DOUBLES SCORING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Geoff Pollard

    2010-09-01

    Full Text Available In this paper, a family of scoring systems for tennis doubles is established for testing the hypothesis that pair A is better than pair B versus the alternative hypothesis that pair B is better than pair A. This family of scoring systems can be used as a benchmark against which the efficiency of any doubles scoring system can be assessed. Thus, the formula for the efficiency of any doubles scoring system is derived. As in tennis singles, one scoring system based on the play-the-loser structure is shown to be more efficient than the benchmark systems. An expression for the relative efficiency of two doubles scoring systems is derived. Thus, the relative efficiency of the various scoring systems presently used in doubles can be assessed. The methods of this paper can be extended to a match between two teams of 2, 4, 8, … doubles pairs, so that it is possible to establish a measure for the relative efficiency of the various systems used for tennis contests between teams of players.

  17. A comparison between modified Alvarado score and RIPASA score in the diagnosis of acute appendicitis.

    Science.gov (United States)

    Singla, Anand; Singla, Satpaul; Singh, Mohinder; Singla, Deeksha

    2016-12-01

    Acute appendicitis is a common but elusive surgical condition and remains a diagnostic dilemma. It has many clinical mimickers and diagnosis is primarily made on clinical grounds, leading to the evolution of clinical scoring systems for pinpointing the right diagnosis. The modified Alvarado and RIPASA scoring systems are two important scoring systems for the diagnosis of acute appendicitis. We prospectively compared the two scoring systems for diagnosing acute appendicitis in 50 patients presenting with right iliac fossa pain. The RIPASA score correctly classified 88% of patients with histologically confirmed acute appendicitis compared with 48.0% for the modified Alvarado score, indicating that the RIPASA score is superior to the modified Alvarado score in our clinical setting.

  18. Facilitating the Interpretation of English Language Proficiency Scores: Combining Scale Anchoring and Test Score Mapping Methodologies

    Science.gov (United States)

    Powers, Donald; Schedl, Mary; Papageorgiou, Spiros

    2017-01-01

    The aim of this study was to develop, for the benefit of both test takers and test score users, enhanced "TOEFL ITP"® test score reports that go beyond the simple numerical scores that are currently reported. To do so, we applied traditional scale anchoring (proficiency scaling) to item difficulty data in order to develop performance…

  19. Surgical Apgar Score Predicts Post- Laparatomy Complications

    African Journals Online (AJOL)

    calculated Surgical Apgar Scores for 152 patients during a 6-month study ... major postoperative complications and/or death within 30 days of ... respond to and control hemodynamic changes during a ... abdominal injury (18.42%). Intestinal ...
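
    The record is fragmentary, but the Surgical Apgar Score itself is conventionally computed from estimated blood loss, lowest intraoperative mean arterial pressure, and lowest heart rate. A sketch using the commonly published cut-offs, which should be checked against the original publication before any clinical use:

```python
def surgical_apgar(ebl_ml, lowest_map_mmhg, lowest_hr_bpm):
    # Estimated blood loss (mL): >1000 -> 0, 601-1000 -> 1, 101-600 -> 2, <=100 -> 3
    ebl_pts = 0 if ebl_ml > 1000 else (1 if ebl_ml > 600 else (2 if ebl_ml > 100 else 3))
    # Lowest mean arterial pressure (mmHg): <40 -> 0, 40-54 -> 1, 55-69 -> 2, >=70 -> 3
    map_pts = 0 if lowest_map_mmhg < 40 else (1 if lowest_map_mmhg < 55 else
              (2 if lowest_map_mmhg < 70 else 3))
    # Lowest heart rate (bpm): >85 -> 0, 76-85 -> 1, 66-75 -> 2, 56-65 -> 3, <=55 -> 4
    hr_pts = (0 if lowest_hr_bpm > 85 else
              1 if lowest_hr_bpm > 75 else
              2 if lowest_hr_bpm > 65 else
              3 if lowest_hr_bpm > 55 else 4)
    return ebl_pts + map_pts + hr_pts  # 0 (worst) to 10 (best)

print(surgical_apgar(ebl_ml=250, lowest_map_mmhg=62, lowest_hr_bpm=70))  # 6
```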

  20. Budget Scoring: An Impediment to Alternative Financing

    National Research Council Canada - National Science Library

    Summers, Donald E; San Miguel, Joseph G

    2007-01-01

    .... One of the major impediments to using alternative forms of procurement financing for acquiring defense capabilities is in the budgetary treatment, or scoring, of these initiatives by the Congressional Budget Office (CBO...

  1. Film scoring today - Theory, practice and analysis

    OpenAIRE

    Flach, Paula Sophie

    2012-01-01

    This thesis considers film scoring by taking a closer look at the theoretical discourse throughout the last decades, examining current production practice of film music and showcasing a musical analysis of the film Inception (2010).

  2. Climiate Resilience Screening Index and Domain Scores

    Data.gov (United States)

    U.S. Environmental Protection Agency — CRSI and related-domain scores for all 50 states and 3135 counties in the U.S. This dataset is not publicly accessible because: They are already available within the...

  3. Model for predicting the injury severity score.

    Science.gov (United States)

    Hagiwara, Shuichi; Oshima, Kiyohiro; Murata, Masato; Kaneko, Minoru; Aoki, Makoto; Kanbe, Masahiko; Nakamura, Takuro; Ohyama, Yoshio; Tamura, Jun'ichi

    2015-07-01

    To determine the formula that predicts the injury severity score from parameters that are obtained in the emergency department at arrival. We reviewed the medical records of trauma patients who were transferred to the emergency department of Gunma University Hospital between January 2010 and December 2010. The injury severity score, age, mean blood pressure, heart rate, Glasgow coma scale, hemoglobin, hematocrit, red blood cell count, platelet count, fibrinogen, international normalized ratio of prothrombin time, activated partial thromboplastin time, and fibrin degradation products were examined in those patients on arrival. To determine the formula that predicts the injury severity score, multiple linear regression analysis was carried out. The injury severity score was set as the dependent variable, and the other parameters were set as candidate objective variables. IBM SPSS Statistics 20 was used for the statistical analysis. Statistical significance was set at P < 0.05. The Durbin-Watson ratio was 2.200. A formula for predicting the injury severity score in trauma patients was developed with ordinary parameters such as fibrin degradation products and mean blood pressure. This formula is useful because we can predict the injury severity score easily in the emergency department.
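
    The modeling step — ordinary least squares with the injury severity score as the dependent variable — can be sketched as below. The synthetic data and the two predictors shown are purely illustrative; the paper's actual coefficients are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 120
fdp = rng.exponential(30, n)   # fibrin degradation products (toy values)
mbp = rng.normal(85, 15, n)    # mean blood pressure, mmHg (toy values)
# Synthetic ISS generated from an assumed linear relationship plus noise.
iss = 5 + 0.3 * fdp - 0.1 * (mbp - 85) + rng.normal(0, 4, n)

X = np.column_stack([fdp, mbp])
model = LinearRegression().fit(X, iss)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("predicted ISS for FDP=60, MBP=70:", model.predict([[60.0, 70.0]]))
```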

  4. Technology Performance Level (TPL) Scoring Tool

    Energy Technology Data Exchange (ETDEWEB)

    Weber, Jochem [National Renewable Energy Lab. (NREL), Golden, CO (United States); Roberts, Jesse D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Costello, Ronan [Wave Venture, Penstraze (United Kingdom); Bull, Diana L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Babarit, Aurelien [Ecole Centrale de Nantes (France). Lab. of Research in Hydrodynamics, Energetics, and Atmospheric Environment (LHEEA); Neilson, Kim [Ramboll, Copenhagen (Denmark); Bittencourt, Claudio [DNV GL, London (United Kingdom); Kennedy, Ben [Wave Venture, Penstraze (United Kingdom)

    2016-09-01

    Three different ways of combining scores are used in the revised formulation: arithmetic mean, geometric mean, and multiplication with normalisation. The arithmetic mean is used when combining scores that measure similar attributes, e.g., costs. It has the property of behaving like a logical OR: when combining costs, it does not matter what the individual costs are, only what the combined cost is. The geometric mean and multiplication are used when combining scores that measure disparate attributes. Multiplication is similar to a logical AND and is used to combine 'must haves.' As a result, this method is more punitive than the geometric mean; to get a good score in the combined result it is necessary to have a good score in ALL of the inputs, e.g., the different types of survivability are 'must haves.' On balance, the revised TPL is probably less punitive than the previous spreadsheet, because multiplication is used sparingly as a method of combining scores. This is in line with the feedback of the Wave Energy Prize judges.
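
    A small sketch contrasting the three combination methods; the normalisation scheme (dividing by an assumed maximum score) and the example scores are illustrative assumptions, not the tool's actual parameters:

```python
import math

def arithmetic_mean(scores):
    """OR-like: strengths offset weaknesses."""
    return sum(scores) / len(scores)

def geometric_mean(scores):
    """Penalizes any low score, but gently."""
    return math.prod(scores) ** (1 / len(scores))

def normalized_product(scores, smax=9):
    """AND-like: one bad 'must have' sinks the combined result.
    Normalisation by an assumed maximum score smax keeps the result on scale."""
    return math.prod(s / smax for s in scores) * smax

costs = [7, 8, 6]        # similar attributes -> arithmetic mean
must_haves = [8, 9, 3]   # disparate 'must haves' -> geometric mean / product
print(f"arithmetic mean     : {arithmetic_mean(costs):.2f}")
print(f"geometric mean      : {geometric_mean(must_haves):.2f}")   # 6.00
print(f"normalized product  : {normalized_product(must_haves):.2f}")  # 2.67, far harsher
```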

  5. GalaxyDock BP2 score: a hybrid scoring function for accurate protein-ligand docking

    Science.gov (United States)

    Baek, Minkyung; Shin, Woong-Hee; Chung, Hwan Won; Seok, Chaok

    2017-07-01

    Protein-ligand docking is a useful tool for providing atomic-level understanding of protein functions in nature and design principles for artificial ligands or proteins with desired properties. The ability to identify the true binding pose of a ligand to a target protein among numerous possible candidate poses is an essential requirement for successful protein-ligand docking. Many previously developed docking scoring functions were trained to reproduce experimental binding affinities and were also used for scoring binding poses. However, in this study, we developed a new docking scoring function, called GalaxyDock BP2 Score, by directly training the scoring power of binding poses. This function is a hybrid of physics-based, empirical, and knowledge-based score terms that are balanced to strengthen the advantages of each component. The performance of the new scoring function exhibits significant improvement over existing scoring functions in decoy pose discrimination tests. In addition, when the score is used with the GalaxyDock2 protein-ligand docking program, it outperformed other state-of-the-art docking programs in docking tests on the Astex diverse set, the Cross2009 benchmark set, and the Astex non-native set. GalaxyDock BP2 Score and GalaxyDock2 with this score are freely available at http://galaxy.seoklab.org/softwares/galaxydock.html.

  6. MODIFIED ALVARADO SCORING IN ACUTE APPENDICITIS

    Directory of Open Access Journals (Sweden)

    Varadarajan Sujath

    2016-12-01

    Full Text Available BACKGROUND Acute appendicitis is one of the most common surgical emergencies, with a lifetime presentation of approximately 1 in 7. Its incidence is 1.5-1.9/1000 in males and females. Surgery for acute appendicitis is based on history, clinical examination and laboratory investigations (e.g., WBC count). Imaging techniques add very little to the efficacy of the diagnosis of appendicitis. A negative appendicectomy rate of 20-40% has been reported in the literature. Difficulty in diagnosis is experienced in very young patients and females of reproductive age. The diagnostic accuracy in assessing acute appendicitis has not improved in spite of rapid advances in management. MATERIALS AND METHODS The modified Alvarado score was applied and assessed for its accuracy in the preoperative diagnosis of acute appendicitis in 50 patients. The aim of our study is to understand the various presentations of acute appendicitis, including the age and gender incidence, the application of the modified Alvarado scoring system in our hospital setup, and the assessment of the efficacy of the score. RESULTS Our study shows that the most involved age group is the 3rd decade, with male preponderance. On application of the Alvarado score, nausea and vomiting were present in 50%, anorexia in 30%, and leucocytosis in 75% of cases. Sensitivity and specificity in our study were 65% and 40%, respectively, with a positive predictive value of 85% and a negative predictive value of 15%. CONCLUSION This study showed that a clinical score like the modified Alvarado score can be a cheap and quick tool to apply in emergency departments to rule out acute appendicitis. The implementation of the modified Alvarado score is simple and cost-effective.
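
    For concreteness, here is the modified Alvarado score as it is commonly tabulated (total 9; the original Alvarado adds a point for neutrophil left shift). Treat the item weights as assumptions to check against the study itself:

```python
# Commonly tabulated modified Alvarado items and weights (total 9).
MODIFIED_ALVARADO = {
    "migration of pain to right iliac fossa": 1,
    "anorexia": 1,
    "nausea or vomiting": 1,
    "tenderness in right iliac fossa": 2,
    "rebound tenderness": 1,
    "elevated temperature": 1,
    "leukocytosis": 2,
}

def modified_alvarado(findings):
    """Sum the weights of all positive findings."""
    return sum(pts for item, pts in MODIFIED_ALVARADO.items() if findings.get(item))

score = modified_alvarado({"anorexia": True, "nausea or vomiting": True,
                           "tenderness in right iliac fossa": True,
                           "leukocytosis": True})
print(score)  # 6 -- often read as 'probable appendicitis' in the 5-6 band
```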

  7. Heart valve surgery: EuroSCORE vs. EuroSCORE II vs. Society of Thoracic Surgeons score

    Directory of Open Access Journals (Sweden)

    Muhammad Sharoz Rabbani

    2014-12-01

    Full Text Available Background This is a validation study comparing the European System for Cardiac Operative Risk Evaluation (EuroSCORE II) with the previous additive (AES) and logistic EuroSCORE (LES) and the Society of Thoracic Surgeons' (STS) risk prediction algorithm, for patients undergoing valve replacement with or without bypass in Pakistan. Patients and Methods Clinical data of 576 patients undergoing valve replacement surgery between 2006 and 2013 were retrospectively collected and individual expected risks of death were calculated by all four risk prediction algorithms. Performance of these risk algorithms was evaluated in terms of discrimination and calibration. Results There were 28 deaths (4.8%) among 576 patients, which was lower than the predicted mortality of 5.16%, 6.96% and 4.94% by AES, LES and EuroSCORE II but was higher than the 2.13% predicted by the STS scoring system. For single and double valve replacement procedures, EuroSCORE II was the best predictor of mortality, with the highest Hosmer-Lemeshow (H-L) test p values (0.346 to 0.689) and areas under the receiver operating characteristic (ROC) curve (0.637 to 0.898). For valve plus concomitant coronary artery bypass grafting (CABG) patients, actual mortality was 1.88%. The STS calculator came out to be the best predictor of mortality for this subgroup, with H-L p values (0.480 to 0.884) and ROC areas (0.657 to 0.775). Conclusions For the Pakistani population, EuroSCORE II is an accurate predictor of individual operative risk in patients undergoing isolated valve surgery, whereas STS performs better in the valve plus CABG group.

  8. WebScore: An Effective Page Scoring Approach for Uncertain Web Social Networks

    Directory of Open Access Journals (Sweden)

    Shaojie Qiao

    2011-10-01

    Full Text Available To effectively score pages with uncertainty in web social networks, we first proposed a new concept called the transition probability matrix and formally defined the uncertainty in web social networks. Second, we proposed a hybrid page scoring algorithm, called WebScore, based on the PageRank algorithm and three centrality measures: degree, betweenness, and closeness. In particular, WebScore takes full account of the uncertainty of web social networks by computing the transition probability from one page to another. The basic idea of WebScore is to: (1) integrate uncertainty into PageRank in order to accurately rank pages, and (2) apply the centrality measures to calculate the importance of pages in web social networks. In order to verify the performance of WebScore, we developed a web social network analysis system which can partition web pages into distinct groups and score them in an effective fashion. Finally, we conducted extensive experiments on real data, and the results show that WebScore is effective at scoring uncertain pages, and in less time than PageRank and centrality-measure-based page scoring algorithms.
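
    A hedged sketch of the basic idea — blending PageRank with the three centrality measures — using networkx. The equal weights below are illustrative, and the paper's uncertainty-aware transition probabilities are not reproduced:

```python
import networkx as nx

# A toy directed web graph; edges represent links between pages.
G = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "a"), ("a", "c"), ("d", "c")])

pr = nx.pagerank(G)
deg = nx.degree_centrality(G)
btw = nx.betweenness_centrality(G)
clo = nx.closeness_centrality(G)

# Equal-weight blend of the four signals (weights are an assumption).
web_score = {v: 0.25 * (pr[v] + deg[v] + btw[v] + clo[v]) for v in G}
for page, s in sorted(web_score.items(), key=lambda kv: -kv[1]):
    print(page, round(s, 3))
```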

  9. Gambling scores for earthquake predictions and forecasts

    Science.gov (United States)

    Zhuang, Jiancang

    2010-04-01

    This paper presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some points of his reputation. The reference model, which plays the role of the house, determines how many reputation points the forecaster can gain if he succeeds, according to a fair rule, and also takes away the reputation points bet by the forecaster if he loses. This method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model or the ETAS model and when the reference model is the Poisson model.
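
    One natural reading of the "fair rule": a bet of r points on an event the reference model assigns probability p pays r(1 − p)/p on success, so the expected gain under the reference model is zero and riskier calls earn more. A minimal sketch under that reading (not necessarily the paper's exact payoff):

```python
def gambling_score_update(reputation, bet, p_ref, success):
    """
    Fair-odds update: betting `bet` points on an event the reference
    (house) model assigns probability p_ref, a win pays
    bet * (1 - p_ref) / p_ref -- zero expected gain under the reference
    model -- while a loss forfeits the bet.
    """
    if success:
        return reputation + bet * (1 - p_ref) / p_ref
    return reputation - bet

rep = 100.0
rep = gambling_score_update(rep, bet=10, p_ref=0.05, success=True)   # risky call pays well
print(rep)  # 290.0
rep = gambling_score_update(rep, bet=10, p_ref=0.60, success=False)  # easy call, lost anyway
print(rep)  # 280.0
```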

  10. Quality scores for 32,000 genomes

    DEFF Research Database (Denmark)

    Land, Miriam L.; Hyatt, Doug; Jun, Se-Ran

    2014-01-01

    Background More than 80% of the microbial genomes in GenBank are of ‘draft’ quality (12,553 draft vs. 2,679 finished, as of October, 2013). We have examined all the microbial DNA sequences available for complete, draft, and Sequence Read Archive genomes in GenBank as well as three other major...... public databases, and assigned quality scores for more than 30,000 prokaryotic genome sequences. Results Scores were assigned using four categories: the completeness of the assembly, the presence of full-length rRNA genes, tRNA composition and the presence of a set of 102 conserved genes in prokaryotes....... Most (~88%) of the genomes had quality scores of 0.8 or better and can be safely used for standard comparative genomics analysis. We compared genomes across factors that may influence the score. We found that although sequencing depth coverage of over 100x did not ensure a better score, sequencing read...

  11. An ultrasound score for knee osteoarthritis

    DEFF Research Database (Denmark)

    Riecke, B F; Christensen, R.; Torp-Pedersen, S

    2014-01-01

    OBJECTIVE: To develop standardized musculoskeletal ultrasound (MUS) procedures and scoring for detecting knee osteoarthritis (OA) and test the MUS score's ability to discern various degrees of knee OA, in comparison with plain radiography and the 'Knee injury and Osteoarthritis Outcome Score' (KOOS......) domains as comparators. METHOD: A cross-sectional study of MUS examinations in 45 patients with knee OA. Validity, reliability, and reproducibility were evaluated. RESULTS: MUS examination for knee OA consists of five separate domains assessing (1) predominantly morphological changes in the medial...... coefficients ranging from 0.75 to 0.97 for the five domains. Construct validity was confirmed with statistically significant correlation coefficients (0.47-0.81, P < 0.05) ... knee OA. In comparison with standing radiographs...

  12. Assigning Numerical Scores to Linguistic Expressions

    Directory of Open Access Journals (Sweden)

    María Jesús Campión

    2017-07-01

    Full Text Available In this paper, we study different methods of scoring linguistic expressions defined on a finite set, in the search for a linear order that ranks all those possible expressions. Among them, particular attention is paid to the canonical extension, and its representability through distances in a graph plus some suitable penalization of imprecision. The relationship between this setting and the classical problems of numerical representability of orderings, as well as extension of orderings from a set to a superset is also explored. Finally, aggregation procedures of qualitative rankings and scorings are also analyzed.

  13. What Do Test Scores Really Mean? A Latent Class Analysis of Danish Test Score Performance

    DEFF Research Database (Denmark)

    Munk, Martin D.; McIntosh, James

    2014-01-01

    Latent class Poisson count models are used to analyze a sample of Danish test score results from a cohort of individuals born in 1954-55, tested in 1968, and followed until 2011. The procedure takes account of unobservable effects as well as excessive zeros in the data. We show that the test scores...... of intelligence explain a significant proportion of the variation in test scores. This adds to the complexity of interpreting test scores and suggests that school culture and possible incentive problems make it more difficult to understand what the tests measure....

  14. NCACO-score: An effective main-chain dependent scoring function for structure modeling

    Directory of Open Access Journals (Sweden)

    Dong Xiaoxi

    2011-05-01

    Full Text Available Abstract Background Development of effective scoring functions is a critical component to the success of protein structure modeling. Previously, many efforts have been dedicated to the development of scoring functions. Despite these efforts, development of an effective scoring function that can achieve both good accuracy and fast speed still presents a grand challenge. Results Based on a coarse-grained representation of a protein structure by using only four main-chain atoms: N, Cα, C and O, we develop a knowledge-based scoring function, called NCACO-score, that integrates different structural information to rapidly model protein structure from sequence. In testing on the Decoys'R'Us sets, we found that NCACO-score can effectively recognize native conformers from their decoys. Furthermore, we demonstrate that NCACO-score can effectively guide fragment assembly for protein structure prediction, which has achieved a good performance in building the structure models for hard targets from CASP8 in terms of both accuracy and speed. Conclusions Although NCACO-score is developed based on a coarse-grained model, it is able to discriminate native conformers from decoy conformers with high accuracy. NCACO is a very effective scoring function for structure modeling.

  15. Algorithm improvement program nuclide identification algorithm scoring criteria and scoring application.

    Energy Technology Data Exchange (ETDEWEB)

    Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  16. Direct approach for solving nonlinear evolution and two-point ...

    Indian Academy of Sciences (India)

    2013-12-01

    Dec 1, 2013 ... 1School of Mathematics and Applied Statistics, University of Wollongong, Wollongong,. NSW 2522 ... the nonlinear phenomena as well as their further applications in the real-life situations, it is ... concentration gradient. Thus ...

  17. Direct approach for solving nonlinear evolution and two-point

    Indian Academy of Sciences (India)

    Time-delayed nonlinear evolution equations and boundary value problems have a wide range of applications in science and engineering. In this paper, we implement the differential transform method to solve the nonlinear delay differential equation and boundary value problems. Also, we present some numerical examples ...

  18. A two-point correlation function for Galactic halo stars

    NARCIS (Netherlands)

    Cooper, A. P.; Cole, S.; Frenk, C. S.; Helmi, A.

    2011-01-01

    We describe a correlation function statistic that quantifies the amount of spatial and kinematic substructure in the stellar halo. We test this statistic using model stellar halo realizations constructed from the Aquarius suite of six high-resolution cosmological N-body simulations, in combination

  19. PHOTOJOURNALISM AND PROXIMITY IMAGES: two points of view, two professions?

    Directory of Open Access Journals (Sweden)

    Daniel Thierry

    2011-06-01

    Full Text Available For many decades, classic photojournalistic practice, firmly anchored in a creed established since Lewis Hine (1874-1940), has developed a praxis and a doxa that have barely been affected by the transformations in the various types of journalism. From the search for the “right image”, which would be totally transparent by striving to refute its enunciative features from a perspective of maximum objectivity, to the most seductive photography at supermarkets by photo agencies, the range of images seems to be decidedly framed. However, far from constituting high-powered reporting or excellent photography that is rewarded with numerous international prizes and invitations to the media-artistic world, local press photography remains in the shadows. How does one offer a representation of one’s self that can be shared in the local sphere? That is the first question which editors of the local daily and weekly press must grapple with. Using illustrations of the practices, this article proposes an examination of the origins of these practices and an analysis grounded on the originality of the authors of these proximity photographs.

  20. Scoring ultrasound synovitis in rheumatoid arthritis

    DEFF Research Database (Denmark)

    D'Agostino, Maria-Antonietta; Terslev, Lene; Aegerter, Philippe

    2017-01-01

    OBJECTIVES: To develop a consensus-based ultrasound (US) definition and quantification system for synovitis in rheumatoid arthritis (RA). METHODS: A multistep, iterative approach was used to: (1) evaluate the baseline agreement on defining and scoring synovitis according to the usual practice...

  1. Multilevel Analysis of Student Civics Knowledge Scores

    Science.gov (United States)

    Gregory, Chris; Miyazaki, Yasuo

    2018-01-01

    Compositional effects of scholarly culture classroom/school climate on civic knowledge scores of 9th graders in the United States were examined using the International Association for the Evaluation of Educational Achievement (IEA) 1999 Civic Education Study data. Following Evans et al. (2010, 2014), we conceived that the number of books at home,…

  2. Normalization of the Psychometric Hepatic Encephalopathy score ...

    African Journals Online (AJOL)

    2016-05-09

    May 9, 2016 ... influenced by age, education levels, and gender.[5] Till date, the PHES ... and death. MHE also increases the risk of development ... large circles beginning from each row on the left and working to the right. The test score is the ...

  3. SCORE - Sounding-rocket Coronagraphic Experiment

    Science.gov (United States)

    Fineschi, Silvano; Moses, Dan; Romoli, Marco

    The Sounding-rocket Coronagraphic Experiment - SCORE - is a coronagraph for multi-wavelength imaging of the coronal Lyman-alpha lines, HeII 30.4 nm and HI 121.6 nm, and for the broad-band visible-light emission of the polarized K-corona. SCORE flew successfully in 2009, acquiring the first images of the HeII line-emission from the extended corona. The simultaneous observation of the coronal Lyman-alpha HI 121.6 nm has allowed the first determination of the absolute helium abundance in the extended corona. This presentation will describe the lessons learned from the first flight and will illustrate the preparations and the science perspectives for the second flight approved by NASA and scheduled for 2016. The SCORE optical design is flexible enough to be able to accommodate different experimental configurations with minor modifications. This presentation will describe one such configuration that could include a polarimeter for the observation of the expected Hanle effect in the coronal Lyman-alpha HI line. The linear polarization by resonance scattering of coronal permitted line-emission in the ultraviolet (UV) can be modified by magnetic fields through the Hanle effect. Thus, space-based UV spectro-polarimetry would provide an additional new tool for the diagnostics of coronal magnetism.

  4. Effects of heterogeneity on bank efficiency scores

    NARCIS (Netherlands)

    Bos, J. W. B.; Koetter, M.; Kolari, J. W.; Kool, C. J. M.

    2009-01-01

    Bank efficiency estimates often serve as a proxy of managerial skill since they quantify sub-optimal production choices. But such deviations can also be due to omitted systematic differences among banks. In this study, we examine the effects of heterogeneity on bank efficiency scores. We compare

  5. Correlation between International Prostate Symptom Score and ...

    African Journals Online (AJOL)

    2016-07-23

    Jul 23, 2016 ... International Prostate Symptom Score (IPSS) and uroflowmetry in patients with lower urinary tract symptoms-benign prostatic ... cause of bladder outlet obstruction (BOO) in the male geriatric population.[1] ... age and results in LUTS in about 10% of elderly men.[1]. BPH causes morbidity through the urinary ...

  6. Local Observed-Score Kernel Equating

    Science.gov (United States)

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  7. The scoring of movements in sleep.

    Science.gov (United States)

    Walters, Arthur S; Lavigne, Gilles; Hening, Wayne; Picchietti, Daniel L; Allen, Richard P; Chokroverty, Sudhansu; Kushida, Clete A; Bliwise, Donald L; Mahowald, Mark W; Schenck, Carlos H; Ancoli-Israel, Sonia

    2007-03-15

    The International Classification of Sleep Disorders (ICSD-2) has separated sleep-related movement disorders into simple, repetitive movement disorders (such as periodic limb movements in sleep [PLMS], sleep bruxism, and rhythmic movement disorder) and parasomnias (such as REM sleep behavior disorder and disorders of partial arousal, e.g., sleep walking, confusional arousals, night terrors). Many of the parasomnias are characterized by complex behaviors in sleep that appear purposeful, goal directed and voluntary but are outside the conscious awareness of the individual and therefore inappropriate. All of the sleep-related movement disorders described here have specific polysomnographic findings. For the purposes of developing and/or revising specifications and polysomnographic scoring rules, the AASM Scoring Manual Task Force on Movements in Sleep reviewed background literature and executed evidence grading of 81 relevant articles obtained by a literature search of published articles between 1966 and 2004. Subsequent evidence grading identified limited evidence for reliability and/or validity for polysomnographic scoring criteria for periodic limb movements in sleep, REM sleep behavior disorder, and sleep bruxism. Published scoring criteria for rhythmic movement disorder, excessive fragmentary myoclonus, and hypnagogic foot tremor/alternating leg muscle activation were empirical and based on descriptive studies. The literature review disclosed no published evidence defining clinical consequences of excessive fragmentary myoclonus or hypnagogic foot tremor/alternating leg muscle activation. Because of limited or absent evidence for reliability and/or validity, a standardized RAND/UCLA consensus process was employed for recommendation of specific rules for the scoring of sleep-associated movements.

  8. Validation of dengue infection severity score

    Directory of Open Access Journals (Sweden)

    Pongpan S

    2014-03-01

    Full Text Available Surangrat Pongpan,1,2 Jayanton Patumanond,3 Apichart Wisitwong,4 Chamaiporn Tawichasri,5 Sirianong Namwongprom1,6 1Clinical Epidemiology Program, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand; 2Department of Occupational Medicine, Phrae Hospital, Phrae, Thailand; 3Clinical Epidemiology Program, Faculty of Medicine, Thammasat University, Bangkok, Thailand; 4Department of Social Medicine, Sawanpracharak Hospital, Nakorn Sawan, Thailand; 5Clinical Epidemiology Society at Chiang Mai, Chiang Mai, Thailand; 6Department of Radiology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand Objective: To validate a simple scoring system to classify dengue viral infection severity to patients in different settings. Methods: The developed scoring system derived from 777 patients from three tertiary-care hospitals was applied to 400 patients in the validation data obtained from another three tertiary-care hospitals. Percentage of correct classification, underestimation, and overestimation was compared. The score discriminative performance in the two datasets was compared by analysis of areas under the receiver operating characteristic curves. Results: Patients in the validation data were different from those in the development data in some aspects. In the validation data, classifying patients into three severity levels (dengue fever, dengue hemorrhagic fever, and dengue shock syndrome yielded 50.8% correct prediction (versus 60.7% in the development data, with clinically acceptable underestimation (18.6% versus 25.7% and overestimation (30.8% versus 13.5%. Despite the difference in predictive performances between the validation and the development data, the overall prediction of the scoring system is considered high. Conclusion: The developed severity score may be applied to classify patients with dengue viral infection into three severity levels with clinically acceptable under- or overestimation. Its impact when used in routine

  9. Development of the Crohn's disease digestive damage score, the Lémann score

    DEFF Research Database (Denmark)

    Pariente, Benjamin; Cosnes, Jacques; Danese, Silvio

    2011-01-01

    The aim of this article is to outline the methods to develop an instrument that can measure cumulative bowel damage. The project is being conducted by the International Program to develop New Indexes in Crohn's disease (IPNIC) group. This instrument, called the Crohn's Disease Digestive Damage Score (the Lémann score), should take...

  10. Relationship between Students' Scores on Research Methods and Statistics, and Undergraduate Project Scores

    Science.gov (United States)

    Ossai, Peter Agbadobi Uloku

    2016-01-01

    This study examined the relationship between students' scores on Research Methods and statistics, and undergraduate project at the final year. The purpose was to find out whether students matched knowledge of research with project-writing skill. The study adopted an expost facto correlational design. Scores on Research Methods and Statistics for…

  11. Connecting KOSs and the LOD Cloud

    NARCIS (Netherlands)

    Szostak, Rick; Scharnhorst, Andrea; Beek, Wouter; Smiraglia, Richard P.

    2018-01-01

    This paper describes a specific project, the current situation leading to it, its project design and first results. In particular, we will examine the terminology employed in the Linked Open Data cloud and compare this to the terminology employed in both the Universal Decimal Classification and the

  12. Guess LOD approach: sufficient conditions for robustness.

    Science.gov (United States)

    Williamson, J A; Amos, C I

    1995-01-01

    Analysis of genetic linkage between a disease and a marker locus requires specifying a genetic model describing both the inheritance pattern and the gene frequencies of the marker and trait loci. Misspecification of the genetic model is likely for etiologically complex diseases. In previous work we have shown through analytic studies that misspecifying the genetic model for disease inheritance does not lead to excess false-positive evidence for genetic linkage provided the genetic marker alleles of all pedigree members are known, or can be inferred without bias from the data. Here, under various selection or ascertainment schemes we extend these previous results to situations in which the genetic model for the marker locus may be incorrect. We provide sufficient conditions for the asymptotic unbiased estimation of the recombination fraction under the null hypothesis of no linkage, and also conditions for the limiting distribution of the likelihood ratio test for no linkage to be chi-squared. Through simulation studies we document some situations under which asymptotic bias can result when the genetic model is misspecified. Among those situations under which an excess of false-positive evidence for genetic linkage can be generated, the most common is failure to provide accurate estimates of the marker allele frequencies. We show that in most cases false-positive evidence for genetic linkage is unlikely to result solely from the misspecification of the genetic model for disease or trait inheritance.

  13. Automatic building LOD copies for multitextured objects

    Science.gov (United States)

    Souetov, Andrew E.

    2000-01-01

    This article is dedicated to research on geometry level-of-detail technology for real-time 3D visualization systems. The article covers the conditions of applicability of the method and an overview of existing approaches, with their drawbacks and advantages. New technology guidelines are suggested as an alternative to existing methods.

  14. The RIPASA score for the diagnosis of acute appendicitis: A comparison with the modified Alvarado score.

    Science.gov (United States)

    Díaz-Barrientos, C Z; Aquino-González, A; Heredia-Montaño, M; Navarro-Tovar, F; Pineda-Espinosa, M A; Espinosa de Santillana, I A

    2018-02-06

    Acute appendicitis is the first cause of surgical emergencies. It is still a difficult diagnosis to make, especially in young persons, the elderly, and in reproductive-age women, in whom a series of inflammatory conditions can have signs and symptoms similar to those of acute appendicitis. Different scoring systems have been created to increase diagnostic accuracy, and they are inexpensive, noninvasive, and easy to use and reproduce. The modified Alvarado score is probably the most widely used and accepted in emergency services worldwide. On the other hand, the RIPASA score was formulated in 2010 and has greater sensitivity and specificity. There are very few studies conducted in Mexico that compare the different scoring systems for appendicitis. The aim of our article was to compare the modified Alvarado score and the RIPASA score in the diagnosis of patients with abdominal pain and suspected acute appendicitis. An observational, analytic, and prolective study was conducted within the time frame of July 2002 and February 2014 at the Hospital Universitario de Puebla. The questionnaires used for the evaluation process were applied to the patients suspected of having appendicitis. The RIPASA score with 8.5 as the optimal cutoff value: ROC curve (area .595), sensitivity (93.3%), specificity (8.3%), PPV (91.8%), NPV (10.1%). Modified Alvarado score with 6 as the optimal cutoff value: ROC curve (area .719), sensitivity (75%), specificity (41.6%), PPV (93.7%), NPV (12.5%). The RIPASA score showed no advantages over the modified Alvarado score when applied to patients presenting with suspected acute appendicitis. Copyright © 2018 Asociación Mexicana de Gastroenterología. Publicado por Masson Doyma México S.A. All rights reserved.
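
    The reported metrics derive from a standard 2×2 diagnostic table. The sketch below recomputes them from counts; the counts shown are illustrative rather than the study's raw data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic test metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

# Illustrative counts only -- not the study's data.
for name, value in diagnostic_metrics(tp=56, fp=11, fn=4, tn=1).items():
    print(f"{name}: {value:.3f}")
```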

  15. Lower bounds to the reliabilities of factor score estimators

    NARCIS (Netherlands)

    Hessen, D.J.

    2017-01-01

    Under the general common factor model, the reliabilities of factor score estimators might be of more interest than the reliability of the total score (the unweighted sum of item scores). In this paper, lower bounds to the reliabilities of Thurstone’s factor score estimators, Bartlett’s factor score

  16. A simple weighted scoring system to guide surgical decision-making in patients with parapneumonic pleural effusion.

    Science.gov (United States)

    Chang, Che-Chia; Chen, Tzu-Ping; Yeh, Chi-Hsiao; Huang, Pin-Fu; Wang, Yao-Chang; Yin, Shun-Ying

    2016-11-01

    The selection of ideal candidates for surgical intervention among patients with parapneumonic pleural effusion remains challenging. In this retrospective study, we sought to identify the main predictors of surgical treatment and devise a simple scoring system to guide surgical decision-making. Between 2005 and 2014, we identified 276 patients with parapneumonic pleural effusion. Patients in the training set (n=201) were divided into two groups according to their treatment modality (non-surgery vs. surgery). Using multivariable logistic regression analysis, we devised a scoring system to guide surgical decision-making. The score was subsequently validated in an independent set of 75 patients. A white blood cell count >13,500/µL, pleuritic pain, loculations, and split pleura sign were identified as independent predictors of surgical treatment. A weighted score based on these factors was devised, as follows: white blood cell count >13,500/µL (one point), pleuritic pain (one point), loculations (two points), and split pleura sign (three points). A score >4 was associated with a surgical approach with a sensitivity of 93.4%, a specificity of 82.4%, and an area under curve (AUC) of 0.879 (95% confidence interval: 0.828-0.930). In the validation set, a sensitivity of 94.3% and a specificity of 79.6% were found (AUC=0.869). The proposed scoring system reliably identifies patients with parapneumonic pleural effusion who are candidates for surgery. Pending independent external validation, our score may inform the appropriate use of surgical interventions in this clinical setting.
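
    The weighted score is fully specified in the abstract, so it transcribes directly to code; only the example inputs below are invented:

```python
def effusion_surgery_score(wbc_per_ul, pleuritic_pain, loculations, split_pleura_sign):
    """Weighted score from the study: WBC >13,500/uL (1), pleuritic pain (1),
    loculations (2), split pleura sign (3); a total >4 favours surgery."""
    score = 0
    score += 1 if wbc_per_ul > 13500 else 0
    score += 1 if pleuritic_pain else 0
    score += 2 if loculations else 0
    score += 3 if split_pleura_sign else 0
    return score

s = effusion_surgery_score(15200, True, True, True)
print(s, "-> surgery favoured" if s > 4 else "-> non-surgical management")  # 7
```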

  17. Scoring Rules for Subjective Probability Distributions

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd

    The theoretical literature has a rich characterization of scoring rules for eliciting the subjective beliefs that an individual has for continuous events, but under the restrictive assumption of risk neutrality. It is well known that risk aversion can dramatically affect the incentives to correctly...... report the true subjective probability of a binary event, even under Subjective Expected Utility. To address this one can “calibrate” inferences about true subjective probabilities from elicited subjective probabilities over binary events, recognizing the incentives that risk averse agents have...... to distort reports. We characterize the comparable implications of the general case of a risk averse agent when facing a popular scoring rule over continuous events, and find that these concerns do not apply with anything like the same force. For empirically plausible levels of risk aversion, one can...
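
    A concrete instance of the kind of rule being analyzed is the classic quadratic scoring rule for a binary event, under which a risk-neutral agent maximizes expected payoff by reporting truthfully. The stylized payoff form below is a textbook version, not the paper's parameterization:

```python
def quadratic_score(report_p, event_occurred):
    """Payoff 1 - (report - outcome)^2; truthful reporting maximizes
    expected payoff for a risk-neutral agent."""
    outcome = 1.0 if event_occurred else 0.0
    return 1.0 - (report_p - outcome) ** 2

# Expected payoff when the true subjective probability is 0.7:
for r in (0.5, 0.7, 0.9):
    ev = 0.7 * quadratic_score(r, True) + 0.3 * quadratic_score(r, False)
    print(f"report {r}: expected payoff {ev:.3f}")  # maximized at r = 0.7
```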

  18. Shower reconstruction in TUNKA-HiSCORE

    Energy Technology Data Exchange (ETDEWEB)

    Porelli, Andrea; Wischnewski, Ralf [DESY-Zeuthen, Platanenallee 6, 15738 Zeuthen (Germany)

    2015-07-01

    The Tunka-HiSCORE detector is a non-imaging wide-angle EAS Cherenkov array designed as an alternative technology for gamma-ray physics above 10 TeV and to study the spectrum and composition of cosmic rays above 100 TeV. An engineering array with nine stations (HiS-9) was deployed in October 2013 on the site of the Tunka experiment in Russia. In November 2014, 20 more HiSCORE stations were installed, covering a total array area of 0.24 square km. We describe the detector setup and the role of precision time measurement, and give results from the innovative White Rabbit time synchronization technology. Results of air shower reconstruction are presented and compared with MC simulations, for both the HiS-9 and the HiS-29 detector arrays.

  19. Credit scoring analysis using kernel discriminant

    Science.gov (United States)

    Widiharih, T.; Mukid, M. A.; Mustafid

    2018-05-01

    A credit scoring model is an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, such as the normal, Epanechnikov, biweight, and triweight kernels. The models' accuracies were compared using data from a financial institution in Indonesia. The results show that kernel discriminant analysis can be an alternative method for determining who is eligible for a credit loan. For the data we use, the normal kernel is the most relevant choice for credit scoring with the kernel discriminant model. Sensitivity and specificity reach 0.5556 and 0.5488, respectively.
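
    One common nonparametric reading of kernel discriminant scoring is to assign an applicant to whichever class (repaying or defaulting) has the higher kernel density estimate at the applicant's feature value. The Python sketch below illustrates this with an Epanechnikov kernel; the toy one-dimensional data and bandwidth are assumptions, not the paper's dataset.

```python
# Density-based kernel discriminant sketch with an Epanechnikov kernel.
# Toy data and bandwidth are assumptions for illustration only.
import numpy as np

def epanechnikov(u):
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

def kde(x, sample, h):
    # Kernel density estimate of the class density at point x.
    return epanechnikov((x - sample) / h).mean() / h

good = np.array([620., 680., 700., 715., 740.])  # feature values, repaying customers
bad = np.array([480., 510., 555., 590., 610.])   # feature values, defaulting customers

applicant, h = 600.0, 40.0
label = "eligible" if kde(applicant, good, h) >= kde(applicant, bad, h) else "not eligible"
print(label)
```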

  20. Nursing Activities Score and Acute Kidney Injury

    Directory of Open Access Journals (Sweden)

    Filipe Utuari de Andrade Coelho

    ABSTRACT Objective: to evaluate the nursing workload in intensive care patients with acute kidney injury (AKI). Method: a quantitative study, conducted in an intensive care unit from April to August of 2015. The Nursing Activities Score (NAS) and Kidney Disease Improving Global Outcomes (KDIGO) were used to measure nursing workload and to classify the stage of AKI, respectively. Results: a total of 190 patients were included. Patients who developed AKI (44.2%) had a higher NAS than those without AKI (43.7% vs 40.7%, p < 0.001). Patients with stage 1, 2 and 3 AKI showed higher NAS than those without AKI; the difference was significant for stages 2 and 3 (p = 0.002 and p < 0.001). Conclusion: the NAS was associated with the presence of AKI; the score increased with the progression of the stages and was associated with AKI stages 2 and 3.

  1. Psychometric properties of the Cumulated Ambulation Score

    DEFF Research Database (Denmark)

    Ferriero, Giorgio; Kristensen, Morten T; Invernizzi, Marco

    2018-01-01

    INTRODUCTION: In the geriatric population, independent mobility is a key factor in determining readiness for discharge following acute hospitalization. The Cumulated Ambulation Score (CAS) is a potentially valuable score that allows day-to-day measurements of basic mobility. The CAS was developed...... and validated in older patients with hip fracture as an early postoperative predictor of short-term outcome, but it is also used to assess geriatric in-patients with acute medical illness. Despite the fast-accumulating literature on the CAS, to date no systematic review has synthesized its psychometric properties...... Of 49 studies identified, 17 examined the psychometric properties of the CAS. EVIDENCE SYNTHESIS: Most papers dealt with patients after hip fracture surgery, and only 4 studies assessed the CAS psychometric characteristics also in geriatric in-patients with acute medical illness. Two versions of CAS...

  2. PhishScore: Hacking Phishers' Minds

    OpenAIRE

    Marchal, Samuel; François, Jérôme; State, Radu; Engel, Thomas

    2014-01-01

    Despite the growth of prevention techniques, phishing remains an important threat since the principal countermeasures in use are still based on reactive URL blacklisting. This technique is inefficient due to the short lifetime of phishing Web sites, making recent approaches relying on real-time or proactive phishing URLs detection techniques more appropriate. In this paper we introduce PhishScore, an automated real-time phishing detection system. We observed that phishing URLs usually have fe...

  3. Credit Scoring Problem Based on Regression Analysis

    OpenAIRE

    Khassawneh, Bashar Suhil Jad Allah

    2014-01-01

    ABSTRACT: This thesis provides an explanatory introduction to the regression models of data mining and contains basic definitions of key terms in the linear, multiple and logistic regression models. The aim of this study is to illustrate fitting models for the credit scoring problem using simple linear, multiple linear and logistic regression models, and to analyze the resulting model functions with statistical tools. Keywords: Data mining, linear regression, logistic regression....

  4. Fingerprint Recognition Using Minutia Score Matching

    OpenAIRE

    J, Ravi.; Raja, K. B.; R, Venugopal. K.

    2010-01-01

    The fingerprint is a popular biometric used to authenticate a person, as it is unique and permanent throughout a person's life. Minutia matching is widely used for fingerprint recognition; minutiae can be classified as ridge endings and ridge bifurcations. In this paper we propose a Fingerprint Recognition using Minutia Score Matching method (FRMSM). For fingerprint thinning, the Block Filter is used, which scans the image at the boundary to preserve the quality of the image and extract the minutiae ...

  5. Gender, Stereotype Threat and Mathematics Test Scores

    OpenAIRE

    Ming Tsui; Xiao Y. Xu; Edmond Venator

    2011-01-01

    Problem statement: Stereotype threat has repeatedly been shown to depress women's scores on difficult math tests. An attempt to replicate these findings in China found no support for the stereotype threat hypothesis. Our math test was characterized as being personally important for the student participants, an atypical condition in most stereotype threat laboratory research. Approach: To evaluate the effects of this personal demand, we conducted three experiments. Results: ...

  6. MODELING CREDIT RISK THROUGH CREDIT SCORING

    OpenAIRE

    Adrian Cantemir CALIN; Oana Cristina POPOVICI

    2014-01-01

    Credit risk governs all financial transactions and it is defined as the risk of suffering a loss due to certain shifts in the credit quality of a counterpart. Credit risk literature gravitates around two main modeling approaches: the structural approach and the reduced form approach. In addition to these perspectives, credit risk assessment has been conducted through a series of techniques such as credit scoring models, which form the traditional approach. This paper examines the evolution of...

  7. Superior cold recycling : The score project

    OpenAIRE

    LESUEUR, D; POTTI, JJ; SOUTHWELL, C; WALTER, J; CRUZ, M; DELFOSSE, F; ECKMANN, B; FIEDLER, J; RACEK, I; SIMONSSON, B; PLACIN, F; SERRANO, J; RUIZ, A; KALAAJI, A; ATTANE, P

    2004-01-01

    In order to develop Environmentally Friendly Construction Technologies (EFCT) and as part of the 5th Framework Program of Research and Development, the European Community has decided to finance a research project on cold recycling, entitled SCORE: "Superior COld REcycling based on benefits of bituminous microemulsions and foamed bitumen. An EFCT system for the rehabilitation and the maintenance of roads". This research project gathers organizations from all over Europe, from industrial partners...

  8. North Korean refugee doctors' preliminary examination scores

    Directory of Open Access Journals (Sweden)

    Sung Uk Chae

    2016-12-01

    Purpose: Although there have been studies emphasizing the re-education of North Korean (NK) doctors for post-unification of the Korean Peninsula, no study on the content and scope of such re-education has yet been conducted. We intended to define the content and scope of re-education through a comparative analysis of scores on the preliminary examination, which is comparable to the Korean Medical Licensing Examination (KMLE). Methods: The scores of the first and second preliminary exams were analyzed by subject using the Wilcoxon signed rank test. The passing status of the group of NK doctors on the KMLE in the most recent 3 years was investigated. Multiple-choice-question (MCQ) items for which the difficulty indexes of NK doctors were lower than those of South Korean (SK) medical students by more than twice the standard deviation of the SK medical students' scores were selected, to investigate the relevant reasons. Results: The average scores of nearly all subjects improved in the second exam compared with the first exam. The passing rate of the group of NK doctors was 75%. The number of MCQ items for which the difficulty indexes of NK doctors were lower than those of SK medical students was 51 (6.38%). NK doctors' lack of understanding of Diagnostic Techniques and Procedures, Therapeutics, Prenatal Care, and Managed Care Programs was suggested as the possible reason. Conclusion: Education through integrated courses focusing on Diagnostic Techniques and Procedures and Therapeutics, and apprenticeship-style training for clinical practice of core subjects, are needed. Special lectures on Preventive Medicine are also likely to be required.

  9. What do educational test scores really measure?

    DEFF Research Database (Denmark)

    McIntosh, James; D. Munk, Martin

    Latent class Poisson count models are used to analyze a sample of Danish test score results from a cohort of individuals born in 1954-55 and tested in 1968. The procedure takes account of unobservable effects as well as excessive zeros in the data. The bulk of unobservable effects are uncorrelated......, and possible incentive problems make it more difficult to elicit true values of what the tests measure....

  10. Wearable PPG sensor based alertness scoring system.

    Science.gov (United States)

    Dey, Jishnu; Bhowmik, Tanmoy; Sahoo, Saswata; Tiwari, Vijay Narayan

    2017-07-01

    Quantifying mental alertness in today's world is important, as it enables a person to adopt lifestyle changes for better work efficiency. Miniaturized sensors in wearable devices have facilitated the detection and monitoring of mental alertness. Photoplethysmography (PPG) sensors, through Heart Rate Variability (HRV), offer one such opportunity by providing information about one's daily alertness levels without requiring any manual intervention from the user. In this paper, a smartwatch-based alertness estimation system is proposed. Data collected from the PPG sensor of a smartwatch is processed and fed to a machine learning based model to get a continuous alertness score. Utility functions are designed based on statistical analysis to give a quality score on different stages of alertness, such as awake, long sleep and short-duration power naps. An intelligent data collection approach is proposed in collaboration with the motion sensor in the smartwatch to reduce battery drainage. Overall, our proposed wearable-based system provides a detailed analysis of alertness over a period of time in a systematic and optimized manner. We were able to achieve an accuracy of 80.1% for sleep/awake classification along with the alertness score. This opens up the possibility of quantifying alertness levels using a single PPG sensor for better management of health-related activities, including sleep.

  11. High throughput sample processing and automated scoring

    Directory of Open Access Journals (Sweden)

    Gunnar eBrunborg

    2014-10-01

    The comet assay is a sensitive and versatile method for assessing DNA damage in cells. In the traditional version of the assay, there are many manual steps involved and few samples can be treated in one experiment. High throughput modifications have been developed during recent years, and they are reviewed and discussed here. These modifications include accelerated scoring of comets; other important elements that have been studied and adapted to high throughput are cultivation and manipulation of cells or tissues before and after exposure, and freezing of treated samples until comet analysis and scoring. High throughput methods save time and money, but they are useful also for other reasons: large-scale experiments may be performed which are otherwise not practicable (e.g., analysis of many organs from exposed animals, and human biomonitoring studies), and automation gives more uniform sample treatment and less dependence on operator performance. The high throughput modifications now available vary largely in their versatility, capacity, complexity and costs. The bottleneck for further increase of throughput appears to be the scoring.

  12. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-07-01

    Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A test of goodness of fit demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI), micro- and also macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate firms' defaults based on different microeconomic and macroeconomic factors, such as the TCRI, asset growth rates, the stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operator characteristics during the examination of the robustness of the predictive power of these factors.
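
    The modelling step the study describes (logistic regression evaluated by goodness of fit and ROC analysis) can be sketched in a few lines. The data and column meanings below are synthetic placeholders, not the 10 349-observation Taiwanese sample.

```python
# Hedged sketch of a logistic-regression credit-scoring model evaluated by AUC.
# Synthetic features stand in for TCRI grade, asset growth and GDP growth.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1_000
X = np.column_stack([
    rng.integers(1, 10, n).astype(float),  # credit risk index grade (assumed scale)
    rng.normal(0.05, 0.20, n),             # asset growth rate
    rng.normal(0.02, 0.03, n),             # GDP growth
])
# Synthetic default indicator: worse index grades raise default probability.
p_default = 1.0 / (1.0 + np.exp(-(0.5 * X[:, 0] - 3.5)))
y = rng.binomial(1, p_default)

model = LogisticRegression().fit(X, y)
print("in-sample AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
```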

  13. Resiliency scoring for business continuity plans.

    Science.gov (United States)

    Olson, Anna; Anderson, Jamie

    Through this paper readers will learn of a scoring methodology, referred to as resiliency scoring, which enables the evaluation of business continuity plans based upon analysis of their alignment with a predefined set of criteria that can be customised and are adaptable to the needs of any organisation. This patent pending tool has been successful in driving engagement and is a powerful resource to improve reporting capabilities, identify risks and gauge organisational resilience. The role of business continuity professionals is to aid their organisations in planning and preparedness activities aimed at mitigating the impacts of potential disruptions and ensuring critical business functions can continue in the event of unforeseen circumstances. This may seem like a daunting task for what can typically be a small team of individuals. For this reason, it is important to be able to leverage industry standards, documented best practices and effective tools to streamline and support your continuity programme. The resiliency scoring methodology developed and implemented at Target has proven to be a valuable tool in taking the organisation's continuity programme to the next level. This paper will detail how the tool was developed and provide guidance on how it can be customised to fit your organisation's unique needs.

  14. Soetomo score: score model in early identification of acute haemorrhagic stroke

    Directory of Open Access Journals (Sweden)

    Moh Hasan Machfoed

    2016-06-01

    Aim of the study: Under financial or facility constraints on brain imaging, a score model can be used to predict the occurrence of acute haemorrhagic stroke. Accordingly, this study attempts to develop a new score model, called the Soetomo score. Material and methods: The researchers performed a cross-sectional study of 176 acute stroke patients with onset of ≤24 hours who visited the emergency unit of Dr. Soetomo Hospital from July 14th to December 14th, 2014. The diagnosis of haemorrhagic stroke was confirmed by head computed tomography scan. Seven predictors of haemorrhagic stroke were analysed using bivariate and multivariate analyses. Furthermore, a multiple discriminant analysis resulted in an equation for the Soetomo score model. The receiver operating characteristic procedure yielded the area under the curve and the intersection point identifying haemorrhagic stroke. Afterward, the diagnostic test values were determined. Results: The equation of the Soetomo score model was (3 × loss of consciousness) + (3.5 × headache) + (4 × vomiting) − 4.5. The area under the curve value of this score was 88.5% (95% confidence interval = 83.3-93.7%). At a Soetomo score model value of ≥−0.75, the score reached a sensitivity of 82.9%, specificity of 83%, positive predictive value of 78.8%, negative predictive value of 86.5%, positive likelihood ratio of 4.88, negative likelihood ratio of 0.21, false negative rate of 17.1%, false positive rate of 17%, and accuracy of 83%. Conclusions: A Soetomo score model value of ≥−0.75 can identify acute haemorrhagic stroke properly under financial or facility constraints on brain imaging.
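
    The discriminant equation and cut-off quoted above transcribe directly into code. A minimal sketch (variable names are mine):

```python
# The Soetomo score equation as quoted above; inputs are presence/absence flags.
def soetomo_score(loss_of_consciousness: bool, headache: bool, vomiting: bool) -> float:
    return 3.0 * loss_of_consciousness + 3.5 * headache + 4.0 * vomiting - 4.5

def likely_haemorrhagic(score: float) -> bool:
    return score >= -0.75  # cut-off reported in the abstract

# Example: headache only -> 3.5 - 4.5 = -1.0, below the cut-off
s = soetomo_score(False, True, False)
print(s, likely_haemorrhagic(s))
```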

  15. Methods to score vertebral deformities in patients with rheumatoid arthritis

    NARCIS (Netherlands)

    Lems, W. F.; Jahangier, Z. N.; Raymakers, J. A.; Jacobs, J. W.; Bijlsma, J. W.

    1997-01-01

    The objective was to compare four different scoring methods for vertebral deformities: the semiquantitative Kleerekoper score and three quantitative scores (according to Minne, Melton and Raymakers) in patients with rheumatoid arthritis (RA). Lateral radiographs of the thoracic and lumbar vertebral

  16. siMS Score: Simple Method for Quantifying Metabolic Syndrome.

    Science.gov (United States)

    Soldatovic, Ivan; Vukovic, Rade; Culafic, Djordje; Gajic, Milan; Dimitrijevic-Sreckovic, Vesna

    2016-01-01

    To evaluate the siMS score and siMS risk score, novel continuous metabolic syndrome scores, as methods for quantification of metabolic status and risk. The siMS score was calculated using the formula: siMS score = 2 × Waist/Height + Gly/5.6 + Tg/1.7 + TAsystolic/130 − HDL/1.02 or 1.28 (for male or female subjects, respectively). The siMS risk score was calculated using the formula: siMS risk score = siMS score × age/45 or 50 (for male or female subjects, respectively) × family history of cardio/cerebro-vascular events (event = 1.2, no event = 1). A sample of 528 obese and non-obese participants was used to validate the siMS score and siMS risk score. Scores calculated as the sum of z-scores (each component of metabolic syndrome regressed with age and gender) and as the sum of scores derived from principal component analysis (PCA) were used for evaluation of the siMS score. Variants were made by replacing glucose with HOMA in the calculations. The Framingham score was used for evaluation of the siMS risk score. Correlation between the siMS score and the sum of z-scores and the weighted sum of factors of PCA was high (r = 0.866 and r = 0.822, respectively). Correlation between the siMS risk score and the log-transformed Framingham score was medium to high for age groups 18+, 30+ and 35+ (0.835, 0.707 and 0.667, respectively). The siMS score and siMS risk score showed high correlation with more complex scores. Demonstrated accuracy together with superior simplicity and the ability to evaluate and follow up individual patients makes the siMS and siMS risk scores very convenient for use in clinical practice and research.
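
    Read literally, the two formulas look as follows in code. This is a sketch under stated assumptions: waist and height in the same unit; glucose, triglycerides and HDL in mmol/L; systolic pressure in mmHg; and function names of my own choosing.

```python
# siMS score and siMS risk score as quoted above (units assumed as in the lead-in).
def sims_score(waist, height, glucose, tg, sbp, hdl, male: bool) -> float:
    hdl_ref = 1.02 if male else 1.28
    return 2 * waist / height + glucose / 5.6 + tg / 1.7 + sbp / 130 - hdl / hdl_ref

def sims_risk_score(sims, age, male: bool, family_history: bool) -> float:
    age_ref = 45 if male else 50
    return sims * (age / age_ref) * (1.2 if family_history else 1.0)

s = sims_score(waist=94, height=178, glucose=5.4, tg=1.9, sbp=135, hdl=1.1, male=True)
print(round(s, 2), round(sims_risk_score(s, age=52, male=True, family_history=True), 2))
```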

  17. Standardized UXO Demonstration Site Blind Grid Scoring Record No. 690

    National Research Council Canada - National Science Library

    Overbay, Larry, Jr; Archiable, Robert; McClung, Christina; Robitaille, George

    2005-01-01

    ...) utilizing the YPG Standardized UXO Technology Demonstration Site Blind Grid. The scoring record was coordinated by Larry Overbay and by the Standardized UXO Technology Demonstration Scoring Committee...

  18. Standardized UXO Technology Demonstration Site Blind Grid Scoring Record #833

    National Research Council Canada - National Science Library

    Fling, Rick; McClung, Christina; Burch, William; McDonnell, Patrick

    2007-01-01

    ...) utilizing the APG Standardized UXO Technology Demonstration Site Blind Grid. This Scoring Record was coordinated by Dennis Teefy and the Standardized UXO Technology Demonstration Site Scoring Committee...

  19. Standardized UXO Technology Demonstration Site, Woods Scoring Record Number 486

    National Research Council Canada - National Science Library

    Overbay, Larry; Robitaille, George

    2005-01-01

    ...) utilizing the APG Standardized UXO Technology Demonstration Site Open Field. The scoring record was coordinated by Larry Overbay and the Standardized UXO Technology Demonstration Site Scoring Committee...

  20. siMS Score: Simple Method for Quantifying Metabolic Syndrome

    OpenAIRE

    Soldatovic, Ivan; Vukovic, Rade; Culafic, Djordje; Gajic, Milan; Dimitrijevic-Sreckovic, Vesna

    2016-01-01

    Objective To evaluate siMS score and siMS risk score, novel continuous metabolic syndrome scores as methods for quantification of metabolic status and risk. Materials and Methods Developed siMS score was calculated using formula: siMS score = 2*Waist/Height + Gly/5.6 + Tg/1.7 + TAsystolic/130 − HDL/1.02 or 1.28 (for male or female subjects, respectively). siMS risk score was calculated using formula: siMS risk score = siMS score * age/45 or 50 (for male or female subjects, respectively) * famil...

  1. Diagnostic accuracy of guys Hospital stroke score (allen score) in acute supratentorial thrombotic/haemorrhagic stroke

    International Nuclear Information System (INIS)

    Zulfiqar, A.; Toori, K. U.; Khan, S. S.; Hamza, M. I. M.; Zaman, S. U.

    2006-01-01

    A consecutive series of 103 patients, 58% male, with a mean age of 62 years (range 40-75 years), admitted with supratentorial stroke to our teaching hospital were studied. All patients had a computed tomography (CT) brain scan done after clinical evaluation and application of the Allen stroke score. The CT scan confirmed thrombotic stroke in 55 (53%) patients and haemorrhagic stroke in 48 (47%) patients. Of the 55 patients with definitive thrombotic stroke on CT scan, the Allen stroke score suggested infarction in 67%, haemorrhage in 6%, and remained inconclusive in 27% of cases. In the 48 patients with definitive haemorrhagic stroke on CT scan, the Allen stroke score suggested haemorrhage in 60%, infarction in 11%, and remained inconclusive in 29% of cases. The overall accuracy of the Allen stroke score was 66%. (author)

  2. Evaluation of modified Alvarado scoring system and RIPASA scoring system as diagnostic tools of acute appendicitis.

    Science.gov (United States)

    Shuaib, Abdullah; Shuaib, Ali; Fakhra, Zainab; Marafi, Bader; Alsharaf, Khalid; Behbehani, Abdullah

    2017-01-01

    Acute appendicitis is the most common surgical condition presented in emergency departments worldwide. Clinical scoring systems, such as the Alvarado and modified Alvarado scoring systems, were developed with the goal of reducing the negative appendectomy rate to 5%-10%. The Raja Isteri Pengiran Anak Saleha Appendicitis (RIPASA) scoring system was established in 2008 specifically for Asian populations. The aim of this study was to compare the modified Alvarado with the RIPASA scoring system in the Kuwait population. This study included 180 patients who underwent appendectomies and were documented as having "acute appendicitis" or "abdominal pain" in the operating theatre logbook (unit B) from November 2014 to March 2016. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), diagnostic accuracy, predicted negative appendectomy rate and receiver operating characteristic (ROC) curve of the modified Alvarado and RIPASA scoring systems were derived using SPSS statistical software. A total of 136 patients were included in this study according to our criteria. The cut-off threshold point of the modified Alvarado score was set at 7.0, which yielded a sensitivity of 82.8% and a specificity of 56%. The PPV was 89.3% and the NPV was 42.4%. The cut-off threshold point of the RIPASA score was set at 7.5, which yielded a 94.5% sensitivity and an 88% specificity. The PPV was 97.2% and the NPV was 78.5%. The predicted negative appendectomy rates were 10.7% and 2.2% for the modified Alvarado and RIPASA scoring systems, respectively. The negative appendectomy rate decreased significantly, from 18.4% to 10.7% for the modified Alvarado and to 2.2% for the RIPASA scoring system, which was a significant difference (P < 0.05). The RIPASA scoring system is better suited to Asian populations: it consists of 14 clinical parameters that can be obtained from a good patient history, clinical examination and laboratory investigations. The RIPASA scoring system is more accurate and specific than the modified Alvarado scoring system.
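
    All of the accuracy figures quoted for the two scores follow from standard 2×2 diagnostic-test arithmetic. The helper below is generic; the example counts are chosen only to roughly reproduce the RIPASA figures above and are not the study's raw table.

```python
# Sensitivity, specificity, PPV, NPV and accuracy from a 2x2 confusion table.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Illustrative counts (n = 136) approximating the RIPASA results quoted above.
print(diagnostic_metrics(tp=103, fp=3, fn=6, tn=24))
```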

  3. A Novel Scoring System Approach to Assess Patients with Lyme Disease (Nutech Functional Score)

    OpenAIRE

    Geeta Shroff; Petra Hopf-Seidel

    2018-01-01

    Introduction: A bacterial infection by Borrelia burgdorferi referred to as Lyme disease (LD) or borreliosis is transmitted mostly by a bite of the tick Ixodes scapularis in the USA and Ixodes ricinus in Europe. Various tests are used for the diagnosis of LD, but their results are often unreliable. We compiled a list of clinically visible and patient-reported symptoms that are associated with LD. Based on this list, we developed a novel scoring system. Methodology: Nutech functional Score (NF...

  4. External validation of the NOBLADS score, a risk scoring system for severe acute lower gastrointestinal bleeding.

    Directory of Open Access Journals (Sweden)

    Tomonori Aoki

    We aimed to evaluate the generalizability of NOBLADS, a severe lower gastrointestinal bleeding (LGIB) prediction model which we had previously derived when working at a different institution, using an external validation cohort. NOBLADS comprises the following factors: non-steroidal anti-inflammatory drug use, no diarrhea, no abdominal tenderness, blood pressure ≤ 100 mmHg, antiplatelet drug use, albumin < 3.0 g/dL, disease score ≥ 2, and syncope. We retrospectively analyzed 511 patients emergently hospitalized for acute LGIB at the University of Tokyo Hospital from January 2009 to August 2016. The areas under the receiver operating characteristic curves (ROC-AUCs) for severe bleeding (continuous and/or recurrent bleeding) were compared between the original derivation cohort and the external validation cohort. Severe LGIB occurred in 44% of patients. Several clinical factors differed significantly between the external and derivation cohorts (p < 0.05), including background, laboratory data, NOBLADS scores, and diagnosis. The NOBLADS score predicted the severity of LGIB with an AUC value of 0.74 in the external validation cohort and 0.77 in the derivation cohort. In the external validation cohort, the score predicted the risk of blood transfusion need (AUC, 0.71) but was not adequate for predicting intervention need (AUC, 0.54). The in-hospital mortality rate was higher in patients with a score ≥ 5 than in those with a score < 5 (AUC, 0.83). Although the external validation cohort differed clinically from the derivation cohort in many ways, we confirmed the moderately high generalizability of NOBLADS, a clinical risk score for severe LGIB. Appropriate triage using this score may support early decision-making in various hospitals.
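
    NOBLADS is a simple count of the eight factors listed above, one point each. A hedged sketch follows; the field names are mine, and "disease score ≥ 2" is carried over verbatim from the abstract.

```python
# NOBLADS as a one-point-per-factor count of the eight items named above.
def noblads(nsaid_use, no_diarrhea, no_abdominal_tenderness, sbp_le_100_mmhg,
            antiplatelet_use, albumin_lt_3_g_dl, disease_score_ge_2, syncope) -> int:
    factors = [nsaid_use, no_diarrhea, no_abdominal_tenderness, sbp_le_100_mmhg,
               antiplatelet_use, albumin_lt_3_g_dl, disease_score_ge_2, syncope]
    return sum(bool(f) for f in factors)

# A score >= 5 was associated with higher in-hospital mortality in the validation cohort.
print(noblads(True, True, False, True, False, True, True, False))  # -> 5
```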

  5. The Veterans Affairs Cardiac Risk Score: Recalibrating the Atherosclerotic Cardiovascular Disease Score for Applied Use.

    Science.gov (United States)

    Sussman, Jeremy B; Wiitala, Wyndy L; Zawistowski, Matthew; Hofer, Timothy P; Bentley, Douglas; Hayward, Rodney A

    2017-09-01

    Accurately estimating cardiovascular risk is fundamental to good decision-making in cardiovascular disease (CVD) prevention, but risk scores developed in one population often perform poorly in dissimilar populations. We sought to examine whether a large integrated health system can use its electronic health data to better predict individual patients' risk of developing CVD. We created a cohort using all patients ages 45-80 who used Department of Veterans Affairs (VA) ambulatory care services in 2006 with no history of CVD, heart failure, or loop diuretic use. Our outcome variable was new-onset CVD in 2007-2011. We then developed a series of recalibrated scores, including a fully refit "VA Risk Score-CVD (VARS-CVD)." We tested the different scores using standard measures of prediction quality. For the 1,512,092 patients in the study, the Atherosclerotic Cardiovascular Disease risk score had similar discrimination as the VARS-CVD (c-statistic of 0.66 in men and 0.73 in women), but the Atherosclerotic Cardiovascular Disease model had poor calibration, predicting 63% more events than observed. Calibration was excellent in the fully recalibrated VARS-CVD tool, but the simpler techniques tested proved less reliable. We found that local electronic health record data can be used to estimate CVD risk better than an established risk score based on research populations. Recalibration improved estimates dramatically, and the type of recalibration was important. Such tools can also easily be integrated into a health system's electronic health record and can be more readily updated.

  6. Pediatric siMS score: A new, simple and accurate continuous metabolic syndrome score for everyday use in pediatrics.

    Science.gov (United States)

    Vukovic, Rade; Milenkovic, Tatjana; Stojan, George; Vukovic, Ana; Mitrovic, Katarina; Todorovic, Sladjana; Soldatovic, Ivan

    2017-01-01

    The dichotomous nature of the current definition of metabolic syndrome (MS) in youth results in loss of information. On the other hand, the calculation of continuous MS scores using standardized residuals in linear regression (Z scores) or factor scores of principal component analysis (PCA) is highly impractical for clinical use. Recently, a novel, easily calculated continuous MS score called the siMS score was developed based on the IDF MS criteria for the adult population. Our aim was to develop a Pediatric siMS score (PsiMS), a modified continuous MS score for use in obese youth, based on the original siMS score, while keeping the score as simple as possible and retaining high correlation with more complex scores. The database consisted of clinical data on 153 obese (BMI ≥95th percentile) children and adolescents. Continuous MS scores were calculated using Z scores and PCA, as well as the original siMS score. Four variants of the PsiMS score were developed in accordance with IDF criteria for MS in youth, and the correlation of these scores with PCA- and Z-score-derived MS continuous scores was assessed. The PsiMS score calculated using the formula (2 × Waist/Height) + (Glucose [mmol/L]/5.6) + (Triglycerides [mmol/L]/1.7) + (Systolic BP/130) − (HDL [mmol/L]/1.02) showed the highest correlation with most of the complex continuous scores (0.792-0.901). The original siMS score also showed high correlation with continuous MS scores. The PsiMS score represents a practical and accurate score for the evaluation of MS in obese youth. The original siMS score should be used when evaluating large cohorts consisting of both adults and children.
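
    The PsiMS variant quoted above differs from the adult formula only in dropping the sex-specific HDL denominator. A direct transcription (units assumed as in the adult siMS sketch earlier):

```python
# PsiMS as quoted above: glucose, triglycerides and HDL in mmol/L, systolic BP in mmHg.
def psims_score(waist, height, glucose, tg, sbp, hdl) -> float:
    return 2 * waist / height + glucose / 5.6 + tg / 1.7 + sbp / 130 - hdl / 1.02

print(round(psims_score(waist=80, height=150, glucose=5.0, tg=1.5, sbp=120, hdl=1.2), 2))
```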

  7. Scoring function to predict solubility mutagenesis

    Directory of Open Access Journals (Sweden)

    Deutsch Christopher

    2010-10-01

    Background: Mutagenesis is commonly used to engineer proteins with desirable properties not present in the wild type (WT) protein, such as increased or decreased stability, reactivity, or solubility. Experimentalists often have to choose a small subset of mutations from a large number of candidates to obtain the desired change, and computational techniques are invaluable in making the choices. While several such methods have been proposed to predict stability and reactivity mutagenesis, solubility has not received much attention. Results: We use concepts from computational geometry to define a three-body scoring function that predicts the change in protein solubility due to mutations. The scoring function captures both sequence and structure information. By exploring the literature, we have assembled a substantial database of 137 single- and multiple-point solubility mutations. Our database is the largest such collection with structural information known so far. We optimize the scoring function using linear programming (LP) methods to derive its weights based on training. Starting with default values of 1, we find weights in the range [0,2] so that predictions of increase or decrease in solubility are optimized. We compare the LP method to the standard machine learning techniques of support vector machines (SVM) and the Lasso. Using statistics for leave-one-out (LOO), 10-fold, and 3-fold cross validations (CV) for training and prediction, we demonstrate that the LP method performs the best overall. For the LOOCV, the LP method has an overall accuracy of 81%. Availability: Executables of programs, tables of weights, and datasets of mutants are available from the following web page: http://www.wsu.edu/~kbala/OptSolMut.html.
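
    The LP training step can be illustrated with a toy margin formulation: choose weights in [0, 2] minimizing the total slack needed for each mutation's predicted sign to match its observed solubility change. This is my own minimal rendering of that idea, with synthetic features rather than the paper's three-body geometric scores.

```python
# Toy LP in the spirit described above: weights w in [0, 2], slack s_i >= 0,
# subject to y_i * (w . x_i) + s_i >= 1; minimize the total slack.
import numpy as np
from scipy.optimize import linprog

X = np.array([[1.0, 0.2], [0.8, 0.1], [0.2, 1.0], [0.1, 0.9]])  # synthetic feature rows
y = np.array([1, 1, -1, -1])                                    # sign of solubility change
n, d = X.shape

c = np.concatenate([np.zeros(d), np.ones(n)])      # objective: sum of slacks
A_ub = np.hstack([-(y[:, None] * X), -np.eye(n)])  # -y_i (w . x_i) - s_i <= -1
b_ub = -np.ones(n)
bounds = [(0, 2)] * d + [(0, None)] * n            # weights confined to [0, 2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("weights:", res.x[:d])
```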

  8. Best waveform score for diagnosing keratoconus

    Directory of Open Access Journals (Sweden)

    Allan Luz

    2013-12-01

    PURPOSE: To test whether corneal hysteresis (CH) and corneal resistance factor (CRF) can discriminate between keratoconus and normal eyes, and to evaluate whether the averages of two consecutive measurements perform differently from the measurement with the best waveform score (WS) for diagnosing keratoconus. METHODS: ORA measurements for one eye per individual were selected randomly from 53 normal patients and from 27 patients with keratoconus. Two groups were considered: the average (CH-Avg, CRF-Avg) and best waveform score (CH-WS, CRF-WS) groups. The Mann-Whitney U-test was used to evaluate whether the variables had similar distributions in the normal and keratoconus groups. Receiver operating characteristic (ROC) curves were calculated for each parameter to assess the efficacy for diagnosing keratoconus, and the curves obtained for each variable were compared pairwise using the Hanley-McNeil test. RESULTS: The CH-Avg, CRF-Avg, CH-WS and CRF-WS differed significantly between the normal and keratoconus groups (p<0.001). The areas under the ROC curve (AUROC) for CH-Avg, CRF-Avg, CH-WS, and CRF-WS were 0.824, 0.873, 0.891, and 0.931, respectively. CH-WS and CRF-WS had significantly better AUROCs than CH-Avg and CRF-Avg, respectively (p=0.001 and p=0.002). CONCLUSION: The analysis of the biomechanical properties of the cornea through the ORA method has proved to be an important aid in the diagnosis of keratoconus, regardless of the method used. The best waveform score (WS) measurements were superior to the average of consecutive ORA measurements for diagnosing keratoconus.

  9. Setting pass scores for clinical skills assessment.

    Science.gov (United States)

    Liu, Min; Liu, Keh-Min

    2008-12-01

    In a clinical skills assessment, the decision to pass or fail an examinee should be based on the test content or on the examinees' performance. The process of deciding a pass score is known as setting a standard of the examination. This requires a properly selected panel of expert judges and a suitable standard setting method, which best fits the purpose of the examination. Six standard setting methods that are often used in clinical skills assessment are described to provide an overview of the standard setting process.

  10. Setting Pass Scores for Clinical Skills Assessment

    Directory of Open Access Journals (Sweden)

    Min Liu

    2008-12-01

    In a clinical skills assessment, the decision to pass or fail an examinee should be based on the test content or on the examinees' performance. The process of deciding a pass score is known as setting a standard of the examination. This requires a properly selected panel of expert judges and a suitable standard setting method, which best fits the purpose of the examination. Six standard setting methods that are often used in clinical skills assessment are described to provide an overview of the standard setting process.

  11. Sway Area and Velocity Correlated With MobileMat Balance Error Scoring System (BESS) Scores.

    Science.gov (United States)

    Caccese, Jaclyn B; Buckley, Thomas A; Kaminski, Thomas W

    2016-08-01

    The Balance Error Scoring System (BESS) is often used for sport-related concussion balance assessment. However, moderate intratester and intertester reliability may cause low initial sensitivity, suggesting that a more objective balance assessment method is needed. The MobileMat BESS was designed for objective BESS scoring, but the outcome measures must be validated with reliable balance measures. Thus, the purpose of this investigation was to compare MobileMat BESS scores to linear and nonlinear measures of balance. Eighty-eight healthy collegiate student-athletes (age: 20.0 ± 1.4 y, height: 177.7 ± 10.7 cm, mass: 74.8 ± 13.7 kg) completed the MobileMat BESS. MobileMat BESS scores were compared with 95% area, sway velocity, approximate entropy, and sample entropy. MobileMat BESS scores were significantly correlated with 95% area for single-leg (r = .332) and tandem firm (r = .474), and double-leg foam (r = .660); and with sway velocity for single-leg (r = .406) and tandem firm (r = .601), and double-leg (r = .575) and single-leg foam (r = .434). MobileMat BESS scores were not correlated with approximate or sample entropy. MobileMat BESS scores were low to moderately correlated with linear measures, suggesting the ability to identify changes in the center of mass-center of pressure relationship, but not higher-order processing associated with nonlinear measures. These results suggest that the MobileMat BESS may be a clinically-useful tool that provides objective linear balance measures.

  12. Ripasa score: a new diagnostic score for diagnosis of acute appendicitis

    International Nuclear Information System (INIS)

    Butt, M.Q.

    2014-01-01

    Objective: To determine the usefulness of the RIPASA score for the diagnosis of acute appendicitis, using histopathology as the gold standard. Study Design: Cross-sectional study. Place and Duration of Study: Department of General Surgery, Combined Military Hospital, Kohat, from September 2011 to March 2012. Methodology: A total of 267 patients were included in this study and the RIPASA score was assessed. The diagnosis of appendicitis was made clinically, aided by routine sonography of the abdomen. After appendicectomy, resected appendices were sent for histopathological examination. The 15 parameters and the scores generated were: age (less than 40 years = 1 point; greater than 40 years = 0.5 point), gender (male = 1 point; female = 0.5 point), right iliac fossa (RIF) pain (0.5 point), migration of pain to the RIF (0.5 point), nausea and vomiting (1 point), anorexia (1 point), duration of symptoms (less than 48 hours = 1 point; more than 48 hours = 0.5 point), RIF tenderness (1 point), guarding (2 points), rebound tenderness (1 point), Rovsing's sign (2 points), fever (1 point), raised white cell count (1 point), negative urinalysis (1 point) and foreign national registration identity card (1 point). The optimal cut-off threshold score from the ROC was 7.5. Sensitivity analysis was done. Results: Of the 267 patients, 156 (58.4%) were male and the remaining 111 (41.6%) were female, with a mean age of 23.5 ± 9.1 years. The sensitivity of the RIPASA score was 96.7%, specificity 93.0%, diagnostic accuracy 95.1%, positive predictive value 94.8% and negative predictive value 95.54%. Conclusion: The RIPASA score at a cut-off total score of 7.5 is a useful tool to diagnose appendicitis in equivocal cases of pain. (author)
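
    The itemized weights above transcribe into a straightforward calculator. The sketch below is mine (argument names invented); note that age, gender and symptom duration always contribute one of their two values, while the remaining items score only when present.

```python
# RIPASA calculator using the 15 items and weights enumerated above.
def ripasa_score(age_lt_40, male, rif_pain, pain_migration, nausea_vomiting,
                 anorexia, symptoms_lt_48h, rif_tenderness, guarding,
                 rebound_tenderness, rovsing_sign, fever, raised_wcc,
                 negative_urinalysis, foreign_nric) -> float:
    score = (1.0 if age_lt_40 else 0.5) + (1.0 if male else 0.5)
    score += (1.0 if symptoms_lt_48h else 0.5)
    score += 0.5 * rif_pain + 0.5 * pain_migration
    score += 1.0 * nausea_vomiting + 1.0 * anorexia
    score += 1.0 * rif_tenderness + 2.0 * guarding + 1.0 * rebound_tenderness
    score += 2.0 * rovsing_sign + 1.0 * fever + 1.0 * raised_wcc
    score += 1.0 * negative_urinalysis + 1.0 * foreign_nric
    return score

# Example: young man, <48 h of migratory RIF pain, tenderness, guarding, fever,
# raised WCC and negative urinalysis -> 12.0, above the 7.5 cut-off.
print(ripasa_score(True, True, True, True, True, True, True, True, True,
                   False, False, True, True, True, False))
```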

  13. How is the injury severity scored? a brief review of scoring systems

    Directory of Open Access Journals (Sweden)

    Mohsen Ebrahimi

    2015-06-01

    The management of injured patients is a critical issue in pre-hospital and emergency departments. Trauma victims are usually young, and the injuries may lead to mortality or severe morbidity. The severity of injury can be estimated by observing anatomic and physiologic evidence. Scoring systems are used to provide a scale describing the severity of the injuries in the victims. We reviewed the evidence on well-known scoring systems, the history of their development, their applications and their evolution. We searched the electronic databases PubMed and Google Scholar with the keywords: (trauma OR injury) AND (severity OR intensity) AND (score OR scale). In this paper, we present a definition of scoring systems and discuss the Abbreviated Injury Scale (AIS) and Injury Severity Score (ISS), the most widely accepted systems, their applications, and their advantages and limitations. Several injury-scoring methods have been introduced. Each method has specific features, advantages and disadvantages. The AIS is an anatomically based scoring system which provides a standard numerical scale for ranking and comparing injuries. The ISS was established as a platform for trauma data registry. The ISS is also an anatomically based ordinal scale, with a range of 1-75. Several databases and studies are based on the ISS and are available for trauma management research. Although the ISS is not perfect, it is established as the basic platform of health services and public health research. The ISS registering system can provide many opportunities for the development of efficient data recording and statistical analysis models.

  14. High-Throughput Scoring of Seed Germination.

    Science.gov (United States)

    Ligterink, Wilco; Hilhorst, Henk W M

    2017-01-01

    High-throughput analysis of seed germination for phenotyping large genetic populations or mutant collections is very labor intensive and would highly benefit from an automated setup. Although very often used, the total germination percentage after a nominated period of time is not very informative, as it lacks information about the start, rate, and uniformity of germination, which are highly indicative of traits such as dormancy, stress tolerance, and seed longevity. The calculation of cumulative germination curves requires information about germination percentage at various time points. We developed the GERMINATOR package: a simple, highly cost-efficient, and flexible procedure for high-throughput automatic scoring and evaluation of germination that can be implemented without the use of complex robotics. The GERMINATOR package contains three modules: (I) design of the experimental setup, with various options to replicate and randomize samples; (II) automatic scoring of germination based on the color contrast between the protruding radicle and the seed coat on a single image; and (III) curve fitting of cumulative germination data and the extraction, recap, and visualization of the various germination parameters. GERMINATOR is a freely available package that allows the monitoring and analysis of several thousand germination tests several times a day by a single person.

  15. Development of a severity score for CRPS.

    Science.gov (United States)

    Harden, R Norman; Bruehl, Stephen; Perez, Roberto S G M; Birklein, Frank; Marinus, Johan; Maihofner, Christian; Lubenow, Timothy; Buvanendran, Asokumar; Mackey, Sean; Graciosa, Joseph; Mogilevski, Mila; Ramsden, Christopher; Schlereth, Tanja; Chont, Melissa; Vatine, Jean-Jacques

    2010-12-01

    The clinical diagnosis of Complex Regional Pain Syndrome (CRPS) is a dichotomous (yes/no) categorization necessary for clinical decision-making. However, such dichotomous diagnostic categories do not convey an individual's subtle and temporal gradations in severity of the condition, and have poor statistical power when used as an outcome measure in research. This study evaluated the validity and potential utility of a continuous-type score to index the severity of CRPS. Psychometric and medical evaluations were conducted in 114 CRPS patients and 41 non-CRPS neuropathic pain patients. Based on the presence/absence of 17 clinically assessed signs and symptoms of CRPS, an overall CRPS Severity Score (CSS) was derived. The CSS discriminated well between CRPS and non-CRPS patients (p < 0.001) and was strongly associated with dichotomous CRPS diagnoses using both the IASP diagnostic criteria (Eta = 0.69) and the proposed revised criteria (Eta = 0.77-0.88). Higher CSS was associated with significantly higher clinical pain intensity, distress, and functional impairments, as well as greater bilateral temperature asymmetry and thermal perception abnormalities (p's < 0.05). The results indicate that the CSS captures gradations in the severity of CRPS and support its validity as an index of CRPS severity. Its utility as an outcome measure in research studies is also suggested, with potential statistical advantages over dichotomous diagnostic criteria.

  16. Knee Injury and Osteoarthritis Outcome Score (KOOS)

    DEFF Research Database (Denmark)

    Collins, N J; Prinsen, C A C; Christensen, R

    2016-01-01

    OBJECTIVE: To conduct a systematic review and meta-analysis to synthesize evidence regarding measurement properties of the Knee injury and Osteoarthritis Outcome Score (KOOS). DESIGN: A comprehensive literature search identified 37 eligible papers evaluating KOOS measurement properties in participants with knee injuries and/or osteoarthritis (OA). Methodological quality was evaluated using the COSMIN checklist. Where possible, meta-analysis of extracted data was conducted for all studies and stratified by age and knee condition; otherwise narrative synthesis was performed. RESULTS: KOOS has adequate internal consistency, test-retest reliability and construct validity in young and old adults with knee injuries and/or OA. The ADL subscale has better content validity for older patients and Sport/Rec for younger patients with knee injuries, while the Pain subscale is more relevant for painful...

  17. Literature in focus: How to Score

    CERN Document Server

    2006-01-01

    What is the perfect way to take a free kick? Which players are under more stress: attackers, midfielders or defenders? How do we know when a ball has crossed the goal-line? And how can teams win a penalty shoot out? From international team formations to the psychology of the pitch and the changing room... The World Cup might be a time to forget about physics for a while, but not for Ken Bray, a theoretical physicist and visiting Fellow of the Sport and Exercise Science Group at the University of Bath who specializes in the science of football. Dr Bray will visit CERN to talk exclusively about his book: How to Score. As a well-seasoned speaker and advisor to professional football teams, this presentation promises to be a fascinating and timely insight into the secret science that lies behind 'the beautiful game'. If you play or just watch football, don't miss this event! Ken Bray - How to Score Thursday 22 June at 3 p.m. (earlier than usual to avoid clashes with World Cup matches!) Central Library reading ...

  18. 24 CFR 902.45 - Management operations scoring and thresholds.

    Science.gov (United States)

    2010-04-01

    24 CFR 902.45 (Housing and Urban Development, 2010), Public Housing Assessment System, PHAS Indicator #3: Management Operations. § 902.45 Management operations scoring and thresholds. (a) Scoring. The Management Operations Indicator score provides...

  19. Conditional Standard Errors of Measurement for Scale Scores.

    Science.gov (United States)

    Kolen, Michael J.; And Others

    1992-01-01

    A procedure is described for estimating the reliability and conditional standard errors of measurement of scale scores incorporating the discrete transformation of raw scores to scale scores. The method is illustrated using a strong true score model, and practical applications are described. (SLD)

  20. Validating the Interpretations and Uses of Test Scores

    Science.gov (United States)

    Kane, Michael T.

    2013-01-01

    To validate an interpretation or use of test scores is to evaluate the plausibility of the claims based on the scores. An argument-based approach to validation suggests that the claims based on the test scores be outlined as an argument that specifies the inferences and supporting assumptions needed to get from test responses to score-based…

  1. [The diagnostic and the exclusion scores for pulmonary embolism].

    Science.gov (United States)

    Junod, A

    2015-05-27

    Several clinical scores for the diagnosis of pulmonary embolism (PE) have been published. The most popular are the Wells score and the revised Geneva score; simplified versions of both scores exist and have been validated. The two scores share common properties, but there is a major difference for the Wells score, namely the inclusion of an item based on clinical judgment. These two scores, in combination with D-dimer measurement, have been used to rule out PE. An important improvement in this process has recently taken place with the use of an adjustable, age-dependent D-dimer threshold for patients over 50 years.
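
    The age-adjusted D-dimer rule-out the record alludes to is often stated as "age × 10 µg/L for patients over 50" (assay-dependent); that convention is an assumption here, since the abstract does not give the formula. A sketch:

```python
# Age-adjusted D-dimer rule-out sketch; the age*10 ug/L convention for age > 50
# is assumed (assay-dependent), with the conventional 500 ug/L cut-off otherwise.
def d_dimer_threshold_ug_per_l(age_years: int) -> float:
    return age_years * 10.0 if age_years > 50 else 500.0

def pe_ruled_out(d_dimer_ug_per_l: float, age_years: int,
                 pe_unlikely_by_clinical_score: bool) -> bool:
    # Applies only when a clinical score (e.g. Wells or revised Geneva)
    # classifies the patient as unlikely to have PE.
    return (pe_unlikely_by_clinical_score
            and d_dimer_ug_per_l < d_dimer_threshold_ug_per_l(age_years))

print(pe_ruled_out(680.0, 72, True))  # 680 < 720 for a 72-year-old -> True
```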

  2. Nursing Activities Score and Acute Kidney Injury.

    Science.gov (United States)

    Coelho, Filipe Utuari de Andrade; Watanabe, Mirian; Fonseca, Cassiane Dezoti da; Padilha, Katia Grillo; Vattimo, Maria de Fátima Fernandes

    2017-01-01

    to evaluate the nursing workload in intensive care patients with acute kidney injury (AKI). A quantitative study, conducted in an intensive care unit from April to August of 2015. The Nursing Activities Score (NAS) and Kidney Disease Improving Global Outcomes (KDIGO) were used to measure nursing workload and to classify the stage of AKI, respectively. A total of 190 patients were included. Patients who developed AKI (44.2%) had a higher NAS than those without AKI (43.7% vs 40.7%, p < 0.001). Patients with stage 1, 2 and 3 AKI showed higher NAS than those without AKI; the difference was significant for stages 2 and 3 (p = 0.002 and p < 0.001). The NAS was associated with the presence of AKI; the score increased with the progression of the stages and was associated with AKI stages 2 and 3.

  3. Reproducibility of scoring emphysema by HRCT

    International Nuclear Information System (INIS)

    Malinen, A.; Partanen, K.; Rytkoenen, H.; Vanninen, R.; Erkinjuntti-Pekkanen, R.

    2002-01-01

    Purpose: We evaluated the reproducibility of three visual scoring methods for emphysema and compared these methods with pulmonary function tests (VC, DLCO, FEV1 and FEV%) among farmer's lung patients and farmers. Material and Methods: Three radiologists examined high-resolution CT images of farmer's lung patients and their matched controls (n=70) for chronic interstitial lung diseases. Intraobserver reproducibility and interobserver variability were assessed for three methods: severity, Sanders' (extent) and Sakai. Pulmonary function tests, spirometry and diffusing capacity, were measured. Results: Intraobserver κ-values for all three methods were good (0.51-0.74). Interobserver κ-values varied from 0.35 to 0.72. The Sanders' and severity methods correlated strongly with pulmonary function tests, especially DLCO and FEV1. Conclusion: The Sanders' method proved to be reliable for evaluating emphysema, in terms of good consistency of interpretation and good correlation with pulmonary function tests.

  4. Reproducibility of scoring emphysema by HRCT

    Energy Technology Data Exchange (ETDEWEB)

    Malinen, A.; Partanen, K.; Rytkoenen, H.; Vanninen, R. [Kuopio Univ. Hospital (Finland). Dept. of Clinical Radiology; Erkinjuntti-Pekkanen, R. [Kuopio Univ. Hospital (Finland). Dept. of Pulmonary Diseases

    2002-04-01

    Purpose: We evaluated the reproducibility of three visual scoring methods for emphysema and compared these methods with pulmonary function tests (VC, DLCO, FEV1 and FEV%) among farmer's lung patients and farmers. Material and Methods: Three radiologists examined high-resolution CT images of farmer's lung patients and their matched controls (n=70) for chronic interstitial lung diseases. Intraobserver reproducibility and interobserver variability were assessed for three methods: severity, Sanders' (extent) and Sakai. Pulmonary function tests, spirometry and diffusing capacity, were measured. Results: Intraobserver κ-values for all three methods were good (0.51-0.74). Interobserver κ-values varied from 0.35 to 0.72. The Sanders' and severity methods correlated strongly with pulmonary function tests, especially DLCO and FEV1. Conclusion: The Sanders' method proved to be reliable for evaluating emphysema, in terms of good consistency of interpretation and good correlation with pulmonary function tests.

  5. The Rectal Cancer Female Sexuality Score

    DEFF Research Database (Denmark)

    Thyø, Anne; Emmertsen, Katrine J; Laurberg, Søren

    2018-01-01

    BACKGROUND: Sexual dysfunction and impaired quality of life are potential side effects of rectal cancer treatment. OBJECTIVE: The objective of this study was to develop and validate a simple scoring system intended to evaluate sexual function in women treated for rectal cancer. DESIGN: This is a population-based cross-sectional study. SETTINGS: Female patients diagnosed with rectal cancer between 2001 and 2014 were identified by using the Danish Colorectal Cancer Group's database. Participants filled in the validated Sexual Function Vaginal Changes questionnaire. Women declared to be sexually active...... in the validation group. PATIENTS: Female patients with rectal cancer above the age of 18 who underwent abdominoperineal resection, Hartmann procedure, or total/partial mesorectal excision were selected. MAIN OUTCOME MEASURES: The primary outcome measured was the quality of life that was negatively affected because...

  6. ABOUT PSYCHOLOGICAL VARIABLES IN APPLICATION SCORING MODELS

    Directory of Open Access Journals (Sweden)

    Pablo Rogers

    2015-01-01

    The purpose of this study is to investigate the contribution of psychological variables and scales suggested by Economic Psychology in predicting individuals' default. A sample of 555 individuals completed a self-completion questionnaire composed of psychological variables and scales. Using logistic regression, the following psychological and behavioral characteristics were found to be associated with the group of individuals in default: (a) negative dimensions related to money (suffering, inequality and conflict); (b) high scores on the self-efficacy scale, probably indicating a greater degree of optimism and over-confidence; (c) buyers classified as compulsive; (d) individuals who consider it necessary to give gifts to children and friends on special dates, even though many people consider this a luxury; and (e) problems of self-control identified in individuals who drink an average of more than four glasses of alcoholic beverage a day.

  7. Do efficiency scores depend on input mix?

    DEFF Research Database (Denmark)

    Asmild, Mette; Hougaard, Jens Leth; Kronborg, Dorte

    2013-01-01

    In this paper we examine the possibility of using the standard Kruskal-Wallis (KW) rank test to evaluate whether the distribution of efficiency scores resulting from Data Envelopment Analysis (DEA) is independent of the input (or output) mix of the observations. Since the DEA frontier is estimated, many standard assumptions for evaluating the KW test statistic are violated. Therefore, we propose to explore its statistical properties by the use of simulation studies. The simulations are performed conditional on the observed input mixes. The method, unlike existing approaches...... When the assumption of mix independence is rejected, the implication is that it is, for example, impossible to determine whether machine-intensive projects are more or less efficient than labor-intensive projects.

  8. 'Mechanical restraint-confounders, risk, alliance score'

    DEFF Research Database (Denmark)

    Deichmann Nielsen, Lea; Bech, Per; Hounsgaard, Lise

    2017-01-01

    AIM: To clinically validate a new, structured short-term risk assessment instrument called the Mechanical Restraint-Confounders, Risk, Alliance Score (MR-CRAS), with the intended purpose of supporting the clinicians' observation and assessment of the patient's readiness to be released from mechanical...... restraint. METHODS: The content and layout of MR-CRAS and its user manual were evaluated using face validation by forensic mental health clinicians, content validation by an expert panel, and pilot testing within two closed forensic mental health inpatient units. RESULTS: The three sub-scales (Confounders......, Risk, and a parameter of Alliance) showed excellent content validity. The clinical validations also showed that MR-CRAS was perceived and experienced as a comprehensible, relevant, comprehensive, and useable risk assessment instrument. CONCLUSIONS: MR-CRAS contains 18 clinically valid items......

  9. SOS score: an optimized score to screen acute stroke patients for obstructive sleep apnea.

    Science.gov (United States)

    Camilo, Millene R; Sander, Heidi H; Eckeli, Alan L; Fernandes, Regina M F; Dos Santos-Pontelli, Taiza E G; Leite, Joao P; Pontes-Neto, Octavio M

    2014-09-01

    Obstructive sleep apnea (OSA) is frequent in acute stroke patients and has been associated with higher mortality and worse prognosis. Polysomnography (PSG) is the gold-standard diagnostic method for OSA, but it is impracticable as a routine for all acute stroke patients. We evaluated the accuracy of two OSA screening tools, the Berlin Questionnaire (BQ) and the Epworth Sleepiness Scale (ESS), when administered to relatives of acute stroke patients; we also compared these tools against a combined screening score (SOS score). Ischemic stroke patients underwent full PSG on the first night after onset of symptoms. OSA severity was measured by the apnea-hypopnea index (AHI). BQ and ESS were administered to relatives of stroke patients before the PSG and compared to the SOS score for accuracy and C-statistics. We prospectively studied 39 patients. OSA (AHI ≥10/h) was present in 76.9%. The SOS score [area under the curve (AUC): 0.812; P = 0.005] and ESS (AUC: 0.789; P = 0.009) had good predictive value for OSA. The SOS score was the only tool with significant predictive value (AUC: 0.686; P = 0.048) for severe OSA (AHI ≥30/h), when compared to ESS (P = 0.119) and BQ (P = 0.191). The threshold of SOS ≤10 showed high sensitivity (90%) and negative predictive value (96.2%) for OSA; SOS ≥20 showed high specificity (100%) and positive predictive value (92.5%) for severe OSA. The SOS score administered to relatives of stroke patients is a useful tool to screen for OSA and may decrease the need for PSG in the acute stroke setting. Copyright © 2014 Elsevier B.V. All rights reserved.
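
    The accuracy figures quoted for the SOS cutoffs are standard 2x2 screening quantities. A minimal sketch; the counts below are invented, not the study's 39 patients:

    ```python
    def screening_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV and NPV for one screening threshold."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # Hypothetical 2x2 counts for a cutoff such as SOS <= 10 vs. PSG-confirmed OSA
    print(screening_metrics(tp=27, fp=2, fn=3, tn=7))
    ```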

  10. Prediction of true test scores from observed item scores and ancillary data.

    Science.gov (United States)

    Haberman, Shelby J; Yao, Lili; Sinharay, Sandip

    2015-05-01

    In many educational tests which involve constructed responses, a traditional test score is obtained by adding together item scores obtained through holistic scoring by trained human raters. For example, this practice was used until 2008 in the case of GRE® General Analytical Writing and until 2009 in the case of TOEFL® iBT Writing. With use of natural language processing, it is possible to obtain additional information concerning item responses from computer programs such as e-rater®. In addition, available information relevant to examinee performance may include scores on related tests. We suggest application of standard results from classical test theory to the available data to obtain best linear predictors of true traditional test scores. In performing such analysis, we require estimation of variances and covariances of measurement errors, a task which can be quite difficult in the case of tests with limited numbers of items and with multiple measurements per item. As a consequence, a new estimation method is suggested based on samples of examinees who have taken an assessment more than once. Such samples are typically not random samples of the general population of examinees, so that we apply statistical adjustment methods to obtain the needed estimated variances and covariances of measurement errors. To examine practical implications of the suggested methods of analysis, applications are made to GRE General Analytical Writing and TOEFL iBT Writing. Results obtained indicate that substantial improvements are possible both in terms of reliability of scoring and in terms of assessment reliability. © 2015 The British Psychological Society.
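
    In the univariate case, the best linear predictor of the true score reduces to Kelley's classical formula; a minimal sketch of that special case (the paper's estimator generalizes this to multiple item scores and ancillary data, with error variances estimated from repeat test-takers):

    ```python
    def kelley_true_score(observed, reliability, mean):
        """Best linear predictor of the true score from a single observed score:
        Kelley's formula, the univariate case of classical test theory."""
        return reliability * observed + (1 - reliability) * mean

    # Hypothetical essay score: observed 4.0, test reliability .70, population mean 3.2
    print(kelley_true_score(4.0, 0.70, 3.2))  # 3.76, shrunk toward the mean
    ```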

  11. Coronary artery calcium scoring in myocardial infarction

    International Nuclear Information System (INIS)

    Beslic, S.; Dalagija, F.

    2005-01-01

    Background. The aim of this study was to evaluate coronary artery calcium scoring and the assessment of risk factors in patients with myocardial infarction (MI). Methods. During a period of three years, 27 patients with MI were analyzed. The average age of patients was 66.1 years (46 to 81). Coronary artery calcium was evaluated by multirow-detector computed tomography (MDCT; Somatom Volume Zoom, Siemens), retrospectively, with ECG-gated data acquisition. Semi-automated calcium quantification to calculate the Agatston calcium score (CS) was performed with 4 x 2.5 mm collimation, using 130 ml of contrast medium, injected with an automatic injector at a flow rate of 4 ml/sec. The delay time was determined empirically. At the same time, several risk factors were evaluated. Results. Of the 27 patients with MI, 3 (11.1%) had low CS (10-100), 5 (18.5%) moderate CS (101-499), and 19 (70.4%) high CS (>500). Of the risk factors, smoking was confirmed in 17 (63.0%), high blood pressure (HTA) in 10 (37.0%), diabetes mellitus in 7 (25.9%), positive family history in 5 (18.5%), pathological lipids in 5 (18.5%), and alcohol abuse in 4 (14.8%) patients. Six (22.2%) patients had symptoms of angina pectoris. Conclusions. The research showed a high correlation of MI with high CS (>500). Smoking, HTA, diabetes mellitus, positive family history and hypercholesterolemia are significant risk factors. Symptoms are relatively poor in a large number of patients. (author)

  12. Prediction of IOI-HA Scores Using Speech Reception Thresholds and Speech Discrimination Scores in Quiet

    DEFF Research Database (Denmark)

    Brännström, K Jonas; Lantz, Johannes; Nielsen, Lars Holme

    2014-01-01

    Speech reception thresholds (SRTs) and speech discrimination scores (SDSs) in quiet or in noise are common assessments made prior to hearing aid (HA) fittings. It is not known whether SRT and SDS in quiet relate to HA outcome measured with the International Outcome Inventory for Hearing Aids (IOI-HA). PURPOSE: The aim of the present study...... COLLECTION AND ANALYSIS: The psychometric properties were evaluated and compared to previous studies using the IOI-HA. The associations and differences between the outcome scores and a number of descriptive variables (age, gender, fitted monaurally/binaurally with HA, first-time/experienced HA users, years......

  13. Autosomal dominant distal myopathy: Linkage to chromosome 14

    Energy Technology Data Exchange (ETDEWEB)

    Laing, N.G.; Laing, B.A.; Wilton, S.D.; Dorosz, S.; Mastaglia, F.L.; Kakulas, B.A. [Australian Neuromuscular Research Institute, Perth (Australia); Robbins, P.; Meredith, C.; Honeyman, K.; Kozman, H.

    1995-02-01

    We have studied a family segregating a form of autosomal dominant distal myopathy (MIM 160500) and containing nine living affected individuals. The myopathy in this family is closest in clinical phenotype to that first described by Gowers in 1902. A search for linkage was conducted using microsatellite, VNTR, and RFLP markers. In total, 92 markers on all 22 autosomes were run. Positive linkage was obtained with 14 of 15 markers tested on chromosome 14, with little indication of linkage elsewhere in the genome. Maximum two-point LOD scores of 2.60 at recombination fraction .00 were obtained for the markers MYH7 and D14S64 - the family structure precludes a two-point LOD score ≥ 3. Recombinations with D14S72 and D14S49 indicate that this distal myopathy locus, MPD1, should lie between these markers. A multipoint analysis assuming 100% penetrance and using the markers D14S72, D14S50, MYH7, D14S64, D14S54, and D14S49 gave a LOD score of exactly 3 at MYH7. Analysis at a penetrance of 80% gave a LOD score of 2.8 at this marker. This probable localization of a gene for distal myopathy, MPD1, on chromosome 14 should allow other investigators studying distal myopathy families to test this region for linkage in other types of the disease, to confirm linkage or to demonstrate the likely genetic heterogeneity. 24 refs., 3 figs., 1 tab.
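
    For orientation, a two-point LOD score contrasts the likelihood of the observed meioses at a candidate recombination fraction θ with the likelihood at θ = 0.5 (free recombination). A minimal sketch of the textbook phase-known, fully informative case; the counts below are invented, not this family's data, though they illustrate why a small pedigree cannot exceed n·log10(2):

    ```python
    import math

    def two_point_lod(n_recomb, n_meioses, theta):
        """Two-point LOD score for phase-known, fully informative meioses:
        log10 of the likelihood at theta over the likelihood at theta = 0.5."""
        r, n = n_recomb, n_meioses
        if theta == 0.0 and r > 0:
            return float("-inf")            # any recombinant excludes theta = 0
        like = theta**r * (1 - theta)**(n - r) if theta > 0.0 else 1.0
        return math.log10(like / 0.5**n)

    # Hypothetical counts: 0 recombinants in 9 informative meioses
    print(round(two_point_lod(0, 9, 0.0), 2))  # 9 * log10(2) = 2.71, the ceiling
    ```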

  14. Influence of furrow length on the infiltration equation obtained by the two-point method of Elliot & Walker

    Directory of Open Access Journals (Sweden)

    S. Shahidian

    2010-01-01

    Full Text Available The Kostiakov infiltration equation can be established by the two-point method of Elliot & Walker. Nevertheless, there are no guidelines as to how the two points should be selected. In this paper, different pairs of points along a 220-m furrow are used to calculate the corresponding infiltration equations. The results indicate that the exponent a of the equation increases with the length of the furrow and with the distance to the first point. The coefficient k behaves in the opposite direction, decreasing with an increase in the length of the furrow. So that criteria remain uniform, it is recommended that the first measurement point be located halfway between the furrow inlet and the second measurement point.
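
    In the Elliot & Walker procedure, a volume balance yields the cumulative infiltration Z at two advance times, and the Kostiakov parameters of Z = k·t^a then follow algebraically from those two points. A minimal sketch of that final algebraic step only, with invented values (the volume-balance step is not shown):

    ```python
    import math

    def kostiakov_two_point(t1, z1, t2, z2):
        """Fit Z = k * t**a through two (time, cumulative infiltration) pairs,
        the closing algebra of the Elliot & Walker two-point method."""
        a = math.log(z2 / z1) / math.log(t2 / t1)   # exponent from the two points
        k = z1 / t1**a                              # coefficient from either point
        return k, a

    # Hypothetical pairs: Z = 35 mm at 30 min and Z = 60 mm at 120 min
    k, a = kostiakov_two_point(30, 35, 120, 60)
    print(f"k = {k:.2f}, a = {a:.3f}")  # k = 9.33, a = 0.389
    ```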

  15. A Summary Score for the Framingham Heart Study Neuropsychological Battery.

    Science.gov (United States)

    Downer, Brian; Fardo, David W; Schmitt, Frederick A

    2015-10-01

    To calculate three summary scores of the Framingham Heart Study neuropsychological battery and determine which score best differentiates between subjects classified as having normal cognition, test-based impaired learning and memory, test-based multidomain impairment, and dementia. The final sample included 2,503 participants. Three summary scores were assessed: (a) composite score that provided equal weight to each subtest, (b) composite score that provided equal weight to each cognitive domain assessed by the neuropsychological battery, and (c) abbreviated score comprised of subtests for learning and memory. Receiver operating characteristic analysis was used to determine which summary score best differentiated between the four cognitive states. The summary score that provided equal weight to each subtest best differentiated between the four cognitive states. A summary score that provides equal weight to each subtest is an efficient way to utilize all of the cognitive data collected by a neuropsychological battery. © The Author(s) 2015.

  16. ITC Guidelines on Quality Control in Scoring, Test Analysis, and Reporting of Test Scores

    Science.gov (United States)

    Allalouf, Avi

    2014-01-01

    The Quality Control (QC) Guidelines are intended to increase the efficiency, precision, and accuracy of the scoring, analysis, and reporting process of testing. The QC Guidelines focus on large-scale testing operations where multiple forms of tests are created for use on set dates. However, they may also be used for a wide variety of other testing…

  17. Renal dysfunction in liver cirrhosis and its correlation with Child-Pugh score and MELD score

    Science.gov (United States)

    Siregar, G. A.; Gurning, M.

    2018-03-01

    Renal dysfunction (RD) is a serious and common complication in patients with liver cirrhosis, and it confers a poor prognosis. The aim of our study was to evaluate renal function in liver cirrhosis and to determine its correlation with the graduation of liver disease assessed by the Child-Pugh Score (CPS) and the MELD score. This was a cross-sectional study that included patients with liver cirrhosis admitted to Adam Malik Hospital, Medan, in June - August 2016. We divided them into two groups according to the presence or absence of renal dysfunction (based on serum creatinine). SPSS 22.0 was used. Statistical methods used: Chi-square, Fisher exact, one-way ANOVA, Kruskal-Wallis test and Pearson coefficient of correlation. The level of significance was p<0.05. Of 55 patients, 16 (29.1%) presented with renal dysfunction. There was a statistically significant inverse correlation between GFR and CPS (r = -0.308) and between GFR and MELD score (r = -0.278). There was a statistically significant correlation between creatinine and MELD score (r = 0.359) and between creatinine and CPS (r = 0.382). The increase of the degree of liver damage is related to the increase of renal dysfunction.

  18. A Novel Scoring System Approach to Assess Patients with Lyme Disease (Nutech Functional Score).

    Science.gov (United States)

    Shroff, Geeta; Hopf-Seidel, Petra

    2018-01-01

    A bacterial infection by Borrelia burgdorferi, referred to as Lyme disease (LD) or borreliosis, is transmitted mostly by a bite of the tick Ixodes scapularis in the USA and Ixodes ricinus in Europe. Various tests are used for the diagnosis of LD, but their results are often unreliable. We compiled a list of clinically visible and patient-reported symptoms that are associated with LD. Based on this list, we developed a novel scoring system, the Nutech Functional Score (NFS), a 43-point positional (every symptom is subgraded, and each alternative gets points according to its position) and directional (moving in the direction bad to good) scoring system that assesses the patient's condition. The grades of the scoring system have been converted into numeric values for conducting probability-based studies. Each symptom is graded from 1 to 5, running in the direction bad to good. NFS is a unique tool that can be used universally to assess the condition of patients with LD.

  19. A Novel Scoring System Approach to Assess Patients with Lyme Disease (Nutech Functional Score

    Directory of Open Access Journals (Sweden)

    Geeta Shroff

    2018-01-01

    Full Text Available Introduction: A bacterial infection by Borrelia burgdorferi, referred to as Lyme disease (LD) or borreliosis, is transmitted mostly by a bite of the tick Ixodes scapularis in the USA and Ixodes ricinus in Europe. Various tests are used for the diagnosis of LD, but their results are often unreliable. We compiled a list of clinically visible and patient-reported symptoms that are associated with LD. Based on this list, we developed a novel scoring system. Methodology: The Nutech Functional Score (NFS) is a 43-point positional (every symptom is subgraded and each alternative gets points according to its position) and directional (moving in the direction bad to good) scoring system that assesses the patient's condition. Results: The grades of the scoring system have been converted into numeric values for conducting probability-based studies. Each symptom is graded from 1 to 5, running in the direction bad to good. Conclusion: NFS is a unique tool that can be used universally to assess the condition of patients with LD.

  20. Evaluation of classifiers that score linear type traits and body condition score using common sires

    NARCIS (Netherlands)

    Veerkamp, R.F.; Gerritsen, C.L.M.; Koenen, E.P.C.; Hamoen, A.; Jong, de G.

    2002-01-01

    Subjective visual assessment of animals by classifiers is undertaken for several different traits in farm livestock, e.g., linear type traits, body condition score, or carcass conformation. One of the difficulties in assessment is the effect of an individual classifier. To ensure that classifiers

  1. Symptom scoring systems to diagnose distal polyneuropathy in diabetes : the Diabetic Neuropathy Symptom score

    NARCIS (Netherlands)

    Meijer, J.W.G.; Smit, A.J.; van Sonderen, E.; Groothoff, J.W.; Eisma, W.H.; Links, T.P.

    2002-01-01

    AIMS: To provide one of the diagnostic categories for distal diabetic polyneuropathy, several symptom scoring systems are available, which are often extensive and lack validation. We validated a new four-item Diabetic Neuropathy Symptom (DNS) score for diagnosing distal diabetic polyneuropathy.

  2. A scoring system for ascertainment of incident stroke; the Risk Index Score (RISc).

    Science.gov (United States)

    Kass-Hout, T A; Moyé, L A; Smith, M A; Morgenstern, L B

    2006-01-01

    The main objective of this study was to develop and validate a computer-based statistical algorithm that could be translated into a simple scoring system in order to ascertain incident stroke cases using hospital admission medical records data. The Risk Index Score (RISc) algorithm was developed using data collected prospectively by the Brain Attack Surveillance in Corpus Christi (BASIC) project, 2000. The validity of RISc was evaluated by estimating the concordance of scoring system stroke ascertainment to stroke ascertainment by physician and/or abstractor review of hospital admission records. RISc was developed on 1718 randomly selected patients (training set) and then statistically validated on an independent sample of 858 patients (validation set). A multivariable logistic model was used to develop RISc and subsequently evaluated by goodness-of-fit and receiver operating characteristic (ROC) analyses. The higher the value of RISc, the higher the patient's risk of potential stroke. The study showed RISc was well calibrated and discriminated those who had potential stroke from those that did not on initial screening. In this study we developed and validated a rapid, easy, efficient, and accurate method to ascertain incident stroke cases from routine hospital admission records for epidemiologic investigations. Validation of this scoring system was achieved statistically; however, clinical validation in a community hospital setting is warranted.

  3. Validity of GRE General Test scores and TOEFL scores for graduate admission to a technical university in Western Europe

    Science.gov (United States)

    Zimmermann, Judith; von Davier, Alina A.; Buhmann, Joachim M.; Heinimann, Hans R.

    2018-01-01

    Graduate admission has become a critical process in tertiary education, whereby selecting valid admissions instruments is key. This study assessed the validity of Graduate Record Examination (GRE) General Test scores for admission to Master's programmes at a technical university in Europe. We investigated the indicative value of GRE scores for the Master's programme grade point average (GGPA), with and without the addition of the undergraduate GPA (UGPA) and the TOEFL score, and of GRE scores for study completion and Master's thesis performance. GRE scores explained 20% of the variation in the GGPA, while an additional 7% was explained by the TOEFL score and 3% by the UGPA. Contrary to common belief, the GRE quantitative reasoning score showed only little explanatory power. GRE scores were also weakly related to study progress but not to thesis performance. Nevertheless, GRE and TOEFL scores were found to be sensible admissions instruments. Rigorous methodology was used to obtain highly reliable results.

  4. Application of the FOUR Score in Intracerebral Hemorrhage Risk Analysis.

    Science.gov (United States)

    Braksick, Sherri A; Hemphill, J Claude; Mandrekar, Jay; Wijdicks, Eelco F M; Fugate, Jennifer E

    2018-06-01

    The Full Outline of Unresponsiveness (FOUR) Score is a validated scale describing the essentials of a coma examination, including motor response, eye opening and eye movements, brainstem reflexes, and respiratory pattern. We incorporated the FOUR Score into the existing ICH Score and evaluated its accuracy of risk assessment in spontaneous intracerebral hemorrhage (ICH). Consecutive patients admitted to our institution from 2009 to 2012 with spontaneous ICH were reviewed. The ICH Score was calculated using patient age, hemorrhage location, hemorrhage volume, evidence of intraventricular extension, and Glasgow Coma Scale (GCS). The FOUR Score was then incorporated into the ICH Score as a substitute for the GCS (ICH Score-FS). The ability of the 2 scores to predict mortality at 1 month was then compared. In total, 274 patients met the inclusion criteria. The median age was 73 years (interquartile range 60-82) and 138 (50.4%) were male. Overall mortality at 1 month was 28.8% (n = 79). The area under the receiver operating characteristic curve was .91 for the ICH Score and .89 for the ICH Score-FS. For ICH Scores of 1, 2, 3, 4, and 5, 1-month mortality was 4.2%, 29.9%, 62.5%, 95.0%, and 100%. In the ICH Score-FS model, mortality was 10.7%, 26.5%, 64.5%, 88.9%, and 100% for scores of 1, 2, 3, 4, and 5, respectively. The ICH Score and the ICH Score-FS predict 1-month mortality with comparable accuracy. As the FOUR Score provides additional clinical information regarding patient status, it may be a reasonable substitute for the GCS in the ICH Score. Copyright © 2018 National Stroke Association. Published by Elsevier Inc. All rights reserved.
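
    For context, the original ICH Score (Hemphill et al., 2001) sums points for GCS, hematoma volume, intraventricular extension, infratentorial origin, and age. A minimal sketch of that standard version; the study's variant substitutes FOUR Score categories for the GCS term, but the abstract does not give its cut-points, so they are not guessed here:

    ```python
    def ich_score(gcs, volume_ml, ivh, infratentorial, age):
        """Original ICH Score (Hemphill et al., 2001), range 0-6;
        higher totals predict higher 30-day mortality."""
        pts = 2 if gcs <= 4 else (1 if gcs <= 12 else 0)  # GCS 3-4: 2, 5-12: 1, 13-15: 0
        pts += 1 if volume_ml >= 30 else 0                # hematoma volume >= 30 mL
        pts += 1 if ivh else 0                            # intraventricular extension
        pts += 1 if infratentorial else 0                 # infratentorial origin
        pts += 1 if age >= 80 else 0                      # age >= 80 years
        return pts

    # Hypothetical patient
    print(ich_score(gcs=7, volume_ml=45, ivh=True, infratentorial=False, age=74))  # 3
    ```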

  5. Cardio-ankle vascular index is associated with cardiovascular target organ damage and vascular structure and function in patients with diabetes or metabolic syndrome, LOD-DIABETES study: a case series report.

    Science.gov (United States)

    Gómez-Marcos, Manuel Ángel; Recio-Rodríguez, José Ignacio; Patino-Alonso, María Carmen; Agudo-Conde, Cristina; Gómez-Sánchez, Leticia; Gomez-Sanchez, Marta; Rodríguez-Sanchez, Emiliano; Maderuelo-Fernandez, Jose Angel; García-Ortiz, Luís

    2015-01-16

    The cardio ankle vascular index (CAVI) is a new index of the overall stiffness of the artery from the origin of the aorta to the ankle. This index can estimate the risk of atherosclerosis. We aimed to find the relationship between CAVI and target organ damage (TOD), vascular structure and function, and cardiovascular risk factors in Caucasian patients with type 2 diabetes mellitus or metabolic syndrome. We included 110 subjects from the LOD-Diabetes study, whose mean age was 61 ± 11 years, and 37.3% were women. Measurements of CAVI, brachial ankle pulse wave velocity (ba-PWV), and ankle brachial index (ABI) were taken using the VaSera device. Cardiovascular risk factors, renal function by creatinine, glomerular filtration rate, and albumin creatinine index were also obtained, as well as cardiac TOD with ECG and vascular TOD and carotid intima media thickness (IMT), carotid femoral PWV (cf-PWV), and the central and peripheral augmentation index (CAIx and PAIx). The Framingham-D'Agostino scale was used to measure cardiovascular risk. Mean CAVI was 8.7 ± 1.3. More than half (54%) of the participants showed one or more TOD (10% cardiac, 13% renal; 48% vascular), and 13% had ba-PWV ≥ 17.5 m/s. Patients with any TOD had the highest CAVI values: 1.15 (CI 95% 0.70 to 1.61, p < 0.001) and 1.14 (CI 95% 0.68 to 1.60, p < 0.001) when vascular TOD was presented, and 1.30 (CI 95% 0.51 to 2.10, p = 0.002) for the cardiac TOD. The CAVI values had a positive correlation with HbA1c and systolic and diastolic blood pressure, and a negative correlation with waist circumference and body mass index. The positive correlations of CAVI with IMT (β = 0.29; p < 0.01), cf-PWV (β = 0.83; p < 0.01), ba-PWV (β = 2.12; p < 0.01), CAIx (β = 3.42; p < 0.01), and PAIx (β = 5.05; p = 0.04) remained after adjustment for cardiovascular risk, body mass index, and antihypertensive, lipid-lowering, and antidiabetic drugs. The

  6. Hospital Value-Based Purchasing (HVBP) – Total Performance Score

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of hospitals participating in the Hospital VBP Program and their Clinical Process of Care domain scores, Patient Experience of Care dimension scores, and...

  7. Lower Bounds to the Reliabilities of Factor Score Estimators.

    Science.gov (United States)

    Hessen, David J

    2016-10-06

    Under the general common factor model, the reliabilities of factor score estimators might be of more interest than the reliability of the total score (the unweighted sum of item scores). In this paper, lower bounds to the reliabilities of Thurstone's factor score estimators, Bartlett's factor score estimators, and McDonald's factor score estimators are derived and conditions are given under which these lower bounds are equal. The relative performance of the derived lower bounds is studied using classic example data sets. The results show that estimates of the lower bounds to the reliabilities of Thurstone's factor score estimators are greater than or equal to the estimates of the lower bounds to the reliabilities of Bartlett's and McDonald's factor score estimators.

  8. The Truth about Scores Children Achieve on Tests.

    Science.gov (United States)

    Brown, Jonathan R.

    1989-01-01

    The importance of using the standard error of measurement (SEm) in determining the reliability of test scores is emphasized. The SEm is compared to the hypothetical true score for standardized tests, and procedures for the calculation of the SEm are explained. (JDD)
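
    The SEm described above follows directly from a test's standard deviation and reliability. A minimal sketch using the standard formula SEm = SD · sqrt(1 − reliability), with invented values:

    ```python
    import math

    def sem(sd, reliability):
        """Standard error of measurement: SD * sqrt(1 - reliability)."""
        return sd * math.sqrt(1 - reliability)

    # Hypothetical standardized test: SD = 15, reliability = .91
    e = sem(15, 0.91)
    print(round(e, 1))  # 4.5
    # A roughly 68% band around an observed score of 100:
    print(f"{100 - e:.1f} to {100 + e:.1f}")
    ```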

  9. A locally adapted functional outcome measurement score for total ...

    African Journals Online (AJOL)

    ... in Europe or North America and seem not optimally suited for a general West ... We introduce a cross-cultural adaptation of the Lequesne index as a new score. ... Keywords: THR, Hip, Africa, Functional score, Hip replacement, Arthroscopy ...

  10. 48 CFR 1515.305-70 - Scoring plans.

    Science.gov (United States)

    2010-10-01

    1515.305-70 Scoring plans. When... solicitation, e.g., other numeric, adjectival, color rating systems, etc. Scoring Plan Value Descriptive...

  11. Methods and statistics for combining motif match scores.

    Science.gov (United States)

    Bailey, T L; Gribskov, M

    1998-01-01

    Position-specific scoring matrices are useful for representing and searching for protein sequence motifs. A sequence family can often be described by a group of one or more motifs, and an effective search must combine the scores for matching a sequence to each of the motifs in the group. We describe three methods for combining match scores and estimating the statistical significance of the combined scores, and evaluate the search quality (classification accuracy) and the accuracy of the estimate of statistical significance of each. The three methods are: 1) sum of scores, 2) sum of reduced variates, 3) product of score p-values. We show that method 3) is superior to the other two methods in both regards, and that combining motif scores indeed gives better search accuracy. The MAST sequence homology search algorithm utilizing the product of p-values scoring method is available for interactive use and downloading at URL http://www.sdsc.edu/MEME.
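
    The paper derives its own significance estimate for a product of p-values; for independent p-values, Fisher's classical method is the standard reference point. A minimal sketch with invented per-motif p-values, not MAST's exact calculation:

    ```python
    import numpy as np
    from scipy.stats import chi2

    def fisher_combined_p(pvalues):
        """Significance of a product of independent p-values (Fisher's method):
        -2 * sum(ln p) is chi-squared with 2k degrees of freedom under the null."""
        p = np.asarray(pvalues, dtype=float)
        stat = -2.0 * np.log(p).sum()
        return chi2.sf(stat, df=2 * len(p))

    # Hypothetical per-motif match p-values for one scanned sequence
    print(fisher_combined_p([0.01, 0.04, 0.20]))
    ```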

  12. Modifying scoring system at South African University rugby level ...

    African Journals Online (AJOL)

    Success in rugby is measured by winning the game and in order to do so, teams need to score more points ... if modifying the scoring system at South African University rugby level changes the game dynamics. ...

  13. The Incentive Effect of Scores: Randomized Evidence from Credit Committees

    OpenAIRE

    Daniel Paravisini; Antoinette Schoar

    2013-01-01

    We design a randomized controlled trial to evaluate the adoption of credit scoring with a bank that uses soft information in small businesses lending. We find that credit scores improve the productivity of credit committees, reduce managerial involvement in the loan approval process, and increase the profitability of lending. Credit committee members' effort and output also increase when they anticipate the score becoming available, indicating that scores improve incentives to use existing in...

  14. Proposal of a Mediterranean Diet Serving Score.

    Directory of Open Access Journals (Sweden)

    Celia Monteagudo

    Full Text Available Numerous studies have demonstrated a relationship between Mediterranean Diet (MD) adherence and the prevention of cardiovascular diseases, cancer, and diabetes, etc. The study aim was to validate a novel instrument to measure MD adherence based on the consumption of food servings and food groups, apply it in a female population from southern Spain, and determine influential factors. The study included 1,155 women aged 12-83 yrs, classified as adolescents, adults, and over-60-yr-olds. All completed a validated semi-quantitative food frequency questionnaire (FFQ). The Mediterranean Dietary Serving Score (MDSS) is based on the latest update of the Mediterranean Diet Pyramid, using the recommended consumption frequency of foods and food groups; the MDSS ranges from 0 to 24. The discriminative power or correct subject classification capacity of the MDSS was analyzed with the Receiver Operating Characteristic (ROC) curve, using the MDS as the reference method. Predictive factors for higher MDSS adherence were determined with a logistic regression model, adjusting for age. According to ROC curve analysis, the MDSS evidenced a significant discriminative capacity between adherents and non-adherents to the MD pattern (optimal cutoff point=13.50; sensitivity=74%; specificity=48%). The mean MDSS was 12.45 (2.69) and was significantly higher with older age (p<0.001). Logistic regression analysis showed the highest MD adherence among over-60-year-olds with low BMI and no habit of eating between meals. The MDSS is an updated, easy, valid, and accurate instrument to assess MD adherence based on the consumption of foods and food groups per meal, day, and week. It may be useful in future nutritional education programs to prevent the early onset of chronic non-transmittable diseases in younger populations.

  15. Gambling score in earthquake prediction analysis

    Science.gov (United States)

    Molchan, G.; Romashkova, L.

    2011-03-01

    The number of successes and the space-time alarm rate are commonly used to characterize the strength of an earthquake prediction method and the significance of prediction results. It has recently been suggested to use a new characteristic to evaluate the forecaster's skill, the gambling score (GS), which incorporates the difficulty of guessing each target event by using different weights for different alarms. We expand parametrization of the GS and use the M8 prediction algorithm to illustrate difficulties of the new approach in the analysis of the prediction significance. We show that the level of significance strongly depends (1) on the choice of alarm weights, (2) on the partitioning of the entire alarm volume into component parts and (3) on the accuracy of the spatial rate measure of target events. These tools are at the disposal of the researcher and can affect the significance estimate. Formally, all reasonable GSs discussed here corroborate that the M8 method is non-trivial in the prediction of 8.0 ≤ M < 8.5 events, because the point estimates of the significance are in the range 0.5-5 per cent. However, the conservative estimate of 3.7 per cent based on the number of successes seems preferable owing to two circumstances: (1) it is based on relative values of the spatial rate and hence is more stable, and (2) the statistic of successes enables us to construct analytically an upper estimate of the significance taking into account the uncertainty of the spatial rate measure.

  16. Trainee Occupational Therapists Scoring the Barthel ADL.

    Science.gov (United States)

    Martin, Elizabeth; Nugent, Chris; Bond, Raymond; Martin, Suzanne

    2015-09-01

    Within medical applications there are two main types of information design: paper-based and digital information [1]. As technology is constantly changing, information within healthcare management and delivery is continually being transitioned from traditional paper documents to digital and online resources. Activities of Daily Living (ADL) charts are still predominantly paper-based and are therefore prone to "human error" [2]. In light of this, an investigation was undertaken into designs for reducing the amount of human error between a paper-based ADL chart, specifically the Barthel Index (BI), and the same ADL created digitally. The digital ADL was developed as an online platform, as this offers the best method of data capture for a large group of participants all together [3]. The aim of the study was to evaluate the usability of the Barthel Index ADL in paper format and then reproduce the same ADL digitally. This paper presents the findings of a study involving 26 participants who were familiar with ADL charts and used three scenarios requiring them to complete both a paper ADL and a digital ADL. An evaluation was undertaken to ascertain whether there were any 'human errors' in completing the paper ADL and also to find similarities/differences through using the digital ADL. The results from the study indicated that 22/26 participants agreed that the digital ADL was better than, if not the same as, the paper-based ADL. Further results indicated that participants rated highly the added benefit of the digital ADL being easy to use and of assessment scores being calculated automatically. Statistically, the digital BI offered a 100% correct rate in the total calculation, in comparison to the paper-based BI, where it is more common for users to make mathematical calculation errors. Therefore, in order to minimise handwriting and calculation errors, the digital BI proved superior to the traditional paper-based method.

  17. Dose Uniformity of Scored and Unscored Tablets: Application of the FDA Tablet Scoring Guidance for Industry.

    Science.gov (United States)

    Ciavarella, Anthony B; Khan, Mansoor A; Gupta, Abhay; Faustino, Patrick J

    This U.S. Food and Drug Administration (FDA) laboratory study examines the impact of tablet splitting, the effect of tablet splitters, and the presence of a tablet score on the dose uniformity of two model drugs. Whole tablets were purchased from five manufacturers for amlodipine and six for gabapentin. Two splitters were used for each drug product, and the gabapentin tablets were also split by hand. Whole and split amlodipine tablets were tested for content uniformity following the United States Pharmacopeia (USP) general chapter Uniformity of Dosage Units, which is a requirement of the new FDA Guidance for Industry on tablet scoring. The USP weight variation method was used for gabapentin split tablets, based on the recommendation of the guidance. All whole tablets met the USP acceptance criteria for Uniformity of Dosage Units. Variation in whole-tablet content ranged from 0.5 to 2.1 standard deviations (SD) of the percent label claim. Splitting the unscored amlodipine tablets resulted in a significant increase in dose variability, of 6.5-25.4 SD, when compared to whole tablets. Split tablets from all amlodipine drug products did not meet the USP acceptance criteria for content uniformity. Variation in the weight of gabapentin split tablets was greater than that of the whole tablets, ranging from 1.3 to 9.3 SD. All fully scored gabapentin products met the USP acceptance criteria for weight variation. Size, shape, and the presence or absence of a tablet score can affect the content uniformity and weight variation of amlodipine and gabapentin tablets. Tablet splitting produced higher variability. Differences in dose variability and fragmentation were observed between tablet splitters and hand splitting. These results are consistent with the FDA's concerns that tablet splitting can have an effect on the amount of drug present in a split tablet and available for absorption. Tablet splitting has become a very common practice in the United States and throughout the

  18. Comparing the Scoring of Human Decomposition from Digital Images to Scoring Using On-site Observations.

    Science.gov (United States)

    Dabbs, Gretchen R; Bytheway, Joan A; Connor, Melissa

    2017-09-01

    When in-person assessment of human decomposition is not possible in forensic casework or empirical research, the sensible substitution is color photographic images. To date, no research has confirmed the utility of color photographic images as a proxy for in situ observation of the level of decomposition. Sixteen observers scored photographs of 13 human cadavers in varying decomposition stages (PMI 2-186 days) using the Total Body Score system (total n = 929 observations). The on-site TBS was compared with recorded observations from digital color images using a paired samples t-test. The average difference between on-site and photographic observations was -0.20 (t = -1.679, df = 928, p = 0.094). Individually, only two observers, both students, scored significantly differently from the on-site observations. Scoring of human decomposition based on digital images can therefore be substituted for assessments based on observation of the corpse in situ, when necessary. © 2017 American Academy of Forensic Sciences.

  19. Nutech functional score: A novel scoring system to assess spinal cord injury patients.

    Science.gov (United States)

    Shroff, Geeta; Barthakur, Jitendra Kumar

    2017-06-26

    To develop a new scoring system, the Nutech Functional Score (NFS), for assessing patients with spinal cord injury (SCI). The conventional scale, the American Spinal Injury Association (ASIA) Impairment Scale, is a measure which precisely describes the severity of the SCI. However, it has various limitations which lead to incomplete assessment of SCI patients. We have developed a 63-point scoring system, i.e., the NFS, for patients suffering from SCI. A list of symptoms, either common or rare, that were found to be associated with SCI was recorded for each patient. On the basis of these lists, we developed the NFS. These lists served as a base to prepare the NFS, a 63-point positional (each symptom is sub-graded and gets points based on position) and directional (moving in the direction bad to good) scoring system. For non-progressive diseases, 1, 2, 3, 4, 5 denote worst, bad, moderate, good and best (normal), respectively. The NFS for SCI has been divided into different groups based on the affected part of the body being assessed, i.e., motor assessment (shoulders, elbow, wrist, fingers-grasp, fingers-release, hip, knee, ankle and toe), sensory assessment, autonomic assessment, bed sore assessment and general assessment. As probability-based studies require a range of (-1, 1), or at least (0, 1), to be useful for real-world analysis, the grades were converted to respective numeric values. The NFS can be considered a unique tool to assess improvement in patients with SCI, as it overcomes the limitations of the ASIA Impairment Scale.

  20. Severity scoring in the critically ill: part 2: maximizing value from outcome prediction scoring systems.

    Science.gov (United States)

    Breslow, Michael J; Badawi, Omar

    2012-02-01

    Part 2 of this review of ICU scoring systems examines how scoring system data should be used to assess ICU performance. There often are two different consumers of these data: ICU clinicians and quality leaders who seek to identify opportunities to improve quality of care and operational efficiency, and regulators, payors, and consumers who want to compare performance across facilities. The former need to know how to garner maximal insight into their care practices; this includes understanding how length of stay (LOS) relates to quality, analyzing the behavior of different subpopulations, and following trends over time. Segregating patients into low-, medium-, and high-risk populations is especially helpful, because care issues and outcomes may differ across this severity continuum. Also, LOS behaves paradoxically in high-risk patients (survivors often have longer LOS than nonsurvivors); failure to examine this subgroup separately can penalize ICUs with superior outcomes. Consumers of benchmarking data often focus on a single score, the standardized mortality ratio (SMR). However, simple SMRs are disproportionately affected by outcomes in high-risk patients, and differences in population composition, even when performance is otherwise identical, can result in different SMRs. Future benchmarking must incorporate strategies to adjust for differences in population composition and report performance separately for low-, medium- and high-acuity patients. Moreover, because many ICUs lack the resources to care for high-acuity patients (predicted mortality >50%), decisions about where patients should receive care must consider both ICU performance scores and their capacity to care for different types of patients.
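
    The SMR discussed above is observed deaths divided by the expected count implied by a severity model's predicted probabilities. A minimal sketch with invented outcomes and predictions; per the review's caution, in practice it would be reported separately for low-, medium-, and high-acuity strata:

    ```python
    import numpy as np

    def smr(died, predicted_risk):
        """Standardized mortality ratio: observed deaths over the sum of
        model-predicted death probabilities (the expected deaths)."""
        died = np.asarray(died)
        predicted_risk = np.asarray(predicted_risk)
        return died.sum() / predicted_risk.sum()

    # Hypothetical ICU cohort: outcomes and severity-model predictions
    died = [0, 1, 0, 0, 1, 1, 0, 0]
    risk = [0.05, 0.60, 0.10, 0.20, 0.75, 0.55, 0.08, 0.12]
    print(round(smr(died, risk), 2))  # 3 observed / 2.45 expected = 1.22
    ```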

  1. Improving personality facet scores with multidimensional computer adaptive testing

    DEFF Research Database (Denmark)

    Makransky, Guido; Mortensen, Erik Lykke; Glas, Cees A W

    2013-01-01

    personality tests contain many highly correlated facets. This article investigates the possibility of increasing the precision of the NEO PI-R facet scores by scoring items with multidimensional item response theory and by efficiently administering and scoring items with multidimensional computer adaptive...

  2. Classifying snakebite in South Africa: Validating a scoring system ...

    African Journals Online (AJOL)

    Factors predictive of ATI and the optimal cut-off score for predicting an ATI were identified. These factors were then used to develop a standard scoring system. The score was then tested prospectively for accuracy in a new validation cohort consisting of 100 patients admitted for snakebite to our unit from 1 December 2014 to ...

  3. The comparison of modified early warning score and Glasgow coma ...

    African Journals Online (AJOL)

    Introduction: The purpose of this study is to assess and compare the discriminatory ability of the Glasgow coma scale (GCS)‑age‑systolic blood pressure (GAP) score and modified early warning scoring system (mEWS) score for 4‑week mortality, for the patients being in the triage category 1 and 2 who refer to Emergency ...

  4. A locally adapted functional outcome measurement score for total

    African Journals Online (AJOL)

    Results and success of total hip arthroplasty are often measured using a functional outcome scoring system. Most current scores were developed in Europe and North America (1-3). During the evaluation of a Total Hip Replacement (THR) project in Ouagadougou, Burkina Faso (4), it was felt that these scores were not.

  5. Family Functioning and Child Psychopathology: Individual Versus Composite Family Scores.

    Science.gov (United States)

    Mathijssen, Jolanda J. J. P.; Koot, Hans M.; Verhulst, Frank C.; De Bruyn, Eric E. J.; Oud, Johan H. L.

    1997-01-01

    Examines the relationship of individual family members' perceptions and family mean and discrepancy scores of cohesion and adaptability with child psychopathology in a sample of 138 families. Results indicate that family mean scores, contrary to family discrepancy scores, explain more of the variance in parent-reported child psychopathology than…

  6. LCA single score analysis of man-made cellulose fibres

    NARCIS (Netherlands)

    Shen, L.; Patel, M.K.

    2010-01-01

    In this study, the LCA report “Life Cycle assessment of man-made cellulose fibres” [3] is extended to the single score analysis in order to provide an additional basis for decision making. The single score analysis covers 9 to 11 environmental impact categories. Three single score methods (Single

  7. A Framework for Evaluation and Use of Automated Scoring

    Science.gov (United States)

    Williamson, David M.; Xi, Xiaoming; Breyer, F. Jay

    2012-01-01

    A framework for evaluation and use of automated scoring of constructed-response tasks is provided that entails both evaluation of automated scoring as well as guidelines for implementation and maintenance in the context of constantly evolving technologies. Consideration of validity issues and challenges associated with automated scoring are…

  8. Personality and Examination Score Correlates of Abnormal Psychology Course Ratings.

    Science.gov (United States)

    Pauker, Jerome D.

    The relationship between the ratings students assigned to an evening undergraduate abnormal psychology class and their scores on objective personality tests and course examinations was investigated. Students (N=70) completed the MMPI and made global ratings of the course; these scores were correlated separately by sex with the T scores of 13 MMPI…

  9. Confidence Scoring of Speaking Performance: How Does Fuzziness become Exact?

    Science.gov (United States)

    Jin, Tan; Mak, Barley; Zhou, Pei

    2012-01-01

    The fuzziness of assessing second language speaking performance raises two difficulties in scoring speaking performance: "indistinction between adjacent levels" and "overlap between scales". To address these two problems, this article proposes a new approach, "confidence scoring", to deal with such fuzziness, leading to "confidence" scores between…

  10. Clinical outcome scoring of intra-articular calcaneal fractures

    NARCIS (Netherlands)

    Schepers, Tim; Heetveld, Martin J.; Mulder, Paul G. H.; Patka, Peter

    2008-01-01

    Outcome reporting of intra-articular calcaneal fractures is inconsistent. This study aimed to identify the most cited outcome scores in the literature and to analyze their reliability and validity. A systematic literature search identified 34 different outcome scores. The most cited outcome score

  11. Extended score interval in the assessment of basic surgical skills.

    Science.gov (United States)

    Acosta, Stefan; Sevonius, Dan; Beckman, Anders

    2015-01-01

    The Basic Surgical Skills course uses an assessment score interval of 0-3. An extended score interval, 1-6, was proposed by the Swedish steering committee of the course. The aim of this study was to analyze the trainee scores in the current 0-3 scored version compared to a proposed 1-6 scored version. Sixteen participants, seven females and nine males, were evaluated in the current and proposed assessment forms by instructors, observers, and the learners themselves during the first and second day. In each assessment form, 17 tasks were assessed. The inter-rater reliability between the current and the proposed score sheets was evaluated with the intraclass correlation (ICC) with 95% confidence intervals (CI). The distribution of scores for 'knot tying' at the last time point and 'bowel anastomosis side to side' given by the instructors in the current assessment form showed that the highest score was given in 31% and 62% of cases, respectively. No ceiling effects were found in the proposed assessment form. The overall ICC between the current and proposed score sheets after assessment by the instructors increased from 0.38 (95% CI 0.77-0.78) on Day 1 to 0.83 (95% CI 0.51-0.94) on Day 2. A clear ceiling effect of scores was demonstrated in the current assessment form, questioning its validity. The proposed score sheet provides more accurate scores and seems to be a better feedback instrument for learning technical surgical skills in the Basic Surgical Skills course.

  12. Automated Scoring of L2 Spoken English with Random Forests

    Science.gov (United States)

    Kobayashi, Yuichiro; Abe, Mariko

    2016-01-01

    The purpose of the present study is to assess second language (L2) spoken English using automated scoring techniques. Automated scoring aims to classify a large set of learners' oral performance data into a small number of discrete oral proficiency levels. In automated scoring, objectively measurable features such as the frequencies of lexical and…

  13. Score Gains on g-loaded Tests: No g

    NARCIS (Netherlands)

    te Nijenhuis, J.; van Vianen, A.E.M.; van der Flier, H.

    2007-01-01

    IQ scores provide the best general predictor of success in education, job training, and work. However, there are many ways in which IQ scores can be increased, for instance by means of retesting or participation in learning potential training programs. What is the nature of these score gains? Jensen

  14. Validation of the Simplified Motor Score in patients with traumatic ...

    African Journals Online (AJOL)

    Background. This study used data from a large prospectively entered database to assess the efficacy of the motor score (M score) component of the Glasgow Coma Scale (GCS) and the Simplified Motor Score (SMS) in predicting overall outcome in patients with traumatic brain injury (TBI). Objective. To safely and reliably ...

  15. Comparing TACOM scores with subjective workload scores measured by NASA-TLX technique

    International Nuclear Information System (INIS)

    Park, Jin Kyun; Jung, Won Dea

    2006-01-01

    It is a well-known fact that a large portion of human performance related problems was attributed to the complexity of tasks. Therefore, managing the complexity of tasks is a prerequisite for safety-critical systems such as nuclear power plants (NPPs), because the consequence of a degraded human performance could be more severe than in other systems. From this concern, it is necessary to quantify the complexity of emergency tasks that are stipulated in procedures, because most tasks of NPPs have been specified in the form of procedures. For this reason, Park et al. developed a task complexity measure called TACOM. In this study, in order to confirm the validity of the TACOM measure, subjective workload scores that were measured by the NASA-TLX technique were compared with the associated TACOM scores. To do this, 23 emergency tasks of the reference NPPs were selected, and then subjective workload scores for these emergency tasks were quantified by 18 operators who had a sufficient knowledge about emergency operations

  16. Comparing TACOM scores with subjective workload scores measured by NASA-TLX technique

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jin Kyun; Jung, Won Dea [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2006-07-01

    It is a well-known fact that a large portion of human performance related problems was attributed to the complexity of tasks. Therefore, managing the complexity of tasks is a prerequisite for safety-critical systems such as nuclear power plants (NPPs), because the consequence of a degraded human performance could be more severe than in other systems. From this concern, it is necessary to quantify the complexity of emergency tasks that are stipulated in procedures, because most tasks of NPPs have been specified in the form of procedures. For this reason, Park et al. developed a task complexity measure called TACOM. In this study, in order to confirm the validity of the TACOM measure, subjective workload scores that were measured by the NASA-TLX technique were compared with the associated TACOM scores. To do this, 23 emergency tasks of the reference NPPs were selected, and then subjective workload scores for these emergency tasks were quantified by 18 operators who had a sufficient knowledge about emergency operations.
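
    The NASA-TLX scores used in the two records above are conventionally computed as a weighted average of six subscale ratings, with weights taken from 15 pairwise comparisons of the subscales. A minimal sketch of that standard computation with invented ratings and weights (the studies' operator data are not shown):

    ```python
    # Hypothetical NASA-TLX ratings (0-100) and pairwise-comparison weights
    ratings = {"mental": 70, "physical": 20, "temporal": 60,
               "performance": 40, "effort": 65, "frustration": 35}
    weights = {"mental": 4, "physical": 1, "temporal": 3,
               "performance": 2, "effort": 4, "frustration": 1}  # weights sum to 15

    # Overall weighted workload: sum of rating * weight over the 15 comparisons
    overall = sum(ratings[d] * weights[d] for d in ratings) / 15
    print(round(overall, 1))  # 57.0
    ```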

  17. Scoring the full extent of periodontal disease in the dog: development of a total mouth periodontal score (TMPS) system.

    Science.gov (United States)

    Harvey, Colin E; Laster, Larry; Shofer, Frances; Miller, Bonnie

    2008-09-01

    The development of a total mouth periodontal scoring system is described. This system uses methods to score the full extent of gingivitis and periodontitis of all tooth surfaces, weighted by size of teeth, and adjusted by size of dog.

  18. A novel D458V mutation in the SANS PDZ binding motif causes atypical Usher syndrome.

    NARCIS (Netherlands)

    Kalay, E.; Brouwer, A.P.M. de; Caylan, R.; Nabuurs, S.B.; Wollnik, B.; Karaguzel, A.; Heister, J.G.A.M.; Erdol, H.; Cremers, F.P.M.; Cremers, C.W.R.J.; Brunner, H.G.; Kremer, J.M.J.

    2005-01-01

    Homozygosity mapping and linkage analysis in a Turkish family with autosomal recessive prelingual sensorineural hearing loss revealed a 15-cM critical region at 17q25.1-25.3 flanked by the polymorphic markers D17S1807 and D17S1806. The maximum two-point lod score was 4.07 at theta=0.0 for the marker

  19. Dual-energy X-ray absorptiometry diagnostic discordance between Z-scores and T-scores in young adults.

    LENUS (Irish Health Repository)

    Carey, John J

    2009-01-01

    Diagnostic criteria for postmenopausal osteoporosis using central dual-energy X-ray absorptiometry (DXA) T-scores have been widely accepted. The validity of these criteria for other populations, including premenopausal women and young men, has not been established. The International Society for Clinical Densitometry (ISCD) recommends using DXA Z-scores, not T-scores, for diagnosis in premenopausal women and men aged 20-49 yr, though studies supporting this position have not been published. We examined diagnostic agreement between DXA-generated T-scores and Z-scores in a cohort of men and women aged 20-49 yr, using 1994 World Health Organization and 2005 ISCD DXA criteria. Four thousand two hundred and seventy-five unique subjects were available for analysis. The agreement between DXA T-scores and Z-scores was moderate (Cohen's kappa: 0.53-0.75). The use of Z-scores resulted in significantly fewer (McNemar's p<0.001) subjects diagnosed with "osteopenia," "low bone mass for age," or "osteoporosis." Thirty-nine percent of Hologic (Hologic, Inc., Bedford, MA) subjects and 30% of Lunar (GE Lunar, GE Madison, WI) subjects diagnosed with "osteoporosis" by T-score were reclassified as either "normal" or "osteopenia" when their Z-score was used. Substitution of DXA Z-scores for T-scores results in significant diagnostic disagreement and significantly fewer persons being diagnosed with low bone mineral density.
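
    T- and Z-scores differ only in the reference population used to standardize the measured bone mineral density (BMD), which is why the two can disagree in young adults. A minimal sketch; the BMD and reference values below are invented:

    ```python
    def t_score(bmd, young_adult_mean, young_adult_sd):
        """T-score: SDs from the young-adult reference mean."""
        return (bmd - young_adult_mean) / young_adult_sd

    def z_score(bmd, age_matched_mean, age_matched_sd):
        """Z-score: SDs from the age- and sex-matched reference mean."""
        return (bmd - age_matched_mean) / age_matched_sd

    # Hypothetical lumbar spine BMD (g/cm^2) and reference values
    bmd = 0.920
    print(round(t_score(bmd, 1.047, 0.110), 1))  # -1.2 against young adults
    print(round(z_score(bmd, 0.980, 0.115), 1))  # -0.5 against age-matched peers
    ```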

  20. Examining the reliability of ADAS-Cog change scores.

    Science.gov (United States)

    Grochowalski, Joseph H; Liu, Ying; Siedlecki, Karen L

    2016-09-01

    The purpose of this study was to estimate and examine ways to improve the reliability of change scores on the Alzheimer's Disease Assessment Scale, Cognitive Subtest (ADAS-Cog). The sample, provided by the Alzheimer's Disease Neuroimaging Initiative, included individuals with Alzheimer's disease (AD) (n = 153) and individuals with mild cognitive impairment (MCI) (n = 352). All participants were administered the ADAS-Cog at baseline and 1 year, and change scores were calculated as the difference in scores over the 1-year period. Three types of change score reliabilities were estimated using multivariate generalizability. Two methods to increase change score reliability were evaluated: reweighting the subtests of the scale and adding more subtests. Reliability of ADAS-Cog change scores over 1 year was low for both the AD sample (ranging from .53 to .64) and the MCI sample (.39 to .61). Reweighting the change scores from the AD sample improved reliability (.68 to .76), but lengthening provided no useful improvement for either sample. The MCI change scores had low reliability, even with reweighting and adding additional subtests. The ADAS-Cog scores had low reliability for measuring change. Researchers using the ADAS-Cog should estimate and report reliability for their use of the change scores. The ADAS-Cog change scores are not recommended for assessment of meaningful clinical change.
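
    As the abstract reports, the reliability of a change score can be far lower than that of either wave. A minimal sketch of the classical difference-score reliability formula; the ADAS-Cog-like values below are invented, not the study's generalizability estimates:

    ```python
    def difference_score_reliability(sd1, sd2, rel1, rel2, r12):
        """Classical test theory reliability of a difference (change) score,
        given each wave's SD and reliability and the between-wave correlation."""
        num = sd1**2 * rel1 + sd2**2 * rel2 - 2 * sd1 * sd2 * r12
        den = sd1**2 + sd2**2 - 2 * sd1 * sd2 * r12
        return num / den

    # Hypothetical waves: both reliable (.90) but highly correlated (.80),
    # which drives the change-score reliability down to about .50
    print(round(difference_score_reliability(8.0, 8.5, 0.90, 0.90, 0.80), 2))
    ```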